OK, firstly, if we are talking about fundamental physical limits, how would sniper drones not be viable? Are you saying a flying platform could never compensate for recoil, even if precisely calibrated beforehand? What about the fundamentals of guided bullets - a bullet with over a 50% chance of hitting a target is worth paying for.

Your points - 1. The idea is that a larger shell (not a regular-sized bullet) just obscures the sensor for a fraction of a second, in a coordinated attack with the larger Javelin-type missile. Such shells may be considerably larger than a regular bullet, but much cheaper than a missile. Missile- or sniper-sized drones could be fitted with such shells, depending on what the optimal size turned out to be.

Example shell (without 1 km range, I assume). However, note that current chaff is not optimized for the described attack; the fact that there is currently no shell suited for this use is not evidence that such a shell would be impractical to create.

The principle here is about efficiency and cost. I maintain that against armor with hard-kill defenses, it is more efficient to mount a combined attack of sensor blinding and anti-armor missiles than missiles alone. E.g. it may take 10 simultaneous Javelins to take out a target vs. 2 Javelins and 50 simultaneous chaff shells. The second attack will be cheaper, and the optimized "sweet spot" will always include some sensor-blinding element. Do you claim that the optimal coordinated attack would have zero sensor blinding?
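
To make the cost intuition concrete, here is a minimal back-of-the-envelope sketch in Python. The unit prices are purely assumed for illustration (not sourced figures), and the attack sizes are just the 10 vs. 2+50 example above.

```python
# Back-of-the-envelope cost comparison of the two attack profiles above.
# Both unit prices are assumptions for illustration, not sourced figures.
JAVELIN_COST = 80_000  # assumed cost per anti-armor missile, USD
SHELL_COST = 500       # assumed cost per sensor-blinding chaff shell, USD

def attack_cost(javelins: int, shells: int) -> int:
    """Total munitions cost of one coordinated attack."""
    return javelins * JAVELIN_COST + shells * SHELL_COST

missiles_only = attack_cost(javelins=10, shells=0)  # saturate hard-kill with missiles alone
combined = attack_cost(javelins=2, shells=50)       # blind the sensors, then strike

print(f"Missiles only: ${missiles_only:,}")  # $800,000
print(f"Combined:      ${combined:,}")       # $185,000
```

Under these assumed prices the combined attack is several times cheaper, which is the "sweet spot" point; the real optimum obviously depends on actual unit costs and kill probabilities.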

2. Leading on from (1), I don't claim light drones will be. I regard a laser as a serious obstacle, one that is attacked with the swarm attack described, before the territory is secured. That is: blind the sensor/obscure the laser, and simultaneously converge with missiles. The drones need to survive just long enough to fire off the shells (i.e. come out from ground cover, shoot, get back). While a laser can destroy a shell in flight, can it take out 10-50 smaller blinding shells fired from 1000 m at once?

(I give 1000 m only as an example; flying drones would use ground cover to get as close as they could. I assume they will pretty much always be able to get within 1000 m of a ground target by using the ground as cover.)

First, a pause straightforwardly buys you time in many worlds where counterfactual (no-pause) timelines were shorter than the duration of the pause.

Only if you pause everything that could bring ASI. That is: hardware, training runs, basic science on learning algorithms, brain studies, etc.

Another perspective.

If you believe, as I do, that it is >90% likely that the current LLM approach is plateauing, then your cost/benefit analysis for pausing large training runs is different. I believe that current AI lacks something like the generalization power of the human brain; this can be seen in how Tesla Autopilot has needed >10,000x the training data a person needs and is still not human-level. This could potentially be overcome by a better architecture, or could require different hardware as well because of the von Neumann bottleneck. If this is the case, then a pause on large training runs can hardly be helpful. I believe that if LLMs are not an X-risk, then their capabilities should be fully explored and rapidly integrated into society to provide defense against more dangerous AI. It is a radically improved architecture or hardware that you should be worried about.

Three potential sources of danger:

  1. Greatly improved architecture
  2. Large training run with current arch
  3. Greatly improved HW

We are paying more attention to (2), when to me it is the least impactful of the three; focusing on it could even hurt. There are obvious ways this can hurt the cause:

  1. If such training runs are not dangerous, then the AI safety community loses credibility.
  2. It could give a false sense of security, when a different architecture requiring much less training may appear and be much more dangerous than the largest LLM.
  3. It removes the chance to learn alignment and safety details from such large LLMs.

A clear path to such a better architecture is studying neurons. Whether this is through Dishbrain, progress in neural interfaces, brain scanning, or something else, I believe it is very likely that by 2030 we will have understood the brain's neural algorithm, characterized it pretty well, and of course have the ability to attempt to implement it in our hardware.

So in terms of pauses, I think one targeted at chip factories is better. It is achievable, and it is clear to me that if you delay a large factory's opening by 5 years, the lost time cannot be made up in anything like the way it can for software.

Stopping (1) seems impossible, i.e. "Don't study the human brain" seems likely to backfire. We would of course like some agreement that if a much better architecture is discovered, it isn't immediately implemented.

I think value alignment will be expected/enforced in a negative sense to some extent, e.g. don't do something obviously bad (many such things are illegal anyway), and I expect that constraint to get tighter. That could also give some kind of status quo bias on what AI tools are allowed to do, as an unknown new thing could be bad or be seen as bad.

Already the AI could "do what I mean and check" a lot better. For coding tasks etc. it will often do the wrong thing when it could clarify. I would like to see a confidence indicator that it knows what I want before it continues. I don't want to have to guess how much to clarify, which is what I currently have to do - this wastes time and mental effort. You are right that there will be commercial pressure to do something at least somewhat similar.
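
To make the "confidence indicator" idea concrete, here is a minimal sketch of what such a gate could look like on the tool side. Everything here (the `estimate_understanding` and `do_task` helpers, the 0.8 threshold) is a hypothetical illustration, not an existing API.

```python
# Hypothetical sketch of a "do what I mean and check" gate for a coding assistant.
# estimate_understanding() and do_task() stand in for whatever LLM calls the tool
# actually makes; the 0.8 threshold is an arbitrary illustrative choice.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8

@dataclass
class Understanding:
    confidence: float        # model's own estimate that it knows what is wanted
    ambiguities: list[str]   # questions it would ask before proceeding

def estimate_understanding(request: str) -> Understanding:
    # Placeholder: in a real tool this would be an LLM call returning a
    # structured self-assessment of the request.
    return Understanding(confidence=0.5, ambiguities=["Which file should I edit?"])

def do_task(request: str) -> str:
    # Placeholder for the actual code-editing step.
    return f"(would now carry out: {request})"

def handle_request(request: str) -> str:
    est = estimate_understanding(request)
    if est.confidence < CONFIDENCE_THRESHOLD:
        # Ask for clarification instead of guessing.
        return "Before I continue, please clarify: " + "; ".join(est.ambiguities)
    return do_task(request)

print(handle_request("Refactor the parser"))
```

The point of the sketch is just that the clarify-or-proceed decision sits with the tool, so the user sees the model's stated confidence rather than having to guess how much up-front detail to give.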

How soon, and with what degree of confidence? I think they have a bigger, slower model that isn't that much of a performance improvement and is hardly economic to release.

The ChatGPT interface, like I usually do for GPT-4.0, with some GPT-4.0 queries done through the Cursor AI IDE.

I have just used it for coding for 3+ hours and found it quite frustrating. Definitely faster than GPT-4.0 but less capable - more like an improvement on 3.5. To me it seems a lot like LLM progress is plateauing.

Anyway, in order to be significantly more useful, a coding assistant needs to be able to see debug output in mostly real time, have the ability to start/stop the program, automatically make changes, keep the user in the loop, and read/use the GUI, as that is often an important part of what we are doing. I haven't used any LLM that is even of low-average ability at debugging-style thought processes yet.
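
As a rough sketch of the "see debug output in real time and start/stop the program" part, something like the wrapper below is what I have in mind. The class and method names are mine and purely illustrative, and it ignores the GUI-reading requirement entirely.

```python
# Minimal sketch of a harness that lets an assistant run a program, stream its
# output, and stop it - only the start/stop and "see debug output" pieces.
# Names are illustrative; GUI interaction is not covered.
import subprocess
import threading

class DebugSession:
    def __init__(self, command: list[str]):
        self.command = command
        self.proc = None
        self.output_lines: list[str] = []  # what the assistant would get to read

    def start(self) -> None:
        """Launch the target program and stream its combined stdout/stderr."""
        self.proc = subprocess.Popen(
            self.command,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            text=True,
        )
        self._reader = threading.Thread(target=self._pump, daemon=True)
        self._reader.start()

    def _pump(self) -> None:
        # Collect output line by line as the program runs.
        for line in self.proc.stdout:
            self.output_lines.append(line.rstrip())

    def stop(self) -> None:
        """Terminate the program if it is still running."""
        if self.proc and self.proc.poll() is None:
            self.proc.terminate()

    def wait(self) -> list[str]:
        """Block until the program exits and return everything it printed."""
        self.proc.wait()
        self._reader.join()
        return self.output_lines

# Example: run a short script and read back what it printed.
session = DebugSession(["python", "-c", "print('hello from the debuggee')"])
session.start()
print(session.wait())
```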

Not following - where could the 'low-hanging fruit' possibly be hiding? We have plenty of the "Other attributes conducive to breakthroughs are a ..." in our world of 8 billion people. The data strongly suggests we are in diminishing returns. What qualities could an AI of Einstein-level intelligence realistically have that would let it make such progress where no person has? It would seem you would need to appeal to other, less well-defined qualities such as 'creativity' and argue that for some reason the AI would have much more of that. But that seems similar to just arguing that it in fact has greater-than-Einstein intelligence.

Capabilities are likely to cascade once you get to Einstein-level intelligence, not just because an AI will likely be able to form a good understanding of how it works and use this to optimize itself to become smarter[4][5], but also because it empirically seems to be the case that when you’re slightly better than all other humans at stuff like seeing deep connections between phenomena, this can enable you to solve hard tasks like particular research problems much much faster (as the example of Einstein suggests).

  1. Aka: Around Einstein-level, relatively small changes in intelligence can lead to large changes in what one is capable of accomplishing.

OK, but if that were true then there would have been many more Einstein-like breakthroughs since then. More likely, such low-hanging fruit has been plucked and a similar intellect is now well into diminishing returns. That is, given our current technological society and a >50-year history of smart people trying to work on everything, if there are such breakthroughs to be made, then the IQ required to make them is now higher than in Einstein's day.

No, I have not seen a detailed argument about this, just the claim that once centralization goes past a certain point there is no coming back. I would like to see such an argument/investigation, as I think it is quite important. Yuval Harari does say something similar in "Sapiens".
