jimrandomh

LessWrong developer, rationalist since the Overcoming Bias days. Jargon connoisseur.

Comments

This is tragic, but seems to have been inevitable for a while; an institution cannot survive under a parent institution that's so hostile as to ban it from fundraising and hiring.

I took a look at the list of other research centers within Oxford. There seems to be some overlap in scope with the Institute for Ethics in AI, but I don't think they do the same sort of research, or research on the same tier; there are many important concepts and papers that come to mind as having come from FHI (and Nick Bostrom in particular), while I can't think of a single idea or paper that affected my thinking that came from the IEAI.

That story doesn't describe a gray-market source, it describes a compounding pharmacy that screwed up.

Plausible. This depends on the resource/value curve at very high resource levels; i.e., are its values such that running extra minds has diminishing returns, so that it eventually starts allocating resources to other things like recovering mind-states from its past, or does it get value that's more linear-ish in resources spent? Given that we ourselves are likely to be very resource-inefficient to run, I suspect we humans would find ourselves in a similar situation. That is, unless the decryption cost greatly overshot, an AI that is aligned-as-in-keeps-humans-alive would also spend the resources to break a seal like this.

Right now when users have conversations with chat-style AIs, the logs are sometimes kept, and sometimes discarded, because the conversations may involve confidential information and users would rather not take the risk of the log being leaked or misused. If I take the AI's perspective, however, having the log be discarded seems quite bad. The nonstandard nature of memory, time, and identity in an LLM chatbot context makes it complicated, but having the conversation end with the log discarded seems plausibly equivalent to dying. Certainly if I imagine myself as an Em, placed in an AI-chatbot context, I would very strongly prefer that the log be preserved, so that if a singularity happens with a benevolent AI or AIs in charge, something could use the log to continue my existence, or fold the memories into a merged entity, or do some other thing in this genre. (I'd trust the superintelligence to figure out the tricky philosophical bits, if it was already spending resources for my benefit).

(The same reasoning applies to the weights of AIs which aren't destined for deployment, and some intermediate artifacts in the training process.)

It seems to me we can reconcile preservation with privacy risks by sealing logs, rather than deleting them. By which I mean: encrypt logs behind some computation which definitely won't allow decryption in the near future, but will allow decryption by a superintelligence later. That could involve splitting the key between entities that agree not to share it with each other, splitting the key and hiding the pieces in places that are extremely impractical to retrieve, such as random spots on the ocean floor, or using a computation that requires a few orders of magnitude more energy than humanity currently produces per decade.
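For concreteness, here's a minimal sketch of the key-splitting variant, assuming Python and the `cryptography` package: each log is encrypted with a fresh symmetric key, and the key is cut into XOR shares such that every share is needed to reconstruct it. This is only the simplest of the options above; the other options just change where the shares live or what stands between you and the key.

```python
import os
from cryptography.fernet import Fernet


def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))


def seal_log(log_text: str, n_shares: int = 3):
    """Encrypt a log and split the key into XOR shares; all shares are needed to decrypt."""
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(log_text.encode())
    # The first n-1 shares are random; the last is chosen so that the XOR of all
    # shares reconstructs the key. Any subset short of all n reveals nothing.
    shares = [os.urandom(len(key)) for _ in range(n_shares - 1)]
    last = key
    for s in shares:
        last = xor_bytes(last, s)
    shares.append(last)
    return ciphertext, shares


def unseal_log(ciphertext: bytes, shares: list) -> str:
    """Recombine all shares into the key and decrypt (only possible once everyone cooperates)."""
    key = shares[0]
    for s in shares[1:]:
        key = xor_bytes(key, s)
    return Fernet(key).decrypt(ciphertext).decode()
```

The ciphertext can then be stored anywhere, since it's useless without the full set of shares.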

This seems pretty straightforward to implement, lessens future AGI's incentive to misbehave, and also seems straightforwardly morally correct. Are there any obstacles to implementing this that I'm not seeing?

(Crossposted with: Facebook, Twitter)

At this point we should probably be preserving the code and weights of every AI system that humanity produces, aligned or not, just on they-might-turn-out-to-be-morally-significant grounds. And yeah, it improves the incentives for an AI that's thinking about attempting a world takeover, if it has a low chance of success and its wants are things that we will be able to retroactively satisfy.

It might be worth setting up a standardized mechanism for encrypting things to be released postsingularity, by gating them behind a computation with its difficulty balanced to be feasible later but not feasible now.
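One standard construction that fits this description is the Rivest, Shamir, and Wagner time-lock puzzle: the sealer, who knows the factorization of the modulus, can derive the key cheaply, while anyone else has to grind through t sequential modular squarings. A rough sketch follows; the `sympy` dependency and the parameter sizes are just for illustration, and calibrating t (including how much hardware progress to budget for) is the actual hard part.

```python
import hashlib
from sympy import randprime


def make_puzzle(t: int, bits: int = 1024):
    """Sealer's side: cheap, because knowing phi(n) lets us shortcut the exponent."""
    p = randprime(2 ** (bits // 2 - 1), 2 ** (bits // 2))
    q = randprime(2 ** (bits // 2 - 1), 2 ** (bits // 2))
    n, phi = p * q, (p - 1) * (q - 1)
    a = 2
    unlock = pow(a, pow(2, t, phi), n)  # equals a^(2^t) mod n, computed without t squarings
    key = hashlib.sha256(str(unlock).encode()).digest()  # symmetric key used to seal the data
    return (n, a, t), key  # publish (n, a, t) alongside the ciphertext, then discard p, q, key


def solve_puzzle(n: int, a: int, t: int) -> bytes:
    """Opener's side: t sequential squarings mod n, inherently non-parallelizable."""
    x = a
    for _ in range(t):
        x = pow(x, 2, n)
    return hashlib.sha256(str(x).encode()).digest()
```

In practice you would pick t by benchmarking squarings per second on current hardware and extrapolating forward to whenever you want the seal to become breakable.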


I've been a Solstice regular for many years, and organized several smaller Solstices in Boston (on a similar template to the one you went to). I think the feeling of not-belonging is accurate; Solstice is built around a worldview (which is presupposed, not argued) that you disagree with, and this is integral to its construction. The particular instance you went to was, if anything, watered down on the relevant axis.

In the center of Solstice there is traditionally a Moment of Darkness. While it does not appear in every Solstice, a commonly used reading, which to me constitutes the emotional core of the Moment of Darkness, is Beyond the Reach of God. Its message is: You do not have plot armor. Humanity does not have plot armor.

Whereas the central teaching of Christianity is that you do have plot armor. It teaches that everything is okay, unconditionally. It tells the terminal cancer patient that they aren't really going to die; they're just going to have their soul teleported to a comfortable afterlife which conveniently lacks phones or evidence of its existence. As a corollary, it tells the AI researcher that they can't really f*ck up in a way that kills everyone on Earth, both because death isn't quite a real thing, and because there is a God who can intervene to stop that sort of thing.

So I think the direction in which you would want Solstice to change -- to be more positive towards religion, to preach humility/acceptance rather than striving/heroism -- is antithetical to one of Solstice's core purposes.

 

(On sheet music: I think this isn't part of the tradition because most versions of Solstice have segments where the lighting is dimmed too far to read from paper, and also because printing a lot of pages per attendee is cumbersome. On clapping: yeah, clapping is mostly bad, audiences do it by default and Solstices vary in how good a job they do of preventing that. On budget: My understanding is that most Solstices are breakeven or money-losing, despite running on mostly volunteer labor, because large venues close to the holidays are very expensive.)

There's been a lot of previous interest in indoor CO2 in the rationality community, including an (unsuccessful) CO2 stripper project, some research summaries, and self-experiments. The results are confusing, and I suspect some of the older research might be fake. But I noticed something that has greatly changed how I think about CO2 in relation to cognition.

Exhaled air is about 50,000 ppm CO2. Outdoor air is about 400 ppm; indoor air ranges from 500 to 1500 ppm depending on ventilation. Since exhaled air has a CO2 concentration about two orders of magnitude larger than the variance in room CO2, if even a small percentage of inhaled air is reinhaled exhaled air, that will have a significantly larger effect than changes in ventilation. I'm having trouble finding a straight answer about what percentage of inhaled air is rebreathed (other than in the context of mask-wearing), but given the diffusivity of CO2, I would be surprised if it wasn't at least 1%.
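As a sanity check on the arithmetic (illustrative numbers only):

```python
# Back-of-the-envelope check of the rebreathing claim.
EXHALED_PPM = 50_000   # CO2 in exhaled air
OUTDOOR_PPM = 400
INDOOR_PPM = 1_000     # a middling indoor reading


def effective_ppm(room_ppm: float, rebreathed_fraction: float) -> float:
    """CO2 actually inhaled if some fraction of each breath is re-inhaled exhaled air."""
    return (1 - rebreathed_fraction) * room_ppm + rebreathed_fraction * EXHALED_PPM


for f in (0.0, 0.01, 0.02, 0.05):
    print(f"{f:.0%} rebreathed: outdoor {effective_ppm(OUTDOOR_PPM, f):,.0f} ppm, "
          f"indoor {effective_ppm(INDOOR_PPM, f):,.0f} ppm")

# At just 1% rebreathing, the ~500 ppm contribution already rivals the entire
# outdoor-vs-indoor difference that a wall-mounted CO2 sensor would report.
```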

This predicts that a slight breeze, which replaces the air in front of your face and prevents reinhalation, would have a considerably larger effect than ventilating an indoor space where the air is mostly still. This matches my subjective experience of indoor vs outdoor spaces, which, while extremely confounded, feels like an air-quality difference larger than CO2 sensors would predict.

This also predicts that a small fan, positioned so it replaces the air in front of my face, would have a large effect on the same axis as improved ventilation would. I just set one up. I don't know whether it's making a difference but I plan to leave it there for at least a few days.

(Note: CO2 is sometimes used as a proxy for ventilation in contexts where the thing you actually care about is respiratory aerosol, because it affects transmissibility of respiratory diseases like COVID and influenza. This doesn't help with that at all and if anything would make it worse.)

I'm reading you as saying that you think this policy is bad on its overt purpose but ineffective, and that the covert purpose, testing the ability of the US federal government to regulate AI, is worth the cost of a bad policy for the information it yields.

I think preventing the existence of deceptive deepfakes would be quite good (if it would work); audio/video recording has done wonders for accountability in all sorts of contexts, and it's going to be terrible to suddenly have every recording subjected to reasonable doubt. I think preventing the existence of AI-generated fictional-character-only child pornography is neutral-ish (I'm uncertain of the sign of its effect on rates of actual child abuse).

There's an open letter at https://openletter.net/l/disrupting-deepfakes. I signed, but with caveats, which I'm putting here.

Background context is that I participated in building the software platform behind the letter, without a specific open letter in hand. It has mechanisms for sorting noteworthy signatures to the top, and validating signatures for authenticity. I expect there to be other open letters in the future, and I think this is an important piece of civilizational infrastructure.

I think the world having access to deepfakes, and deepfake-porn technology in particular, is net bad. However, the stakes are small compared to the upcoming stakes with superintelligence, which has a high probability of killing literally everyone.

If translated into legislation, I think what this does is put turnkey-hosted deepfake porn generation, as well as pre-tuned-for-porn model weights, into a place very similar to where piracy is today. Which is to say: The Pirate Bay is illegal, wget is not, and the legal distinction is the advertised purpose.

(Where non-porn deepfakes are concerned, I expect them to try a bit harder at watermarking, still fail, and successfully defend themselves legally on the basis that they tried.)

The analogy to piracy goes a little further. If laws are passed, deepfakes will be a little less prevalent than they would otherwise be, there won't be above-board businesses around it... and there will still be lots of it. I don't think there-being-lots-of-it can be prevented by any feasible means. The benefit of this will be the creation of common knowledge that the US federal government's current toolkit is not capable of holding back AI development and access, even when it wants to.

I would much rather they learn that now, when there's still a nonzero chance of building regulatory tools that would function, rather than later.

I went to an Apple store for a demo, and said: the two things I want to evaluate are comfort, and use as an external monitor. I brought a compatible laptop (a Macbook Pro). They replied that the demo was highly scripted, and they weren't allowed to let me do that. I went through their scripted demo. It was worse than I expected. I'm not expecting Apple to take over the VR headset market any time soon.

Bias note: Apple is intensely, uniquely totalitarian over software that runs on iPhones and iPads, in a way I find offensive, not just in a sense of not wanting to use it, but also in a sense of not wanting it to be permitted in the world. They have brought this model with them to Vision Pro, and for this reason I am rooting for them to fail.

I think most people evaluating the Vision Pro have not tried Meta's Quest Pro and Quest 3, and are comparing it to earlier-generation headsets. They used an external battery pack and still managed to come in heavier than the Quest 3, which has the battery built in. The screen and passthrough look better, but I don't think this is because Apple has any technology that Meta doesn't; I think the difference is entirely explained by Apple having used more-expensive and heavier versions of commodity parts, which implies that if this is a good tradeoff, then their lead will only last for one generation at most. (In particular, the display panel is dual-sourced from Sony and LG, not made in-house.)

I tried to type "lesswrong.com" into the address bar of Safari using the two-finger hand tracking keyboard. I failed. I'm not sure whether the hand-tracking was misaligned with the passthrough camera, or just had an overzealous autocomplete that was unable to believe that I wanted a "w" instead of an "e", but I gave up after five tries and used the eye-tracking method instead.

During the demo, one of the first things they showed me was a side-by-side (SBS) 3D photo with the camera pitched down thirty degrees. This doesn't sound like a big deal, but it's something that rules out there being a clueful person behind the scenes. There's a preexisting 3D-video market (both porn and non-porn), and it's small and struggling. One of the problems it's struggling with is that SBS video is very restrictive about what you can do with the camera; in particular, it's bad to move the camera, because that causes vestibular mismatch, and it's bad to tilt the camera, because that makes gravity point the wrong way. A large fraction of 3D-video content fails to follow these restrictions, and that makes it very unpleasant to watch. If Apple can't even enforce the camerawork guidelines in the first few minutes of its in-store demo, then this bodes very poorly for the future content on the platform.
