I enjoyed reading through people's views of the AGI Predictions. I know only a little about AI and a little bit more about SETI (Search for Extraterrestrial Intelligence), so I decided to make a similar poll. 

The First Contact scenario with extraterrestrial intelligence shares certain similarities with the emergence of AGI. In the following, I ask questions similar to those in the AGI post, plus additional questions derived from the SETI literature. These are intended to reflect on AGI from a new perspective.

In SETI, it is argued that First Contact could bring great disturbance and change, both positive and negative. After all, civilizations capable of contacting us across the oceans of time and space are expected to be far more advanced, just as AGI is expected to be "superior" (whatever that means) to baseline humans.

 

The form of First Contact

 

Its timeline

 

Our response

 

The consequences

 

Meta Questions

 

And finally, of course:


I was unconvinced at first, but in the end I see this post as a good way to slightly shift perspectives on AGI and make people think a bit more about their answers. Well done!

The twist at the end though. I had to go back and re-think my answers :P

Why do people have such low credences for "The effect of First contact is mostly harmful (e.g., selfish ETI, hazards)"? Most alien minds probably don't care about us? But perhaps caring about variety is evolutionarily convergent? If not, why wouldn't our "first contact" be extremely negative (given their tech advantage)?

My personal reasons:

  1. I assumed the question was about the first few decades after "first contact".
  2. A large chunk of my probability mass is on first contact being unintentional, and something neither side can do much about. Or perhaps one "side" is unaware of it. Like if we receive some message directed at no one in particular, or record the remnants of some extreme cosmic event that seems mighty unnatural.
  3. It feels like we're near certain to have created an AGI by then. I am unsure enough about the long-term time scales of AGI improvement, and their limits, that I can assign some credence to the AGI we make possessing relatively advanced technology. And so, it may be in a good bargaining position. If we make plenty of AIs, maybe they'll be less powerful individually, but they should still be quite potent in the face of a superior adversary.

You should alter the questions to make it clear whether "we" means humans or whatever we make that succeeds us.

Also, perhaps a question on whether "first contact" will be us detecting them without their being aware of it.

What’s SETI winter?

Like AGI winter, a time of reduced funding.

"Will world GDP double within 4 years after receiving a message from outer space?"

Given what many of us here expect about future economic growth in the coming century, and the answers about preconditions for finding ETI, I suspect the answer to this might be dominated by when the message arrives and what tech level humans will have reached by then.
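For a sense of scale, with $r$ the average annual growth rate, doubling within 4 years requires

\[
(1 + r)^4 = 2 \quad\Longrightarrow\quad r = 2^{1/4} - 1 \approx 19\% \text{ per year},
\]

well above the few percent per year that world GDP has grown historically.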

 

"We ask ETI "do we live in a simulation"? They answer "yes"."

My own guess is that they answer "mu" or some equivalent. Especially if they're AGI.