
Open thread, Apr. 03 - Apr. 09, 2017

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "


Final version of thesis going out within 4 days. Getting back into a semi-regular schedule after PhD defense, death in the family, and convergence of job-search on a likely candidate in quick succession. Astrobiology writing likely to restart soon. Possible topics include:

  • schools of thought in life origins research
  • the nature of LUCA
  • recent work on the evolution of potentiated smart animal lineages on Earth
  • WTF are eukaryotes anyway
  • the fallacies of the Fermi paradox/ 'great filter' concepts
  • the fallacies of SETI as it is currently performed

I'm thinking of writing a post on doing "lazy altruism", meaning "something having a somewhat lasting effect that costs the actor only a small inconvenience, and is not specifically calculated to do the most good overall - only the most good per this exact effort."

Not sure I'm not too lazy to expand on it, though.

Tyler Cowen and Ezra Klein discuss things. Notably:

Ezra Klein: The rationality community.

Tyler Cowen: Well, tell me a little more what you mean. You mean Eliezer Yudkowsky?

Ezra Klein: Yeah, I mean Less Wrong, Slate Star Codex. Julia Galef, Robin Hanson. Sometimes Bryan Caplan is grouped in here. The community of people who are frontloading ideas like signaling, cognitive biases, etc.

Tyler Cowen: Well, I enjoy all those sources, and I read them. That’s obviously a kind of endorsement. But I would approve of them much more if they called themselves the irrationality community. Because it is just another kind of religion. A different set of ethoses. And there’s nothing wrong with that, but the notion that this is, like, the true, objective vantage point I find highly objectionable. And that pops up in some of those people more than others. But I think it needs to be realized it’s an extremely culturally specific way of viewing the world, and that’s one of the main things travel can teach you.

I think no one would claim that the rationality community is at all divorced from the culture that surrounds it. People talk about culture constantly, and are looking for ways to change the culture to better address shared goals. It's sort of silly to say that that means it should be called the "irrationality community." Tyler Cowen is implicitly putting himself at the vantage point of a more objective observer with this criticism, which I find ironic.

Where Tyler is wrong is that it's not JUST another kind of culture. It's a culture with a particular set of shared assumptions, and it's nihilistic to imply that all cultures are equal no matter from what shared assumptions they issue forth. Cultures are not interchangeable. Tyler would also have to admit (and I'm guessing he likely would admit if pressed directly) that his culture of mainstream academic thought is "just another kind of religion" to exactly the same extent that rationality is, it's just less self-aware about that fact.

As an aside, I think Lumifer is a funny name. I always associate it with Lumiere from Beauty and the Beast, and with Lucifer. Basically I always picture your posts as coming from a cross between a cartoon candle and Satan.

It's sort of silly to say that that means it should be called the "irrationality community."

Notice the name of this website. It is not "The Correct Way To Do Everything".

it's not JUST another kind of culture. It's a culture with a particular set of shared assumptions

Don't ALL cultures have their own particular set of shared assumptions? Tyler's point is that the rationalist culture, says Tyler, sets itself above all others as it claims to possess The Truth (or at least know the True Paths leading in that general direction) -- and yet most cultures have similar claims.

Lucifer

Lucifer is the bringer of light (Latin: lux). Latin also has another word for light: lumen (it's the same root but with the -men suffix). Just sayin' :-P

But I will also admit that the idea of an all-singing all-dancing candelabra has merit, too :-)

It's sort of silly to say that that means it should be called the "irrationality community." Tyler Cowen is implicitly putting himself at the vantage point of a more objective observer with the criticism, which I find ironic.

It did seem to be a pretty bold and frontal critique. And "irrationality community" is probably silly. But I agree LW et al. have at times a religious and dogmatic feel. In this way the RC becomes something like the opposite of the label it carries. That seems to be his point.

As an aside, I think Lumifer is a funny name. I always associate it with Lumiere from Beauty and the Beast, and with Lucifer. Basically I always picture your posts as coming from a cross between a cartoon candle and Satan.

Yes. Yes.

If this wasn't exactly the mental image I had of Lumifer before, then it is now.

Maybe a bit more Satan than cartoon

¯\_(ツ)_/¯

(posted mostly for the challenge of writing this properly in markdown syntax. actually, it was quite easy)

A bit tongue-in-cheek, but how about taking Tyler's unfair label as a proposal?

We could start the rationality religion, without the metaphysics or ideology of ordinary religion. Our God could be everything we do not know. We worship love. Our savior is the truth. We embrace forgiveness as the game-theoretical optimal modified tit-for-tat solution to a repeated game. And so on.
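The forgiveness-as-game-theory line is, incidentally, real: in an iterated prisoner's dilemma where mistakes happen, "generous" tit-for-tat (forgive a defection with some probability) can outperform strict tit-for-tat, which locks into chains of mutual retaliation. A toy sketch using the standard Axelrod payoffs:

```python
import random

# Payoffs for one round of the prisoner's dilemma: (my_payoff, their_payoff)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's last move."""
    return history[-1] if history else "C"

def generous_tit_for_tat(history, forgiveness=0.1, rng=random.random):
    """Like tit-for-tat, but forgive an opponent's defection with
    probability `forgiveness` instead of always retaliating."""
    if history and history[-1] == "D" and rng() >= forgiveness:
        return "D"
    return "C"

def play(strat_a, strat_b, rounds=100):
    """Run an iterated game; each strategy sees the opponent's history."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa; score_b += pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b
```

With a forgiveness probability of zero this reduces to plain tit-for-tat; tuning that probability against noisy opponents is where the "optimal modified" part comes in.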

We thoroughly investigate and aggregate the best knowledge humanity currently has on how to live. And we create Rationality Temples worldwide. There will be weekly congregations, with talks on a sequence and discussions afterwards, on topics such as signalling, Bayesian thinking, and cognitive biases. We propose a three-step way to heaven on earth: identifying worthwhile causes, charting effective solutions, and taking action to achieve them. The lifetime goal is writing a sequence. Compassion meditation and visualisation prayer once per day. Haha, okay, perhaps I'm overdoing it.

Using the well-established concepts, rituals and memes of religion is easy to mock, but what if it is also an effective way to build our community and reach our goals?

what if it is also an effective way to build our community and reach our goals?

It surely is an effective way, since by this means all kinds of silly causes have been pursued. But creating a religion out of rationality (lowercase) would defeat its purpose: within a year, rationality would become a password to be learned by rote, and the nascent structures would solidify into attire.
Religions are appealing exactly because they exempt their members from thinking on their own and from accepting hard truths; rationality instead has more in common with martial arts: it is mostly a question of training and of learning to take many hits.

irrationality community

No one is more critical of us than ourselves. "LessWrong" is lesswrong for being humble about it. Hopefully that humility sticks around for a very long time.

No one is more critical of us than ourselves.

This seems untrue. For example, RationalWiki.

In the past I could also have pointed to some individuals (who AFAIK were not associated with RW, but they could have been) who I think would have counted. I can't think of any right now, but I expect they still exist.

Humility is good, but calibration is better.

Curious whether this is worth making into its own weekly thread. Curious what's being worked on - in personal life, work life, or just "cool stuff". I would like people to share; after all, we happen to have similar fields of interest and similar problems we are trying to tackle.

Projects sub-thread:

  • What are you working on this week (a few words or a serious breakdown; if you have a list, feel free to dump it here)?
  • What do you want to be asked about next week? What do you expect to have done by then?
  • Have you noticed anything odd or puzzling to share with us?
  • Are you looking for someone with experience in a specific field to save you some search time?
  • What would you say are your biggest bottlenecks?

I am working on:

  • learning colemak
  • vulnerability and circling (relational-therapy stuff); investigating the ideas around them.
  • trialling supplements: creatine, protein, citrulline malate. Adding in vitamins: C, D, fish oil, calcium, magnesium, iron. Distant-future trials: 5-HTP, SAMe. (Preliminary results were that SAMe and 5-HTP make me feel like crap.)
  • promoting the voluntary euthanasia party of NSW (Australia) (political reform of the issue)
  • emptying my schedule to afford me more "free" time in which to write up posts.
  • contemplating book topic ideas.
  • trying to get better routines going, contemplating things like "I only sleep at home", and cancelling all meetings and turning off all connected media for a week.

My biggest bottlenecks are myself (sometimes focus) and being confident about what will have a return vs. what won't (hence the many small experiments).

Translating a novel (really, a collection of essays) about WWII massacres of Jews in Kyiv & the rise of neo-nazism in post-Soviet republics (and much in between). It will take me a few months, probably, since this is a side job.

Overall impression: the past is easier to deal with, because it is too horrible. Imagine 10^5 deaths. Although I unfortunately know the place where it happened, and he includes personal stories (more like tidbits), so the suspension of disbelief takes some effort to maintain. But the "present" part - a series of the author's open letters to mass media and various officials about pogroms and the like that went unpunished - is hard: he keeps saying the same thing over and over. (Literally. And his style is heavy going for the reader.) After a while the eye glazes over and notices only that the dates and the addresses change, but the content doesn't, except for the growing list of people who had not answered.

Just had not answered.

Now this is - easy to imagine.

Maybe this isn't odd, but I had thought it would be the other way around.

the past is easier to deal with, because it is too horrible. Imagine 10^5 deaths.

" A single death is a tragedy; a million deaths is a statistic" -- a meme

Ah, but what if you have walked above their bones?

You always walk over bones, it's just that you know of some and don't know of others.

I don't know how you do it, but you seldom fail to cheer me up. Even a little bit.

Thanks.

From the book, on 'The Doctors' plot' of 1953:

Among the listed people who provided medical help to the party and state leaders, there was an abrupt addition - V. V. Zakusov, professor of pharmacology. He didn't take part directly in the leaders' treatment - he was at first only called in for an opinion, and given to sign the conclusion that the prescriptions the 'doctors-murderers' had issued were meant to hasten their patients' deaths. Vasili Vasilyevitch Zakusov took up the pen and, well aware of what lay ahead, wrote this: "The best doctors in the world would sign such prescriptions." In that moment he stopped being an expert and became a suspect. In jail, even after torture, he didn't withdraw his conclusion.

I'm working on

  • a graphing library for python (ggplot2's conceptual model with a pythonic API)
  • writing cliffs notes for Order Without Law (a book about how people settle disputes without turning to the legal system)
  • learning ukulele with Yousician (currently making extremely slow progress on the "double stops" lesson; I find it really hard to consistently strum two strings at once)
  • trying to write an essay about how heuristics can sometimes be anti-inductive (they become less accurate the more they're applied), and how we don't really seem to have any cultural awareness of this problem even though it seems important

I might have the last one complete by next week, but the others are fairly long-term projects.

Reminds me, we haven't had a bragging thread for some time.

What would you say are your biggest bottlenecks?

I think I'd like to see this as a separate topic (probably monthly, because such things take time).

However, just do as you wish, and then we'll see how it works.

I'm working on a podcast read-through of the web serial Worm. We've just put out the fifth episode of the series. It's becoming pretty popular by our standards.

I recently made an attempt to restart my Music-RNN project:

https://www.youtube.com/playlist?list=PL-Ewp2FNJeNJp1K1PF_7NCjt2ZdmsoOiB

Basically went and made the dataset five times bigger and got... a mediocre improvement.

The next step is to figure out Connectionist Temporal Classification and attempt to implement Text-To-Speech with it. And somehow incorporate pitch recognition as well so I can create the next Vocaloid. :V

Also, because why not brag while I'm here, I have an attempt at an Earthquake Predictor in the works... right now it only predicts the high frequency, low magnitude quakes, rather than the low frequency, high magnitude quakes that would actually be useful... you can see the site where I would be posting daily updates if I weren't so lazy...

http://www.earthquakepredictor.net/

Other than that... I was recently also working on holographic word vectors in the same vein as Jones & Mewhort (2007), but shelved that because I could not figure out how to normalize/standardize the blasted things reliably enough to get consistent results across different random initializations.
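For context, the binding operation in these holographic models is circular convolution, cheapest via FFTs. A minimal sketch with unit-length normalization after binding - one obvious candidate scheme, not necessarily the one that failed here:

```python
import numpy as np

def circ_conv(a, b):
    """Circular convolution: the binding operation in holographic
    reduced representations (Plate-style, as in Jones & Mewhort 2007)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def normalize(v):
    """Rescale to unit length so repeated bindings neither blow up nor vanish."""
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

rng = np.random.default_rng(0)
d = 512
# Environment vectors: Gaussian with variance 1/d, the usual HRR convention.
a, b = rng.normal(0, 1 / np.sqrt(d), (2, d))

bound = normalize(circ_conv(a, b))

# Unbinding with a's approximate inverse (the involution: reverse all but
# the first element) should recover something correlated with b.
inv_a = np.concatenate(([a[0]], a[:0:-1]))
recovered = circ_conv(inv_a, bound)
sim = recovered @ b / (np.linalg.norm(recovered) * np.linalg.norm(b))
# sim is well above chance for d = 512, despite the normalization step.
```

The normalization is scale-invariant with respect to the unbinding check (cosine similarity ignores magnitude), which is part of why this particular scheme is attractive; whether it gives results stable across random initializations is exactly the open question above.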

Oh, also was working on a Visual Novel game with an artist friend who was previously my girlfriend... but due to um... breaking up, I've had trouble finding the motivation to keep working on it.

So many silly projects... so little time.

Our article about using nuclear submarines as refuges in case of a global catastrophe has been accepted by the journal Futures, and its preprint is available online.

Abstract

Recently many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we will discuss the perspective of using military nuclear submarines or their derivatives to ensure the survival of a small portion of humanity who will be able to rebuild human civilization after a large catastrophe. We will show that it is a very cost-effective way to build refuges, and viable solutions exist for various budgets and timeframes. Nuclear submarines are surface independent, and could provide energy, oxygen, fresh water and perhaps even food for their inhabitants for years. They are able to withstand close nuclear explosions and radiation. They are able to maintain isolation from biological attacks and most known weapons. They already exist and need only small adaptation to be used as refuges. But building refuges is only “Plan B” of existential risk preparation; it is better to eliminate such risks than try to survive them.

Full text: http://www.sciencedirect.com/science/article/pii/S0016328716303494?np=y&npKey=6dcc6d35057e4c51bfd8d6933ab62c6d4a1604b5b71a40f060eb49dc7f42c9a1

Back when the SeaSteading forum was going, one of the folks was working on building cement submarines, as a pre-design to growing them like coral.

"Modafinil-Induced Changes in Functional Connectivity in the Cortex and Cerebellum of Healthy Elderly Subjects"

http://journal.frontiersin.org/article/10.3389/fnagi.2017.00085/full

"CEDs may also help to maintain optimal brain functioning or compensate for subtle and or subclinical deficits associated with brain aging or early-stage dementia."

"In the modafinil group, in the post-drug period, we found an increase of centrality that occurred bilaterally in the BA17, thereby suggesting an increase of the FC of the visual cortex with other brain regions due to drug action. FC analysis revealed connectivity increase within the cerebellar Crus I, Crus II areas, and VIIIa lobule, the right inferior frontal sulcus (IFS), and the left middle frontal gyrus (MFG)."

"These frontal areas are known to modulate attention levels and some core processes associated with executive functions, and, specifically, inhibitory control and working memory (WM). These functions depend on each other and co-activate frontal areas along with the posterior visual cortex to re-orient attention toward visual stimuli and also enhance cognitive efficiency.

Data on behavioral effects of modafinil administration in our study group are missing and, at this stage, we can only provide theoretical speculations for the functional correlates of the regional activations that we have found to be promoted by the drug."

I'm still mulling over the whole "rationalism as a religion" idea. I've come to the conclusion that there are indeed two axioms shared by the rational-sphere that we cannot quite prove, and whose variations produce different cultures.
I call them "underlying reality" and "people are perfect".
"Underlying reality" (U) refers to the existence of a stratum of reality that is independent of our senses and our thoughts, whose configuration gives rise to the notion of truth as correspondence.
"People are perfect" (P) instead refers to the truth of people's mental ideation: whether everyone (or some subset of people) is always right.
Here's a rough scheme:

  • U, P: religion. Our feelings directly reflect the inspiration of a higher source of truth.
  • U, not P: rationalism. We are imperfect hardware in a vast and mostly unknowable world.
  • not U, P: the most uncertain category. Perhaps magic? There's no fixed, underlying truth, but our thoughts can influence it.
  • not U, not P: postmodernism. Nothing is true and everything's debatable.

I might make this a little more precise in a proper post.

In "Strong AI Isn't Here Yet", Sarah Constantin writes that she believes AGI will require another major conceptual breakthrough in our understanding before it can be built, and that it will not simply be a scaled-up or improved version of the deep learning algorithms that already exist.

To argue this, she makes the case that current deep learning algorithms have no way to learn "concepts" and only operate on "percepts." She says:

I suspect that, similarly, we’d have to have understanding of how concepts work on an algorithmic level in order to train conceptual learning.

However, I feel that her argument lacks tangible evidence for the claim that deep-learning algorithms do not learn any high-level concepts. It seems to be based on the observation that we currently do not know how to explicitly represent concepts in mathematical or algorithmic terms. But if we are to take this as a belief, I think we should try to predict how the world would look different if deep-learning algorithms could learn concepts entirely on their own, without us understanding how.

So what kinds of problems, if solved by neural networks, would surprise us if this belief were held? Well, to name a couple of experiments that surprised me, I would point out DCGAN and InfoGAN. In the former, the authors are able to extract visual "concepts" from the generator network by taking the latent vectors of all the examples that share one kind of attribute of their choosing (in the paper they take "smiling" / "not smiling" and "glasses" / "no glasses") and averaging them. They can then construct new images by doing vector arithmetic in the latent space with this vector and passing the result through the generator, so you can take a picture of someone without glasses and add glasses without altering the rest of their face, for example. In the second paper, the network learns a secondary latent variable vector that extracts disentangled features from the data. Most surprisingly, it seems to learn concepts such as "rotation" (among other things) from a data set of 2D faces, even though there is no way to express the concept of three dimensions in this network or to encode it as prior knowledge.
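The DCGAN arithmetic itself is only a few lines. In the sketch below, the "generator" is a stand-in random linear map and the latent codes are made up, since the point is just to show the averaging and vector-arithmetic steps rather than a real trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 100  # DCGAN's latent vectors are 100-dimensional

# Stand-in for the trained generator: any fixed map from latent space to
# "image" space illustrates the arithmetic. A real DCGAN would instead run
# z through a stack of transposed convolutions.
W = rng.normal(size=(64 * 64, dim))
def generate(z):
    return W @ z

# Hypothetical latent codes for labelled examples; in the paper these are
# the z vectors whose generated images were tagged "glasses" / "no glasses".
z_glasses = rng.normal(size=(20, dim))
z_no_glasses = rng.normal(size=(20, dim))

# Average each group and subtract to get an attribute direction.
glasses_direction = z_glasses.mean(axis=0) - z_no_glasses.mean(axis=0)

# "Add glasses" to a new face by vector arithmetic in latent space.
z_face = rng.normal(size=dim)
image_with_glasses = generate(z_face + glasses_direction)
```

With a real DCGAN, `generate` would be the trained network and the labelled latent codes would come from inspecting generated samples; the arithmetic is unchanged.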

Just this morning, in fact, OpenAI revealed that they had done a very large scale deep-learning experiment using multiplicative LSTMs on Amazon review data. More surprising than the fact that they had beaten the benchmark accuracy on sentiment analysis was that they had done it in an unsupervised manner, by using the LSTMs to predict the next character in a given sequence of characters. They discovered that a single neuron in the hidden layer of this LSTM seemed to extract the overall sentiment of the review, and was somehow using this knowledge to get better at predicting the sequence. I would find this very surprising if I believed it were unlikely or impossible for neural networks to extract high-level "concepts" from data without explicitly encoding them into the network structure or the data.
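Mechanically, the "sentiment neuron" readout is just indexing one coordinate of the hidden state while the model reads text; the hard part was the training. A toy sketch with made-up weights and a minimal RNN cell (OpenAI's actual model was a large multiplicative LSTM trained by next-byte prediction, and the unit index below is hypothetical; they located theirs empirically):

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, vocab = 16, 256  # toy sizes; the real model was far larger

# Stand-in weights; the real ones were learned by predicting the next
# byte of Amazon reviews.
Wx = rng.normal(0, 0.1, (hidden, vocab))
Wh = rng.normal(0, 0.1, (hidden, hidden))

def hidden_states(text):
    """Run a minimal RNN over the bytes of `text`, recording the hidden
    state after each step."""
    h = np.zeros(hidden)
    states = []
    for ch in text.encode("utf-8"):
        x = np.zeros(vocab); x[ch] = 1.0  # one-hot byte input
        h = np.tanh(Wx @ x + Wh @ h)
        states.append(h.copy())
    return np.array(states)

# The finding: after training, a single coordinate of h tracks sentiment.
# The "probe" is just indexing that unit as the text is read:
SENTIMENT_UNIT = 7  # hypothetical index
trace = hidden_states("This product was great")[:, SENTIMENT_UNIT]
```

In the trained model this trace swings as positive or negative phrases are read; which unit carries the signal is found empirically, e.g. by fitting a sparse logistic regression on the hidden states.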

What I'm getting at is that we should be able to set benchmarks on certain well-defined problems and say "any AI that solves this problem has done concept learning and does concept-level reasoning", and update based on what types of algorithms solve those problems. And when that list of problems gets smaller and smaller, we really need to watch out for whether we have redefined the meaning of "concept", or drawn the tautological conclusion that the problem didn't require concept-level reasoning after all. I feel like that has already happened to a certain degree.

The problem with AGI is not that AIs have no ability to learn "concepts", it's that the G in 'AGI' is very likely ill-defined. Even humans are not 'general intelligences', they're just extremely capable aggregates of narrow intelligences that collectively implement the rather complex task we call "being a human". Narrow AIs that implement 'deep learning' can learn 'concepts' that are tailored to their specific task; for instance, the DeepDream AI famously learns a variety of 'concepts' that relate to something looking like a dog. And sometimes these concepts turn out to be usable in a different task, but this is essentially a matter of luck. In the Amazon reviews case, the 'sentiment' of a review turned out to be a good predictor of what the review would say, even after controlling for the sorts of low-order correlations in the text that character-based RNNs can be expected to model most easily. I don't see this as especially surprising, or as having much implication about possible 'AGI'.

Humans are general intelligences, and that is exactly about having completely general concepts. Is there something you cannot think about? Suppose there is. Then let's think about that thing. There is now nothing you cannot think about. No current computer AI can do this; when they can, they will in fact be AGIs.