This blog post is cross-posted from my personal blog. It will touch on two related topics:

  1. Why and how I applied to MIRI and failed to secure an internship.
  2. My experience at the AI Risk for Computer Scientists workshop (AIRCS).

If you're only interested in the AIRCS workshop, you may skip directly to the second section. Ideally, I'd have liked to make two separate entries, as they may appeal to different interests. However, the two topics are deeply intertwined for me, and I could not make a clean cut in this experience. I should also note that my failing to secure an internship at MIRI has probably had a drastic impact on how I write about it, if only because I would have been more constrained in what I describe had I gotten the internship.

With respect to people's names, I'll adhere to the following rule: I only mention names if what the person said was said publicly. That means that for books, public Facebook pages or lectures, I will assume I can use the name of the author or teacher, while I will keep names to myself for private discussions. You should probably also read Buck's comment replying to this post.

MIRI and me

I discovered MIRI through Eliezer Yudkowsky, as I first began reading HPMOR and then Rationality: From AI to Zombies. Like almost everyone, I'm not sure what it is MIRI exactly does. I know at least that MIRI's stated goal is to save the world from unaligned AGI. But whatever it is they concretely do, it seems quite fun - I mean, it seems to involve type theory and category theory. I also read some articles they wrote, and skimmed through many others. While interesting, I never caught enough details to actually imagine how to even start implementing anything they speak of. Reading some of their writings reminded me of several epistemology books I read years ago, but written more precisely, with clear code in mind, and for a computer-science-savvy audience.

As I said, fun stuff, fun work!

In February 2019, Eliezer Yudkowsky shared on Facebook a post by Buck Shlegeris stating that "I think that every EA who is a software engineer should apply to work at MIRI, if you can imagine wanting to work at MIRI." (and some more stuff). When I read that, I thought I should at least give it a try. I really didn't know how I could help them given my professional background - mostly logic and automata theory - but if they said we should give it a try anyway, why not. I must admit that I was still skeptical back then, and didn't know exactly what they do, but I thought I would eventually come to get it. And even if it did not seem that they'd save the world from anything in the end(1), it still seemed like a place I'd love to work, assuming the people there are similar to the LessWrongers I met at the European LessWrong Community Weekend.

Note that Buck Shlegeris's message does not directly concern me. I'm not EA, only EA-adjacent. I still fail to see any effective actions I could take apart from giving some money, and when I applied for 80k career coaching, they told me they couldn't help(2). It also does not concern me because I used to be a postdoc researcher who sometimes programmed, not a software engineer per se. But I wanted to let them decide whether I'd be a good fit or not.

The application process went as follows. It started with a Triplebyte quiz, which was extremely easy; I think there were at most two questions whose answers I didn't know. The second part was two one-hour calls with a MIRI researcher. The first call was a general presentation of myself: how I had heard of MIRI, why I was interested in working there, what my past experiences were, and so on. I told the interviewer something like:

I am not even convinced that MIRI's work is important. At best, I am convinced that you are convinced that it is important. But even if EY is a good writer whom I admire, the fact that he is set on the importance of his mission is not sufficient for me to think he is right. Furthermore, MIRI gets money by convincing people that its work is important, which means that MIRI has a strong incentive to be persuasive, whether or not your beliefs are true, and whether or not you still hold them. I do believe that when MIRI was created, it was not clear they would ever get money. If EY is as good as he seems to be when he writes, he could probably have made money in far easier ways. So the best argument I currently have regarding the importance of AGI alignment is that the founders of MIRI thought it was important at the time of MIRI's creation.

Honestly, after saying all of this, I thought my application was over and we would stop there, but he seemed okay with it. The second interview was more technical: the interviewer asked me plenty of questions on various topics pertaining to computer science, programming and theory. He also gave me a short programming exercise, which I solved (I won't say what the questions and exercise were, for obvious reasons). I should emphasize that my answers were far from perfect. When discussing with friends, I learned that I had gotten some answers wrong; for instance, for questions related to the Coq language, I gave the closest answer I knew, which was Haskell/OCaml-related, whereas Coq's construction is a great generalization of it as far as I understood(3). One question asked how I would create a data structure allowing efficient access to some functions. I gave a correct answer; I knew that the worst-case time complexity was logarithmic, but I was not able to prove it. After the interview, I realized that the proof was actually extremely easy.

The interviewer told me that, before making a decision, MIRI wanted to meet me at a workshop, so they invited me to AIRCS, and that they also wanted me to take a two-day-long programming test. I still don't understand that: what's the point of inviting me if I fail the test? It would seem more cost-efficient to wait until after the test to decide whether they wanted me to come (I don't think I ever asked this out loud; I was already happy to get a free trip to California). I want to emphasize that at this point, I still had no idea why they had found my application interesting, or what I would actually do if I worked for them. I kind of hoped that I'd get an answer eventually. I was noticing my confusion, as a good rationalist should. Alas, as always when I notice my confusion, I stay confused and can't figure out what to do with it.

The last part of the interview was a two-day-long programming problem. There was a main programming task with two options; I did only one of them. The second one seemed far more interesting, but with the knowledge I have today, I fear I would have needed a week to read enough and implement it correctly. As I told my interviewer, it relates to a topic I do not know well. There was one thing I particularly appreciated: I was being paid for the whole time spent doing this programming exercise for them. This is a practice I've never seen elsewhere. I do not wish to disclose how much I have been paid, but I'll state that two hours at that rate was more than a day at the French PhD rate. I didn't even ask to be paid; I hadn't even thought that being paid for a job interview was possible. The task had to be done in 14 hours, i.e. two days of 7 hours of work each. I did not know whether that rate would also apply if I became an intern, but the pay was nice regardless, especially since they paid very quickly - payments take three months back in France(4). As a quick note here: thank you to everyone who donated to MIRI :p.

Since it seemed that MIRI was actually interested in my application, I figured I should read more about MIRI. I had mostly been reading what MIRI wrote, but to have an informed opinion, I thought I should read what other people wrote about it. Sometimes EY makes fun of the sneer club on Twitter. I also saw a copy of an email, sent to a lot of people related to CFAR, about some creepy stuff that allegedly occurred at MIRI/CFAR. I wanted to get an idea of what I was getting myself into, so I started to browse through all of it. My inner Slytherin argues that the sneer club is filled with rationalists who post unimportant stuff so that the actually important stuff gets lost in the middle of everything else. I'm not going to write down everything I read, but let me give a few examples to explain why I still went through with the application process. I read that MIRI's work is not important; this one I might agree with, but if I'm getting paid and there is even a one percent chance the work is important, that's still far better than most other jobs I could find now. MIRI is male-dominated... well, according to their "team" page, it's hard to argue with this one either; however, given that I'm looking for a job as a computer programmer, I fear that's a problem I'll have everywhere. "At a MIRI workshop, some guy touched a woman without her consent, did it again after she asked him to stop, and was banned for this." I do not want to belittle the importance of sexual assault, but if MIRI's reaction was to ban him, I guess that's better than most conferences I've heard of. I have no more details, so I don't know how easy it was to report the assault to staff, but I assume that if there had been a problem there, the author would have written about it. I also read that some people have mentioned the idea that infohazards could be used to blackmail people. I kind of assume that LessWrong/rationality groups are the best place for this kind of scam, but I'm still okay with this, especially since I have not heard of anyone actually implementing the idea.

Long story short: I can understand why LW/MIRI sounds like a cult (after all, they ask for money to save the world). But since I was actually being offered things by them, and got the best wage I'd ever had for the work test, it was hard to be frightened by this cult idea. I did sometimes spend money on LW-related things, but nothing seems like a slippery slope. I mean, I paid for EY's book, but I paid far more for my entire Tolkien/Pratchett/Gaiman/Asimov... collection, and no one ever told me those were cults. I also sent a few hundred euros to Berlin's LW community, but for that price I got 3 nights at a hostel and 4 days of food. That seems quite reasonable, so I highly doubt they are making huge profits on it.

Let's now skip a whole month and go to AIRCS.

AIRCS

Before discussing AIRCS, I want to stress that this is a personal view, which may not represent the way anyone else felt about AIRCS. In particular, the fact that, after meeting me at AIRCS, MIRI sent me an email telling me I would not get an internship has probably affected my views - if only because I would probably be more MIRI-aligned if I were actually at MIRI now. I tried to write as much as possible before receiving MIRI's answer, to remain as "objective" as possible; however, they were quite quick to answer, and I'm a slow writer. Anyway, as you'll see below, the simple fact that I was at AIRCS to help them decide whether to hire me meant that my experience was distinct from the others'.

Now that this is written down, I can begin to describe AIRCS.

Normal life

Before going into the activities on the workshop's schedule, I'm going to describe the workshop generally. After all, the workshop was 5 nights and 4 days long, so of course we spent time doing other things. Usually, there were 1h10-long activities separated by 20-minute breaks, while lunch and dinner breaks were longer. The last night was an "after party" instead of workshops, and we had two breakout sessions before that. In those, people were free to offer whatever they wanted: there was a lecture about category theory and quantum circuits, if I remember correctly, and there was some coding in Haskell. I said I would gladly do an introduction to Anki, but no one found this more interesting than the other stuff - it still led to some Anki-related questions from some participants later. During some breaks, there were "two-minute clean-ups", where everyone cleaned up whatever they saw near them. Usually, the clean-up was announced in the main room, which means that people far from it didn't hear it and so didn't clean what was near them. Since most of the mess was actually in the main room, that was not an issue. At the end of the two-minute clean-up, we resumed the break.

AI Risk for Computer Scientists is a workshop which, as far as I can tell, is not really just for computer scientists. A Facebook friend of mine, the president of EA France, told me she was invited even though she has a PhD in economics and does not do any computer science. Most of the people there were computer scientists, however, and most of the non-computer scientists were mathematicians. A lot of the small talk was related to math and computer science, and it was often far more technical than the workshop sessions themselves, which was quite unexpected to me. Edward Kmett was there and spoke a lot about Haskell and the theory behind programming languages, which was extremely interesting and not related to AI at all.

A lot of people at the event were vegan, and so at each meal there was a huge choice of good vegan food, which was incredible to me! So I was all the more intrigued as to why we almost exclusively used throw-away utensils, plates and bowls, as most vegan texts I've read are about how meat harms the environment. I assume that if EA cares about animal suffering in itself, then using throwaways is less of a direct suffering factor.

I had at least four goals when arriving at this workshop:

  • The first one was to discover what could actually be done to advance research related to AI safety.
  • The second one was to understand why people believe that there is such a high risk that an unaligned AGI will be created in the foreseeable future.
  • The third one was to get an internship at MIRI; and thus to draw a salary again.
  • The last one was to discover and enjoy California, the Bay Area, and this rationalist community I'd read a lot about.

Goal four was mostly a success, goal two was kind of a success, and the other two were complete failures.

Workshop activities

Almost all activities were done in small groups of either half, a third, or a quarter of the participants, so the lectures/activities were repeated two to four times during the day. This way, we all did the same activities, and being in small groups also allowed real discussions with the instructor and the other participants. Groups changed a lot, so that we also got to meet all the participants.

As far as I understand it, plenty of people panic when they really understand what AI risks are. So Anna Salamon gave us a rule: We don't speak of AI safety to people who do not express the desire to hear about it. When I asked for more information, she specified that it is okay to mention the words "AI Safety", but not to give any details until the other person is sure they want to hear about it. In practice, this means it is okay to share a book/post on AI safety, but we should warn the person to read it only if they feel ready. Which leads to a related problem: some people have never experienced an existential crisis or anxiety attack in their life, so it's all too possible they can't really "be ready". On the same note, another MIRI researcher answered me - when I asked why they don't hold a lecture explaining the imminence of AI - that they do not want to be the one explaining to everyone why we're all going to die in the next few decades. On the one hand, I guess I understand why; on the other hand, I'd say that's kind of the point of being there!

To give a more concrete example of how they tried to fight potential panic: during one of the activities, Anna Salamon asked us to list all the reasons why it could be a bad idea to think about AI risk. An obvious answer was that thinking about AI may help AGI get created, and so our defeat would come sooner. Another answer, as explained above, was that some people would panic and not be able to do anything anymore. One of my answers was about a theory we are not allowed to discuss, which, if it were true, would mean that it's a very bad idea to think about it at all. However, as I expected, she didn't write it on the whiteboard. I still felt that, to be exhaustive, I had to mention it to her.

As you can see, the staff really cared about us and wanted to be sure that we would be able to manage the thoughts related to AI risk. This led to a lot of talks which were not directly related to AI. Given the importance of these non-AI-related talks, I'm going to start describing the workshop by listing examples of activities which were not AI-related.

I do not know how much of what we saw is usual at CFAR. There were talks on bucketing, inner sim, world models, question substitution, double crux, focusing... I'm not going to try to explain all of this, as I would probably do a bad job; just go to a CFAR event to learn about them. Or read their book - it seems they just published it. To test those tools, we had to apply them to specific problems, either an AI-related problem or a personal one. I didn't even know how to start thinking about AI problems, so I chose personal problems.

Sadly, there were some timing issues, and the instructor usually did not have time to see every participant to help them with the exercises.

If all of this helps people think clearly about problems and how to solve them, then it seems strange to me that CFAR uses it only for AI stuff. If it does help to deal with the horror of unaligned AI, why isn't it used to talk about global warming? While I admit that the narrative about AI risk is even worse than the one about global warming, I do think that if you really try to grasp the whole impact of global warming, you also need some mental help to face the horror it entails. The same would be true if you wanted to consider global poverty, slavery, and so many other things in our world.

There were also a lot of so-called "circles". I've heard that circles are not usually done at CFAR, because circles are less effective at mainstream workshops than at the AIRCS workshop. We were told when we started circling that we would not really learn what a circle is, that all circles are different; that we could see plenty of circles and they would not seem similar, the same way we could discover a lot of music and think there is nothing in common between pieces. For a computer scientist, this seemed extremely strange and dubious. How can I evaluate whether the exercise was done correctly or not? Or even what the purpose of the exercise was? As far as I understand, one goal of circling is to be able to notice what we feel and to put words on it. In particular, circling is also about going meta, and being able to think about the current thinking process. I do feel that's something I was already quite good at; it looks a lot like blogging - I mostly blog in French, so most of you can't really check for yourselves. When I want to write about something and fail, I usually try to understand what my problem is, and then I write to explain what I would have liked to write and why I failed to do it. This is also something I very often do with one of my partners, which sometimes leads to conversations too meta for my taste, but which helps discussion in the long run. My point here is that it felt so much like something I often do that I didn't really feel it was new to me. Or maybe I just didn't understand what circling is and used something I already knew instead. Who knows? There were two circles in small groups the first two days, and a big, long circle with everyone the third day. Someone was more vocal and honest than me, and asked why we were doing this. We only have four days, they said, and we are supposed to be talking about AI. Personally, I decided to trust MIRI and suspend my judgement, expecting that at some point I'd get the meaning of all of this.

Job interview

Okay, let's go back to the job interview part. You'll now understand why both "describing AIRCS" and "applying to MIRI" had to be in the same blog post.

Remember that I was actually at AIRCS because MIRI wanted to meet me before making any decision about hiring me. That means that, during circles, I was asked to be as honest as possible about my feelings while also being considered for an internship. This is extremely awkward. Similarly, to discover the CFAR tools, they asked us to consider a personal problem we currently had. One of my biggest problems is getting a salary again, which probably means getting a job. Other personal problems (like finding a town I want to live in) are strongly related to the problem of having a job. But the staff members teaching us the CFAR material may also eventually have to give their read on me. Which means that the person helping me learn how to solve my problems is potentially part of the problem's solution. The last day, we had to fill in a feedback form about the workshop... That is, they asked me to fill in a feedback form about a part of my job interview BEFORE I actually had any answer(5).

All of this was extremely awkward, and it really made the workshop hard to enjoy, as I wondered time and time again how I should talk about those problems, when talking about them could effectively change my chances of getting the job.

I appreciate and entirely realize that the AIRCS workshop is not just a recruiting process. I do believe that all staff members were honest in saying they were there to meet people and teach interesting stuff, and that no one was actually taking notes on me. No one had any thought process focused on me either. The first time I spoke of this problem to someone from MIRI, during a 1-to-1 discussion, he told me that I should not consider this a job interview but rather a place to socialize and potentially network. But just because they do not think of AIRCS as a job interview does not mean AIRCS is not a job interview. Case in point: half a week after the workshop, the recruiter (Buck) told me that "After discussing some more, we decided that we don't want to move forward with you right now". So the workshop really was what led them to decide not to hire me.

To be more exhaustive, there is a second possibility: maybe after the two-day exam, the recruiter already knew they would not hire me. However, they do want a lot of people to attend AIRCS, for reasons I'll discuss later. Thinking that I would not be interested in attending if I knew I didn't have the job, they might not have let me know. However, this would require the recruiter to lie really consistently during the whole workshop, which would be quite impressive.

During a trip to the beach, I finally had the courage to tell the recruiter that AIRCS is quite hard for me to navigate, being both a CFAR workshop and a job interview. I assumed that, since MIRI wanted honest communication, telling them my thoughts was the best I could do anyway. They answered that they could see how having activities such as circles could be a problem, but that there was nothing they could do about it. I answered two things. First: they could mention to people coming to AIRCS as part of a job interview that some things will be awkward for them, but that they get the same workshop as everyone else, so they'll have to deal with it. And I also answered that my problems would be solved if I already had an answer. Since I knew that there was no way for them to already confirm that I had the internship, the conclusion would have been that the workshop would be nicer if they could confirm I was not going to get it. I immediately added, by reflex, that I was not asking for that; that I was just being exhaustive. I was being dishonest there: I was pretty sure I wouldn't get the job, and actually knowing it would have made things quite a bit simpler. However, I had the feeling that asking this would have decreased my probability of being hired, and so I avoided doing it. Furthermore, I do understand why it's generally a bad idea to tell unknown people in your building that they won't get the job. Some may react pretty badly and damage your property. There was no way for me to convince them that it would be safe to tell me I would not get hired, if that were the case. Furthermore, other people were there in order to work at MIRI, and he could not just give information to me and ignore everyone else.

I do not believe that my first suggestion will be listened to. During a discussion on the last night, near the fire, the recruiter was talking with some other MIRI staff and participants. And at some point they mentioned MIRI's recruiting process. I think that they were saying that they loved recruiting because it leads them to work with extremely interesting people, but that it's hard to find them. Given that my goal was explicitly to be recruited, and that I didn't have any answer yet, it was extremely awkward for me. I can't state explicitly why; after all, I didn't have to add anything to their remark. But even if I can't explain why I think that, I still firmly believe that it's the kind of thing a recruiter should avoid saying near a potential hire.

Okay, now let's go back to the AI-related activities. The disparity in the participants' knowledge was very striking to me. One of the participants was doing a PhD related to AI safety, while another knew the subject so well that they could cite researchers and say "in research paper A, person B states C" in a way relevant to the current discussion. And some other participants were totally new to all of this and came out of pure curiosity, or to understand why EA talks so much about AI. While the actual activities were designed to be accessible to everyone, the discussions between participants were not always so, because of course people who already know a lot about those topics talk about the facts they know and use their knowledge when they discuss with the staff.

Most discussions were pretty high-level. For example, someone gave a talk where they explained how they tried and failed to model and simulate the brain of C. elegans, a worm with an extremely simple and well-understood brain. They explained a lot of things about biology to us, and how they had been trying to scan a brain precisely. If I understood correctly, they told us they had failed due to technical constraints, and what those constraints were. They believe that, nowadays, we could theoretically create the technology to solve this problem; however, no one is interested in said technology, so it won't be developed and made available on the market. Furthermore, all of their research was done before they discovered AI safety, so in their view it's good that no one created such a precise model of a brain - even if just a worm's.

Another talk was about a possible AI timeline. This paragraph is going to describe a 70-minute-long talk, so I'll be omitting a lot. The main idea of the timeline was that if you multiply, on one side, the number of times a neuron fires every second, the number of neurons in a brain, and the number of living creatures on Earth, and on the other side the number of processors, their speed, and the memory of computers, then you obtain two numbers you can compare. According to this comparison, we should already be able to simulate a human with today's supercomputers, and with tomorrow's ordinary computers. Similarly, with tomorrow's supercomputers we should be able to simulate the whole evolution of life. However, it seems no one has done anything close to those simulations. So Buck Shlegeris, who gave the talk, asked us: "all of this is just words, what are your thoughts?" and "what's wrong with this timeline?". Those are questions he asked us a lot: he wanted our opinions, and to know why we disagreed. That's quite an interesting way to check how we think, I guess, and an extremely hard one to answer, because there were so many things wrong with the talk we heard. I think I answered that one does not only have to simulate brains, but also all of their environments, and the interactions between brains, as opposed to just each brain in its own simulation. But honestly, my real answer would be that this is just not how science and technology work. You can't just hand-wave ideas and expect things to work. By default, things do not work. They work when an actual precise effort has been made, because someone had a plan. I used to teach introductory programming, and the talk reminded me of some of my students asking why their program did not work - while I was trying to understand why they believed it would work in the first place. Even with parallel supercomputers, having a program simulate the evolution from a single cell to humanity is a task that does not seem trivial, and there is no reason for it to happen by itself. The fact that this does not happen is the default. I guess I could also have answered that most programs are buggy when they start, and if you need to use a supercomputer for weeks, even a really passionate hacker is not going to have enough resources today to actually do this work. However, I didn't give this long answer; I didn't feel it was something I could easily convey with words - that the whole point made no sense. And it probably would not have helped my recruitment either.
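
To make the shape of that estimate concrete, here is a minimal sketch of the kind of back-of-the-envelope comparison the talk relied on. The figures below are rough orders of magnitude that I am supplying myself, not the talk's actual numbers, and the talk extended the same arithmetic to all living creatures rather than a single brain:

```python
# A toy version of the "multiply everything and compare" estimate.
# All numbers are rough, commonly cited orders of magnitude that I am
# plugging in myself; the talk's figures may have differed, and a neuron
# "event" is of course not directly comparable to one floating-point operation.

human_neurons = 8.6e10          # ~86 billion neurons in a human brain
max_firing_rate_hz = 1e2        # assume up to ~100 spikes per neuron per second
brain_events_per_s = human_neurons * max_firing_rate_hz   # ~8.6e12 events/s

supercomputer_flops = 1e17      # a top supercomputer, on the order of 100 petaFLOPS

print(f"brain events per second:  {brain_events_per_s:.1e}")
print(f"supercomputer FLOP/s:     {supercomputer_flops:.1e}")
print(f"ratio (computer / brain): {supercomputer_flops / brain_events_per_s:.1e}")
```

On numbers like these, the computer side "wins" by several orders of magnitude, which is why the argument sounds compelling on paper; my objection was about everything such a multiplication leaves out - synapses, the environment, interactions between brains, and the sheer engineering effort needed to make any such simulation actually run.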

AI Discussion

I'm not going to list every lecture in detail. One major aspect, however, is that even after all of them I didn't know what one can concretely do to help with AI safety/alignment problems. Nor did I know why they believed the risk was so high. So I went to speak with staff members directly and asked them those questions. One staff member told me that I could go to AI alignment forums, read about the problems being discussed and, if I had any ideas, write them down. If I had been really honest, I would have asked "then why do you want to hire developers? I was hoping to learn that, but I still have no idea what I would be doing if hired". I did not, because at some point it felt like it was in bad taste to recall that this was my initial goal, and I was pretty sure that I would not be hired anyway.

It would be interesting to read the forums and think about those questions myself. As I wrote earlier, MIRI's stuff does seem fun to consider. But then again, there are tons of fun things to consider in the world, and plenty of things I don't know yet and want to learn. Currently, what I must learn, I believe, is whatever will help me get a job. It won't save lives, but it is an interesting prospect to me. And even once I have a stable job, there will be so many more things to be curious about; I'm not sure I'll take the time to work on AI. After all, if they don't think I can help them by doing this full-time, I don't see how I could help by doing it part-time.

The question I asked most during discussions was something along the lines of:

I do agree that an unaligned AI would be extremely dangerous. I will accept the idea that spending the careers of a hundred researchers on this field is totally worth it if there is even a two percent chance of every life being destroyed in the next century. Can you explain to me why you believe the probability is at least two percent?

I think I was being quite generous with my question: low probability, long time-frame. According to the rule of Anna Salamon's I gave earlier, if you don't feel ready to hear the answers I received and understand why there is such a risk, you should skip to the next section.

The first person who answered this question told me they had followed the recent progress of machine learning and thought that we are fewer than ten breakthroughs away from AGI, so it is going to occur imminently. That also means I can't be convinced the same way, as I have not watched the field progress at that speed myself. Another answer I got was that there is so much progress happening so quickly that AGI is bound to occur. Personally, I do agree that progress is quick, and I expect to be amazed and astonished many more times in the years to come. However, that does not necessarily mean we are going to reach AGI. Let's say that progress grows exponentially fast, and that AGI sits at aleph_0 (the smallest infinite cardinal). Then the speed does not really matter; the function can't reach aleph_0. I was answered that anyway, even if AI gets stuck again and fails to reach AGI, we will reach AGI as soon as we know how to model and simulate brains. I must admit that I found Age of Em extremely interesting, as well as frightening. However, I have no certainty that we will reach ems one day. In both cases, AGI and ems, I see the same problem: I can imagine that we need fundamental breakthroughs to make real progress. However, while private companies are happy to make more and more amazing technological creations, it does not seem to me that they are willing to do decades-long research which won't bring in any money for years. If you need a biological/chemical discovery which requires something as large as the Large Hadron Collider, then I don't see any private company financing it, especially since, once the thing is discovered, other companies may use it. On the other hand, as far as I know, universities have less and less funding worldwide, and professors need to be able to justify that they'll publish papers soon and regularly. All in all, I really don't think research is going to be able to make the fundamental discoveries that would allow the creation of the new kind of science probably required for AGI/ems. I had this last discussion just after the introduction to "double crux", and I don't know whether that introduction helped, but after this discussion I understood that one big difference between me and most of the staff is that I'm far more pessimistic about the future of research than they are.
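
To spell out the analogy I was making (my own formalization, not anything presented at the workshop): model progress at time t as an exponential f(t) = e^{ct}, and place "AGI" at a genuinely unreachable threshold (I used aleph_0 in the conversation, but any stand-in for "qualitatively out of reach" works). Then

$$ f(t) = e^{ct} < \infty \quad \text{for every finite } t \text{ and every growth rate } c > 0, $$

so no finite growth rate, however large, ever crosses such a threshold. Of course, the analogy only matters if reaching AGI really does require something qualitatively beyond extrapolating current progress, which is exactly where the staff and I disagreed.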

The point of the workshop

Some people asked what the point of the workshop was. Why had MIRI paid for the travel and food of ~20 participants, and provided staff to work with us? It was supposed to be about AI, and we spent a lot of time doing circles. We didn't really learn how to help with alignment problems. I'm going to write down some answers I heard during the workshop. It is beneficial to see a lot of things that are not related to AI, because the field needs a really big surface area in order to interact with many parts of the world (this is a chemistry metaphor). Maybe some useful ideas will come from other fields, fields that are not yet related to AI safety, so the AI safety field needs to interact with a lot of people from the outside. Some people might get interested enough to actually want to work on AI safety after this workshop, and go to MIRI or to some other place which works on this stuff, such as CHAI, FHI, etc. Some people may come to the AIRCS alumni workshop, which gets more technical (I've not yet seen anything related to the alumni workshop, and I doubt I'll ever be invited - I didn't really bring any new ideas to the first workshop). Also, if at some point they need a lot of people to work on the alignment problem, they'll have all the AIRCS alumni they can contact; AIRCS alumni have been introduced to their ideas and could start working on those topics more quickly. However, I didn't understand why they would suddenly and urgently need a team to work on the alignment problem (and it sounds a little bit Terminator-ish).

Now, there is a second question: what was the point of me going there? I mean, I went because I was invited and curious. But what was their goal in inviting me? While I was not told exactly why my application was rejected after the workshop, I have strong intuitions about it. First, I was not able to participate in discussions as much as the other participants did. As I said many times, I am not convinced that the risk is as high as they perceive it. And I found that most talks were so high-level that there was no way for me to say anything concrete about them, at least not without first seeing how they would actually try to implement things. Worse, I was saying all of this out loud! A staff member from MIRI told me that my questions meant I didn't really understand the risks. And understanding them is, supposedly, a prerequisite to being able to work on AI alignment. So it was easy to conclude that I would not get the internship. But all of this was already known to the recruiter after my first screening. I had already told him I had skimmed through MIRI's posts but had not read a lot, and that I was not even convinced it was a serious risk. We had already had a discussion where they tried to understand my thoughts on this, and why I did not believe in the scenarios they were depicting. As I wrote in the introduction, I notice I am confused, and I'm quite perplexed by the fact that my confusion has only increased. Every time a company decides not to hire me, I would love to know why, at least so as to avoid making the same mistakes again. MIRI here is an exception. I can see so many reasons not to hire me that the outcome was unsurprising. The process, and the fact that they considered me in the first place, were surprising.

The confusion does not decrease.

P.S. Thanks to ^,^(Joy_void_joy) for proof-reading this post. All typos remain mine.

Footnotes:

  1. The most probable ending - because either someone creates an unaligned AGI without taking MIRI's position into consideration, or because AGI is impossible.
  2. I still wonder why. I assume there are already a lot of computer scientists wanting to have an impact, and that 80k specializes more in the US and UK, whereas I was in France back then.
  3. My friend tells me that actually it's not even a generalization: the two concepts are unrelated, apart from having the same name.
  4. There are actually teaching assistants who are still waiting for their 2018-2019 salary.
  5. To be honest, at this point, I was pretty sure I wasn't going to get the job. But not quite sure enough to feel comfortable about it.
Comments

Buck:

(I'm unsure whether I should write this comment referring to the author of this post in second or third person; I think I'm going to go with third person, though it feels a bit awkward. Arthur reviewed this comment before I posted it.)

Here are a couple of clarifications about things in this post, which might be relevant for people who are using it to learn about the MIRI recruiting process. Note that I'm the MIRI recruiter Arthur describes working with.

General comments:

I think Arthur is a really smart, good programmer. Arthur doesn't have as much background with AI safety stuff as many people who I consider as candidates for MIRI work, but it seemed worth spending effort on bringing Arthur to AIRCS etc because it would be really cool if it worked out.

Arthur reports a variety of people in this post as saying things that I think are somewhat misinterpreted, and I disagree with several of the things he describes them as saying.

I still don't understand that: what's the point of inviting me if I fail the test? It would seem more cost-efficient to wait until after the test to decide whether they wanted me to come (I don't think I ever asked this out loud; I was already happy to get a free trip to California).

I thought it was very likely Arthur would do well on the two-day project (he did).

I do not wish to disclose how much I have been paid, but I'll state that two hours at that rate was more than a day at the French PhD rate. I didn't even ask to be paid; I hadn't even thought that being paid for a job interview was possible.

It's considered good practice to pay people to do work for trials; we paid Arthur a rate which is lower than you'd pay a Bay Area software engineer as a contractor, and I was getting Arthur to do somewhat unusually difficult (though unusually interesting) work.

I assume that if EA cares about animal suffering in itself, then using throwaways is less of a direct suffering factor.

Yep

So Anna Salamon gave us a rule: We don't speak of AI safety to people who do not express the desire to hear about it. When I asked for more information, she specified that it is okay to mention the words "AI Safety", but not to give any details until the other person is sure they want to hear about it. In practice, this means it is okay to share a book/post on AI safety, but we should warn the person to read it only if they feel ready. Which leads to a related problem: some people have never experienced an existential crisis or anxiety attack in their life, so it's all too possible they can't really "be ready".

I think this is a substantial misunderstanding of what Anna said. I don't think she was trying to propose a rule that people should follow, and she definitely wasn't explaining a rule of the AIRCS workshop or something; I think she was doing something a lot more like talking about something she thought about how people should relate to AI risk. I might come back and edit this comment later to say more.

That means that, during circles, I was asked to be as honest as possible about my feelings while also being considered for an internship. This is extremely awkward.

For the record, I think that "being asked to be as honest as possible" is a pretty bad description of what circling is, though I'm sad that it came across this way to Arthur (I've already talked to him about this)

But just because they do not think of AIRCS as a job interview does not mean AIRCS is not a job interview. Case in point: half a week after the workshop, the recruiter told me that "After discussing some more, we decided that we don't want to move forward with you right now". So the workshop really was what led them to decide not to hire me.

For the record, the workshop indeed made the difference about whether we wanted to make Arthur an offer right then. I think this is totally reasonable--Arthur is a smart guy, but not that involved with the AI safety community; my best guess before the AIRCS workshop was that he wouldn't be a good fit at MIRI immediately because of his insufficient background in AI safety, and then at the AIRCS workshop I felt like it turned out that this guess was right and the gamble hadn't paid off (though I told Arthur, truthfully, that I hoped he'd keep in contact).

During a trip to the beach, I finally had the courage to tell the recruiter that AIRCS is quite hard for me to navigate, being both a CFAR workshop and a job interview.

:( This is indeed awkward and I wish I knew how to do it better. My main strategy is to be as upfront and accurate with people as I can; AFAICT, my level of transparency with applicants is quite unusual. This often isn't sufficient to make everything okay.

First: they could mention to people coming to AIRCS as part of a job interview that some things will be awkward for them, but that they get the same workshop as everyone else, so they'll have to deal with it.

I think I do mention this (and am somewhat surprised that it was a surprise for Arthur)

Furthermore, I do understand why it's generally a bad idea to tell unknown people in your building that they won't get the job.

I wasn't worried about Arthur destroying the AIRCS venue; I needed to confer with my coworkers before making a decision.

I do not believe that my first suggestion will be listened to. During a discussion on the last night, near the fire, the recruiter was talking with some other MIRI staff and participants. And at some point they mentioned MIRI's recruiting process. I think that they were saying that they loved recruiting because it leads them to work with extremely interesting people, but that it's hard to find them. Given that my goal was explicitly to be recruited, and that I didn't have any answer yet, it was extremely awkward for me. I can't state explicitly why; after all, I didn't have to add anything to their remark. But even if I can't explain why I think that, I still firmly believe that it's the kind of thing a recruiter should avoid saying near a potential hire.

I don't quite understand what Arthur's complaint is here, though I agree that it's awkward having people be at events with people who are considering hiring them.

MIRI here is an exception. I can see so many reasons not to hire me that the outcome was unsurprising. The process, and the fact that they considered me in the first place, were surprising.

Arthur is really smart and it seemed worth getting him more involved in all this stuff.

Hi,

Thank you for your long and detailed answer. I'm amazed that you were able to write it so quickly after the post's publication, especially since you sent me your answer by email while I had just published my post on LW without showing it to anyone first.

Arthur reports a variety of people in this post as saying things that I think are somewhat misinterpreted, and I disagree with several of the things he describes them as saying.

I added a link to this comment at the top of the post. Honestly, I am not surprised to learn that I misunderstood some things that were said during the workshop. Those were 5 pretty intense days, and there was no way for me to have a perfect memory of everything. However, I won't correct the post; this is a text explaining as honestly as possible how I felt about the event, and those kinds of misunderstandings are part of the event too. I really hope that people reading this kind of post understand that it's a personal text and that they should form their own view. Given that it's a LW blog post and not a newspaper/research article, I feel like that's okay.

It's considered good practice to pay people to do work for trials; we paid Arthur a rate which is lower than you'd pay a Bay Area software engineer as a contractor, and I was getting Arthur to do somewhat unusually difficult (though unusually interesting) work.

I do confirm that it was interesting.

I guess I do not know what is or isn't good practice in California. I've spent hundreds of euros on job interviews in France, when I had to pay for the train/plane/hotel to go meet a potential employer, so I kind of assumed that looking for a job is an expensive endeavor.

I think this is a substantial misunderstanding of what Anna said. I don't think she was trying to propose a rule that people should follow, and she definitely wasn't explaining a rule of the AIRCS workshop or something; I think she was doing something a lot more like talking about something she thought about how people should relate to AI risk. I might come back and edit this comment later to say more.

I mostly understood it as a general rule, not as an AIRCS rule. This rule seems similar to the rule "do not show pictures of slaughterhouses to people who didn't decide by themselves to check what slaughterhouses are like". On the one hand, it can be argued that if people knew how badly animals were treated, things would get better for them. It remains that, even if you believe that, showing slaughterhouse pictures to random people who were not prepared would be an extremely mean thing to do to them.

AFAICT, my level of transparency with applicants is quite unusual. This often isn't sufficient to make everything okay.

Would it be a LW post if I didn't mention a single bias? I wonder whether there is an illusion of transparency here. There is some information you write here that would have been helpful to have beforehand, and that I don't recall hearing. For example, "my best guess before the AIRCS workshop was that he wouldn't be a good fit at MIRI immediately because of his insufficient background in AI safety". On the one hand, it could be expected that I would understand that I would not be a good fit, given that I don't have an AI safety background; that would make sense at most companies, actually. On the other hand, the way I perceive MIRI is that you're quite unusual, so I could assume that you are mainly looking for devs who want to work with rationalists, and that it would be okay if those people needed some time to teach themselves everything they need to learn.

Given that both hypotheses are possible, I see how it can seem more transparent to you than it actually was for me. However, I must admit that on my side I was not totally transparent either, since I didn't ask you to clarify immediately. More generally, the point I want to make here is that my goal is not to blame you, nor MIRI, nor AIRCS, nor myself. I would hate for this post or comment to be read as me wanting to complain. When I wrote the post, I thought about what I would have wanted to read before going to AIRCS, and tried to write it. While I do have some negative remarks, I hope that overall it comes across as a positive post. I stated it, and I repeat it: I did appreciate coming to AIRCS.

First: they could mention to people coming to AIRCS as part of a job interview that some things will be awkward for them, but that they get the same workshop as everyone else, so they'll have to deal with it.

I think I do mention this (and am somewhat surprised that it was a surprise for Arthur)

I may have forgotten, then. I don't claim my memory is perfect. It's entirely possible that I did not take this warning seriously enough. If at some point someone reads this post before going to AIRCS, I hope it'll help them take this into account - even if I don't think that what was important for me will necessarily be important for them, so maybe that'll be useless in the end.

I don't quite understand what Arthur's complaint is here, though I agree that it's awkward having people be at events with people who are considering hiring them.

I honestly can't state exactly what felt wrong. This is actually a paragraph I spent a lot of time on, because I couldn't find an exact answer. I finally decided to state what I felt, without being able to explain the reason behind it. Which, by the way, seems a lot like what I understood about circling, the way it was presented to my group on the first day.

Arthur is really smart and it seemed worth getting him more involved in all this stuff.

Thank you.

This rule seems similar to the rule "do not show pictures of slaughterhouses to people who didn't decide by themselves to check what slaughterhouses are like". On the one hand, it can be argued that if people knew how badly animals were treated, things would get better for them. It remains that, even if you believe that, showing slaughterhouse pictures to random people who were not prepared would be an extremely mean thing to do to them.
 

Huh. That’s a surprisingly interesting analogy. I will think more on it. Thx.

Hi Arthur,

Thanks for the descriptions — it is interesting for me to hear about your experiences, and I imagine a number of others found it interesting too.

A couple clarifications from my perspective:

First: AIRCS is co-run by both CFAR and MIRI, and is not entirely a MIRI recruiting program, although it is partly that! (You might know this part already, but it seems like useful context.)

We are hoping that different people go on from AIRCS to a number of different AI safety career paths. For example:

  • Some people head straight from AIRCS to MIRI.
  • Some people attend AIRCS workshops multiple times, spaced across months or small years, while they gradually get familiar with AI safety and related fields.
  • Some people realize after an AIRCS workshop that AI safety is not a good fit for them.
  • Some people, after attending one or perhaps many AIRCS workshops, go on to do AI safety research at an organization that isn't MIRI.

All of these are good and intended outcomes from our perspective! AI safety could use more good technical researchers, and AIRCS is a long-term investment toward improving the number of good computer scientists (and mathematicians and others) who have some background in the field. (Although it is also partly aimed to help with MIRI's recruiting in particular.)

Separately, I did not mean to "give people a rule" to "not speak about AI safety to people who do not express interest." I mean, neither I nor AIRCS as an organization have some sort of official request that people follow a “rule” of that sort. I do personally usually follow a rule of that sort, though (with exceptions). Also, when people ask me for advice about whether to try to “spread the word” about AI risk, I often share that I personally am a bit cautious about when and how I talk with people about AI risk; and I often share details about that.

I do try to have real conversations with people that reply to their curiosity and/or to their arguments/questions/etc., without worrying too much which directions such conversations will update them toward.

I try to do this about AI safety, as about other topics. And when I do this about AI safety (or other difficult topics), I try to help people have enough “space” that they can process things bit-by-bit if they want. I think it is often easier and healthier to take in a difficult topic at one’s own pace. But all of this is tricky, and I would not claim to know the one true answer to how everyone should talk about AI safety.

Also, I appreciate hearing about the bits you found distressing; thank you. Your comments make sense to me. I wonder if we’ll be able to find a better format in time. We keep getting bits of feedback and making small adjustments, but it is a slow process. Job applications are perhaps always a bit awkward, but iterating on “how do we make it less awkward” does seem to yield slow-but-of-some-value modifications over time.

You are welcome. I won't repeat what I answered to Buck's comment; some parts of that answer are certainly relevant here, in particular regarding the above-mentioned "rule". While I did not write about "space" above, I hope my point was clear: you and all the staff were making sure we were able to process things safely. While I would not have been able to state your goals explicitly, I was trying to emphasize that you did care about those questions.

AIRCS [...] is not entirely a MIRI recruiting program

I believe I explicitly stated that a lot of people were there for reasons other than being recruited by MIRI, and that your goal is also for people to go work on AI safety at other places. I'm actually surprised that you had to clarify it; I fear I was not as clear as I hoped to be.

There is one last thing I should have added to the post: I wrote this for myself. More precisely, I wrote what I would have wanted to read before applying to MIRI/attending AIRCS, and what I expect other applicants might eventually find useful. It seems safe to assume that those applicants sometimes read LessWrong and so might look for similar posts and find this one. Currently, all comments have been made by LW/MIRI/CFAR staff, which means that this is not (yet) a success. Anyway, you were not the intended audience, even if I assumed that somehow some AIRCS staff would hear about this post. I didn't try to write anything that you would appreciate - after all, you know AIRCS better than I do, and it's entirely possible that you might find some critiques unfair. I'm happy to read that you did appreciate, and found interesting, some parts of what I wrote. Note that most of the things I wrote here related to AIRCS were already present in my feedback form. There were some details I omitted in the form (I don't need to tell you that there are a lot of vegan options), and some details I omit here (in particular the ones mentioning people by name). So there was already a text for which you were the intended audience.

This was all really interesting, thanks for writing it and for being so open with your thinking, I think it's really valuable. Lots of this hiring process sounds very healthy - for instance, I'm glad to hear they pay you well for the hours you spend doing work trial projects.

As far as I understand it, plenty of people panic when they really understand what AI risks are. So Anna Salamon gave us a rule: We don't speak of AI safety to people who do not express the desire to hear about it. When I asked for more information, she specified that it is okay to mention the words "AI Safety", but not to give any details until the other person is sure they want to hear about it. In practice, this means it is okay to share a book/post on AI safety, but we should warn the person to read it only if they feel ready. Which leads to a related problem: some people have never experienced an existential crisis or anxiety attack in their life, so it's all too possible they can't really "be ready"...

As you can see, the staff really cared about us and wanted to be sure that we would be able to manage the thoughts related to AI risk.

Yeah, Anna talked in a bunch of detail about her thinking on this in this comment on the recent CFAR AMA, in case you're interested in seeing more examples of ways people can get confused when thinking about AI risk.

Every time a company decides not to hire me, I'd love to know why, if only to avoid making the same mistakes again. MIRI is an exception here: I can see so many reasons not to hire me that the outcome was unsurprising. What was surprising was the process, and that they considered me in the first place.

My take here is a bit different from yours. I think it's best to collaborate with interesting people who have unique ways of thinking, and it's more important to focus on that than "I will only hire people who agree with me". When I'm thinking about new people I've met and whether to hang out with them more / work with them more, I rarely am thinking about whether or not they also often use slogans like "x-risk" and "AI safety", but primarily how they think and whether I'd get value out of working with them / hanging out with them. 

The process you describe at CFAR sounds like a way to focus on finding interesting people: withholding judgement for as long as possible about whether you should work together, while giving you and them lots of space and time to build a connection. This lets you talk through the ideas that are interesting to both of you, and generally understand each other better than a 2-hour interview or coding project offers. 

Ed Kmett seems like a central example here; my (perhaps mistaken) belief is that he's primarily doing a lot of non-MIRI work he finds valuable, and is inspired by that more than the other kinds of research MIRI is doing, but he and other researchers at MIRI find ways to collaborate, and when they do he does really cool stuff. I expect there was a period where he engaged with the ideas around AI alignment in a lot of detail, and has opinions about them, and of course at some point that was important to whether he wanted to work at MIRI, but I expect Nate and others would be very excited about him being around, regardless of whether he thought their project was terribly important, given his broader expertise and ways of thinking about functional programming. I think it's great to spend time with interesting people finding out more about their interests and how they think, and that this stuff is more valuable than taking a group of people where the main thing you know is that they passed the coding part of the interview, and primarily spending time persuading them that your research is important.

Even given that, I'm sorry you had a stressful/awkward time trying to pretend this was casual and not directly important for you for financial and employment reasons. It's a dynamic I've experienced not infrequently within Bay Area rationalist gatherings (and EA gatherings globally): being at large social events and trying to ignore how much they can affect e.g. your hiring prospects (I don't think MIRI is different in this regard from spaces involving other orgs like 80k, OpenPhil, etc.). I'll add that, as above, I do think MIRI staff had not themselves made a judgment at the time of inviting you to the workshop. Also, my sense is that it was affecting you somewhat more than it does the median person at such events, who I think mostly has an overall strongly positive experience and an okay time ignoring this part of it. Still, I think it's a widespread effect, and I'm honestly not sure what to do about it.

You're welcome. I'm happy to read that people find it interesting.

Regarding Ed, your explanation seems different from what I recall hearing him say. But I'll follow my rule: I won't repeat what a person said unless it was said publicly. I'll let him know we mentioned him, and if he wants, he can answer himself. I'll just state that, honestly, I would have loved to collaborate with Ed; he had really interesting discussions. However, he already has an assistant, and I have not heard anything making me believe he would need a second one.

Congratulations on ending my long-time LW lurker status and prompting me to comment for once. =)

I think Ben's comment hits pretty close to the state of affairs. I have been internalizing MIRI's goals and looking for obstacles in the surrounding research space that I can knock down to make their (our? our) work go more smoothly, either in the form of subgoals or by backwards-chaining from required capabilities to get a sense of how to proceed.

Why do I work around the edges? Mostly because if I take the vector along which I'm trying to push the world and the direction MIRI is trying to push the world, and project one onto the other, working around the edges currently seems to be the approach that maximizes the dot product. Some of what I want to do doesn't seem to work behind closed doors, because I need a lot more outside feedback, or I have projects that just need to bake more before being taken in a more product-like direction.

Sometimes I lend a hand behind the curtain where one of their projects comes close to something I have experience to offer; other times, it's trying to offer a reframing of the problem, or offering libraries or connections to folks that they can build atop.

I can say that I've very much enjoyed working with MIRI over the last 16 months or so, and I think the relationship has been mutually beneficial. Nate has given me a lot of latitude in choosing what to work on and how I can contribute, and I have to admit he couldn't have come up with a more effective way to leash me to his cause had he tried.

I've attended so many of these AIRCS workshops in part to better understand how to talk about AI safety. (I think I've been to something like 8-10 of them so far?) For better or worse, I have a reputation in the outside functional programming community, and I'd hate to give folks the wrong impression of MIRI simply by dint of my having done insufficient homework. So I've been using these workshops as a way to sharpen my arguments and gather a more nuanced understanding of which ways of talking about AI safety, rationality, 80,000 Hours, EA, x-risks, etc. work for my kind of audience, and which seem to fall short, especially when talking to the kind of hardcore computer scientists and mathematicians I tend to talk to.

Arthur, I greatly enjoyed interacting with you at the workshop -- who knew that an expert on logic and automata theory was exactly what I needed at that moment!? -- and I'm sorry that the MIRI recruitment attempt didn't move forward.

Cool, all seems good (and happy to be corrected) :-)

This is basically off-topic, but just for the record, regarding...

someone presented a talk where they explained how they tried and failed to model and simulate a brain of C. Elegans.... Furthermore, all of their research was done prior to them discovering AI safety stuff so it's good that no one created such a precise model of a - even if just a worm - brain.

That was me; I have never believed (at least not yet) that it’s good that the C. elegans nervous system is still not understood; to the contrary, I wish more neuroscientists were working on such a “full-stack” understanding (whole nervous system down to individual cells). What I meant to say is that I am personally no longer compelled to put my attention toward C. elegans, compared to work that seems more directly AI-safety-adjacent.

I could imagine someone making a case that understanding low-end biological nervous systems would bring us closer to unfriendly AI than to friendly AI, and perhaps someone did say such a thing at AIRCS, but I don’t recall it and I doubt I would agree. More commonly, people make the case that nervous-system uploading technology brings us closer to friendly AI in the form of eventually uploading humans—but that is irrelevant one way or the other if de novo AGI is developed by the middle of this century.

One final point: it is possible that understanding simple nervous systems gives humanity a leg up on interpretability (of non-engineered, neural decision-making), without providing new capabilities until somewhere around spider level. I don’t have much confidence that any systems-neuroscience techniques for understanding C. elegans or D. rerio would transfer to interpreting AI’s decision-making or motivational structure, but it is plausible enough that I currently consider such work to be weakly good for AI safety.

Thank you for your answer, Davidad. For some reason, I was pretty sure that I had asked you something like "why did you try to do that if it could lead to AI faster, through ems?" and that your answer was something like "I probably would not have done it if I had already known about AI safety questions". But I guess I recalled it badly. I'm honestly starting to be frightened by the number of things I got wrong during those four days.

I think they were mentioning that they loved recruiting because it leads them to work with extremely interesting people, but that it's hard to find such people. Given that my goal was explicitly to be recruited, and that I didn't have any answer yet, it was extremely awkward for me. I can't state explicitly why; after all, I didn't have to add anything to their remark.

Reading this, it sounds to me like the bad thing here might have been the unintentional implication that people who are not hired by MIRI are not extremely interesting people. I'm sure that's not what they meant, but it's understandable that that might be awkward!


As I said a lot of times, I am not convinced that the risk is as high as they perceive it. And I found that most talks were so high-level that there was no way for me to say anything concrete about them. At least not without seeing how they would actually try to implement things first. Worse, I was saying all of this out loud! A staff member from MIRI told me that my questions meant I didn't really understand the risks.

It sounds to me like you actually understood the arguments about risk rather well and paid an unusual amount of attention to detail, which is a very good thing. It's completely MIRI's loss that you weren't hired, as it appears you are a very clear-headed thinker who pays close attention to fine details.

As someone who has gone through an AIRCS workshop and left with a similar amount of confusion, let me say that it is not you who is at fault. I hope you find what you are looking for!

Hi Mark,

This maybe doesn't make much difference for the rest of your comment, but just FWIW: the workshop you attended in Sept 2016 was not part of the AIRCS series. It was a one-off experiment, funded by an FLI grant, called "CFAR for ML", where we ran most of a standard CFAR workshop and then tacked on an additional day of AI alignment discussion at the end.

The AIRCS workshops have been running ~9 times/year since Feb 2018, have been evolving pretty rapidly, and in recent iterations involve a higher ratio of AI risk content, or of content about the cognitive biases etc. that seem to arise in discussions about AI risk in particular. They have somewhat smaller cohorts, for more 1-on-1 conversation (~15 participants instead of 23). They are co-run with MIRI, which "CFAR for ML" was not. They have a slightly different team and are a slightly different beast.

Which... doesn't mean you wouldn't have had most of the same perceptions if you'd come to a recent AIRCS! You might well have. From a distance, perhaps all our workshops are pretty similar. And I can see calling "CFAR for ML" "AIRCS", since it was in fact partially about AI risk and was aimed mostly at computer scientists, which is what "AIRCS" stands for. Still, we locally care a good bit about the distinctions between our programs, so I did want to clarify.


Thanks for the correction! Yes, from a distance the descriptions of your workshops seem pretty similar. I thought "CFAR for ML" was a prototype for AIRCS, and assumed it would have a similar structure and format. Many of Arthur's descriptions seemed familiar from my memories of the CFAR for ML workshop.