The 5th Annual LessWrong Review has come to a close!

Review Facts

There were 5330 posts published in 2022. 

Here's how many posts passed through the different review phases.

| Phase | No. of posts | Eligibility |
|---|---|---|
| Nominations Phase | 579 | Any 2022 post could be given preliminary votes |
| Review Phase | 363 | Posts with 2+ votes could be reviewed |
| Voting Phase | 168 | Posts with 1+ reviews could be voted on |

Here's how many voters and votes there were by karma bracket.

| Karma Bucket | No. of Voters | No. of Votes Cast |
|---|---|---|
| Any | 333 | 5007 |
| 1+ | 307 | 4944 |
| 10+ | 298 | 4902 |
| 100+ | 245 | 4538 |
| 1,000+ | 121 | 2801 |
| 10,000+ | 24 | 816 |

To give some context on this annual tradition, here are the absolute numbers compared to last year and to the first year of the LessWrong Review.

|  | 2018 | 2021 | 2022 |
|---|---|---|---|
| Voters | 59 | 238 | 333 |
| Nominations | 75 | 452 | 579 |
| Reviews | 120 | 209 | 227 |
| Votes | 1272 | 2870 | 5007 |
| Total LW Posts | 1703 | 4506 | 5330 |

Review Prizes

There were lots of great reviews this year! Here's a link to all of them.

Of the 227 reviews, we're awarding prizes to 31.

This follows up on Habryka, who gave out about half of these prizes two months ago.

Note that two users were paid to produce reviews and so will not be receiving prize money. They're still listed here because I wanted to acknowledge that they wrote some really great reviews.

Click below to expand and see who won prizes.

Excellent ($200) (7 reviews)
Great ($100) (6 reviews)
Good ($50) (18 reviews)

We'll reach out to prizewinners in the coming weeks to deliver their prizes.

We have been working on a new way of celebrating the best posts of the year

The top 50 posts of each year are being celebrated in a new way! Read this companion post to find out all the details, but for now here's a preview of the sorts of changes we've made for the top-voted posts of the annual review.

And there's a new LeastWrong page with the top 50 posts from all 5 annual reviews so far, sorted into categories.

You can learn more about what we've built in the companion post.

Okay, now onto the voting results!

Voting Results

Voting is visualized here with dots of varying sizes, roughly indicating that a user thought a post was "good" (+1), "important" (+4), or "extremely important" (+9). 

Green dots indicate positive votes. Red indicate negative votes.

If a user spent more than their budget of 500 points, all of their votes were scaled down slightly, so some of the circles are slightly smaller than others.
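
For anyone curious about the mechanics, here is a minimal sketch of that budget rule. It assumes (the post doesn't specify the implementation) that the displayed +1/+4/+9 values are the points spent and that an over-budget voter's votes are shrunk by a single proportional factor:

```python
# Minimal sketch of the 500-point budget rule described above.
# Assumptions for illustration (not confirmed by the post): a vote's
# cost is its displayed magnitude, and over-budget voters have all
# their votes scaled down by one proportional factor.

BUDGET = 500

def scaled_votes(votes: list[float], budget: float = BUDGET) -> list[float]:
    """Return the votes, shrunk proportionally if they exceed the budget."""
    spent = sum(abs(v) for v in votes)
    if spent <= budget:
        return votes
    factor = budget / spent
    return [v * factor for v in votes]

# Example: 60 "extremely important" (+9) votes cost 540 points,
# so each vote ends up slightly smaller than 9.
print(scaled_votes([9.0] * 60)[0])  # ~8.33
```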

These are the 161 posts that got a net positive score, out of 168 posts that were eligible for the vote.

0 AGI Ruin: A List of Lethalities
1 MIRI announces new "Death With Dignity" strategy
2 Where I agree and disagree with Eliezer
3 Let’s think about slowing down AI
4 Reward is not the optimization target
5 Six Dimensions of Operational Adequacy in AGI Projects
6 It Looks Like You're Trying To Take Over The World
7 Staring into the abyss as a core life skill
8 You Are Not Measuring What You Think You Are Measuring
9 Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover
10 Sazen
11 Luck based medicine: my resentful story of becoming a medical miracle
12 Inner and outer alignment decompose one hard problem into two extremely hard problems
13 On how various plans miss the hard bits of the alignment challenge
14 Simulators
15 Epistemic Legibility
16 Tyranny of the Epistemic Majority
17 Counterarguments to the basic AI x-risk case
18 What Are You Tracking In Your Head?
19 Safetywashing
20 Threat-Resistant Bargaining Megapost: Introducing the ROSE Value
21 Nonprofit Boards are Weird
22 Optimality is the tiger, and agents are its teeth
23 chinchilla's wild implications
24 Losing the root for the tree
25 Worlds Where Iterative Design Fails
26 Decision theory does not imply that we get to have nice things
27 Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality"
28 What an actually pessimistic containment strategy looks like
29 Introduction to abstract entropy
30 A Mechanistic Interpretability Analysis of Grokking
31 The Redaction Machine
32 Butterfly Ideas
33 Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research]
34 Language models seem to be much better than humans at next-token prediction
35 Toni Kurz and the Insanity of Climbing Mountains
36 Useful Vices for Wicked Problems
37 What should you change in response to an "emergency"? And AI risk
38 Models Don't "Get Reward"
39 How To Go From Interpretability To Alignment: Just Retarget The Search
40 Security Mindset: Lessons from 20+ years of Software Security Failures Relevant to AGI Alignment
41 Why Agent Foundations? An Overly Abstract Explanation
42 A central AI alignment problem: capabilities generalization, and the sharp left turn
43 Humans provide an untapped wealth of evidence about alignment
44 Learning By Writing
45 Limerence Messes Up Your Rationality Real Bad, Yo
46 The Onion Test for Personal and Institutional Honesty
47 Counter-theses on Sleep
48 The shard theory of human values
49 How "Discovering Latent Knowledge in Language Models Without Supervision" Fits Into a Broader Alignment Scheme
50 ProjectLawful.com: Eliezer's latest story, past 1M words
51 Intro to Naturalism: Orientation
52 Why I think strong general AI is coming soon
53 How might we align transformative AI if it’s developed very soon?
54 It’s Probably Not Lithium
55 (My understanding of) What Everyone in Technical Alignment is Doing and Why
56 Plans Are Predictions, Not Optimization Targets
57 Takeoff speeds have a huge effect on what it means to work on AI x-risk
58 The Feeling of Idea Scarcity
59 Six (and a half) intuitions for KL divergence
60 Trigger-Action Planning
61 Have You Tried Hiring People?
62 The Wicked Problem Experience
63 What does it take to defend the world against out-of-control AGIs?
64 On Bounded Distrust
65 Setting the Zero Point
66 [Interim research report] Taking features out of superposition with sparse autoencoders
67 Limits to Legibility
68 Harms and possibilities of schooling
69 Look For Principles Which Will Carry Over To The Next Paradigm
70 Steam
71 High Reliability Orgs, and AI Companies
72 Toy Models of Superposition
73 Editing Advice for LessWrong Users
74 Deep Learning Systems Are Not Less Interpretable Than Logic/Probability/Etc
75 why assume AGIs will optimize for fixed goals?
76 Lies Told To Children
77 Revisiting algorithmic progress
78 Things that can kill you quickly: What everyone should know about first aid
79 Postmortem on DIY Recombinant Covid Vaccine
80 Reflections on six months of fatherhood
81 Some Lessons Learned from Studying Indirect Object Identification in GPT-2 small
82 The Plan - 2022 Update
83 12 interesting things I learned studying the discovery of nature's laws
84 Impossibility results for unbounded utilities
85 Searching for outliers
86 Greyed Out Options
87 “Pivotal Act” Intentions: Negative Consequences and Fallacious Arguments
88 Do bamboos set themselves on fire?
89 Murphyjitsu: an Inner Simulator algorithm
90 Deliberate Grieving
91 We Choose To Align AI
92 The alignment problem from a deep learning perspective
93 Slack matters more than any outcome
94 Consider your appetite for disagreements
95 everything is okay
96 Mysteries of mode collapse
97 Slow motion videos as AI risk intuition pumps
98 ITT-passing and civility are good; "charity" is bad; steelmanning is niche
99 Meadow Theory
100 The next decades might be wild
101 Marriage, the Giving What We Can Pledge, and the damage caused by vague public commitments
102 Lessons learned from talking to >100 academics about AI safety
103 Activated Charcoal for Hangover Prevention: Way more than you wanted to know
104 More Is Different for AI
105 How satisfied should you expect to be with your partner?
106 How my team at Lightcone sometimes gets stuff done
107 The metaphor you want is "color blindness," not "blind spot."
108 Logical induction for software engineers
109 Call For Distillers
110 Fiber arts, mysterious dodecahedrons, and waiting on “Eureka!”
111 A Longlist of Theories of Impact for Interpretability
112 On A List of Lethalities
113 LOVE in a simbox is all you need
114 A transparency and interpretability tech tree
115 DeepMind alignment team opinions on AGI ruin arguments
116 Contra shard theory, in the context of the diamond maximizer problem
117 On infinite ethics
118 Wisdom Cannot Be Unzipped
119 Different perspectives on concept extrapolation
120 Utilitarianism Meets Egalitarianism
121 The ignorance of normative realism bot
122 Shah and Yudkowsky on alignment failures
123 Nuclear Energy - Good but not the silver bullet we were hoping for
124 Patient Observation
125 Monks of Magnitude
126 AI coordination needs clear wins
127 Actually, All Nuclear Famine Papers are Bunk
128 New Frontiers in Mojibake
129 My take on Jacob Cannell’s take on AGI safety
130 Introducing Pastcasting: A tool for forecasting practice
131 K-complexity is silly; use cross-entropy instead
132 Beware boasting about non-existent forecasting track records
133 Clarifying AI X-risk
134 Narrative Syncing
135 publishing alignment research and exfohazards
136 Deontology and virtue ethics as "effective theories" of consequentialist ethics
137 Range and Forecasting Accuracy
138 Trends in GPU price-performance
139 How To Observe Abstract Objects
140 Criticism of EA Criticism Contest
141 Takeaways from our robust injury classifier project [Redwood Research]
142 Bad at Arithmetic, Promising at Math
143 Don't use 'infohazard' for collectively destructive info
144 Conditions for mathematical equivalence of Stochastic Gradient Descent and Natural Selection
145 Human values & biases are inaccessible to the genome
146 I learn better when I frame learning as Vengeance for losses incurred through ignorance, and you might too
147 Jailbreaking ChatGPT on Release Day
148 Open technical problem: A Quinean proof of Löb's theorem, for an easier cartoon guide
149 Review: Amusing Ourselves to Death
150 QNR prospects are important for AI alignment research
151 Disagreement with bio anchors that lead to shorter timelines
152 Why all the fuss about recursive self-improvement?
153 LessWrong Has Agree/Disagree Voting On All New Comment Threads
154 Opening Session Tips & Advice
155 Searching for Search
156 Refining the Sharp Left Turn threat model, part 1: claims and mechanisms
157 Takeaways from a survey on AI alignment resources
158 Trying to disambiguate different questions about whether RLHF is “good”
159 Benign Boundary Violations
160 How To: A Workshop (or anything)
Comments

Just noticing that every post has at least one negative vote, which feels interesting for some reason.

Technically the optimal way to spend your points to influence the vote outcome is to center them (i.e. have the mean be zero). In practice this means giving a -1 to lots of posts. It doesn't provide much of an advantage, but I vaguely remember some people saying they did it, which IMO would explain there being some very small number of negative votes on everything.
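
A small illustration of the centering point, under the assumption (mine, not stated in the post) that a vote of strength v costs on the order of v² points: subtracting your mean vote preserves every pairwise difference between posts while spending far fewer points, which is exactly what produces lots of small negative votes.

```python
# Toy illustration of centering votes. Assumed cost model (not from
# the post): a vote of strength v costs v**2 points.

def spend(votes: list[float]) -> float:
    """Total points spent under the assumed quadratic cost."""
    return sum(v ** 2 for v in votes)

raw = [3.0, 3.0, 2.0, 2.0, 1.0]      # all-positive vote strengths
mean = sum(raw) / len(raw)
centered = [v - mean for v in raw]   # identical pairwise differences

print(spend(raw))       # 27.0
print(spend(centered))  # ~2.8, with several small negative votes
```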

Neil:

The new designs are cool; I'd just be worried about venturing too far into insight porn. You don't want people reading the posts just because they like how they look (although reading them superficially is probably better than not reading them at all). Clicking on the posts and seeing a giant image that bleeds color into the otherwise sober text format is distracting.

I guess if I don't like it there's always GreaterWrong.