When I check ArXiv for new AI alignment research papers, I see mostly capabilities research papers, presumably because most researchers are working on capabilities. I wonder if there’s alignment-related value to be extracted from all that capabilities research, and how we might get at it. Is anyone working on this, or does anyone have any good ideas?

Martin Vlach

I'm fairly interested in this topic and wrote a short draft here explaining a few basic reasons to explicitly develop capability-measuring tools, since they would improve risk mitigation. What resonates with your question is that for 'known categories' we could start from what the papers recognise and dig deeper into more fine-grained (sub-)capabilities.

(Your link seems to be missing.)

2 comments
P.

Do you mean from what already exists or from changing the direction of new research?

I mean extracting insights from capabilities research that currently exists, not changing the direction of new research. For example, specification gaming is on everyone's radar because it was observed in capabilities research (the authors of the linked post compiled this list of specification-gaming examples, some of which are from the 1980s). I wonder how much more opportunity there might be to piggyback on existing capabilities research for alignment purposes, and maybe to systematize that going forward.