AI Reading List: STUVWG
Below is a portion of my informal list of readings related to Artificial Intelligence (AI). This started out as a very short list created for use in conjunction with an academic presentation and has now grown much larger. Please let me know if you have any corrections, additions, suggestions, etc. It is very idiosyncratic and not meant to be comprehensive. Please feel free to share with others.
Artificial Intelligence (AI) Reading List, by Philip Rubin
STUVWG: Science, Technology, and Utopian Visions Working Group:
STUVWG, The Science, Technology, and Utopian Visions Working Group, is an independent group of academics, technologists, artists, community members, educators, and others that has met for over twenty years to consider the impacts on society of developments related to science and technology. It was originally part of the Whitney Humanities Center at Yale, but now functions independently.
Some of the current participants are: Bonnie Kaplan (founder), Raphael (“Rafi”) Ryger (organizer), Joseph Carvalko Jr., Alice Fischer, Michael Fischer, Gary Kopf, Ali Montazer, Philip Rubin, Sydney Spiesel, Carlos Torre, Shlomit Yanisky-Ravid.
This page is dedicated to the memory of Christina Spiesel.
Below are suggested readings along with comments from Rafi Ryger related to the meeting of Feb. 6, 2025.
The primary reading for understanding intelligence, looking both at the evolution of biological intelligence and at AI, particularly in its latest offerings based on artificial neural nets (with many auxiliary mechanisms involved), is:
Max Bennett. A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs that Made Our Brains. 2023.
This is a masterpiece of clear, engaging writing, offering inspiring insight and a broad education in the relevant biology and engineering to inform our discussions of AI. It should not be rushed; it is better allowed several weeks to think through. An easier read, and right in line with our STUV leanings, is:
Arvind Narayanan, Sayash Kapoor. AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference. 2024.
Bonnie Kaplan has been giving talks on AI in medicine for many months. I have asked her to take a look at the Autumn 2024 issue of the Harvard Medicine alumni-oriented magazine, which presents several articles on diverse applications of AI in different roles in medicine. She may give us her assessment of what is real and interesting and what is overly optimistic and oversold. In either case, I expect we will learn. The issue may be found online at: Harvard Medicine Magazine, Autumn 2024.
Joe Carvalko has brought to our attention his article:
Joseph Carvalko. “Generative AI, Ingenuity, and Law.” IEEE, 2024. https://carvalko.com/wp-content/uploads/2024/07/Generative_AI_Ingenuity_and_Law.pdf
Joe addresses not the question of whether recent “generative AI” and its derivative products work as billed (the snake oil question), but rather, assuming any one of them really does work, at least partially, what else may be going on or may follow as a repercussion that may not be desirable. What policies can be put in place, and what is already being tried, to avoid adverse repercussions that may not be evident in testing and immediate-term usage, all without nipping in the bud development work that could potentially be enormously beneficial? We should, I expect, find what the Europeans are trying, the AI Act of 2024 in particular, worthy of discussion.
Sydney Spiesel has suggested an article wondering whether AI is reaching a plateau as we run out of fresh and substantially broadening training data:
“OpenAI cofounder Ilya Sutskever says the way AI is built is about to change,” by Kylie Robison, The Verge, Dec. 13, 2024.
Sydney has also pointed to an eyebrow-raising article calling into question the nothing-to-worry-about argument regarding the potential for extra-design willfulness on the part of AI:
“OpenAI's new ChatGPT o1 model will try to escape if it thinks it'll be shut down — then lies about it,” by Alyse Stanley, Tom's Guide, Dec. 6, 2024.
In the latter regard, and going further, is the following article that Joe has sent our way:
“AI Safety Alert: Frontier Models Surpass Self-Replication Threshold. AI Safety Alert: Unveiling the Alarming Self-Replication Capabilities of Frontier Models,” by Keith Torrence, aiagenticforce.substack.com, Dec. 16, 2024.
Thanks to Alice Fischer for suggesting that her colleague Vahid Behzadan join us in this meeting (and perhaps beyond?)! Vahid sends us the following:
“Given my background in AI safety, I'd like to propose a few additional readings, should there be sufficient time and interest among the members:”
“Concrete Problems in AI Safety”: Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané, arXiv:1606.06565 [cs.AI]. “A holistic, fundamental overview of technical risks in AI, not limited to LLMs.”
Chapters 1 & 2 of the AI Safety book: Dan Hendrycks, Introduction to AI Safety, Ethics, and Society. 2024.
Chapters 1 & 2 of the Long-Term Risk Research Agenda: Center on Long-Term Risk, “Cooperation, Conflict, and Transformative Artificial Intelligence: A Research Agenda”, Jan. 2020.
Stuart Russell. Human Compatible: Artificial Intelligence and the Problem of Control. 2019. “An engaging exploration of aligning AI with human values.”
DeepSeek-V3 Technical Report (PDF).
“Please feel free to pick any (or none) of these to share with the group as you see fit. I'd love to hear the group's thoughts if any of these topics resonate.”
Philip Rubin also suggested:
Zeynep Tufekci. The Dangerous A.I. Nonsense That Trump and Biden Fell For. The New York Times, Feb. 5, 2025.
Gary Marcus. ChatGPT in Shambles. Marcus on AI, Feb. 4, 2025.
Gary Marcus. Deep Research, Deep Bullshit, and the potential (model) collapse of science. Sam Altman’s hype might just bite us all in the behind. Marcus on AI, Feb. 3, 2025.