Location: 1970 Port Laurent Place, Newport Beach, CA 92660
Host: Michael Michalchik
Email: michaelmichalchik@gmail.com (For questions or requests)
Time: 2:00 p.m. – 5:00 p.m. PT
https://docs.google.com/document/d/1ofEhbYQF4p3n-PsKbEJNSonFZ0vN86hHgA43yVPDN2Y/edit?usp=sharing
Text (paper):
Title Link: Early science acceleration experiments with GPT‑5 — Sébastien Bubeck et al.
URL: https://drive.google.com/file/d/1WK-gLuF_MLUyo4Y4OuWL6jZoC6hkY51P/view?usp=sharing
Audio highlights (podcast):
Title Link: NotebookLM AI podcast of paper highlights
URL: https://notebooklm.google.com/notebook/25328d16-29dd-490a-94e1-750239cbcd56?artifactId=43c3f296-15f5-4f02-abc5-ca6f6f03a2db
Summary: Case‑study paper describing how GPT‑5 assisted ongoing research across mathematics, physics, astronomy, computer science, biology, and materials science. Reported contributions include concrete research steps, documented human‑AI interaction patterns, and four new, independently verified mathematical results. The authors describe where AI accelerated progress, where it failed or required human steering, and which verification and experiment‑design practices made the collaborations productive. Takeaway: modest domain‑level advances, but strong implications for research pace, methodology, authorship, and reproducibility as frontier models scale.
Questions (Topic 1):
Video:
Title Link: What tech bros get wrong
URL: https://youtu.be/yTniD9_D2l8?si=CxN5R7H5SyVZo6GD
Text (essay adaptation by Claude Opus 4.1):
Title Link: Essay adaptation of the video
URL: https://docs.google.com/document/d/12HwtZl1o-N-koe68vScVOjiznQWHi4-BhT_VyEtswp8/edit?usp=sharing
Summary: The essay frames a gap between San Francisco's AI‑boom worldview and that of everyday users: technological determinism and solutionism act as hidden operating assumptions that concentrate power, excuse weak consent, and misread what people actually want. Technology is built by specific people making specific choices under specific incentives, not by a force of nature. Examples like the Cybertruck and the Humane AI Pin illustrate preference‑laundering ("our aesthetic = the future"). The piece calls for rejecting inevitabilism, re‑centering real user preferences, and scrutinizing technocratic drift in AI policy and culture.
Questions (Topic 2):
Reading the linked pieces is optional. Bring your notes, disagreements, and use‑cases. See you Saturday, 2–5 p.m. If you need accessibility accommodations or have questions, email michaelmichalchik@gmail.com.