Location: 1970 Port Laurent Place, Newport Beach, CA 92660
Host: Michael Michalchik
Email: michaelmichalchik@gmail.com (For questions or requests)
Time: 2:00 p.m. – 5:00 p.m. PT
https://docs.google.com/document/d/1ofEhbYQF4p3n-PsKbEJNSonFZ0vN86hHgA43yVPDN2Y/edit?usp=sharing
Topic 1: Early Science Acceleration Experiments with GPT‑5
Text (paper):
Title Link: Early science acceleration experiments with GPT‑5 — Sébastien Bubeck et al.
URL: https://drive.google.com/file/d/1WK-gLuF_MLUyo4Y4OuWL6jZoC6hkY51P/view?usp=sharing
Audio highlights (podcast):
Title Link: NotebookLM AI podcast of paper highlights
URL: https://notebooklm.google.com/notebook/25328d16-29dd-490a-94e1-750239cbcd56?artifactId=43c3f296-15f5-4f02-abc5-ca6f6f03a2db
Summary: Case‑study paper describing how GPT‑5 assisted ongoing research across mathematics, physics, astronomy, computer science, biology, and materials science. Reported contributions include concrete research steps, documented human‑AI interaction patterns, and four new, independently verified results in mathematics. Authors describe where AI accelerated progress, where it failed or required human steering, and what verification and experiment‑design practices made collaborations productive. Takeaway: modest domain‑level advances with strong implications for pace, methodology, authorship, and reproducibility as frontier models scale.
Questions (Topic 1):
What constitutes an acceptable verification pipeline for AI‑suggested results (replication, formal proofs, preregistration, adversarial checks)?
Where did AI save the most researcher hours (literature triage, conjecture generation, experiment design, code), and which steps still demanded human judgment?
How should credit and authorship be handled when AI materially contributes to a result? What norms would prevent perverse incentives?
If AI changes the bottleneck from “idea generation” to “result vetting,” what new failure modes emerge (spurious findings, code‑as‑proof, cherry‑picking), and how do we guard against them?
Topic 2: What Tech Bros Get Wrong (video + essay)
Video:
Title Link: What tech bros get wrong
URL: https://youtu.be/yTniD9_D2l8?si=CxN5R7H5SyVZo6GD
Text (essay adaptation by Claude Opus 4.1):
Title Link: Essay adaptation of the video
URL: https://docs.google.com/document/d/12HwtZl1o-N-koe68vScVOjiznQWHi4-BhT_VyEtswp8/edit?usp=sharing
Summary: The essay frames a gap between SF’s AI‑boom worldview and everyday users: technological determinism and solutionism act as hidden operating assumptions that concentrate power, excuse weak consent, and misread what people actually want. Technology is built by specific people making specific choices under incentives—not a force of nature. Examples like the Cybertruck and Humane AI Pin illustrate preference‑laundering (“our aesthetic = the future”). The piece calls for rejecting inevitabilism, re‑centering real user preferences, and scrutinizing technocratic drift in AI policy and culture.
Questions (Topic 2):
How do we distinguish genuine inevitabilities (e.g., scaling hardware limits) from ideology‑driven “inevitabilism” in AI narratives? What empirical tests apply?
What governance or product‑discovery habits counter technological determinism and solutionism without lapsing into anti‑tech nostalgia?
When do founder aesthetics and class incentives improve products, and when do they systematically misfire? What feedback loops or veto points should exist?
In AI policy, how do we prevent de facto technocracy (expert capture) while preserving competent technical input?
Wrap‑up
Reading the linked pieces is optional. Bring your notes, disagreements, and use cases. See you Saturday, 2–5 p.m. If you need accessibility accommodations or have questions, email michaelmichalchik@gmail.com.