This event has already taken place.
Did you attend? Please share a testimonial!
(Contact me if you have any issues.)
Something peculiar is happening here.
AI labs are writing constitutions for machines while governments are fighting over whether they themselves still get to write the rules. Anthropic has Claude's Constitution. Amanda Askell gives the thing virtues, character, and "hard constraints." Jill Lepore, writing in The New Yorker, asks the obvious question: what does it mean when a private company starts sounding more constitution-minded than the constitutional order around it?
At the same time, the federal government wants one national AI framework and has argued that state laws are an obstacle to innovation, while California and other states keep trying to impose safety, privacy, and reporting guardrails anyway. So this is not a story about "whether governance exists." Governance exists. It is simply fragmented, contradictory, and fighting with itself in public.
And then there is the more vexing part: the discourse of alignment proceeds as if the main question is whether the model is "good", whatever the heck that means. But lurking just below that is the real thorny (intractable?) question — who (what?) has authority? Who sets the red lines? Who decides what surveillance is allowed? Who decides what counts as a right answer, plan, or guidance when a question is asked of a machine intelligence? Who gets to build these intelligences (whether a model, the things that wrap around it, or the devices and services that rely on it), and are they in any way regulated the way we might regulate food, buildings, medicine, or a sandwich shop for the good of the public? Who decides whether autonomous weapons are acceptable? Who decides whether a private model vendor, a procurement office, a governor, a legislature, or the Pentagon gets the last word?
Recent reporting has made this whole show basically impossible to ignore. There is the White House push for federal pre-emption. There are the state fights over guardrails, access, and privacy. There is the Anthropic conflict around surveillance and autonomous-weapons boundaries (as if a private company should be the one drawing the lines around what counts as acceptable surveillance or autonomous-weapons behavior — remember, Google once promised "don't be evil"... and changed its mind once the PR value of that promise shifted). There is Ezra Klein talking with Ben Buchanan about AGI preparedness as a state-capacity problem rather than a mere product launch problem. And beneath all of this is a much larger issue: not just how powerful these systems become, but what kind of institutional order is being built around them.
Access, for one thing, is becoming constitutional.
Who gets the good model? Who gets the throttled one? Who gets denied? Premium tiers, restricted compute, export controls, enterprise procurement filters, national-security exceptions, and platform-controlled inference all point to the same thing: whatever "democratic access" to AI is supposed to mean, it is not likely to arrive as some evenly distributed civic blessing. It will be rationed, credentialed, subsidized, denied, and politically narrated. That is not a bug in the story. That is the story.
And if the liberal-democratic constitution is weak, other constitutions rush in. Managerial constitutions. Security-state constitutions. Corporate constitutions. Technocratic constitutions. Perhaps even the anti-constitution: not rights, not representation, not public contestation, but optimization, expertise, performance thresholds, and restricted participation for the supposedly qualified few. So then the question becomes: what does a technocratic order actually look like in everyday life? What badges, notices, clearances, memos, dashboards, handbooks, and eligibility forms tell you that governance has shifted from public legitimacy to expert administration?
This is where General Seminar does what General Seminar does best.
We are not going to sit around and perform AI ethics. We are going to turn the current argument into legible artifacts from inside these possible worlds. We will make the governance struggle visible as a procurement red-line memo, an AI vendor safety disclosure sheet, a state-federal conflict map, an internal policy brief for leadership, a legislative hearing question pack, an incident report from a near-future governance failure, a Right to Access AI bill one-pager, an inference rationing notice, an AI access-tier chart, an anti-constitution manifesto, a technocratic operating charter, a model-governance credential badge, or an expert-council memo authorizing restricted decision rights.
That is the move here: not AGI prophecy, but institutional preparation. Not ethics theater, but governance rehearsal. Not hand-wringing over whether the future will be strange, but making the strangeness tangible enough that policy people, strategists, and institutional actors can actually do something with it.
If your job is to brief leadership, write policy, shape organizational strategy, evaluate risk, or explain to others what kind of AI order is taking shape around us, this General Seminar is for you.
Get your ticket now.
Produced right here in Venice Beach, California by Near Future Laboratory.
Learn how to get reimbursed for this professional learning & development package!
This one is for policy advisors and legislative staff, strategy and innovation leads, technologists, public-interest technologists, designers in AI, legislators, policy wonks, lawyers/law students, governance and trust teams, founders, and executives who are suddenly realizing that “alignment” is not just a theoretical model-lab notion but a real, live governance problem with real institutional entanglements. All of this is becoming about procurement policy, institutional doctrine, public relations, access control and, you know — geopolitical infrastructure for a world where inference and compute are finite national resources, not global shared resources.
This is not ethics theater. It is governance rehearsal. We are going to use General Seminar's peculiar method of artifact-making and salon-style discussion to make the conflict over AI rules, legitimacy, and access tangible enough to brief, argue, and actually use.
Also, check out the Run of Show below for more. See you soon.
🔑 Key Takeaways 🔑
You should come to this one if you are tired of the gap between AI discourse and institutional reality.
You will leave with a clearer map of the AI governance conflict surface: private guardrails, public law, procurement constraints, export controls, access regimes, and the institutional improvisations that are quietly becoming normal.
You will leave with a more usable language for talking about alignment as a governance problem rather than a vague moral aspiration.
You will leave with a sharper lens on AI access as a policy question: who gets model access, under what conditions, and what forms of hierarchy or dependency that creates.
And you will leave with artifacts you can actually use to brief leadership, frame debate with colleagues, or pressure-test your own institution's assumptions about who should be allowed to decide what powerful models can do.
How Does It Work?
General Seminar offers an alternative to conventional one-way learning experiences like podcasts, TED Talks, or Masterclasses. This is a hands-on seminar where we are all actively engaged in collaborative sense-making, imagining into and creatively conjuring artifacts that represent the futures indicated by the topic. This is followed by sharing and discussion where you will be able to consider how today's emergent themes and cultural idioms may become tomorrow's normal-ordinary-everyday. The goal of General Seminar is to collectively imagine potential futures and derive actionable insights. This is a unique forum for creative stimulation and lateral thinking. General Seminar will invigorate your imagination, leveraging its remarkable efficacy to turn ideas into tangible artifacts.
Run of Show
General Seminar is a 90-minute webinar-structured event designed to maximize engagement and insight. Conducted online (and live, in person on special occasions!).
❥ Reception (10 minutes): Welcome, context, and a fast briefing on the present governance struggle around AI: constitutions, guardrails, access, procurement, and state power.
❥ Policy Prototyping Breakouts (25 minutes): Small groups turn the discourse into tangible governance artifacts: memos, access notices, anti-constitutions, safety disclosures, legislative questions, and technocratic operating charters.
❥ Seminar (50 minutes): Facilitated salon-like discussion on legitimacy, institutional authority, uneven access, and the policy implications of letting labs, markets, and security states write the rules by default.
❥ Wrap-Up (5 minutes): What we learned, what to bring back on Monday, and what questions your institution should be asking now rather than later.
This immersive, collaborative format distinguishes General Seminar as a space for strategic sense-making, policy rehearsal, and concrete institutional imagination.
General Seminar S07/E04 went to a near future in which AI constitutions were not treated as abstract policy documents or lab-side safety language. They appeared as everyday signals: fees, licenses, notices, exclusions, stickers, street signs, receipts, operating instructions, and small bureaucratic forms that quietly tell people what kinds of intelligent systems are allowed to do what, where, and for whom.
The session produced artifacts and observations from that future. The point was not to settle the policy debate in the abstract, but to make the governance problem tangible enough to inspect. If AI has a constitution, most people will not encounter it as a grand founding document. They will encounter it through the material culture of normal life.
One recurring theme was multiplicity. There may not be one AI constitution. There may be many overlapping constitutions: corporate, municipal, federal, institutional, security-state, cooperative, technocratic, and local. Different places may encode different permissions, liabilities, access rights, or restrictions. The future might feel less like “AI follows the rules” and more like “which rules apply here, and who wrote them?”
A grocery receipt that includes an alignment fee and what appears to be a fund for universal access to AI compute. In this future, access to machine intelligence has become civic infrastructure, and ordinary purchases carry the traces of that policy choice.
An autonomous vehicle occupant license. The artifact asks what happens when model access, eligibility, and compliance become attached to mobility. If a person can lose access to a model, can they also lose access to services that depend on that model?
A packaged legal agentic unit with its own operating instructions. This points to a world where inexpensive legal agents might help with parking tickets, small claims, or everyday disputes, but only if they satisfy the licensing and doctrinal requirements of a particular jurisdiction.
A street sign for a local policy zone. One scenario followed autonomous delivery robots that become intelligent enough to operate as small businesses: buying inventory, choosing locations, routing themselves, and entering the marketplace as agentic commercial actors.
A companion street sign suggesting the opposite policy choice in a different area. The contrast matters: governance becomes legible through local boundaries, exceptions, compromises, and the visible marks of competing AI constitutions.
An intelligent food delivery robot operating independently in the marketplace. The mundane policy question becomes concrete: if an agentic entity can plan, purchase, sell, optimize, and compete, what kind of licensing, liability, taxation, labor rule, or public notice should follow it around the city?
The broader implication across the artifacts is the same: constitutions for AI will not only live in model cards, policy memos, or constitutional training documents. They will show up in the receipt, the sticker, the sign, the form, the ticket, the license, the procurement rule, and the access notice. Governance is never only in the model. It becomes part of the material and administrative texture of everyday life.