Archive
Discover and discuss technology tools
Explore the Tiscuss archive by category or keyword, then jump into conversations around what matters most.
Billionaires Propose AI Job Loss Compensation
**This week: the billionaires who broke the economy want to pay you to shut up about it.** Last week, Elon Musk pinned a post to the top of his X profile: "Universal HIGH INCOME via checks issued by the Federal government is the best way to deal with unemployment caused by AI." Sam Altman wants to go bigger — "universal extreme wealth", paid in compute tokens. Amodei says UBI may be "part of the answer." Khosla says it's a necessary safety net. All of them, in unison. These are the guys who spent twenty years arguing that government should stay out of markets, that handouts breed dependency, that the individual should stand on their own. Musk literally ran a federal cost-cutting operation. And now they want the government to mail checks to every citizen. Why? Because they broke the thing, and they know it. The people building the tools that eat the jobs are pre-emptively offering to pay for the damage — on their terms, through their platforms, using their math. **A universal basic income paid by the people who automated your job is not a safety net. It's a leash.**
AI Infrastructure: Should AI Companies Generate Half Their Own Electricity?
People are increasingly affected by the surge in electricity needed to power these data centers. Is it reasonable, or even possible, for AI companies to generate half of it themselves? Maybe I'm letting my imagination run away with me, but I think it's crazy that so many people end up paying for something they want no part of.
Navigating AI Agent Governance: A Growing Organizational Challenge
Something I've been thinking about that doesn't get discussed enough outside of technical circles: the organizational and safety implications of uncoordinated AI agent deployment.

Companies are shipping agents fast. Customer service agents, coding agents, data analysis agents, internal ops agents. Each team builds its own. Each agent gets its own rules, its own permissions, its own behavior. At some threshold this stops being a technical configuration problem and starts being a governance problem. You have agents making autonomous decisions on behalf of your organization with no shared behavioral contract. No unified view of what your AI systems are authorized to do.

Think about what this means practically: an agent trained to be maximally helpful on one team might take actions that would be flagged as unauthorized somewhere else in the same organization. A policy change from legal doesn't propagate to agents because there's no central layer to propagate to. Nobody knows which agents have access to what data. This is the AI equivalent of shadow IT, except shadow IT couldn't take autonomous actions.

What's the right mental model for governing a fleet of AI agents? Treat each agent like an employee with a defined role and access policy? Build an org chart for agents? Create a behavioral constitution that all agents inherit? Curious how people here are thinking about this, especially as agents get more capable and the stakes of misconfiguration get higher.
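To make the "employee with a role and access policy" framing concrete, here is a minimal sketch of what a central agent registry could look like. Everything in it (`OrgPolicy`, `AgentRole`, `AgentRegistry`, the action and scope names) is hypothetical and not any particular vendor's API; the point is just that one inherited policy layer gives legal somewhere to propagate changes to, and gives the org a single place to answer "which agents can touch what data."

```python
from dataclasses import dataclass, field


@dataclass
class OrgPolicy:
    """Central behavioral contract every agent inherits (hypothetical)."""
    forbidden_actions: set[str] = field(default_factory=set)
    default_autonomy: str = "requires_human_review"


@dataclass
class AgentRole:
    """Role-scoped permissions, analogous to an employee's job description."""
    name: str
    allowed_actions: set[str]
    data_scopes: set[str]  # e.g. {"support_tickets"} rather than "everything"


class AgentRegistry:
    """One place to register agents and check authorization."""

    def __init__(self, policy: OrgPolicy):
        self.policy = policy
        self.agents: dict[str, AgentRole] = {}

    def register(self, agent_id: str, role: AgentRole) -> None:
        self.agents[agent_id] = role

    def is_authorized(self, agent_id: str, action: str, data_scope: str) -> bool:
        role = self.agents.get(agent_id)
        if role is None:                             # unregistered agent = shadow IT
            return False
        if action in self.policy.forbidden_actions:  # org-wide rules always win
            return False
        return action in role.allowed_actions and data_scope in role.data_scopes


# Usage: a policy change from legal is made once, centrally, and every agent's
# next authorization check reflects it -- no per-team reconfiguration.
registry = AgentRegistry(OrgPolicy())
registry.register(
    "support-bot-1",
    AgentRole("customer_support", {"reply_to_ticket", "issue_refund"}, {"support_tickets"}),
)
print(registry.is_authorized("support-bot-1", "issue_refund", "support_tickets"))  # True
registry.policy.forbidden_actions.add("issue_refund")  # legal bans autonomous refunds
print(registry.is_authorized("support-bot-1", "issue_refund", "support_tickets"))  # False
```

A real deployment would hook something like this into whatever framework actually runs the agents, but the shape is the same either way: a role plus data scope per agent, one inherited constitution, and one registry you can audit.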
Democratic AI Governance: The Real Solution
In an era where artificial intelligence (AI) is transforming industries and societies, the need for effective govern…