Archive

Discover and discuss technology tools

Explore the Tiscuss archive by category or keyword, then jump into conversations around what matters most.

Filtered: AI Infrastructure · query "Vision" · 2 results
AI Infrastructure

Nvidia Exec: AI Currently More Expensive Than Human Workers

Nvidia’s vice president of applied deep learning, Bryan Catanzaro, recently stated that for his team, “the cost of compute is far beyond the costs of the employees,” arguing that AI is currently more expensive than human workers. This challenges the narrative that widespread tech layoffs (including Meta’s planned cut of ~8,000 jobs and Microsoft’s voluntary buyouts) signal an imminent replacement of humans by AI. A 2024 MIT study supports this, finding that AI automation is economically viable in only 23% of roles where vision is central, while humans remain cheaper in the other 77%. Despite heavy AI investment (Big Tech has announced $740 billion in capital expenditures so far this year, a 69% increase over 2025), there is still no clear evidence of broad productivity gains or job displacement from AI. Instead, AI spending is driving up costs; some executives, like Uber’s CTO, say their budgets have already been “blown away.” Experts describe the situation as a short-term mismatch: high hardware, energy, and inference costs make AI less efficient than humans right now, though future improvements in infrastructure, model efficiency, and pricing models could tip the balance toward economic viability in the coming years.

Global · General · Apr 29, 2026
AI Infrastructure

AI Forensics: The Missing Link in AI Decision-Making

I work in AI security and compliance, and this bothers me: we are putting AI systems in front of decisions that change people’s lives (insurance claims, hiring, credit, defense applications), and when someone asks “wait, why did the system do that?” we basically have nothing that would hold up in a courtroom.

The explainability tools we have right now (SHAP, LIME, attention maps) are research tools. They’re not evidence. Researchers have shown you can build a model that actively discriminates while producing perfectly clean-looking explanations. These tools have unbounded error, they give different answers on different runs, and there’s no way for the other side’s lawyer to independently check the work. That’s a problem if you’re trying to meet Daubert standards.

And the regulatory side is moving just as fast. The EU AI Act has record-keeping requirements coming online. The FY26 NDAA has an AI cybersecurity framework provision with implementation due mid-2026. States are doing their own thing. Courts are starting to actually push back on AI evidence under FRE 702.

There is a ton of AI observability tooling out there, and it’s great for ops. There are governance platforms, great for policy. But when it comes to something that’s actually forensic grade, where opposing counsel is actively trying to tear it apart and a third party can independently verify what happened without just trusting the vendor, I’m not seeing it. What am I missing?
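The run-to-run instability the poster mentions is easy to demonstrate. Below is a minimal, purely illustrative sketch in plain Python — not the real SHAP or LIME libraries; both `black_box` and `local_importance` are invented for this example — of a LIME-style perturbation explainer. Explaining the same input twice with different random seeds yields different feature attributions:

```python
import random

# Hypothetical opaque model: we can only query predictions, not internals.
def black_box(x1, x2):
    return 1.0 if 0.8 * x1 + 0.5 * x2 * x2 > 1.0 else 0.0

def local_importance(x1, x2, seed, n=500, scale=0.5):
    """LIME-style sketch: perturb the input around (x1, x2) and score each
    feature by the covariance of its perturbations with the model's output.
    A crude stand-in for fitting a local linear surrogate."""
    rng = random.Random(seed)
    pts = [(x1 + rng.gauss(0, scale), x2 + rng.gauss(0, scale)) for _ in range(n)]
    ys = [black_box(a, b) for a, b in pts]
    mean_y = sum(ys) / n

    def cov(idx):
        mean_f = sum(p[idx] for p in pts) / n
        return sum((p[idx] - mean_f) * (y - mean_y) for p, y in zip(pts, ys)) / n

    return (cov(0), cov(1))

# Same instance, two seeds: the "explanation" changes between runs.
run_a = local_importance(1.0, 1.0, seed=1)
run_b = local_importance(1.0, 1.0, seed=2)
print(run_a)
print(run_b)
```

Because the perturbation samples are random, the attributions shift every run — which is exactly why such output, without reproducibility guarantees, is hard to defend as forensic evidence.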

Global · Developers · Apr 27, 2026