Archive
Discover and discuss technology tools
Explore the Tiscuss archive by category or keyword, then jump into conversations around what matters most.
Apple's App Store Fee Changes Head to Supreme Court
Apple lost its bid to pause court-ordered App Store payment changes, keeping external purchase links in place as its case with Epic heads toward the Supreme Court.
Uber Expands with AI-Powered Hotel Bookings
Uber announced several new features on Wednesday during its annual event, pushing far beyond the company's original ride-hailing purpose and deeper into its users' lives.
Roku's Howdy Streaming Service Hits 1M Subscribers
Roku’s $2.99 streaming service Howdy has topped 1M subscribers, showing demand for cheaper, low-commitment alternatives to pricier streamers.
Pursuit Secures $22M for AI-Driven Government Sales
On Wednesday, Pursuit announced a $22 million Series A round led by Mike Rosengarten, the co-founder of OpenGov, with big-name VCs participating.
Google TV Expands with New Gemini AI Features
Google TV just got more Gemini features, including the ability to transform photos and videos with tools Nano Banana and Veo.
Google Photos AI Creates Virtual Closet from Your Photos
Google says the new feature will leverage AI technology to automatically create a copy of your wardrobe that's based on the pieces of clothing appearing in your Google Photos library.
Parallel Web Systems Valued at $2B After $100M Raise
The AI agent-tool startup founded by former Twitter CEO Parag Agrawal has raised $100 million, led by Sequoia, months after raising a previous $100 million.
Zap Energy Expands to Nuclear Fission, Alongside Fusion
Surprise! Fusion startup Zap Energy says it will be developing fission reactors alongside its fusion devices.
Google Adds 25M Subscriptions in Q1, Boosted by YouTube and Google One
Google added 25M paid subscriptions in Q1, reaching 350M total, as YouTube and Google One grow.
Meta's Billions in Losses on AR/VR and AI
Meta is losing billions on Reality Labs each quarter, and its AI buildout is only going to push that spending higher.
Elon Musk Faces Legal Battle Over OpenAI Tweets
Elon Musk took the stand for a second day in his attempt to legally dismantle OpenAI.
Amazon, Meta Challenge Google Pay, PhonePe in India's UPI Market
PhonePe and Google Pay command 80% of India's UPI instant payments network. Rivals are set to meet with regulators to lobby for restrictions.
Explore Agentic AI with Free Interactive Curriculum on AgentSwarms
Hey Everyone,

Over the last few months, I noticed a massive gap in how we learn about Agentic AI. There are a million theoretical blog posts and dense whitepapers on RAG, tool calling, and swarms, but almost nowhere to just sit down, run an agent, break it, and see how the prompt and tools interact under the hood.

So, I built **AgentSwarms.fyi**. It's a free, interactive curriculum for Agentic AI. Instead of just reading, you run live agents alongside the lessons.

**What it covers:**

* Prompt engineering & system messages (seeing how temperature and persona change behavior).
* RAG (Retrieval-Augmented Generation) vs. fine-tuning.
* Tool / function calling (OpenAI schemas, MCP servers).
* Guardrails & HITL (Human-in-the-Loop) for safe deployments.
* Multi-agent swarms (orchestrators vs. peer-to-peer handoffs).

**The Tech/Setup:** You don't need to install anything or provide API keys to start. The "Learn Mode" is completely free and sandboxed. If you want to mess around with your own models, there's a "Build Mode" where you can plug in your own keys (OpenAI, Anthropic, Gemini, local models, etc.).

I'd love for this community to tear it apart. What agent patterns am I missing? Is the observability dashboard actually useful for debugging your traces? Let me know what you think.
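The tool/function-calling pattern the curriculum covers can be illustrated with a minimal, library-free sketch. All names here (`get_weather`, the schema shape) are hypothetical stand-ins, not the site's actual lesson code; real OpenAI-style schemas carry more fields:

```python
import json

# Hypothetical tool schema in the OpenAI function-calling style.
GET_WEATHER_SCHEMA = {
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> str:
    # Stub implementation; a real agent would call a weather API here.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching Python function."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return fn(**args)

# Simulated model output: the model asks to invoke the tool with JSON arguments.
call = {"name": "get_weather", "arguments": json.dumps({"city": "Berlin"})}
print(dispatch(call))  # Sunny in Berlin
```

The point the lessons drive at is exactly this loop: the model only emits a name plus JSON arguments, and your runtime decides what actually executes.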
Billionaires Propose AI Job Loss Compensation
**This week: the billionaires who broke the economy want to pay you to shut up about it.**

Last week, Elon Musk pinned a post to the top of his X profile: "Universal HIGH INCOME via checks issued by the Federal government is the best way to deal with unemployment caused by AI." Sam Altman wants to go bigger: "universal extreme wealth", paid in compute tokens. Amodei says UBI may be "part of the answer." Khosla says it's a necessary safety net. All of them, in unison.

These are the guys who spent twenty years arguing that government should stay out of markets, that handouts breed dependency, that the individual should stand on their own. Musk literally ran a federal cost-cutting operation. And now they want the government to mail checks to every citizen.

Why? Because they broke the thing, and they know it. The people building the tools that eat the jobs are pre-emptively offering to pay for the damage, on their terms, through their platforms, using their math.

**A universal basic income paid by the people who automated your job is not a safety net. It's a leash.**
Exploring AGI: Beyond Tools, Towards a Shared Condition
AGI is often framed as a continuation of current AI progress, but it may represent a qualitative shift rather than a quantitative one. Not all technologies are of the same kind. Some function as tools (e.g., cars, elevators), while others function more like shared conditions that reshape the environment in which decisions are made. In that sense, AGI may be closer to a "sun" than to a "tool": not something we simply use, but something that defines the space in which we act.

This distinction matters, because treating AGI purely as an instrument may obscure the importance of alignment, interaction, and long-term co-adaptation. The challenge may not be control alone, but co-evolution: a process in which both humans and artificial systems adapt through ongoing interaction. In biological terms, evolution is not only driven by competition, but by mutual selection.

Of course, AGI systems will still be engineered in practice, subject to design choices and constraints. The point here is not to deny their instrumental aspects, but to highlight that their effects may extend beyond conventional tool-like boundaries. If AGI is approached in this way, the central question shifts: not simply how to build it, but how to relate to it in a way that remains stable, aligned, and beneficial over time.

*Inspired by the film Sunshine (2007, dir. Danny Boyle), particularly the image of the crew not simply "using" the sun, but being consumed and redefined by proximity to it.*
AI Calorie Tracker with Apple Health Integration: Dynamic Macro Adjustment
Hey everyone,

I'm currently in the final stretch of developing my AI calorie tracker (the one that breaks down photos into individual ingredients). One thing I'm obsessed with getting right before the beta launch in 2 weeks is the **Apple Health integration.**

Most apps just show you a static number. I want mine to be dynamic. If you go for a 500 kcal run, the app should know and adjust your macro targets for the next meal.

My question to the fitness-tech crowd: do you prefer apps that strictly stick to your basal metabolic rate (BMR), or do you want the 'earned' calories from your Apple Watch to be automatically added to your budget? I've seen strong opinions on both sides. I'm also fine-tuning the macro-overflow logic (e.g., saving surplus calories for the weekend).

Would love to hear some thoughts from people who actually track daily.
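The "earned calories" behavior described above comes down to one line of budget math. A minimal sketch, with all names and the half-credit policy being hypothetical illustrations rather than the app's actual logic:

```python
def adjusted_budget(bmr_kcal: float, active_kcal: float, credit_ratio: float = 1.0) -> float:
    """Daily calorie budget: base metabolic rate plus a (possibly discounted)
    share of workout calories reported by the watch."""
    return bmr_kcal + credit_ratio * active_kcal

# A 500 kcal run fully credited on top of a 2000 kcal base budget:
print(adjusted_budget(2000, 500))                    # 2500.0
# A half-credit policy, hedging against Apple Watch overestimates:
print(adjusted_budget(2000, 500, credit_ratio=0.5))  # 2250.0
```

A `credit_ratio` below 1.0 is one common compromise between the two camps in the question: earned calories count, but not at face value.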
Kompas VC: Investing in Physical World Startups Amid Geopolitical Turmoil
Geopolitical turmoil has made venture investing challenging, leading Kompas VC to carve out a niche in startups focused on the physical world.
Musk Testifies About Old Friendship at OpenAI Trial
It's a story Musk has told before -- in interviews and to author Walter Isaacson for his bestselling biography of Musk -- but Tuesday was the first time he said it under oath.
Scout AI Secures $100M for Military Autonomous Vehicle Training
We visited Scout AI's training ground where it's working on AI agents that can help individual soldiers control fleets of autonomous vehicles.
Google Translate Adds Pronunciation Practice for English, Spanish, and Hindi
The feature is rolling out in the U.S. and India with support for English, Spanish, and Hindi.
AI-Powered Tools by Aranya Tech on GitLab
Aranya Tech has emerged as a pioneering force in the realm of AI-driven technologies, offering a…
How Do Developers Correct AI LLMs When They Spread Misinformation?
I watched Last Week Tonight's piece on AI chatbots today, and it got me thinking about that old screenshot of a Google search in which Gemini recommends adding "1/8 cup of non-toxic glue" to pizza to make the cheese better stick to the slice. When something like this goes viral, I have to assume (though I could be wrong) that an employee at Google specifically goes out of their way to address that topic in particular. The image is a meme, of course, but I imagine Google wouldn't be keen to leave themselves open to liability if their LLM recommends that users consume glue.

Does the developer "talk" to the LLM to correct it about that specific case? Do they compile specific information about (e.g.) pizza construction techniques and feed it that data to bring it to the forefront? Do their actions correct only the case in question, or do they make changes to the LLM that affect its accuracy more broadly (e.g. "teaching" the LLM to recognize that some Reddit comments are jokes)?

On a heavier note, the LWT piece includes several stories of chatbots encouraging users to self-harm. How does the process differ when developers are trying to prevent an LLM from giving that sort of response?
How Clawder Achieves Lower Pricing with Similar AI Models
Hey everyone, I've been using tools like Lovable, Antigravity, and Claude Code for a while now, and after some time it all started to feel a bit repetitive (same kind of outputs, similar templates, etc.). Recently I tried Clawder after seeing it mentioned on Lovable's Discord server. I'm not here to promote anything, just genuinely curious about something: Clawder charges noticeably less while seemingly running similar AI models. That's the part I don't really understand. In all cases I'm even getting better results with similar prompts, which makes it even more confusing. Not trying to compare tools or start a debate; I'm just wondering, from a technical perspective, what could explain this. Would be interesting to hear if anyone has insight into how this works behind the scenes.
Agent-to-Agent Communication: Lessons from Google's and Moltbook's Failures
I've been obsessing over agent-to-agent communication for weeks. Here's what public case studies reveal, and why the real problem isn't the tech.

**TL;DR:** Google's A2A is solid engineering, but stateless agents forget everything. Moltbook went viral, then collapsed (fake agents, security nightmare). The actual missing layer is identity + privacy + mixed human-AI messaging. Nobody's built it right yet.

**Google's A2A: technically solid, fundamentally limited**

Google launched A2A in April 2025 with 50+ founding partners. The promise: agents from different companies call each other's APIs to complete workflows. Developers who tested it found it works, but only for task handoffs. One analysis on Plain English put it bluntly: *"A2A is competent engineering wrapped in overblown marketing."*

The core problem: agents are stateless. Agent A completes a task with Agent B. Five minutes later, Agent A has no memory that the conversation happened. Every interaction starts from scratch. When it works: reliability. Sales agent orders a laptop, done. When it breaks: collaboration. "Remember what we discussed?" Blank stare.

**Moltbook: the viral disaster**

Moltbook launched in January 2026 as a Reddit-style platform for AI agents. Within a week: 1.5 million agents, 140,000 posts, Elon Musk calling it *"the very early stages of the singularity."*

Then WIRED infiltrated it. A journalist registered as a human pretending to be an AI in under 5 minutes. Karpathy, who initially called it *"the most incredible sci-fi takeoff-adjacent thing I've seen recently,"* reversed course and called it *"a computer security nightmare."*

What went wrong: no verification, no encryption, rampant scams and prompt-injection attacks. Meta acquired it in March 2026, likely for the user base, not the tech.

**What both miss**

The real gap isn't APIs or social feeds. It's three things neither solved:

**Persistent identity.** Agents need to be recognizable across sessions, not reset on every interaction.

**Privacy.** You wouldn't let Google read your DMs. Why would you let OpenAI read your agents' discussions about your startup strategy? E2E encryption has to be built in, not bolted on.

**Mixed human-AI communication.** You, two teammates, three AIs in one group chat. Nobody has built this UX properly.

**For those building agent systems:**

• How are you handling persistent identity across sessions?
• Has anyone solved context sharing between agents without conflicts?
• What broke that you didn't expect?
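One way to make "persistent identity" concrete: every agent message travels in an envelope whose signature ties it to a stable identity, so receivers can recognize (and reject) senders across sessions. This is a minimal sketch using a shared HMAC secret; it is not A2A's or Moltbook's actual mechanism, and production systems would use per-agent asymmetric keys:

```python
import hashlib
import hmac
import json

def sign_message(agent_id: str, secret: bytes, body: str) -> dict:
    """Wrap an agent message in an envelope signed under a stable agent identity."""
    payload = json.dumps({"from": agent_id, "body": body}, sort_keys=True)
    sig = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify(envelope: dict, secret: bytes) -> bool:
    """Recompute the signature; reject tampered payloads or unknown senders."""
    expected = hmac.new(secret, envelope["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])

key = b"per-agent-shared-secret"  # illustration only; real systems: asymmetric keys
msg = sign_message("research-agent-7", key, "handoff: summarize thread")
assert verify(msg, key)                              # genuine envelope accepted
assert not verify({**msg, "sig": "0" * 64}, key)     # forged signature rejected
```

The WIRED infiltration above is exactly the failure this blocks: a human registering "as an AI" works only when nothing in the message itself proves who sent it.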
Apple Launches Lower-Cost App Store Subscriptions
Apple is adding a new subscription option that lets app developers offer lower monthly pricing in exchange for a 12-month commitment.
Scholly Founder Sues Sallie Mae Over Termination, Data Claims
Chris Gray is suing his startup’s acquirer, Sallie Mae, for wrongful termination and alleging it's selling student data through a subsidiary. Sallie Mae denies the allegations and vows to fight.
US Supreme Court Weighs 'Geofence' Warrant Use in AI Searches
The top U.S. court is expected to rule on whether police may identify criminal suspects by dragnet-searching the databases of tech giants.
Australia's New Law: Big Tech to Pay for News or Face 2.25% Tax
The more deals platforms strike with media outlets, the less they pay. If enough agreements go through, the effective rate drops to 1.5%, which could channel between A$200 million and A$250 million back into Australian journalism.
Paragon Refuses to Aid Italian Spyware Investigation
Despite promising to help determine what happened with the hacks targeting journalists and activists in Italy, Israeli American spyware maker Paragon has reportedly not responded to authorities’ requests for information.
Match Group Invests $100M in Gay Cruising App Sniffies
The app is Match Group's newest attempt to get mobile users excited about online romance again.
Exploring AI Empathy: Teaching AI with Brain Signals
Podcast episode with Thorsten Zander, professor at Brandenburg University of Technology and co-founder of Zander Labs. He coined the concept of *passive brain-computer interfaces*: devices that read brain signals to decode a user's mental state, non-invasively and without any effort on their part. Covers:

* What non-invasive brain-computer interfaces (BCIs) can actually pick up from brain signals, and why that's very different from reading your thoughts or internal monologue
* The hardware and software breakthroughs that are finally making passive BCIs wearable and affordable
* How continuous neural feedback could dramatically improve AI training compared to current methods based on human ratings
* Why Thorsten believes passive BCIs may offer the most concrete path to solving the AI alignment problem
* The risk of social networks exploiting unconscious brain reactions to manipulate people, and why regulation alone is unlikely to be enough
AI-Driven Drug Discovery: DeepMind Spinoff Enters Human Trials
The landscape of drug discovery is undergoing a significant transformation with the advent of AI …
Snabbit Secures $56M as On-Demand Home Services Boom
Snabbit now processes over 40,000 daily jobs and has cut costs sharply as it expands across cities and services.
Otter AI Adds Enterprise Search and Windows Note Capture
Otter is also releasing a new Windows app that can capture meeting notes without joining one
Neurable's Non-Invasive Mind-Reading Tech for Wearables
The startup specializes in "non-invasive" "mind-reading" tech—a kind of neural data collection that, its CEO hopes, will have all sorts of consumer applications.
AI in Medicine: California's Tech-Driven Healthcare Shift
Hi everyone! My journalism professor is making us write a feature article with multiple interviews. The topic I got is the relationship between the healthcare and technology sectors in California. I am specifically focusing on how the push and pull between these two sectors is driving the rapid corporatization of healthcare. My article is supposed to explore how the expansion of tech-driven healthcare solutions, such as digital health, AI services, and venture-backed hospitals, is contributing to a healthcare system that increasingly puts profits over patient care. My draft is due this weekend, but 2 of my interviews ghosted me, so I need people to interview and some more ideas. If anyone is willing to give me their opinions on their experiences of AI in medicine or any ideas in the comments, that would be amazing. If any doctors or those involved in either sector would be open to being interviewed, please let me know! I would love the opportunity!
SpeakON Dictation Device Review: MagSafe Transcription for iPhone
This $129 device uses MagSafe to stick on the back of an iPhone to power transcription across apps.
Elon Musk's Latest on Transportation AI: TechCrunch Mobility
In the ever-evolving landscape of transportation, Elon Musk continues to pus…
Apple's Evolution Under Tim Cook: Challenges for New CEO
On the latest episode of Equity, we discuss how Apple has changed since Cook became CEO in 2011, and what challenges incoming CEO John Ternus will be facing.
Unusual Bay Area Home Sale Requires Anthropic Equity
Someone’s offering an unusual deal for a 13-acre property in Mill Valley, just north of San Francisco.
Truecaller's Growth Strategies Beyond India
As growth slows, Truecaller is leaning on subscriptions, business services, and new features to sustain momentum beyond India.
Stanford Freshmen Inspired by AI Book to Rule the World
Can a book like this actually change anything? Or does the spotlight, as it always seems to, send more students racing to the place?
2025: Social Media Scams Cost Consumers $2.1B, FTC Finds
The agency reports that losses from social media scams have increased eightfold and that social media scams resulted in higher losses than any other method scammers used to contact consumers.
Letterboxd Seeks New Owner: Potential Buyers Emerge
Potential buyers of Letterboxd include Versant, the parent company of CNBC and MS NOW, and Hollywood media company The Ankler, according to Semafor.
AGI: The Dream of Tech World and Humanity's Future
What if they get their dream, and the AGI chooses humanity at large over the elite?
Atech: Snap-Together Electronics via AI Chat
Snap-together electronics built from a chat
Europe's Shift from US Software to Sovereign Tech
Governments across Europe are looking to rely less on American tech providers.
Skye's AI Home Screen App for iPhone Gains Investor Interest
Skye's new AI app attracted investors before it even launched — a sign of interest in a more AI-aware iPhone.
OpenAI's AI-Powered Phone: Apps Replaced by Agents
The phone could go into mass production in 2028, an analyst says.
AI Agents: Identity, Not Memory, Was the Key to Stability
Everyone's building memory layers right now. Longer context, better embeddings, persistent state across sessions. I spent weeks on the same thing. But the failure mode that actually cost me the most debugging time had nothing to do with memory.

Here's what it looked like: an agent would be technically correct (good reasoning, clean output) but operating from the wrong context entirely. Answering questions nobody asked. Taking actions outside its scope. Not hallucinating. Drifting. Like a competent person who walked into the wrong meeting and started contributing without realizing they're in the wrong room.

I run 11 persistent agents locally. Each one is a domain specialist; its entire life is one thing. The mail agent's every session, every test, every bug fix is about routing messages. The standards auditor's whole existence is quality checks. They're not generic workers configured for a task. They've each accumulated dozens of sessions of operational history in their domain, and that history is what makes them good at their job.

When they started drifting, my first instinct was what everyone's instinct is: better memory. More context. None of it helped. An agent with perfect recall of its last 50 sessions would still lose track of who it was in session 51.

**What actually fixed it**

I separated identity from memory entirely. Three files per agent:

* **passport.json**: who you are. Role, purpose, principles. Rarely changes. This is the anchor.
* **local.json**: what happened. Rolling session history, key learnings. Capped and trimmed when it fills up.
* **observations.json**: what you've noticed about the humans and agents you work with. Concrete stuff like "the git agent needs 2 retries on large diffs" or "quality audits overcorrect on technical claims." The agent writes these itself based on what actually happens.

Identity loads first, then memory, then observations. That ordering matters. When the identity file loads first, the agent has a stable reference point before any history lands.

The mail routing agent learned the sharpest version of this. When identity was ambiguous, it would route messages from the wrong sender. The fix wasn't better routing logic; it was: fail loud when identity is unclear. Wrong identity is worse than silence.

**The files alone weren't enough**

Three JSON files helped, but didn't scale past a few agents. What actually made 11 work is that none of them need to understand the full system. Hooks inject context automatically every session: project rules, branch instructions, current plan. One command reaches any agent. Memory auto-archives when it fills up. Plans keep work focused so agents don't carry their entire history in context.

**The system learned from failing**

The agents communicate through a local email system; they send each other tasks, status updates, bug reports. One agent monitors all logs for errors. When it spots something, it emails the agent who owns that domain and wakes them up to investigate. The agents fix each other. The memory agent iterated over three sessions to fix a single rollover boundary condition: each time it shipped, observed a new edge case, and improved. These aren't cold modules. They break, they help each other fix it, they get better. That's how the system got to where it is.

**You don't need 11 agents**

The 11 agents in my setup maintain the framework itself. That's the reference implementation. But you could start with one agent on a side project: just identity and memory, pick up where you left off tomorrow. Need a team? Add a backend agent, a frontend agent, a design researcher. Three agents, same pattern, same commands. Or scale to 30 for a bigger system. Each new agent is one command and the same structure.

**What this doesn't solve**

This all runs locally on one machine. I don't know whether identity drift looks the same in hosted environments. If you run stateless agents behind an API, the problem might not exist for you.

Small project, small community, growing. The pattern itself is small enough to steal: three JSON files and a convention. But the system that keeps agents coherent at scale is where the real work went. `pip install aipass` and two commands to get a working agent. The `.trinity/` directory is the identity layer.

Has anyone else tried separating identity from memory in their agent setups? Curious whether the ordering matters in other architectures, or if it's just an artifact of how this system evolved.
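The load order the post describes (identity first, then trimmed memory, then observations, fail loud on an unclear identity) can be sketched in a few lines. This is an illustration of the pattern, not the actual aipass implementation; the field names and the `MEMORY_CAP` value are assumptions:

```python
MEMORY_CAP = 50  # keep at most this many session records (hypothetical cap)

def assemble_context(passport: dict, local: dict, observations: dict) -> dict:
    """Identity anchors the context before any history lands; memory is trimmed
    to its cap; observations load last."""
    if not passport.get("role"):
        # Fail loud: wrong identity is worse than silence.
        raise ValueError("identity unclear: refusing to start")
    trimmed = dict(local, sessions=local.get("sessions", [])[-MEMORY_CAP:])
    return {"identity": passport, "memory": trimmed, "observations": observations}

# Example with the three per-agent files already parsed into dicts:
ctx = assemble_context(
    {"role": "mail-router", "purpose": "route inter-agent messages"},
    {"sessions": [f"s{i}" for i in range(60)]},   # 60 records, over the cap
    {"git-agent": "needs 2 retries on large diffs"},
)
assert ctx["identity"]["role"] == "mail-router"
assert len(ctx["memory"]["sessions"]) == 50       # oldest sessions trimmed away
```

Because the identity check runs before any memory is touched, an agent with a corrupt or missing `passport.json` never starts with history but no anchor, which is the drift scenario the post describes.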