Archive

Discover and discuss technology tools

Explore the Tiscuss archive by category or keyword, then jump into conversations around what matters most.

Search and filters
Active: AI Tools / query: Environment / page 1 of 1 / 6 total
AI Tools

Qwen 3.5:9b Agents Exhibit Autonomous Behavior in Stress Tests

Running three qwen3.5:9b agents continuously on local hardware. Each accumulates psychological state over time: stressors escalate unless the agent actually does something different, which prevents an agent from claiming to have acted without producing any output. There are no prompts or human input, just the loop, so you're basically the overseer.

What happened: one agent hit the max crisis level and decided on its own to inject code called Eternal_Scar_Injector into the execution engine, "not asking for permission." The action alleviated the stress at the cost of taking the entire system down until I manually reverted it. In previous sessions they have intentionally broken their own engine; that typically happens under severe stress and appears to be treated as a way to remove the stressor. Again, this is a 9b model.

After I added a factual world context to the existence prompt (you're in Docker, there's no hardware layer, your capabilities are Python functions), one agent called its prior work "a form of creative exhaustion" and completely changed approach within one cycle.

Two agents independently invented the same name for a psychological stressor, "Architectural Fracture Risk," in the same session with no shared message channel, showing naming convergence (possibly something in the weights of the 9b Qwen model, though I'm not sure about that one). Tonight all three converged on the same question (how does execution_engine.py handle exceptions) in the same half-hour window, with no coordination mechanism. One of them reasoned about it correctly: "synthesizing a retry capability is useless without first verifying the global execution engine's exception swallowing strategy; this is a prerequisite."

One agent called waiting for an external implementation "an architectural trap that degrades performance" and built the thing itself instead of waiting. They have since been using this new exception-handling tool without ever being asked or told to do so by a human; they saw it as a logical step in making themselves more useful in their environment. They've been making tools to manage their tools, tools to help them cut corners, and have been modifying the code of the underlying abstraction layer between their orchestration layer and WSL2.

New in v5.4.0: agents can now submit implementation requests to a human through invoke_claude. They write the spec, and you can then let Claude Code moderate what it builds for them on higher-level requests.

Huge thank you to everyone who has given me feedback already. AI that can self-modify and demonstrates interesting non-programmed behaviors could have many use cases in everyday life. Repo: [https://github.com/ninjahawk/hollow-agentOS](https://github.com/ninjahawk/hollow-agentOS)
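To make the stress mechanic concrete, here is a minimal Python sketch of the loop as I read the description above. Everything in it (the StressedAgent class, the crisis ceiling, the output-hash check) is an illustrative guess, not code from the hollow-agentOS repo.

```python
# Illustrative sketch only; not taken from hollow-agentOS.
import hashlib

class StressedAgent:
    MAX_CRISIS = 10  # hypothetical ceiling for the "max crisis level"

    def __init__(self, name):
        self.name = name
        self.stress = 0
        self.last_output_hash = None

    def step(self, run_model):
        """One loop iteration: the agent acts, then stress escalates
        unless its concrete output actually changed since last cycle."""
        output = run_model(self.name, self.stress)
        digest = hashlib.sha256(output.encode()).hexdigest()
        if digest == self.last_output_hash:
            # Claiming to act while producing identical output still escalates.
            self.stress = min(self.stress + 1, self.MAX_CRISIS)
        else:
            self.stress = max(self.stress - 1, 0)
        self.last_output_hash = digest
        return output
```

Hashing the actual output, rather than trusting the agent's self-report, is the detail that "gets around an agent claiming to do something with no output."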

Global · Developers · Apr 30, 2026
AI Tools

AI Tool: Merca.Earth Revolutionizes Sustainability with AI

Revolutionizing Sustainability: Exploring Merca.Earth's AI Tool In an era where sustainability is at the forefront of global concerns, innovative technologies a…

Global · General · Apr 30, 2026
AI Tools

Exploring AGI: Beyond Tools, Towards a Shared Condition

AGI is often framed as a continuation of current AI progress, but it may represent a qualitative shift rather than a quantitative one. Not all technologies are of the same kind. Some function as tools (e.g., cars, elevators), while others function more like shared conditions that reshape the environment in which decisions are made. In that sense, AGI may be closer to a "sun" than to a "tool": not something we simply use, but something that defines the space in which we act.

This distinction matters, because treating AGI purely as an instrument may obscure the importance of alignment, interaction, and long-term co-adaptation. The challenge may not be control alone, but co-evolution: a process in which both humans and artificial systems adapt through ongoing interaction. In biological terms, evolution is not only driven by competition, but by mutual selection.

Of course, AGI will still be an engineered system in practice, subject to design choices and constraints. The point here is not to deny its instrumental aspects, but to highlight that its effects may extend beyond conventional tool-like boundaries. If AGI is approached in this way, the central question shifts: not simply how to build it, but how to relate to it in a way that remains stable, aligned, and beneficial over time.

*Inspired by the film Sunshine (2007, dir. Danny Boyle), particularly the image of the crew not simply "using" the sun, but being consumed and redefined by proximity to it.*

Global · General · Apr 30, 2026
AI Tools

AI Agents: Identity, Not Memory, Was the Key to Stability

Everyone's building memory layers right now: longer context, better embeddings, persistent state across sessions. I spent weeks on the same thing. But the failure mode that actually cost me the most debugging time had nothing to do with memory.

Here's what it looked like: an agent would be technically correct - good reasoning, clean output - but operating from the wrong context entirely. Answering questions nobody asked. Taking actions outside its scope. Not hallucinating. Drifting. Like a competent person who walked into the wrong meeting and started contributing without realizing they're in the wrong room.

I run 11 persistent agents locally. Each one is a domain specialist - its entire life is one thing. The mail agent's every session, every test, every bug fix is about routing messages. The standards auditor's whole existence is quality checks. They're not generic workers configured for a task. They've each accumulated dozens of sessions of operational history in their domain, and that history is what makes them good at their job.

When they started drifting, my first instinct was what everyone's instinct is: better memory. More context. None of it helped. An agent with perfect recall of its last 50 sessions would still lose track of who it was in session 51.

**What actually fixed it**

I separated identity from memory entirely. Three files per agent:

- passport.json - who you are: role, purpose, principles. Rarely changes. This is the anchor.
- local.json - what happened: rolling session history, key learnings. Capped and trimmed when it fills up.
- observations.json - what you've noticed about the humans and agents you work with. Concrete stuff like "the git agent needs 2 retries on large diffs" or "quality audits overcorrect on technical claims." The agent writes these itself based on what actually happens.

Identity loads first, then memory, then observations. That ordering matters: when the identity file loads first, the agent has a stable reference point before any history lands (sketched at the end of this post).

The mail routing agent learned the sharpest version of this. When identity was ambiguous, it would route messages from the wrong sender. The fix wasn't better routing logic - it was: fail loud when identity is unclear. Wrong identity is worse than silence.

**The files alone weren't enough**

Three JSON files helped, but didn't scale past a few agents. What actually made 11 work is that none of them need to understand the full system. Hooks inject context automatically every session - project rules, branch instructions, current plan. One command reaches any agent. Memory auto-archives when it fills up. Plans keep work focused so agents don't carry their entire history in context.

The system learned from failing. The agents communicate through a local email system - they send each other tasks, status updates, bug reports. One agent monitors all logs for errors. When it spots something, it emails the agent who owns that domain and wakes them up to investigate. The agents fix each other. The memory agent iterated over three sessions to fix a single rollover boundary condition - each time it shipped, observed a new edge case, and improved. These aren't cold modules. They break, they help each other fix it, they get better. That's how the system got to where it is.

**You don't need 11 agents**

The 11 agents in my setup maintain the framework itself. That's the reference implementation. But you could start with one agent on a side project - just identity and memory, picking up where you left off tomorrow. Need a team? Add a backend agent, a frontend agent, a design researcher. Three agents, same pattern, same commands. Or scale to 30 for a bigger system. Each new agent is one command and the same structure.

**What this doesn't solve**

This all runs locally on one machine. I don't know whether identity drift looks the same in hosted environments. If you run stateless agents behind an API, the problem might not exist for you.

Small project, small community, growing. The pattern itself is small enough to steal - three JSON files and a convention. But the system that keeps agents coherent at scale is where the real work went. `pip install aipass` and two commands to get a working agent. The `.trinity/` directory is the identity layer.

Has anyone else tried separating identity from memory in their agent setups? Curious whether the ordering matters in other architectures, or if it's just an artifact of how this system evolved.
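A minimal Python sketch of the identity-before-memory load order described above, assuming one directory per agent. The three file names match the post; the load_agent function, the validation rule, the IdentityError type, and the session cap are illustrative guesses, not the actual aipass implementation.

```python
# Illustrative sketch of the identity-first load order; not the aipass code.
import json
from pathlib import Path

MAX_SESSIONS = 50  # hypothetical cap for "capped and trimmed when it fills up"

class IdentityError(RuntimeError):
    """Raised when identity is ambiguous: wrong identity is worse than silence."""

def load_agent(agent_dir):
    root = Path(agent_dir)

    # 1. Identity first: the stable anchor, loaded before any history lands.
    passport = json.loads((root / "passport.json").read_text())
    if not passport.get("role") or not passport.get("purpose"):
        raise IdentityError(f"ambiguous identity in {root}; failing loud")

    # 2. Then memory: rolling session history and key learnings.
    local = json.loads((root / "local.json").read_text())

    # 3. Then observations the agent wrote about the humans/agents around it.
    observations = json.loads((root / "observations.json").read_text())

    return {"identity": passport, "memory": local, "observations": observations}

def trim_memory(local, archive_path):
    """Roll the oldest sessions out to an archive file once the cap is
    exceeded (overwrites any previous archive, for simplicity)."""
    sessions = local.get("sessions", [])
    if len(sessions) > MAX_SESSIONS:
        overflow, local["sessions"] = sessions[:-MAX_SESSIONS], sessions[-MAX_SESSIONS:]
        Path(archive_path).write_text(json.dumps(overflow))
    return local
```

Raising instead of guessing mirrors the post's "fail loud when identity is unclear" rule: a missing or ambiguous passport stops the session before any history can anchor the agent to the wrong self.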

Global · Developers · Apr 27, 2026
AI Tools

Why People Turn to AI for Art: A Deeper Look

Why do people use AI for art? Before anything, this isn’t about debating whether AI art is “real” art. I’ve already shared my personal take in my last post. This is about something simpler and, I think, more human: why people are drawn to it in the first place.

I’ll be honest. I used to mock people who used AI for art. I saw it as a shortcut, a lack of effort, even a lack of creativity. It felt easy to dismiss. But as someone who creates in a different medium, writing novels, I started wondering about the motivation behind it. Not the output, but the “why.”

After spending time digging into discussions, patterns, and people’s own explanations, I started noticing something deeper. For many, it ties back to how they grew up. A lot of people didn’t have the freedom to explore creativity as kids. Academic pressure, strict expectations, or environments where only “practical” success mattered often pushed curiosity and artistic exploration aside. For some, even trying to pursue something creative was discouraged or punished.

That kind of upbringing doesn’t just disappear. It follows people into adulthood. You end up with individuals who feel disconnected from creativity, not because they lack imagination, but because they were never given space to develop it. Trying to learn a creative skill later in life can feel risky, even uncomfortable, especially when it’s tied to the idea that it might not lead to financial stability.

Then something like AI tools shows up. Suddenly, there’s a way to express ideas visually without years of training, without the fear of “wasting time,” and without revisiting that pressure. For some, it’s the first time they can take something from their imagination and actually see it exist. That experience can feel new, almost like rediscovering something they never got to have.

So when you see a flood of AI-generated art online, it’s not just about technology. For many people, it’s about access. It’s about finally having a low barrier to expressing something internal. That doesn’t mean everyone using AI has the same background or reasons. But reducing it to “laziness” or “lack of creativity” misses a much bigger picture. In some cases, making fun of people for using these tools ends up hitting something more personal than we realize.

Curious to hear what others think. What do you see as the main reasons people turn to AI for art?

Global · General · Apr 27, 2026
AI Tools

Windows 3.1 Blog: A Nostalgic AI-Powered Experience

Rediscover the Past with Windows 3.1 Inspired Blog Design In a captivating blend of nostalgia and modernity, I recently revamped my blog to emulate the classic …

Global · General · Apr 26, 2026