Archive
Discover and discuss technology tools
Explore the Tiscuss archive by category or keyword, then jump into conversations around what matters most.
Trading System V2: AI's Role in Deterministic Execution
Thanks to the incredible feedback on my last post, I'm officially moving away from the "distributed veto" system (where 8 LLM agents argue until they agree to trade). For v2, I am implementing a strict state machine using a deterministic runtime (llm-nano-vm). The new rule is simple: Python owns the math and the execution contract; the LLM only interprets the context. I've sketched out a 5-module architecture, but before I start coding the new Python feature extractors, I want to sanity-check the exact roles I'm giving to the AI.

Here is the blueprint:

1. The HTF Agent (Higher Timeframe, D1/H4). Python: extracts structural levels, BOS/CHoCH, and premium/discount zones. LLM role: reads this hard data to determine the institutional narrative and select the most relevant Draw on Liquidity (DOL).
2. The Structure Agent (H1). Python: identifies all valid Order Blocks (OB) and Fair Value Gaps (FVG) with displacement. LLM role: selects the highest-probability Point of Interest (POI) based on the HTF Agent's narrative.
3. The Trigger Agent (M15/M5). 100% Python (no LLM): purely deterministic. It checks for liquidity sweeps and LTF CHoCH inside the selected POI.
4. The Context Agent. LLM role: cross-references active killzones, news blackouts, and currency correlations to either greenlight or veto the setup.
5. The Risk Agent. 100% Python (no LLM): calculates entry, SL, TP, expected value (EV), and position sizing.

The state machine will only transition to EXECUTING if the deterministic Trigger and Risk modules say yes. The LLMs are basically just "context providers" for the state machine.

My questions for the quants/architects here:

- Does this division of labor make sense? Am I giving the LLMs too much or too little responsibility in steps 1 and 2?
- By making the Trigger layer (M15/M5) 100% deterministic, am I losing the core advantage of having an AI, or is this the standard way to avoid execution paralysis?
- Would you merge the HTF and Structure agents to reduce token constraints/hallucinations, or is separating them better for debugging?

Would love to hear your thoughts before I dive into the codebase.
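The core transition rule described above (EXECUTING only when the deterministic Trigger and Risk modules both pass, with the LLM limited to a veto) can be sketched in plain Python. This is a minimal illustration, not the author's code: the state names, `Signal` fields, and the simple EV formula (EV = p·RR − (1 − p), in R-multiples) are all assumptions for the sketch.

```python
from enum import Enum, auto
from dataclasses import dataclass

class State(Enum):
    IDLE = auto()
    ARMED = auto()       # a POI has been selected; waiting for a trigger
    EXECUTING = auto()

@dataclass
class Signal:
    sweep: bool          # liquidity sweep detected inside the POI
    ltf_choch: bool      # lower-timeframe CHoCH confirmed

def trigger_ok(sig: Signal) -> bool:
    """Trigger Agent: 100% deterministic, both conditions must hold."""
    return sig.sweep and sig.ltf_choch

def risk_ok(entry: float, sl: float, tp: float, win_rate: float) -> bool:
    """Risk Agent: deterministic EV gate in R-multiples (hypothetical formula)."""
    risk = abs(entry - sl)
    reward = abs(tp - entry)
    if risk == 0:
        return False
    rr = reward / risk
    ev = win_rate * rr - (1 - win_rate)
    return ev > 0

def step(state: State, sig: Signal, entry: float, sl: float, tp: float,
         win_rate: float, context_veto: bool = False) -> State:
    """One state-machine tick. The LLM's output (context_veto) can only
    block a transition; it can never force one."""
    if state is State.ARMED:
        if not context_veto and trigger_ok(sig) and risk_ok(entry, sl, tp, win_rate):
            return State.EXECUTING
        return State.IDLE
    return state
```

The key property to preserve, whatever the final module layout, is visible in `step`: the LLM's contribution enters only as a boolean veto, so a hallucinated narrative can cost you a trade but can never place one.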
Google Expands Pentagon's AI Access After Anthropic's Refusal
After Anthropic refused to allow the DoD to use its AI for domestic mass surveillance and autonomous weapons, Google has signed a new contract with the department.
Wafaa.io: AI Tool for Secure Digital Contracts in Minutes
Create secure digital contracts in minutes
Meta's Space Solar Power Deal with Overview Energy
Overview Energy's first contract with Meta is a small step toward a future of space-based solar power.
Navigating AI Agent Governance: A Growing Organizational Challenge
Something I've been thinking about that doesn't get discussed enough outside of technical circles: the organizational and safety implications of uncoordinated AI agent deployment.

Companies are shipping agents fast. Customer service agents, coding agents, data analysis agents, internal ops agents. Each team builds their own. Each agent gets its own rules, its own permissions, its own behavior. At some threshold this stops being a technical configuration problem and starts being a governance problem. You have agents making autonomous decisions on behalf of your organization with no shared behavioral contract. No unified view of what your AI systems are authorized to do.

Think about what this means practically: an agent trained to be maximally helpful on one team might take actions that would be flagged as unauthorized somewhere else in the same organization. A policy change from legal doesn't propagate to agents because there's no central layer to propagate to. Nobody knows which agents have access to what data. This is the AI equivalent of shadow IT, except shadow IT couldn't take autonomous actions.

What's the right mental model for governing a fleet of AI agents? Treat each agent like an employee with a defined role and access policy? Build an org chart for agents? Create a behavioral constitution that all agents inherit? Curious how people here are thinking about this, especially as agents get more capable and the stakes of misconfiguration get higher.
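One of the mental models floated above (each agent as an employee with a defined role and access policy, inheriting a central behavioral constitution) can be made concrete with a tiny sketch. Everything here is illustrative: `OrgPolicy`, `AgentProfile`, and the action names are hypothetical, not any real framework's API. The point is the shared reference: when legal updates the central policy, every agent's authorization check reflects it immediately, with no per-agent redeployment.

```python
from dataclasses import dataclass, field

@dataclass
class OrgPolicy:
    """Central 'behavioral constitution': actions forbidden org-wide."""
    forbidden_actions: set = field(default_factory=set)

@dataclass
class AgentProfile:
    """One agent = a role plus an allowlist, bound to the shared policy."""
    name: str
    role: str
    allowed_actions: set
    policy: OrgPolicy  # shared reference: a policy change propagates instantly

    def authorized(self, action: str) -> bool:
        # Allowed only if the role permits it AND the org doesn't forbid it
        return (action in self.allowed_actions
                and action not in self.policy.forbidden_actions)

# Illustrative usage: legal revokes "refund" org-wide in one place
policy = OrgPolicy()
support = AgentProfile("support-bot", "customer_service", {"reply", "refund"}, policy)
before = support.authorized("refund")          # role allows it, org allows it
policy.forbidden_actions.add("refund")         # one central change...
after = support.authorized("refund")           # ...and the agent loses it
```

The same two-layer check (team-level allowlist intersected with an org-level denylist) is roughly how shadow IT eventually got tamed with centralized IAM, which suggests the analogy in the post cuts both ways.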