Archive
Discover and discuss technology tools
Explore the Tiscuss archive by category or keyword, then jump into conversations around what matters most.
HuggingFaceTB/nanowhale-100m: Revolutionary AI Tool for Advanced Language Understanding
Revolutionizing Language Understanding: A Deep Dive into HuggingFaceTB/nanowhale-100m The landscape of artificial intelligence (AI) is continually evolving, and…
Trading System V2: AI's Role in Deterministic Execution
Thanks to the incredible feedback on my last post, I'm officially moving away from the "distributed veto" system (where 8 LLM agents argue until they agree to trade). For v2, I'm implementing a strict state machine on a deterministic runtime (llm-nano-vm). The new rule is simple: Python owns the math and the execution contract; the LLM only interprets context.

I've sketched out a 5-module architecture, but before I start coding the new Python feature extractors, I want to sanity-check the exact roles I'm giving to the AI. Here is the blueprint:

1. The HTF Agent (Higher Timeframe - D1/H4)
   - Python: extracts structural levels, BOS/CHoCH, and premium/discount zones.
   - LLM role: reads this hard data to determine the institutional narrative and select the most relevant Draw on Liquidity (DOL).
2. The Structure Agent (H1)
   - Python: identifies all valid Order Blocks (OB) and Fair Value Gaps (FVG) with displacement.
   - LLM role: selects the highest-probability Point of Interest (POI) based on the HTF Agent's narrative.
3. The Trigger Agent (M15/M5)
   - 100% Python (NO LLM): purely deterministic. It checks for liquidity sweeps and LTF CHoCH inside the selected POI.
4. The Context Agent
   - LLM role: cross-references active killzones, news blackouts, and currency correlations to either greenlight or veto the setup.
5. The Risk Agent
   - 100% Python (NO LLM): calculates Entry, SL, TP, Expected Value (EV), and position sizing.

The state machine will only transition to EXECUTING if the deterministic Trigger and Risk modules both say yes. The LLMs are basically just "context providers" for the state machine.

My questions for the quants/architects here:

- Does this division of labor make sense? Am I giving the LLMs too much or too little responsibility in steps 1 and 2?
- By making the Trigger layer (M15/M5) 100% deterministic, am I losing the core advantage of having an AI, or is this the standard way to avoid execution paralysis?
- Would you merge the HTF and Structure agents to reduce token constraints/hallucinations, or is separating them better for debugging?

Would love to hear your thoughts before I dive into the codebase.
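To make the "Python owns the execution contract" idea concrete, here is a minimal sketch of the deterministic gate described above: the Risk Agent's EV and position-sizing math, and a transition function where only the Python-side Trigger and Risk checks can move the machine into EXECUTING. All names (`State`, `RiskPlan`, `risk_agent`, `transition`), the fixed-fractional sizing rule, and the EV-in-R-multiples formula are illustrative assumptions, not part of the original blueprint, and the LLM Context Agent is reduced to a boolean veto flag for simplicity.

```python
from dataclasses import dataclass
from enum import Enum, auto


class State(Enum):
    SCANNING = auto()          # HTF/Structure agents building the narrative
    AWAITING_TRIGGER = auto()  # POI selected, waiting on M15/M5 confirmation
    EXECUTING = auto()         # deterministic gate passed
    VETOED = auto()            # Context Agent (or negative EV) killed the setup


@dataclass
class RiskPlan:
    entry: float
    stop: float
    target: float
    ev: float    # expected value in R-multiples
    size: float  # position size in units


def risk_agent(entry: float, stop: float, target: float,
               win_rate: float, equity: float,
               risk_pct: float = 0.01) -> RiskPlan:
    """100% Python Risk Agent: EV and fixed-fractional sizing (assumed rule)."""
    risk = abs(entry - stop)
    reward = abs(target - entry)
    rr = reward / risk
    ev = win_rate * rr - (1.0 - win_rate)  # EV per trade, in R
    size = (equity * risk_pct) / risk      # a full stop-out loses risk_pct of equity
    return RiskPlan(entry, stop, target, ev, size)


def transition(state: State, trigger_ok: bool,
               context_ok: bool, plan: RiskPlan) -> State:
    """Deterministic transition: the LLM can only veto, never force execution."""
    if state is State.AWAITING_TRIGGER:
        if not context_ok:
            return State.VETOED
        if trigger_ok and plan.ev > 0:
            return State.EXECUTING
    return state
```

For example, a long with entry 1.10, stop 1.09, target 1.13 and an assumed 40% win rate gives a 3R target, a positive EV of 0.6R, and only then does `transition` allow EXECUTING. This keeps the LLM outputs strictly advisory: they feed `context_ok` (and upstream POI selection), but the math that arms the trade never passes through a model.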
Google TV Expands with New Gemini AI Features
Google TV just got more Gemini features, including the ability to transform photos and videos with tools Nano Banana and Veo.
NVIDIA Nemotron 3 Nano: 30B Parameter AI Model Released
NVIDIA Unveils Nemotron 3 Nano: A 30B Parameter AI Model NVIDIA has introduced the Nemotron 3 Nano, a state-of-the-art AI model boasting 30 billion parameters. …
Nvidia's Nemotron-3 Nano Omni: 30B A3B Reasoning BF16
Nvidia's Nemotron 3 Nano Omni: 30B A3B Reasoning with BF16 The Nvidia Nemotron 3 Nano Omni, branded as a high-performance reasoning model packed with 30 billion…