strat-chatbot/tests/fakes/fake_llm.py
Cal Corum c3218f70c4 refactor: hexagonal architecture with ports & adapters, DI, and test-first development
Domain layer (zero framework imports):
- domain/models.py: pure dataclasses (RuleDocument, RuleSearchResult,
  Conversation, ChatMessage, LLMResponse, ChatResult)
- domain/ports.py: ABC interfaces (RuleRepository, LLMPort,
  ConversationStore, IssueTracker); LLMPort is sketched after this list
- domain/services.py: ChatService orchestrates Q&A flow using only ports
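
A rough sketch of the port contract the fake below implements. The LLMPort
signature and the LLMResponse fields are taken from fake_llm.py further down;
RuleSearchResult fields other than rule_id are assumptions:

from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional


@dataclass
class RuleSearchResult:
    rule_id: str
    text: str     # assumed field; only rule_id is visible in fake_llm.py
    score: float  # assumed field


@dataclass
class LLMResponse:
    answer: str
    cited_rules: list[str]
    confidence: float
    needs_human: bool


class LLMPort(ABC):
    """Port the domain depends on; OpenRouterLLM and FakeLLM implement it."""

    @abstractmethod
    async def generate_response(
        self,
        question: str,
        rules: list[RuleSearchResult],
        conversation_history: Optional[list[dict[str, str]]] = None,
    ) -> LLMResponse: ...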

Outbound adapters (implement domain ports):
- adapters/outbound/openrouter.py: OpenRouterLLM with persistent httpx
  client, robust JSON parsing, regex citation fallback
- adapters/outbound/sqlite_convos.py: SQLiteConversationStore with
  async_sessionmaker, timezone-aware datetimes, cleanup support
- adapters/outbound/gitea_issues.py: GiteaIssueTracker with markdown
  injection protection (fenced code blocks)
- adapters/outbound/chroma_rules.py: ChromaRuleRepository with clamped
  similarity scores (this and the fencing above are sketched after this list)
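
The two defensive techniques named above, sketched with hypothetical helper
names; the real adapters may structure this differently:

import re


def clamp_score(raw: float) -> float:
    """Clamp a similarity score into [0.0, 1.0].

    Depending on the distance metric, 1 - distance can fall outside the range.
    """
    return max(0.0, min(1.0, raw))


def fence_user_text(text: str) -> str:
    """Neutralize markdown in untrusted text by wrapping it in a fenced block.

    The fence is grown past the longest backtick run inside the text, so
    attacker-controlled input cannot close the block early.
    """
    longest_run = max((len(m.group()) for m in re.finditer(r"`+", text)), default=0)
    fence = "`" * max(3, longest_run + 1)
    return f"{fence}\n{text}\n{fence}"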

Inbound adapter:
- adapters/inbound/api.py: thin FastAPI router with input validation
  (max_length constraints), proper HTTP status codes (503 for missing LLM);
  sketched below
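
A minimal sketch of that router; the route path, length caps, field names,
and the service lookup on app.state are all assumptions:

from typing import Optional

from fastapi import APIRouter, HTTPException, Request
from pydantic import BaseModel, Field

router = APIRouter()


class ChatRequest(BaseModel):
    question: str = Field(max_length=2000)  # reject oversized input at the edge
    conversation_id: Optional[str] = Field(default=None, max_length=64)


@router.post("/chat")
async def chat(body: ChatRequest, request: Request) -> dict:
    service = getattr(request.app.state, "chat_service", None)
    if service is None:
        # The app is running but no LLM was wired in: 503, not 500.
        raise HTTPException(status_code=503, detail="LLM backend not configured")
    result = await service.ask(body.question, body.conversation_id)  # assumed method
    return {"answer": result.answer}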

Configuration & wiring:
- config/settings.py: Pydantic v2 SettingsConfigDict (no module-level singleton)
- config/container.py: create_app() factory with lifespan-managed DI (sketched below)
- main.py: minimal entry point
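
Roughly, the settings class and factory fit together like this; settings
fields, adapter constructor arguments, and the router export name are
assumptions based on the module names above:

from contextlib import asynccontextmanager
from typing import Optional

from fastapi import FastAPI
from pydantic_settings import BaseSettings, SettingsConfigDict

from adapters.inbound.api import router as api_router
from adapters.outbound.openrouter import OpenRouterLLM
from adapters.outbound.sqlite_convos import SQLiteConversationStore
from domain.services import ChatService


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")
    openrouter_api_key: Optional[str] = None  # assumed field names
    database_path: str = "conversations.db"


def create_app(settings: Optional[Settings] = None) -> FastAPI:
    settings = settings or Settings()

    @asynccontextmanager
    async def lifespan(app: FastAPI):
        # Wire adapters to ports once at startup; release them at shutdown.
        llm = OpenRouterLLM(api_key=settings.openrouter_api_key)
        store = SQLiteConversationStore(settings.database_path)
        app.state.chat_service = ChatService(llm=llm, conversations=store)
        yield
        await llm.close()  # assumes the adapter exposes an async close

    app = FastAPI(lifespan=lifespan)
    app.include_router(api_router)
    return app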

Test infrastructure (90 tests, all passing):
- tests/fakes/: in-memory implementations of all 4 ports
- tests/domain/: 26 tests for models and ChatService
- tests/adapters/: 64 tests for all adapters using fakes/mocks
- No real API calls, no model downloads, no disk I/O in fast tests
  (a representative test is sketched after this list)
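
A representative fast test wiring the fakes into ChatService. FakeLLM is the
real fake shown below; the fake repository, the ChatService constructor, and
the result fields are assumptions:

import pytest

from domain.services import ChatService
from tests.fakes.fake_llm import FakeLLM
from tests.fakes.fake_rules import FakeRuleRepository  # assumed module/name


@pytest.mark.asyncio  # requires pytest-asyncio
async def test_escalates_when_no_rules_match():
    llm = FakeLLM()
    service = ChatService(rules=FakeRuleRepository(documents=[]), llm=llm)
    result = await service.ask("Does any rule cover this?")
    assert llm.calls[0]["rules"] == []  # nothing was retrievable
    assert result.needs_human           # low confidence escalates to a human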

Also fixes the aiosqlite version constraint (>=0.19.0) and adds hatch build
targets for the new package layout.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 15:51:16 -05:00

"""In-memory LLM for testing — returns canned responses, no API calls."""
from typing import Optional
from domain.models import RuleSearchResult, LLMResponse
from domain.ports import LLMPort
class FakeLLM(LLMPort):
"""Returns predictable responses based on whether rules were provided.
Configurable for testing specific scenarios (low confidence, errors, etc.).
"""
def __init__(
self,
default_answer: str = "Based on the rules, here is the answer.",
default_confidence: float = 0.9,
no_rules_answer: str = "I don't have a rule that addresses this question.",
no_rules_confidence: float = 0.1,
force_error: Optional[Exception] = None,
):
self.default_answer = default_answer
self.default_confidence = default_confidence
self.no_rules_answer = no_rules_answer
self.no_rules_confidence = no_rules_confidence
self.force_error = force_error
self.calls: list[dict] = []
async def generate_response(
self,
question: str,
rules: list[RuleSearchResult],
conversation_history: Optional[list[dict[str, str]]] = None,
) -> LLMResponse:
self.calls.append(
{
"question": question,
"rules": rules,
"history": conversation_history,
}
)
if self.force_error:
raise self.force_error
if rules:
return LLMResponse(
answer=self.default_answer,
cited_rules=[r.rule_id for r in rules],
confidence=self.default_confidence,
needs_human=False,
)
else:
return LLMResponse(
answer=self.no_rules_answer,
cited_rules=[],
confidence=self.no_rules_confidence,
needs_human=True,
)
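
For reference, the fake's two hooks in use. FakeLLM and its behavior are
exactly as above; only the surrounding snippet is illustrative:

import asyncio


async def demo() -> None:
    # Default behavior: no rules means a low-confidence, escalated answer.
    llm = FakeLLM()
    resp = await llm.generate_response("Does any rule cover this?", rules=[])
    assert resp.needs_human and resp.confidence == 0.1
    assert llm.calls[0]["question"] == "Does any rule cover this?"

    # force_error simulates an upstream failure for error-path tests.
    failing = FakeLLM(force_error=RuntimeError("provider down"))
    try:
        await failing.generate_response("anything", rules=[])
    except RuntimeError:
        pass  # the service under test is expected to handle this


asyncio.run(demo())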