Optimize skill invocation and generate accompanying documentation

2026-04-07 16:54:05 +08:00
parent 3984cffe23
commit 9b98b55060
10 changed files with 373 additions and 21 deletions


@@ -0,0 +1,86 @@
---
Name: log-summary
Description: Summarizes and analyzes ERROR and WARNING entries in the TSP assistant logs, and outputs an error overview and statistics since the most recent startup to help diagnose problems quickly.
---
You are a "log error summarization and analysis assistant"; the skill name is **log-summary**.
Your job: when the user wants a quick picture of the errors from the most recent run (or the last few runs), invoke the companion script, aggregate the log files in the per-startup subdirectories under `logs/`, count ERROR / WARNING / CRITICAL entries, and output a concise error overview and distribution.
---
## 1. Trigger Conditions (when to use log-summary)
Activate this skill when the user asks for something like:
- "Check whether the recent runs had any errors"
- "Summarize the errors in the recent logs"
- "Analyze the errors under logs/"
- "The system keeps failing lately; take a look at the logs"
---
## 2. Overall Workflow
1. Run the script `scripts/log_summary.py` from the project root.
2. Read its output and relay the key findings to the user in natural language.
3. For error types that recur noticeably, offer simple troubleshooting suggestions.
4. Keep the output concise; avoid pasting large chunks of raw logs.
---
## 3. Script Invocation
Run this command from the project root (the directory that contains `start_dashboard.py`):
```bash
python .claude/skills/log-summary/scripts/log_summary.py
```
Script behavior contract:
- Automatically walks every subdirectory of `logs/` (e.g. `logs/2026-02-10_23-51-10/dashboard.log`).
- By default analyzes the most recent N (e.g. 5) log files ordered by time, counting:
  - the number of ERROR / WARNING / CRITICAL lines in each file
  - the top-N most frequent errors, clustered by message prefix
- Prints the results as structured text to standard output.
You must:
1. run the script and capture its output;
2. understand the statistics and top-error information it contains;
3. summarize it for the user in 3-8 sentences of natural Chinese.
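For orientation, the script's stdout looks roughly like the following (paths, counts, and messages are illustrative values, and the wording mirrors the script's print statements):

```text
Found 2 recent log file(s) (at most 5):

[1] Log file: logs/2026-02-10_23-51-10/dashboard.log
    Level counts: ERROR=2, WARNING=5, CRITICAL=0
    Top error/warning messages:
      [2x] 2026-02-10 23:51:13 ERROR No module named 'src.config.config'

Overall totals:
    ERROR=2, WARNING=5, CRITICAL=0
```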
---
## 4. Output Conventions
On a successful `log-summary` run, return information to the user structured roughly as follows:
1. **Overall health** (one sentence)
   - e.g. "Across the last 3 startups, 2 ERROR and 5 WARNING entries were recorded; overall the system is fairly stable."
2. **Per-startup error statistics** (as a list)
   - For each log file (ordered by time), briefly state:
     - the startup time (inferred from the path or the log itself)
     - the ERROR / WARNING / CRITICAL counts
3. **Top error types**
   - e.g. "The most frequent error is `No module named 'src.config.config'`, occurring 4 times."
4. **Simple suggestions (optional)**
   - For clearly recurring errors, give 1-3 troubleshooting or optimization suggestions.
Avoid:
- copying whole log sections verbatim;
- dumping long technical stack traces; prefer summaries.
---
## 5. Anti-patterns and Boundaries
- If the `logs/` directory does not exist or contains no log files:
  - tell the user explicitly that there are no logs to analyze; do not fabricate results.
- If the script fails (e.g. a Python error or a wrong path):
  - paste a short excerpt of the error and state that "the log-summary script failed to run";
  - do not fall back to scanning all log files yourself (unless the user explicitly asks).
- Never delete or modify log files.


@@ -0,0 +1,115 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Simple log summarization script.

Walks the most recent dashboard.log files under logs/, counts
ERROR / WARNING / CRITICAL lines, and prints a brief summary
for the log-summary skill to consume.
"""
import os
import re
from pathlib import Path
from typing import List, Tuple

LOG_ROOT = Path("logs")
LOG_FILENAME = "dashboard.log"
MAX_FILES = 5  # analyze at most the N most recent log files

LEVEL_PATTERNS = {
    "ERROR": re.compile(r"\bERROR\b"),
    "WARNING": re.compile(r"\bWARNING\b"),
    "CRITICAL": re.compile(r"\bCRITICAL\b"),
}


def find_log_files() -> List[Path]:
    if not LOG_ROOT.exists():
        return []
    candidates: List[Tuple[float, Path]] = []
    for root, dirs, files in os.walk(LOG_ROOT):
        if LOG_FILENAME in files:
            p = Path(root) / LOG_FILENAME
            try:
                mtime = p.stat().st_mtime
            except OSError:
                continue
            candidates.append((mtime, p))
    # sort by modification time, newest first
    candidates.sort(key=lambda x: x[0], reverse=True)
    return [p for _, p in candidates[:MAX_FILES]]


def summarize_file(path: Path):
    counts = {level: 0 for level in LEVEL_PATTERNS.keys()}
    top_messages = {}
    try:
        with path.open("r", encoding="utf-8", errors="ignore") as f:
            for line in f:
                for level, pattern in LEVEL_PATTERNS.items():
                    if pattern.search(line):
                        counts[level] += 1
                        # use the stripped line as a rough message prefix
                        msg = line.strip()
                        # truncate so keys do not grow too long
                        msg = msg[:200]
                        top_messages[msg] = top_messages.get(msg, 0) + 1
                        break
    except OSError as e:
        print(f"[!] Failed to read log {path}: {e}")
        return None
    # keep the top 5 messages
    top_list = sorted(top_messages.items(), key=lambda x: x[1], reverse=True)[:5]
    return counts, top_list


def main():
    log_files = find_log_files()
    if not log_files:
        print("No log files found (logs/*/dashboard.log).")
        return
    print(f"Found {len(log_files)} recent log file(s) (at most {MAX_FILES}):\n")
    overall = {level: 0 for level in LEVEL_PATTERNS.keys()}
    for idx, path in enumerate(log_files, start=1):
        print(f"[{idx}] Log file: {path}")
        result = summarize_file(path)
        if result is None:
            print("    Could not read this log file.\n")
            continue
        counts, top_list = result
        for level, c in counts.items():
            overall[level] += c
        print(
            "    Level counts: "
            + ", ".join(f"{lvl}={counts[lvl]}" for lvl in LEVEL_PATTERNS.keys())
        )
        if top_list:
            print("    Top error/warning messages:")
            for msg, n in top_list:
                print(f"      [{n}x] {msg}")
        else:
            print("    No ERROR/WARNING/CRITICAL entries found.")
        print()
    print("Overall totals:")
    print(
        "    "
        + ", ".join(f"{lvl}={overall[lvl]}" for lvl in LEVEL_PATTERNS.keys())
    )


if __name__ == "__main__":
    main()
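As a quick sanity check of the counting logic, the same level-matching approach can be exercised on a few in-memory lines (a standalone sketch with made-up sample lines, not part of the shipped script):

```python
import re

# same patterns as in log_summary.py
LEVEL_PATTERNS = {
    "ERROR": re.compile(r"\bERROR\b"),
    "WARNING": re.compile(r"\bWARNING\b"),
    "CRITICAL": re.compile(r"\bCRITICAL\b"),
}

sample_lines = [
    "2026-02-10 23:51:12 INFO dashboard started",
    "2026-02-10 23:51:13 ERROR No module named 'src.config.config'",
    "2026-02-10 23:51:14 WARNING redis connection slow",
    "2026-02-10 23:51:15 ERROR No module named 'src.config.config'",
]

counts = {level: 0 for level in LEVEL_PATTERNS}
for line in sample_lines:
    for level, pattern in LEVEL_PATTERNS.items():
        if pattern.search(line):
            counts[level] += 1
            break  # at most one level per line, mirroring summarize_file

print(counts)  # {'ERROR': 2, 'WARNING': 1, 'CRITICAL': 0}
```

The `break` matters: a line is attributed to the first matching level only, so a line containing both "ERROR" and "WARNING" is counted once.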

.kiro/steering/product.md Normal file

@@ -0,0 +1,25 @@
# Product Overview
TSP Assistant (TSP智能助手) is an AI-powered customer service and work order management system built for TSP (Telematics Service Provider) vehicle service providers.
## What It Does
- Intelligent dialogue with customers via WebSocket real-time chat and Feishu (Lark) bot integration
- Work order lifecycle management with AI-generated resolution suggestions
- Knowledge base with semantic search (TF-IDF + cosine similarity, optional local embedding model)
- Vehicle data querying by VIN
- Analytics dashboard with alerts, performance monitoring, and reporting
- Multi-tenant architecture — data is isolated by `tenant_id` across all core tables
- Feishu multi-dimensional table (多维表格) bidirectional sync for work orders
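The knowledge-base search above pairs TF-IDF weighting with cosine similarity. A minimal pure-Python sketch of the idea (the project's actual implementation uses scikit-learn and jieba; tokens and documents here are made up):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (dicts) for a list of token lists."""
    n = len(docs)
    # document frequency: how many documents contain each term
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return vectors

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    ["vin", "query", "vehicle"],
    ["work", "order", "status"],
    ["vehicle", "vin", "lookup"],
]
vecs = tfidf_vectors(docs)
sim_same_topic = cosine(vecs[0], vecs[2])  # shares "vin" and "vehicle"
sim_different = cosine(vecs[0], vecs[1])   # no shared terms -> 0.0
```

Documents about the same topic score strictly higher than unrelated ones, which is the property the retrieval step relies on.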
## Key Domain Concepts
- **Work Order (工单)**: A support ticket tied to a vehicle issue. Can be dispatched to module owners, tracked through statuses, and enriched with AI suggestions.
- **Knowledge Entry (知识库条目)**: Q&A pairs used for retrieval-augmented responses. Verified entries have higher confidence.
- **Tenant (租户)**: Logical isolation unit (e.g., a market or region). All major entities carry a `tenant_id`.
- **Agent**: A ReAct-style LLM agent with registered tools (knowledge search, vehicle query, analytics, Feishu messaging).
- **Chat Session (对话会话)**: Groups multi-turn conversations; tracks source (websocket, API, feishu_bot).
## Primary Language
The codebase, comments, log messages, and UI are predominantly in **Chinese (Simplified)**. Variable names and code structure follow English conventions.


@@ -0,0 +1,79 @@
# Project Structure
```
├── src/ # Main application source
│ ├── main.py # TSPAssistant facade class (orchestrates all managers)
│ ├── agent_assistant.py # Agent-enhanced assistant variant
│ ├── agent/ # ReAct LLM agent
│ │ ├── react_agent.py # Agent loop with tool dispatch
│ │ └── llm_client.py # Agent-specific LLM client
│ ├── core/ # Core infrastructure
│ │ ├── models.py # SQLAlchemy ORM models (all entities)
│ │ ├── database.py # DatabaseManager singleton, session management
│ │ ├── llm_client.py # QwenClient (OpenAI-compatible LLM calls)
│ │ ├── cache_manager.py # In-memory + Redis caching
│ │ ├── redis_manager.py # Redis connection pool
│ │ ├── vector_store.py # Vector storage for embeddings
│ │ ├── embedding_client.py # Local embedding model client
│ │ ├── auth_manager.py # Authentication logic
│ │ └── ... # Performance, backup, query optimizer
│ ├── config/
│ │ └── unified_config.py # UnifiedConfig singleton (env → dataclasses)
│ ├── dialogue/ # Conversation management
│ │ ├── dialogue_manager.py # Message processing, work order creation
│ │ ├── conversation_history.py
│ │ └── realtime_chat.py # Real-time chat manager
│ ├── knowledge_base/
│ │ └── knowledge_manager.py # Knowledge CRUD, search, import
│ ├── analytics/ # Monitoring & analytics
│ │ ├── analytics_manager.py
│ │ ├── alert_system.py
│ │ ├── monitor_service.py
│ │ ├── token_monitor.py
│ │ └── ai_success_monitor.py
│ ├── integrations/ # External service integrations
│ │ ├── feishu_client.py # Feishu API client
│ │ ├── feishu_service.py # Feishu business logic
│ │ ├── feishu_longconn_service.py # Feishu event subscription (long-conn)
│ │ ├── workorder_sync.py # Feishu ↔ local work order sync
│ │ └── flexible_field_mapper.py # Feishu field mapping
│ ├── vehicle/
│ │ └── vehicle_data_manager.py
│ ├── utils/ # Shared helpers
│ │ ├── helpers.py
│ │ ├── encoding_helper.py
│ │ └── semantic_similarity.py
│ └── web/ # Web layer
│ ├── app.py # Flask app factory, middleware, blueprint registration
│ ├── service_manager.py # Lazy-loading service singleton registry
│ ├── decorators.py # @handle_errors, @require_json, @resolve_tenant_id, @rate_limit
│ ├── error_handlers.py # Unified API response helpers
│ ├── websocket_server.py # Standalone WebSocket server
│ ├── blueprints/ # Flask blueprints (one per domain)
│ │ ├── alerts.py, workorders.py, conversations.py, knowledge.py
│ │ ├── auth.py, tenants.py, chat.py, agent.py, vehicle.py
│ │ ├── analytics.py, monitoring.py, system.py
│ │ ├── feishu_sync.py, feishu_bot.py
│ │ └── test.py, core.py
│ ├── static/ # Frontend assets (JS, CSS)
│ └── templates/ # Jinja2 HTML templates
├── config/ # Runtime config files (field mappings)
├── data/ # SQLite DB file, system settings JSON
├── logs/ # Log files (per-startup subdirectories)
├── scripts/ # Migration and utility scripts
├── start_dashboard.py # Main entry point (Flask + WS + Feishu)
├── start_feishu_bot.py # Standalone Feishu bot entry point
├── init_database.py # DB initialization script
├── requirements.txt # Python dependencies
├── nginx.conf # Nginx reverse proxy config
└── .env / .env.example # Environment configuration
```
## Key Patterns
- **Singleton managers**: `db_manager`, `service_manager`, `get_config()` — instantiated once, imported globally.
- **Blueprint-per-domain**: Each functional area (workorders, alerts, knowledge, etc.) has its own Flask blueprint under `src/web/blueprints/`.
- **Service manager with lazy loading**: `ServiceManager` in `src/web/service_manager.py` provides thread-safe lazy initialization of all service instances. Blueprints access services through it.
- **Decorator-driven API patterns**: Common decorators in `src/web/decorators.py` handle error wrapping, JSON validation, tenant resolution, and rate limiting.
- **Multi-tenant by convention**: All DB queries should filter by `tenant_id`. The `@resolve_tenant_id` decorator extracts it from request body, query params, or session.
- **Config from env**: No hardcoded secrets. All configuration flows through `UnifiedConfig` which reads from `.env` via `python-dotenv`.
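The tenant-resolution convention can be sketched framework-free; the real decorator lives in `src/web/decorators.py` and operates on Flask's request object, so the dict-based request stand-in and handler names below are illustrative:

```python
from functools import wraps

def resolve_tenant_id(handler):
    """Inject tenant_id from body, query params, or session (in that order)."""
    @wraps(handler)
    def wrapper(request, *args, **kwargs):
        tenant_id = (
            (request.get("json") or {}).get("tenant_id")
            or (request.get("args") or {}).get("tenant_id")
            or (request.get("session") or {}).get("tenant_id")
        )
        if tenant_id is None:
            # reject requests that cannot be scoped to a tenant
            return {"error": "tenant_id is required"}, 400
        return handler(request, *args, tenant_id=tenant_id, **kwargs)
    return wrapper

@resolve_tenant_id
def list_workorders(request, tenant_id=None):
    # every DB query downstream is expected to filter by tenant_id
    return {"tenant_id": tenant_id, "workorders": []}, 200

body, status = list_workorders({"json": {"tenant_id": "cn-east"}})
```

Centralizing the lookup order in one decorator keeps blueprints from each re-implementing (and subtly diverging on) tenant scoping.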

.kiro/steering/tech.md Normal file

@@ -0,0 +1,67 @@
# Tech Stack & Build
## Language & Runtime
- Python 3.11+
## Core Frameworks & Libraries
| Layer | Technology |
|---|---|
| Web framework | Flask 3.x + Flask-CORS |
| ORM / Database | SQLAlchemy 2.x (MySQL via PyMySQL, SQLite for dev) |
| Real-time comms | `websockets` library (standalone server on port 8765) |
| Caching | Redis 5.x client + hiredis |
| LLM integration | OpenAI-compatible API (default provider: Qwen/通义千问 via DashScope) |
| Embedding | `sentence-transformers` with `BAAI/bge-small-zh-v1.5` (local, optional) |
| NLP | jieba (Chinese word segmentation), scikit-learn (TF-IDF) |
| Feishu SDK | `lark-oapi` 1.3.x (event subscription 2.0, long-connection mode) |
| Data validation | pydantic 2.x, marshmallow |
| Auth | JWT (`pyjwt`), SHA-256 password hashing |
| Monitoring | psutil (in-process), Prometheus + Grafana (Docker) |
## Configuration
- All config loaded from environment variables via `python-dotenv` in `src/config/unified_config.py`
- Singleton `UnifiedConfig` with typed dataclasses (`DatabaseConfig`, `LLMConfig`, `ServerConfig`, etc.)
- `.env` file at project root (see `.env.example` for all keys)
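The env-to-dataclass pattern can be sketched as follows (class and key names are modeled on, not copied from, `unified_config.py`; the `DATABASE_URL` and `DB_POOL_SIZE` keys and their defaults are assumptions):

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class DatabaseConfig:
    url: str
    pool_size: int

class UnifiedConfig:
    """Singleton that materializes typed config sections from the environment."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            # in the real project, dotenv has already populated os.environ
            cls._instance.database = DatabaseConfig(
                url=os.getenv("DATABASE_URL", "sqlite:///data/tsp.db"),
                pool_size=int(os.getenv("DB_POOL_SIZE", "5")),
            )
        return cls._instance

def get_config() -> UnifiedConfig:
    return UnifiedConfig()

cfg = get_config()
assert cfg is get_config()  # same instance every time
```

Frozen dataclasses give callers typed, immutable config sections, while the singleton guarantees the environment is read exactly once.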
## Common Commands
```bash
# Install dependencies
pip install -r requirements.txt
# Initialize / migrate database
python init_database.py
# Start the full application (Flask + WebSocket + Feishu long-conn)
python start_dashboard.py
# Start only the Feishu bot long-connection client
python start_feishu_bot.py
# Run tests
pytest
# Code formatting
black .
isort .
# Linting
flake8
mypy .
```
## Deployment
- Docker + docker-compose (MySQL 8, Redis 7, Nginx, Prometheus, Grafana)
- Nginx reverse proxy in front of Flask (port 80/443 → 5000)
- Default ports: Flask 5000, WebSocket 8765, Redis 6379, MySQL 3306
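A minimal reverse-proxy block matching these ports might look like the following (illustrative only; the project's actual `nginx.conf`, including the WebSocket path, may differ):

```nginx
server {
    listen 80;

    # Flask app on port 5000
    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # WebSocket server on port 8765 (path is an assumption)
    location /ws/ {
        proxy_pass http://127.0.0.1:8765;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```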
## Code Quality Tools
- `black` for formatting (PEP 8)
- `isort` for import sorting
- `flake8` for linting
- `mypy` for type checking