Compare commits

39 Commits

f8f3738134 ... main

| SHA1 |
|---|
| 11eef49271 |
| 26737747d9 |
| b4aa4c8d02 |
| 96177eddf3 |
| 9b98b55060 |
| 3984cffe23 |
| 99619f851f |
| c2a5bcbc94 |
| cc73403cf2 |
| 4ac24d2413 |
| 03aa5bd9dc |
| 369e61404e |
| 46b6a10730 |
| c3d709afeb |
| 7151070c99 |
| 587933f668 |
| 61ef86d779 |
| 53f3629f9e |
| fa3c3935f7 |
| c73a750e60 |
| d6c87683af |
| 18fb3155ba |
| b7627927d4 |
| eb127ed897 |
| d27b6a9c87 |
| c07cbf47c8 |
| 88a79d1936 |
| 28e90d2182 |
| 683b64ed62 |
| 44cfe2a0ea |
| 58b3c615ef |
| d691007c86 |
| 7950cd8237 |
| edb0616f7f |
| 7013e9db70 |
| e14e3ee7a5 |
| c7ee292c4f |
| bc536898a1 |
| d0dd18342f |
.agents/summary/architecture.md (new file, 109 lines)

@@ -0,0 +1,109 @@
# Architecture

## Overall Architecture

```mermaid
graph TB
    subgraph Clients["Clients"]
        Browser["Browser Dashboard"]
        FeishuBot["Feishu Bot"]
        WSClient["WebSocket Client"]
    end

    subgraph EntryPoints["Entry Layer"]
        Flask["Flask App :5000"]
        WS["WebSocket Server :8765"]
        FeishuLC["Feishu Long-Connection Service"]
    end

    subgraph WebLayer["Web Layer"]
        Blueprints["16 Flask Blueprints"]
        Decorators["Decorators: handle_errors, require_json, resolve_tenant_id, rate_limit"]
        SM["ServiceManager (lazy loading)"]
    end

    subgraph BusinessLayer["Business Layer"]
        DM["DialogueManager"]
        RCM["RealtimeChatManager"]
        KM["KnowledgeManager"]
        WOS["WorkOrderSyncService"]
        Agent["ReactAgent"]
        AM["AnalyticsManager"]
        AS["AlertSystem"]
    end

    subgraph CoreLayer["Infrastructure Layer"]
        DB["DatabaseManager (SQLAlchemy)"]
        LLM["LLMClient (Qwen API)"]
        Cache["CacheManager (Redis)"]
        Auth["AuthManager (JWT)"]
        Embed["EmbeddingClient (optional)"]
    end

    subgraph External["External Services"]
        QwenAPI["Qwen/DashScope API"]
        FeishuAPI["Feishu API"]
        RedisServer["Redis"]
        Database["MySQL / SQLite"]
    end

    Browser --> Flask
    WSClient --> WS
    FeishuBot --> FeishuLC

    Flask --> Blueprints
    Blueprints --> Decorators
    Blueprints --> SM
    SM --> BusinessLayer

    WS --> RCM
    FeishuLC --> DM

    DM --> LLM
    DM --> KM
    RCM --> DM
    Agent --> LLM
    KM --> Embed
    WOS --> FeishuAPI

    DB --> Database
    LLM --> QwenAPI
    Cache --> RedisServer
```
## Architecture Patterns

### Singleton Managers

Core services are singletons: `DatabaseManager`, `ServiceManager`, `UnifiedConfig`. They are accessed globally via `get_config()` / `db_manager`.

### Blueprint-per-Domain

One Flask Blueprint per functional domain, 16 in total:
`workorders`, `alerts`, `knowledge`, `conversations`, `chat`, `agent`, `tenants`, `auth`, `analytics`, `monitoring`, `system`, `feishu_sync`, `feishu_bot`, `vehicle`, `core`, `test`

### Service Manager with Lazy Loading

`ServiceManager` provides thread-safe lazy initialization. Blueprints obtain business-service instances through it, which avoids circular imports and heavyweight initialization at startup.
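A minimal sketch of this thread-safe lazy-initialization pattern (class and method names here are illustrative, not the project's actual API):

```python
import threading

class ServiceManager:
    """Illustrative sketch: services are constructed on first access only."""

    def __init__(self):
        self._factories = {}
        self._instances = {}
        self._lock = threading.Lock()

    def register(self, name, factory):
        # Register a zero-argument factory; nothing is constructed yet.
        self._factories[name] = factory

    def get(self, name):
        # Double-checked locking: only one thread builds each service.
        if name not in self._instances:
            with self._lock:
                if name not in self._instances:
                    self._instances[name] = self._factories[name]()
        return self._instances[name]

manager = ServiceManager()
manager.register("knowledge", lambda: object())  # factory runs lazily
svc = manager.get("knowledge")
print(svc is manager.get("knowledge"))  # True: same instance every time
```

Because the factory runs inside `get()`, importing the registry never triggers heavyweight construction, which is what breaks the circular-import problem the text describes.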
### Decorator-Driven API

Common cross-cutting concerns are implemented as decorators:

- `@handle_errors`: unified exception handling
- `@require_json`: JSON request validation
- `@resolve_tenant_id`: extracts tenant_id from the request
- `@rate_limit`: rate limiting
- `@cache_response`: response caching
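A hedged sketch of the unified error-handling idea (return shapes and exception mapping are assumptions, not the project's actual implementation):

```python
import functools

def handle_errors(func):
    """Illustrative: map exceptions to a standard response payload."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return {"success": True, "data": func(*args, **kwargs)}
        except ValueError as exc:
            # Client-side problems map to a 400-style payload.
            return {"success": False, "error": str(exc), "code": 400}
        except Exception as exc:
            # Anything unexpected maps to a 500-style payload.
            return {"success": False, "error": str(exc), "code": 500}
    return wrapper

@handle_errors
def create_order(title):
    if not title:
        raise ValueError("title is required")
    return {"title": title}

print(create_order("brake noise"))  # success payload
print(create_order(""))             # error payload with code 400
```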
### Multi-Tenant by Convention

All core tables include a `tenant_id` column; queries filter by `tenant_id` to achieve data isolation.
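The convention can be sketched with plain data (rows and helper below are illustrative; the real project filters through SQLAlchemy queries):

```python
# Every row carries tenant_id, and every read filters on it.
WORK_ORDERS = [
    {"order_id": "WO-1", "tenant_id": "default", "title": "Brake noise"},
    {"order_id": "WO-2", "tenant_id": "acme", "title": "Battery drain"},
    {"order_id": "WO-3", "tenant_id": "default", "title": "GPS drift"},
]

def list_work_orders(tenant_id):
    # Filtering on tenant_id means tenants never see each other's rows.
    return [row for row in WORK_ORDERS if row["tenant_id"] == tenant_id]

print([row["order_id"] for row in list_work_orders("default")])  # ['WO-1', 'WO-3']
```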
## Threading Model

```mermaid
graph LR
    Main["Main thread: Flask App"]
    T1["Daemon thread: WebSocket Server"]
    T2["Daemon thread: Feishu long connection"]

    Main --> T1
    Main --> T2
```

`start_dashboard.py` runs Flask in the main thread; the WebSocket server and the Feishu long connection each run in a daemon thread.
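The thread layout above can be sketched as follows (the worker functions are placeholders, not the project's real entry points):

```python
import threading
import time

def run_websocket_server():
    # Stand-in for the real WebSocket server loop (illustrative only).
    time.sleep(0.01)

def run_feishu_longconn():
    # Stand-in for the Feishu long-connection loop (illustrative only).
    time.sleep(0.01)

# Daemon threads die with the main thread, so stopping Flask stops everything.
ws_thread = threading.Thread(target=run_websocket_server, daemon=True)
fs_thread = threading.Thread(target=run_feishu_longconn, daemon=True)
ws_thread.start()
fs_thread.start()

print(ws_thread.daemon and fs_thread.daemon)  # True
```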
.agents/summary/codebase_info.md (new file, 51 lines)

@@ -0,0 +1,51 @@
# Codebase Info

## Basics

- **Project name**: TSP 智能助手 (TSP Assistant)
- **Language**: Python 3.11+
- **Frameworks**: Flask 3.x + SQLAlchemy 2.x + WebSocket
- **Code style**: English variable names; Chinese comments/UI/logs
- **Database**: SQLAlchemy ORM (SQLite in development, MySQL via PyMySQL in production)
- **Entry point**: `start_dashboard.py`

## Tech Stack

| Layer | Technology |
|---|---|
| Web | Flask 3.x + Flask-CORS |
| ORM | SQLAlchemy 2.x |
| Realtime | `websockets` (port 8765) |
| Cache | Redis 5.x + hiredis |
| LLM | OpenAI-compatible API (Qwen via DashScope) |
| Embedding | `sentence-transformers` + `BAAI/bge-small-zh-v1.5` (optional) |
| NLP | jieba (tokenization) + scikit-learn (TF-IDF) |
| Feishu SDK | `lark-oapi` 1.3.x (long-connection mode) |
| Auth | JWT (`pyjwt`) + SHA-256 |
| Monitoring | psutil |

## Directory Overview

```
src/
├── config/          # UnifiedConfig singleton, loaded from .env
├── core/            # Database, LLM, cache, auth, ORM models
├── dialogue/        # Dialogue management, realtime chat
├── knowledge_base/  # Knowledge-base CRUD, search, import
├── analytics/       # Monitoring, alerts, token statistics
├── integrations/    # Feishu client, work-order sync
├── agent/           # ReAct agent (tool dispatch)
├── vehicle/         # Vehicle data management
├── utils/           # Shared utilities
└── web/             # Flask application layer
    ├── app.py              # App factory + middleware
    ├── service_manager.py  # Lazy-loading service registry
    ├── decorators.py       # Shared decorators
    ├── blueprints/         # Domain-scoped API blueprints (16)
    ├── static/             # Frontend assets
    └── templates/          # Jinja2 templates
```

## Startup Flow

`start_dashboard.py` → set up logging → check database → start WebSocket thread → start Feishu long-connection thread → start the Flask app
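The startup order above can be sketched like this (the check function and worker bodies are placeholders, not the real launcher's code):

```python
import threading
import time

def check_database():
    # Stand-in for DatabaseManager's connectivity check (illustrative).
    return True

def start_app():
    # Mirrors the order: check DB -> start daemon workers -> serve Flask.
    if not check_database():
        raise SystemExit(1)
    workers = [threading.Thread(target=time.sleep, args=(0.01,), daemon=True)
               for _ in range(2)]  # WebSocket server, Feishu long connection
    for t in workers:
        t.start()
    time.sleep(0.05)  # brief pause before serving, like the real launcher's sleep(2)
    return all(t.daemon for t in workers)

print(start_app())  # True
```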
.agents/summary/components.md (new file, 79 lines)

@@ -0,0 +1,79 @@
# Components

## Business Components

### DialogueManager (`src/dialogue/dialogue_manager.py`)

The core of dialogue management. Processes user messages, calls the LLM to generate replies, and automatically creates work orders based on intent. Bridges knowledge-base retrieval and LLM calls.

### RealtimeChatManager (`src/dialogue/realtime_chat.py`)

Realtime chat manager. Manages WebSocket sessions, maintains the `ChatMessage` data structure, and coordinates with DialogueManager to process the message stream.

### KnowledgeManager (`src/knowledge_base/knowledge_manager.py`)

Knowledge-base management. Supports CRUD, TF-IDF keyword search, optional embedding-based semantic search, file import, and manual verification. Verified entries receive higher confidence during retrieval.

### WorkOrderSyncService (`src/integrations/workorder_sync.py`)

Work-order sync service. Implements two-way sync between local work orders and a Feishu Bitable. Defines the `WorkOrderStatus` and `WorkOrderPriority` enums.

### ReactAgent (`src/agent/react_agent.py`)

ReAct-style LLM agent. Registers tools (knowledge search, vehicle lookup, analytics, Feishu messaging) and completes complex tasks through a think-act-observe loop.

### AnalyticsManager (`src/analytics/analytics_manager.py`)

Analytics manager. Work-order trends, alert statistics, and satisfaction analysis, filterable by tenant.

### AlertSystem (`src/analytics/alert_system.py`)

Alert system. Custom `AlertRule`s with multiple levels (`AlertLevel`) and types (`AlertType`), plus batch management.

## Infrastructure Components

### DatabaseManager (`src/core/database.py`)

Database singleton. Wraps SQLAlchemy session management and provides a `get_session()` context manager. Supports SQLite (development) and MySQL (production).
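The `get_session()` commit/rollback/close pattern can be sketched without a database (the session class below is a stand-in for a SQLAlchemy `Session`; names are illustrative):

```python
from contextlib import contextmanager

class FakeSession:
    # Stand-in for a SQLAlchemy Session (illustrative only).
    def __init__(self):
        self.committed = False
        self.rolled_back = False
        self.closed = False
    def commit(self):
        self.committed = True
    def rollback(self):
        self.rolled_back = True
    def close(self):
        self.closed = True

class DatabaseManager:
    """Sketch of the session-management pattern; the real class wraps SQLAlchemy."""
    @contextmanager
    def get_session(self):
        session = FakeSession()
        try:
            yield session
            session.commit()    # commit on success
        except Exception:
            session.rollback()  # roll back on failure
            raise
        finally:
            session.close()     # always release the session

db_manager = DatabaseManager()
with db_manager.get_session() as s:
    pass
print(s.committed, s.closed)  # True True
```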
### LLMClient (`src/core/llm_client.py`)

LLM client. Wraps OpenAI-compatible API calls (Qwen/DashScope by default), supporting both streaming and non-streaming responses.

### UnifiedConfig (`src/config/unified_config.py`)

Configuration singleton. Loads environment variables from `.env` and maps them onto typed dataclasses:
`DatabaseConfig`, `LLMConfig`, `ServerConfig`, `FeishuConfig`, `AIAccuracyConfig`, `EmbeddingConfig`, `RedisConfig`

### AuthManager (`src/core/auth_manager.py`)

Authentication management. JWT token generation/verification and SHA-256 password hashing.

### ServiceManager (`src/web/service_manager.py`)

Service registry. Thread-safe lazy loading; blueprints obtain business-service instances through it.

## Integration Components

### FeishuService (`src/integrations/feishu_service.py`)

Feishu API client. Fetches tenant_access_token, sends messages, and operates on Bitables.

### FeishuLongConnService (`src/integrations/feishu_longconn_service.py`)

Feishu event-subscription long-connection service. Receives Feishu bot message events and forwards them to DialogueManager.

## Web-Layer Components

### Flask Blueprints (`src/web/blueprints/`)

```mermaid
graph TD
    subgraph API["API Blueprints"]
        WO["workorders: work-order CRUD + AI suggestions"]
        AL["alerts: alert management"]
        KN["knowledge: knowledge-base management"]
        CV["conversations: conversation history"]
        CH["chat: HTTP chat + session management"]
        AG["agent: agent-mode interaction"]
        TN["tenants: tenant management"]
        AU["auth: login/register/JWT"]
        AN["analytics: analytics export"]
        MO["monitoring: token/AI monitoring"]
        SY["system: system config/backup/optimization"]
        FS["feishu_sync: Feishu sync configuration"]
        FB["feishu_bot: Feishu webhook events"]
        VH["vehicle: vehicle data"]
        CO["core: monitoring rules/batch operations"]
        TE["test: API connectivity tests"]
    end
```

### WebSocketServer (`src/web/websocket_server.py`)

Standalone WebSocket server (port 8765). Handles client connections and forwards messages to RealtimeChatManager.
.agents/summary/data_models.md (new file, 136 lines)

@@ -0,0 +1,136 @@
# Data Models

## ORM Models (`src/core/models.py`)

All models inherit from the SQLAlchemy `Base`; every core table includes a `tenant_id` column for multi-tenant isolation.

```mermaid
erDiagram
    Tenant ||--o{ WorkOrder : "tenant_id"
    Tenant ||--o{ ChatSession : "tenant_id"
    Tenant ||--o{ KnowledgeEntry : "tenant_id"
    Tenant ||--o{ Alert : "tenant_id"
    Tenant ||--o{ Analytics : "tenant_id"

    WorkOrder ||--o{ Conversation : "work_order_id"
    WorkOrder ||--o{ WorkOrderProcessHistory : "work_order_id"
    WorkOrder ||--o{ WorkOrderSuggestion : "work_order_id"

    ChatSession ||--o{ Conversation : "session_id"

    Tenant {
        int id PK
        string tenant_id UK
        string name
        text description
        bool is_active
        text config "JSON: feishu, system_prompt"
    }

    WorkOrder {
        int id PK
        string tenant_id FK
        string order_id UK
        string title
        text description
        string category
        string priority
        string status
        string feishu_record_id "Feishu record ID"
        string assignee
        text ai_suggestion
        string assigned_module
        string module_owner
        string vin_sim "VIN"
    }

    ChatSession {
        int id PK
        string tenant_id FK
        string session_id UK
        string source "websocket/api/feishu_bot"
    }

    Conversation {
        int id PK
        string tenant_id FK
        int work_order_id FK
        string session_id FK
        string role "user/assistant/system"
        text content
    }

    KnowledgeEntry {
        int id PK
        string tenant_id FK
        string question
        text answer
        string category
        bool is_verified
        float confidence_score
    }

    Alert {
        int id PK
        string tenant_id FK
        string level
        string type
        text message
        bool is_resolved
    }

    VehicleData {
        int id PK
        string vehicle_id
        string vin
        text data "JSON"
    }

    Analytics {
        int id PK
        string tenant_id FK
        string metric_type
        float value
        text details "JSON"
    }

    User {
        int id PK
        string username UK
        string password_hash
        string role "admin/user"
    }

    WorkOrderProcessHistory {
        int id PK
        int work_order_id FK
        string action
        text details
    }

    WorkOrderSuggestion {
        int id PK
        int work_order_id FK
        text suggestion
        float confidence
    }
```
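A minimal sketch of the convention these tables follow (plain dataclass here for illustration; the real models are SQLAlchemy declarative classes, and types are simplified):

```python
from dataclasses import dataclass

@dataclass
class WorkOrder:
    # Field names mirror the ER diagram above; tenant_id is mandatory by convention.
    id: int
    tenant_id: str
    order_id: str
    title: str
    status: str = "open"
    feishu_record_id: str = ""

wo = WorkOrder(id=1, tenant_id="default", order_id="WO-1", title="Brake noise")
print(wo.tenant_id, wo.status)  # default open
```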
## Configuration Dataclasses (`src/config/unified_config.py`)

| Dataclass | Key fields |
|-----------|---------|
| `DatabaseConfig` | `url` |
| `LLMConfig` | `api_key`, `base_url`, `model`, `temperature`, `max_tokens`, `timeout` |
| `ServerConfig` | `host`, `port`, `websocket_port`, `debug`, `log_level` |
| `FeishuConfig` | `app_id`, `app_secret`, `app_token`, `table_id` |
| `AIAccuracyConfig` | `auto_approve_threshold`, `manual_review_threshold` |
| `EmbeddingConfig` | `enabled`, `model`, `dimension`, `similarity_threshold` |
| `RedisConfig` | `host`, `port`, `db`, `password`, `pool_size`, `default_ttl`, `enabled` |

## Business Enums

- `WorkOrderStatus`: work-order state transitions
- `WorkOrderPriority`: work-order priority
- `AlertLevel`: alert severity
- `AlertType`: alert type
.agents/summary/dependencies.md (new file, 90 lines)

@@ -0,0 +1,90 @@
# Dependencies

## Core

| Package | Version | Purpose |
|---|---|---|
| flask | 3.0.3 | Web framework |
| flask-cors | 5.0.0 | CORS support |
| sqlalchemy | 2.0.32 | ORM |
| pymysql | 1.1.1 | MySQL driver |
| websockets | 15.0.1 | WebSocket server |
| redis | 5.0.1 | Cache client |
| hiredis | 2.3.2 | High-performance Redis parser |

## LLM / NLP

| Package | Version | Purpose |
|---|---|---|
| jieba | 0.42.1 | Chinese tokenization |
| scikit-learn | 1.4.2 | TF-IDF vectorization |
| numpy | 1.26.4 | Numerical computing |

## Feishu Integration

| Package | Version | Purpose |
|---|---|---|
| lark-oapi | 1.3.5 | Feishu SDK (event subscription 2.0, long-connection mode) |

## Data Processing

| Package | Version | Purpose |
|---|---|---|
| pandas | 2.2.2 | Data analysis |
| openpyxl | 3.1.5 | Excel read/write |
| ujson | 5.10.0 | High-performance JSON |

## Security / Auth

| Package | Version | Purpose |
|---|---|---|
| pyjwt | 2.9.0 | JWT tokens |
| bcrypt | 4.2.1 | Password hashing |
| cryptography | 43.0.1 | Cryptography support |

## Data Validation

| Package | Version | Purpose |
|---|---|---|
| pydantic | 2.9.2 | Data validation |
| marshmallow | 3.23.3 | Serialization/deserialization |

## Monitoring / Tooling

| Package | Version | Purpose |
|---|---|---|
| psutil | 5.9.8 | System monitoring |
| python-dotenv | 1.0.1 | Environment-variable loading |
| structlog | 24.4.0 | Structured logging |
| aiohttp | 3.10.10 | Async HTTP |
| httpx | 0.27.2 | HTTP client |

## Optional

| Package | Purpose | Condition |
|---|---|---|
| sentence-transformers | Local embedding model | `EMBEDDING_ENABLED=True` |
| torch | PyTorch (sentence-transformers dependency) | `EMBEDDING_ENABLED=True` |

## Development

| Package | Version | Purpose |
|---|---|---|
| pytest | 8.3.3 | Test framework |
| pytest-asyncio | 0.24.0 | Async tests |
| pytest-cov | 6.0.0 | Coverage |
| black | 24.8.0 | Code formatting |
| flake8 | 7.1.1 | Linting |
| mypy | 1.11.1 | Type checking |
| isort | 5.13.2 | Import sorting |

## External Service Dependencies

```mermaid
graph LR
    App["TSP Assistant"] --> Qwen["Qwen/DashScope API"]
    App --> FeishuAPI["Feishu Open API"]
    App --> Redis["Redis Server"]
    App --> DB["MySQL / SQLite"]
    App --> HF["HuggingFace (optional, first model download)"]
```
.agents/summary/index.md (new file, 56 lines)

@@ -0,0 +1,56 @@
# TSP Assistant Documentation Index

> **Guide for AI assistants**: this file is the entry point to the documentation set. Consult the file matching your question type for details. Most questions can be answered with this file plus one or two sub-files.

## Contents

| File | Summary | Typical questions |
|------|---------|---------|
| [codebase_info.md](codebase_info.md) | Project basics, tech stack, directory layout, startup flow | "What is this project?" "What technologies does it use?" "How do I start it?" |
| [architecture.md](architecture.md) | System diagram, architecture patterns (singleton/blueprint/decorator/multi-tenant), threading model | "How is the system organized?" "How do requests flow?" "How is multi-tenancy implemented?" |
| [components.md](components.md) | Responsibilities of all core components (business, infrastructure, web layers) | "What does DialogueManager do?" "Which blueprints exist?" |
| [interfaces.md](interfaces.md) | REST API list, WebSocket interface, external-integration sequence diagrams, decorator interfaces | "What work-order APIs exist?" "How is Feishu integrated?" |
| [data_models.md](data_models.md) | ORM ER diagram, field notes, config dataclasses, business enums | "What fields does WorkOrder have?" "What is the table schema?" |
| [workflows.md](workflows.md) | Sequence diagrams for 6 key flows (startup, dialogue, work-order sync, knowledge search, Feishu messages, agent) | "How are messages processed?" "How do work orders sync to Feishu?" |
| [dependencies.md](dependencies.md) | All Python dependencies by category, external-service dependency graph | "Which libraries are used?" "What external dependencies exist?" |
| [review_notes.md](review_notes.md) | Consistency/completeness check results, gaps, improvement suggestions | "What is missing from the docs?" "What needs to be filled in?" |

## Quick Navigation

### By task

- **Fix a bug / change a feature** → find the component in `components.md` → understand its dependencies in `architecture.md`
- **Add a new API** → existing API patterns in `interfaces.md` → decorator pattern in `architecture.md`
- **Change the database** → table structure and relations in `data_models.md`
- **Understand a flow** → sequence diagrams in `workflows.md`
- **Add a dependency** → existing dependencies in `dependencies.md`
- **Deploy / configure** → startup details in `codebase_info.md`

### By code location

- `src/core/` → infrastructure components in `components.md` + `data_models.md`
- `src/web/blueprints/` → API list in `interfaces.md` + blueprint notes in `components.md`
- `src/dialogue/` → dialogue components in `components.md` + dialogue flow in `workflows.md`
- `src/integrations/` → external integrations in `interfaces.md` + Feishu flows in `workflows.md`
- `src/config/` → configuration dataclasses in `data_models.md`
- `start_dashboard.py` → startup flow in `workflows.md`

## File Relationships

```mermaid
graph TD
    INDEX["index.md (this file)"] --> CI["codebase_info.md"]
    INDEX --> ARCH["architecture.md"]
    INDEX --> COMP["components.md"]
    INDEX --> INTF["interfaces.md"]
    INDEX --> DM["data_models.md"]
    INDEX --> WF["workflows.md"]
    INDEX --> DEP["dependencies.md"]
    INDEX --> RN["review_notes.md"]

    ARCH --> COMP
    COMP --> INTF
    COMP --> DM
    INTF --> WF
    DM --> WF
```
.agents/summary/interfaces.md (new file, 104 lines)

@@ -0,0 +1,104 @@
# Interfaces & Integrations

## REST API Overview

All APIs are prefixed with `/api/` and return JSON. Authentication is via Flask session or JWT.

### Work orders (workorders)

| Method | Path | Description |
|--------|------|------|
| GET | `/api/workorders` | Work-order list (paginated) |
| POST | `/api/workorders` | Create work order |
| GET | `/api/workorders/<id>` | Work-order details |
| PUT | `/api/workorders/<id>` | Update work order |
| DELETE | `/api/workorders/<id>` | Delete work order |
| POST | `/api/workorders/ai-suggestion` | Generate AI suggestion |
| POST | `/api/workorders/import` | Bulk import |

### Knowledge base (knowledge)

| Method | Path | Description |
|--------|------|------|
| GET | `/api/knowledge` | Knowledge-entry list |
| POST | `/api/knowledge` | Add entry |
| GET | `/api/knowledge/search` | Search the knowledge base |
| GET | `/api/knowledge/stats` | Statistics |
| POST | `/api/knowledge/upload` | File import |
| PUT | `/api/knowledge/<id>/verify` | Verify entry |

### Dialogue (chat / conversations)

| Method | Path | Description |
|--------|------|------|
| POST | `/api/chat/sessions` | Create session |
| GET | `/api/chat/sessions` | Active-session list |
| POST | `/api/chat/message` | Send message |
| POST | `/api/chat/message/stream` | Streaming message (SSE) |
| GET | `/api/conversations` | Conversation history |

### Tenants (tenants)

| Method | Path | Description |
|--------|------|------|
| GET | `/api/tenants` | Tenant list |
| POST | `/api/tenants` | Create tenant |
| PUT | `/api/tenants/<id>` | Update tenant |
| DELETE | `/api/tenants/<id>` | Delete tenant |
| GET | `/api/tenants/feishu-groups` | Feishu group list |

### Auth (auth)

| Method | Path | Description |
|--------|------|------|
| POST | `/api/auth/login` | Log in |
| POST | `/api/auth/logout` | Log out |
| GET | `/api/auth/status` | Authentication status |
| POST | `/api/auth/register` | Register |

### Agent

| Method | Path | Description |
|--------|------|------|
| POST | `/api/agent/chat` | Agent dialogue |
| GET | `/api/agent/status` | Agent status |
| POST | `/api/agent/tools/execute` | Execute a tool |

### Feishu sync (feishu-sync)

| Method | Path | Description |
|--------|------|------|
| GET | `/api/feishu-sync/status` | Sync status |
| POST | `/api/feishu-sync/from-feishu` | Pull from Feishu |
| POST | `/api/feishu-sync/<id>/to-feishu` | Push to Feishu |
| GET/POST | `/api/feishu-sync/config` | Sync configuration |

## WebSocket Interface

- **Port**: 8765
- **Protocol**: JSON messages
- **Function**: realtime chat; after connecting, clients exchange JSON messages with RealtimeChatManager

## External Integrations

```mermaid
sequenceDiagram
    participant User as User
    participant Feishu as Feishu
    participant LongConn as Feishu Long Connection
    participant DM as DialogueManager
    participant LLM as Qwen API
    participant KB as KnowledgeManager

    User->>Feishu: Send message
    Feishu->>LongConn: Event push
    LongConn->>DM: Handle message
    DM->>KB: Knowledge retrieval
    KB-->>DM: Relevant knowledge
    DM->>LLM: Generate reply
    LLM-->>DM: AI reply
    DM->>Feishu: Reply message
```

## Decorator Interfaces

| Decorator | Location | Function |
|--------|------|------|
| `@handle_errors` | `decorators.py` | Unified exception capture; returns a standard error response |
| `@require_json(fields)` | `decorators.py` | Validates that the request body is JSON and contains required fields |
| `@with_service(name)` | `decorators.py` | Injects a service instance from ServiceManager |
| `@rate_limit(max, period)` | `decorators.py` | IP-based rate limiting |
| `@cache_response(timeout)` | `decorators.py` | Response caching |
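A hedged sketch of the required-field validation idea behind `@require_json(fields)` (the payload-as-argument shape and error format below are assumptions, not the project's exact decorator, which reads from the Flask request):

```python
import functools

def require_json(fields):
    """Illustrative: reject payloads that are not dicts or lack required fields."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(payload, *args, **kwargs):
            if not isinstance(payload, dict):
                return {"error": "request body must be JSON", "code": 400}
            missing = [f for f in fields if f not in payload]
            if missing:
                return {"error": f"missing fields: {', '.join(missing)}", "code": 400}
            return func(payload, *args, **kwargs)
        return wrapper
    return decorator

@require_json(["title", "tenant_id"])
def create_workorder(payload):
    return {"created": payload["title"]}

print(create_workorder({"title": "Brake noise", "tenant_id": "default"}))
print(create_workorder({"title": "Brake noise"}))  # rejected: missing tenant_id
```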
.agents/summary/review_notes.md (new file, 41 lines)

@@ -0,0 +1,41 @@
# Review Notes

## Consistency Checks ✅

- Component names and file paths in all documents match the codebase
- Data-model fields match the ORM definitions in `src/core/models.py`
- API paths match the routes registered by the blueprints
- Configuration items correspond to the variables in `.env.example`

## Completeness Checks

### Well-covered areas ✅

- System architecture and threading model
- Core business components (dialogue, work orders, knowledge base, agent)
- Data models and ER relationships
- REST API overview
- External integrations (Feishu, LLM)
- Startup flow and key workflows

### Areas needing follow-up ⚠️

| Area | Notes | Suggestion |
|------|------|------|
| `src/core/cache_manager.py` | Cache-strategy details not analyzed in depth | Add Redis cache-key naming conventions and TTL strategy |
| `src/core/vector_store.py` | Vector-store implementation details | Describe the embedding index structure |
| `src/core/embedding_client.py` | Embedding-client interface | Add model-loading and inference flow |
| `src/web/static/js/` | Frontend module structure | Document frontend module responsibilities |
| `src/repositories/` | Data-access layer (mentioned in steering but not explored) | Document the Repository pattern and the automatic tenant_id filtering mechanism |
| Error handling | Unified response format in `error_handlers.py` | Add API error-code conventions |
| Deployment config | Docker / Nginx details | Add notes on `nginx.conf` and docker-compose |

### Language-support limitations

- Code comments and UI text are in Chinese; the documentation covers this with a mixed Chinese/English style
- No other language-support limitations

## Recommendations

1. Document the `src/repositories/` layer, explaining how automatic tenant_id filtering is implemented
2. Document the responsibilities of the frontend JS modules
3. Consider adding an OpenAPI/Swagger spec for the API
4. Add deployment notes for the Docker and Nginx configuration
.agents/summary/workflows.md (new file, 127 lines)

@@ -0,0 +1,127 @@
# Workflows / Key Flows

## 1. Application Startup

```mermaid
sequenceDiagram
    participant Main as start_dashboard.py
    participant Log as setup_logging
    participant DB as DatabaseManager
    participant WS as WebSocket Thread
    participant FS as Feishu Long-Connection Thread
    participant Flask as Flask App

    Main->>Log: Initialize logging (per-launch directories)
    Main->>DB: check_database_connection()
    alt Connection failed
        Main->>Main: sys.exit(1)
    end
    Main->>WS: Start daemon thread (port 8765)
    Main->>FS: Start daemon thread (Feishu long connection)
    Main->>Main: sleep(2) to allow initialization
    Main->>Flask: app.run(port=5000, threaded=True)
```

## 2. User Dialogue (WebSocket)

```mermaid
sequenceDiagram
    participant Client as Browser
    participant WS as WebSocketServer
    participant RCM as RealtimeChatManager
    participant DM as DialogueManager
    participant KB as KnowledgeManager
    participant LLM as LLMClient

    Client->>WS: WebSocket connect
    Client->>WS: JSON message
    WS->>RCM: Forward message
    RCM->>DM: process_message()
    DM->>KB: search() knowledge retrieval
    KB-->>DM: Matching results
    DM->>LLM: Call Qwen API (with knowledge context)
    LLM-->>DM: AI reply
    DM-->>RCM: Reply content
    RCM-->>WS: Send reply
    WS-->>Client: JSON reply
```

## 3. Work-Order Creation and Feishu Sync

```mermaid
sequenceDiagram
    participant User as User/AI
    participant API as workorders Blueprint
    participant DB as Database
    participant Sync as WorkOrderSyncService
    participant Feishu as Feishu Bitable

    User->>API: POST /api/workorders
    API->>DB: Create work-order record
    DB-->>API: Work-order ID

    opt Feishu sync configured
        API->>Sync: sync_to_feishu(workorder_id)
        Sync->>Feishu: Create/update record
        Feishu-->>Sync: feishu_record_id
        Sync->>DB: Store feishu_record_id
    end

    API-->>User: Work order created
```

## 4. Knowledge-Base Search

```mermaid
flowchart TD
    Q["User query"] --> TF["TF-IDF keyword match"]
    Q --> EMB{"Embedding enabled?"}

    EMB -->|yes| VEC["Vector semantic search"]
    EMB -->|no| SKIP["Skip"]

    TF --> MERGE["Merge results"]
    VEC --> MERGE
    SKIP --> MERGE

    MERGE --> RANK["Sort by similarity"]
    RANK --> VERIFIED{"Verified entries prioritized"}
    VERIFIED --> RESULT["Return top-K results"]
```
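The merge-and-rank step above can be sketched as follows (scores, hit shape, and the boost factor are illustrative, not the project's actual values):

```python
def merge_and_rank(tfidf_hits, vector_hits, top_k=3, verified_boost=1.2):
    merged = {}
    for hit in tfidf_hits + vector_hits:
        # Keep the best score seen for each entry across both result sets.
        prev = merged.get(hit["id"])
        if prev is None or hit["score"] > prev["score"]:
            merged[hit["id"]] = dict(hit)
    for hit in merged.values():
        # Verified entries get a confidence boost before ranking.
        if hit.get("is_verified"):
            hit["score"] *= verified_boost
    return sorted(merged.values(), key=lambda h: h["score"], reverse=True)[:top_k]

tfidf = [{"id": 1, "score": 0.6, "is_verified": False},
         {"id": 2, "score": 0.5, "is_verified": True}]
vector = [{"id": 1, "score": 0.55, "is_verified": False},
          {"id": 3, "score": 0.4, "is_verified": False}]
print([h["id"] for h in merge_and_rank(tfidf, vector)])  # [1, 2, 3]
```

Entry 2 illustrates the "verified entries prioritized" step: its boosted score (0.5 × 1.2 = 0.6) ties it with entry 1 ahead of entry 3.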
## 5. Feishu Bot Message Handling

```mermaid
sequenceDiagram
    participant User as Feishu user
    participant Feishu as Feishu server
    participant LC as FeishuLongConnService
    participant DM as DialogueManager
    participant LLM as LLMClient
    participant FS as FeishuService

    User->>Feishu: @bot send message
    Feishu->>LC: Long-connection event push
    LC->>LC: Message de-duplication
    LC->>LC: resolve_tenant_by_chat_id()
    LC->>DM: Handle message (with tenant_id)
    DM->>LLM: Generate reply
    LLM-->>DM: AI reply
    DM->>FS: Send Feishu message
    FS->>Feishu: API reply
```

## 6. Agent Tool Dispatch

```mermaid
flowchart TD
    INPUT["User input"] --> THINK["Thought: analyze intent"]
    THINK --> ACT["Action: pick a tool"]
    ACT --> EXEC["Execute tool"]
    EXEC --> OBS["Observation: collect result"]
    OBS --> DONE{"Task complete?"}
    DONE -->|no| THINK
    DONE -->|yes| ANSWER["Final Answer: return result"]
```

Tools registered by ReactAgent include knowledge-base search, vehicle lookup, analytics, and Feishu messaging.
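The think-act-observe loop above can be sketched with a toy policy and tool (both are stand-ins; the real agent drives these decisions with the LLM):

```python
def react_loop(question, tools, policy, max_steps=5):
    observations = []
    for _ in range(max_steps):
        step = policy(question, observations)          # "Thought": decide next step
        if step["action"] == "finish":
            return step["answer"]                      # "Final Answer"
        result = tools[step["action"]](step["input"])  # "Action": execute a tool
        observations.append(result)                    # "Observation": collect result
    return "gave up"

tools = {"knowledge_search": lambda q: f"manual says: check fuse for '{q}'"}

def policy(question, observations):
    # Toy policy: search once, then answer with the observation.
    if not observations:
        return {"action": "knowledge_search", "input": question}
    return {"action": "finish", "answer": observations[-1]}

print(react_loop("headlight not working", tools, policy))
```

The `max_steps` cap is a common safeguard in ReAct-style loops so a confused policy cannot cycle forever.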
.env (28 lines changed)

@@ -9,7 +9,7 @@
 SERVER_HOST=0.0.0.0
 
 # The port for the main Flask web server.
-SERVER_PORT=5001
+SERVER_PORT=5000
 
 # The port for the WebSocket server for real-time chat.
 WEBSOCKET_PORT=8765
@@ -21,6 +21,9 @@ DEBUG_MODE=False
 # Logging level for the application. Options: DEBUG, INFO, WARNING, ERROR, CRITICAL
 LOG_LEVEL=INFO
 
+# Tenant identifier: when multiple projects share one codebase, use distinct TENANT_ID values to isolate data
+TENANT_ID=default
+
 # ============================================================================
 # DATABASE CONFIGURATION
@@ -33,7 +36,7 @@ LOG_LEVEL=INFO
 DATABASE_URL=sqlite:///./data/tsp_assistant.db
 
 # Remote MySQL (for production; uncomment when needed)
-# DATABASE_URL=mysql+pymysql://tsp_assistant:123456@jeason.online/tsp_assistant?charset=utf8mb4
+#DATABASE_URL=mysql+pymysql://tsp_assistant:123456@jeason.online/tsp_assistant?charset=utf8mb4
 
 # ============================================================================
 # LARGE LANGUAGE MODEL (LLM) CONFIGURATION
@@ -42,13 +45,13 @@ DATABASE_URL=sqlite:///./data/tsp_assistant.db
 LLM_PROVIDER=qwen
 
 # The API key for your chosen LLM provider.
-LLM_API_KEY=sk-c0dbefa1718d46eaa897199135066f00
+LLM_API_KEY=sk-Gce85QLROESeOWf3icd2mQnYHOrmMYojwVPQ0AubMjGQ5ZE2
 
 # The base URL for the LLM API. This is often needed for OpenAI-compatible endpoints.
-LLM_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
+LLM_BASE_URL=https://gemini.jeason.online/v1
 
 # The specific model to use, e.g., "qwen-plus-latest", "gpt-3.5-turbo", "claude-3-sonnet-20240229"
-LLM_MODEL=qwen-plus-latest
+LLM_MODEL=mimo-v2-flash
 
 # The temperature for the model's responses (0.0 to 2.0).
 LLM_TEMPERATURE=0.7
@@ -105,8 +108,7 @@ AI_HUMAN_RESOLUTION_CONFIDENCE=0.90
 # REDIS CACHE CONFIGURATION
 # ============================================================================
 # Redis server host (use localhost for local development)
-REDIS_HOST=localhost
+REDIS_HOST=jeason.online
 
 # Redis server port
 REDIS_PORT=6379
@@ -114,7 +116,7 @@ REDIS_PORT=6379
 REDIS_DB=0
 
 # Redis password (leave empty if no password)
-REDIS_PASSWORD=
+REDIS_PASSWORD=123456
 
 # Redis connection pool size
 REDIS_POOL_SIZE=10
@@ -124,3 +126,13 @@ REDIS_DEFAULT_TTL=3600
 # Enable Redis cache (set to False to disable caching)
 REDIS_ENABLED=True
 
+# ============================================================================
+# EMBEDDING CONFIGURATION (knowledge-base vector search, local model)
+# ============================================================================
+# Disabled for now; enable once a suitable embedding API or server resources are available
+EMBEDDING_ENABLED=False
+EMBEDDING_MODEL=BAAI/bge-small-zh-v1.5
+EMBEDDING_DIMENSION=512
+EMBEDDING_SIMILARITY_THRESHOLD=0.5
**`.env.example`** (29 lines changed)

```diff
@@ -8,6 +8,9 @@
 # The host the web server will bind to.
 SERVER_HOST=0.0.0.0

+# Flask session secret key (must be a fixed value in production, otherwise sessions are invalidated on every restart)
+SECRET_KEY=your-random-secret-key-here
+
 # The port for the main Flask web server.
 SERVER_PORT=5001

@@ -21,6 +24,9 @@ DEBUG_MODE=False
 # Logging level for the application. Options: DEBUG, INFO, WARNING, ERROR, CRITICAL
 LOG_LEVEL=INFO

+# Tenant identifier: when multiple projects share this codebase, use distinct TENANT_ID values to isolate data
+TENANT_ID=default
+
 # ============================================================================
 # DATABASE CONFIGURATION
@@ -124,3 +130,26 @@ REDIS_DEFAULT_TTL=3600

 # Enable Redis cache (set to False to disable caching)
 REDIS_ENABLED=True

+
+# ============================================================================
+# EMBEDDING CONFIGURATION (knowledge-base vector search, local model)
+# ============================================================================
+# Enable embedding-based semantic search (falls back to keyword matching when disabled)
+EMBEDDING_ENABLED=True
+
+# Local embedding model name (downloaded automatically from HuggingFace on first run)
+# Recommended models:
+#   BAAI/bge-small-zh-v1.5           (~95MB, 512-dim, good Chinese quality, ~150MB RAM)
+#   BAAI/bge-base-zh-v1.5            (~400MB, 768-dim, better Chinese quality)
+#   shibing624/text2vec-base-chinese (~400MB, 768-dim, Chinese-specialized)
+EMBEDDING_MODEL=BAAI/bge-small-zh-v1.5
+
+# Vector dimension (must match the model)
+EMBEDDING_DIMENSION=512
+
+# Semantic-search similarity threshold (0.0-1.0; higher is stricter)
+EMBEDDING_SIMILARITY_THRESHOLD=0.5
+
+# Embedding cache TTL in seconds (default: 1 day)
+EMBEDDING_CACHE_TTL=86400
```
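The embedding settings above are plain environment variables; as a minimal sketch of how a config loader might read them with typed fallbacks (the `load_embedding_config` helper is hypothetical, not a function from this codebase — the defaults mirror the values in `.env.example`):

```python
import os
from typing import Optional


def load_embedding_config(env: Optional[dict] = None) -> dict:
    """Read the EMBEDDING_* variables with typed fallbacks (hypothetical helper)."""
    env = os.environ if env is None else env
    return {
        # env values are strings, so booleans/ints/floats need explicit parsing
        "enabled": env.get("EMBEDDING_ENABLED", "False").lower() == "true",
        "model": env.get("EMBEDDING_MODEL", "BAAI/bge-small-zh-v1.5"),
        "dimension": int(env.get("EMBEDDING_DIMENSION", "512")),
        "similarity_threshold": float(env.get("EMBEDDING_SIMILARITY_THRESHOLD", "0.5")),
        "cache_ttl": int(env.get("EMBEDDING_CACHE_TTL", "86400")),
    }


cfg = load_embedding_config({"EMBEDDING_ENABLED": "True"})
```

With an empty mapping the helper falls back to the `.env` defaults (embedding disabled), matching the "disable to degrade to keyword matching" behavior described in the comments.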
**`.gitignore`** (vendored, 8 lines changed)

```diff
@@ -42,3 +42,11 @@ test_*.py

 # Cache
 .cache/

+# Virtual environment
+bin/
+lib/
+lib64
+include/
+pyvenv.cfg
+To
```
**`.kiro/skills/log-summary/SKILL.md`** (new file, 86 lines)

---
Name: log-summary
Description: Summarizes and analyzes the ERROR and WARNING entries in the TSP assistant logs, producing an error overview and statistics since the most recent startup to help diagnose problems quickly.
---

You are a "log error summary and analysis assistant"; the skill is named **log-summary**.

Your job: when the user wants a quick picture of the errors from the most recent run (or last few runs), invoke the companion script, aggregate the log files under the per-startup subdirectories of `logs/`, count ERROR / WARNING / CRITICAL entries, and present a concise error overview and distribution.

---

## 1. Trigger conditions (when to use log-summary)

Activate this skill when the user asks something like:

- "Check whether the recent runs had any errors"
- "Summarize the errors in the recent logs"
- "Analyze the errors under logs/"
- "The system keeps failing lately; take a look at the logs"

---

## 2. Overall flow

1. Run the script `scripts/log_summary.py` from the project root.
2. Read its output and restate the key findings in natural language.
3. For error types that are clearly frequent, offer simple troubleshooting suggestions.
4. Keep the output concise; avoid pasting large chunks of raw logs.

---

## 3. Script invocation

From the project root (the directory containing `start_dashboard.py`), run:

```bash
python .claude/skills/log-summary/scripts/log_summary.py
```

Script contract:

- Automatically walks every subdirectory of `logs/` (e.g. `logs/2026-02-10_23-51-10/dashboard.log`).
- By default analyzes the most recent N (e.g. 5) log files, sorted by time, reporting:
  - the ERROR / WARNING / CRITICAL line counts per file
  - the top-N most frequent errors, clustered by message prefix
- Prints the result as structured text to standard output.

You must:

1. run the script and capture its output;
2. understand the statistics and top-error information it contains;
3. summarize for the user in 3-8 sentences of natural Chinese.

---

## 4. Output format for the user

When `log-summary` succeeds, return information structured roughly as:

1. **Overall health** (one sentence)
   - e.g. "Across the last 3 startups there were 2 ERROR and 5 WARNING entries; overall the system is fairly stable."
2. **Per-startup error statistics** (as a list)
   - For each log file (by time), briefly state:
     - the startup time (inferred from the path or the log)
     - the ERROR / WARNING / CRITICAL counts
3. **Top error types**
   - e.g. "The most frequent error is `No module named 'src.config.config'`, seen 4 times."
4. **Simple suggestions (optional)**
   - For clearly recurring errors, give 1-3 troubleshooting/optimization suggestions.

Avoid:

- copying whole log passages verbatim;
- dumping overly long technical stack traces; prefer summaries.

---

## 5. Anti-patterns and boundaries

- If the `logs/` directory does not exist or contains no log files:
  - tell the user plainly that there is nothing to analyze; do not fabricate results.
- If the script fails (e.g. a Python error or a wrong path):
  - paste a short excerpt of the error and state that the log-summary script failed;
  - do not scan all log files yourself (unless the user explicitly asks).
- Never delete or modify log files.
**`.kiro/skills/log-summary/scripts/log_summary.py`** (new file, 115 lines)

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Simple log summary script.

Walks the most recent dashboard.log files under logs/, counts
ERROR / WARNING / CRITICAL entries, and prints a brief summary for the
log-summary skill to consume.
"""

import os
import re
from pathlib import Path
from typing import List, Tuple


LOG_ROOT = Path("logs")
LOG_FILENAME = "dashboard.log"
MAX_FILES = 5  # analyze at most the N most recent log files


LEVEL_PATTERNS = {
    "ERROR": re.compile(r"\bERROR\b"),
    "WARNING": re.compile(r"\bWARNING\b"),
    "CRITICAL": re.compile(r"\bCRITICAL\b"),
}


def find_log_files() -> List[Path]:
    if not LOG_ROOT.exists():
        return []

    candidates: List[Tuple[float, Path]] = []
    for root, dirs, files in os.walk(LOG_ROOT):
        if LOG_FILENAME in files:
            p = Path(root) / LOG_FILENAME
            try:
                mtime = p.stat().st_mtime
            except OSError:
                continue
            candidates.append((mtime, p))

    # sort by modification time, newest first
    candidates.sort(key=lambda x: x[0], reverse=True)
    return [p for _, p in candidates[:MAX_FILES]]


def summarize_file(path: Path):
    counts = {level: 0 for level in LEVEL_PATTERNS.keys()}
    top_messages = {}

    try:
        with path.open("r", encoding="utf-8", errors="ignore") as f:
            for line in f:
                for level, pattern in LEVEL_PATTERNS.items():
                    if pattern.search(line):
                        counts[level] += 1
                        # use the log message as a (rough) cluster key
                        msg = line.strip()
                        # truncate to keep keys from growing too long
                        msg = msg[:200]
                        top_messages[msg] = top_messages.get(msg, 0) + 1
                        break
    except OSError as e:
        print(f"[!] Failed to read log {path}: {e}")
        return None

    # keep the top 5
    top_list = sorted(top_messages.items(), key=lambda x: x[1], reverse=True)[:5]
    return counts, top_list


def main():
    log_files = find_log_files()
    if not log_files:
        print("No log files found (logs/*/dashboard.log).")
        return

    print(f"Found {len(log_files)} recent log file(s) (at most {MAX_FILES}):\n")

    overall = {level: 0 for level in LEVEL_PATTERNS.keys()}

    for idx, path in enumerate(log_files, start=1):
        print(f"[{idx}] Log file: {path}")
        result = summarize_file(path)
        if result is None:
            print("    Could not read this log file.\n")
            continue

        counts, top_list = result
        for level, c in counts.items():
            overall[level] += c
        print(
            "    Level counts: "
            + ", ".join(f"{lvl}={counts[lvl]}" for lvl in LEVEL_PATTERNS.keys())
        )

        if top_list:
            print("    Top error/warning messages:")
            for msg, n in top_list:
                print(f"      [{n}x] {msg}")
        else:
            print("    No ERROR/WARNING/CRITICAL entries found.")

        print()

    print("Overall:")
    print(
        "    "
        + ", ".join(f"{lvl}={overall[lvl]}" for lvl in LEVEL_PATTERNS.keys())
    )


if __name__ == "__main__":
    main()
```
**`.kiro/specs/architecture-evolution/tasks.md`** (new file, 88 lines)

# Architecture Evolution Task List

## Overview

Evolution tasks, ordered by priority, based on the structural issues found in two rounds of architecture review. Each task is independently deliverable and does not depend on the completion of any other.

## Tasks

- [-] 1. Introduce a repository layer to separate data-access logic
  - [x] 1.1 Create a `src/repositories/` directory with a Repository class per core model
    - WorkOrderRepository: wraps work-order CRUD + filtering by tenant_id
    - KnowledgeRepository: wraps knowledge-base CRUD + filtering by tenant_id
    - ConversationRepository: wraps conversation/session CRUD + filtering by tenant_id
    - AlertRepository: wraps alert CRUD + filtering by tenant_id
  - [ ] 1.2 Move direct DB queries out of the blueprints into repositories
    - get_workorders, create_workorder, delete_workorder in workorders.py
    - get_knowledge, add_knowledge, delete_knowledge in knowledge.py
    - all endpoints in conversations.py
    - all endpoints in alerts.py
  - [x] 1.3 Apply tenant_id filtering uniformly in the repository base class
    - every query method automatically appends the tenant_id condition
    - write operations set tenant_id automatically

- [ ] 2. Unify the LLM client
  - [ ] 2.1 Merge the async capabilities of `src/agent/llm_client.py` into `src/core/llm_client.py`
    - LLMClient supports both sync and async calls
    - unified timeout, retry, and token-accounting logic
  - [ ] 2.2 Make agent_assistant.py use the unified LLMClient
    - delete the duplicated LLMManager/OpenAIClient classes in `src/agent/llm_client.py`
  - [ ] 2.3 Unify the LLM configuration entry point
    - all LLM calls read their configuration from unified_config

- [ ] 3. Introduce a MessagePipeline for unified message handling
  - [ ] 3.1 Create `src/dialogue/message_pipeline.py`
    - define one message-processing flow: receive → resolve tenant → manage session → search knowledge → call LLM → persist → reply
    - each entry point (WebSocket, HTTP, Feishu bot, Feishu long connection) only does protocol adaptation
  - [ ] 3.2 Refactor realtime_chat.py onto the pipeline
    - process_message and process_message_stream delegate to the pipeline
  - [ ] 3.3 Refactor the Feishu bot/longconn onto the pipeline
    - eliminate the duplicated logic in feishu_bot.py and feishu_longconn_service.py

- [ ] 4. Introduce Alembic database migrations
  - [ ] 4.1 Initialize the Alembic configuration
    - alembic init; wire env.py to unified_config
  - [ ] 4.2 Generate the initial migration script
    - generate a baseline migration from the current models.py
  - [ ] 4.3 Remove the manual _run_migrations logic from database.py
    - run alembic upgrade head at startup instead

- [ ] 5. Unify configuration management
  - [ ] 5.1 Define the config precedence: environment variables > system_settings.json > code defaults
  - [ ] 5.2 Create a ConfigService with a single read/write interface
    - get(key, default) / set(key, value) / get_section(section)
    - automatically merges the three sources underneath
  - [ ] 5.3 Migrate SystemOptimizer and PerformanceConfig to ConfigService

- [ ] 6. API contract definition
  - [ ] 6.1 Introduce Flask-RESTX or apispec to generate OpenAPI docs
  - [ ] 6.2 Add schema definitions for every blueprint endpoint
  - [ ] 6.3 Make every endpoint use the standard api_response() format

- [ ] 7. Move session state to Redis
  - [ ] 7.1 Move RealtimeChatManager.active_sessions into a Redis hash
  - [ ] 7.2 Move message deduplication from the in-memory cache to a Redis SET (multi-process safe)
  - [ ] 7.3 Support multi-instance deployment (stateless Flask + shared Redis)

- [ ] 8. Password-hash upgrade
  - [ ] 8.1 Replace SHA-256 with bcrypt (pip install bcrypt)
  - [ ] 8.2 Stay compatible with old passwords: detect the old format at login and upgrade it to bcrypt automatically

- [ ] 9. Frontend state-management cleanup
  - [ ] 9.1 Introduce a simple event bus (EventEmitter pattern)
    - modules communicate through events instead of reading and writing shared state directly
  - [ ] 9.2 Wrap state such as this.xxxCurrentTenantId in a Store object

- [x] 10. Remove dead code
  - [x] 10.1 Delete the src/web/static/js/core/ directory (an old, unfinished refactor)
  - [x] 10.2 Delete the src/web/static/js/services/ directory
  - [x] 10.3 Delete the src/web/static/js/components/ directory
  - [x] 10.4 Delete the src/web/static/js/pages/ directory
  - [x] 10.5 Remove references to the deleted JS from index.html, chat.html, and chat_http.html

## Notes

- Each task is independently deliverable; doing them in the order 1 → 2 → 3 yields the most benefit
- Task 4 (Alembic) can be done at any time and depends on nothing else
- Task 7 (Redis sessions) is only necessary once multi-instance deployment is needed
- Task 8 (password upgrade) is high-value for security with a small blast radius, and can be interleaved
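Task 1.3's "automatic tenant_id filtering in the repository base class" can be sketched in a few lines. The project would build this on SQLAlchemy sessions; the sketch below uses stdlib sqlite3 instead, and the table layout and repository name are illustrative, not the project's actual schema:

```python
import sqlite3


class WorkOrderRepository:
    """Sketch of task 1.3: every read is tenant-scoped, every write stamps
    tenant_id. Illustrative only; the real code would subclass a common
    SQLAlchemy-backed base repository rather than use raw sqlite3."""

    def __init__(self, conn: sqlite3.Connection, tenant_id: str):
        self.conn = conn
        self.tenant_id = tenant_id

    def create(self, title: str) -> int:
        cur = self.conn.execute(
            "INSERT INTO work_orders (tenant_id, title) VALUES (?, ?)",
            (self.tenant_id, title),  # write operations set tenant_id automatically
        )
        return cur.lastrowid

    def list_all(self) -> list:
        cur = self.conn.execute(
            "SELECT id, title FROM work_orders WHERE tenant_id = ?",
            (self.tenant_id,),  # every query appends the tenant_id condition
        )
        return cur.fetchall()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE work_orders (id INTEGER PRIMARY KEY, tenant_id TEXT, title TEXT)")
repo_a = WorkOrderRepository(conn, "market_a")
repo_b = WorkOrderRepository(conn, "market_b")
repo_a.create("charging pile offline")
repo_b.create("street lamp broken")
```

Because the tenant is fixed at construction time, blueprint code migrated in task 1.2 cannot forget the filter: `repo_a.list_all()` can only ever see `market_a` rows.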
**`.kiro/specs/conversation-tenant-view/.config.kiro`** (new file, 1 line)

```json
{"specId": "b7e3c1a2-5f84-4d9e-a1b3-8c6d2e4f7a90", "workflowType": "requirements-first", "specType": "feature"}
```
**`.kiro/specs/conversation-tenant-view/design.md`** (new file, 319 lines)

# Design Document: Tenant-Grouped Conversation History (conversation-tenant-view)

## Overview

This design reworks the conversation-history page from a flat session list into a two-level structure: the first level shows per-tenant summary cards grouped by `tenant_id` (total sessions, total messages, active sessions, last active time); the second level shows the sessions of a single tenant. Clicking a session still opens the message detail (the existing third level is preserved). The change touches three layers:

1. **Backend API layer**: add a tenant-summary endpoint `GET /api/conversations/tenants` to `conversations_bp`, and add `tenant_id` query-parameter support to the existing `/api/conversations/sessions` and `/api/conversations/analytics` endpoints.
2. **Business-logic layer**: add a `get_tenant_summary()` method to `ConversationHistoryManager`, and add a `tenant_id` filter parameter to `get_sessions_paginated()` and `get_conversation_analytics()`.
3. **Frontend layer**: implement the `Tenant_List_View` / `Tenant_Detail_View` state switching in `dashboard.js`, including breadcrumb navigation, stats-panel context switching, and search scoping.

The data models `ChatSession` and `Conversation` already have a `tenant_id` field (`String(50)`, indexed), so no database migration is required.

The interaction model stays consistent with the knowledge-base tenant view (knowledge-tenant-view).

## Architecture

```mermaid
graph TD
    subgraph Frontend["Frontend (dashboard.js)"]
        TLV[Tenant_List_View<br/>tenant cards]
        TDV[Tenant_Detail_View<br/>tenant session list]
        MDV[Message_Detail_View<br/>session messages]
        Stats[Stats panel<br/>global/tenant toggle]
        Breadcrumb[Breadcrumb navigation]
    end

    subgraph API["Flask Blueprint (conversations_bp)"]
        EP1["GET /api/conversations/tenants"]
        EP2["GET /api/conversations/sessions?tenant_id=X"]
        EP3["GET /api/conversations/analytics?tenant_id=X"]
        EP4["GET /api/conversations/sessions/<id>"]
        EP5["DELETE /api/conversations/sessions/<id>"]
    end

    subgraph Service["ConversationHistoryManager"]
        M1[get_tenant_summary]
        M2[get_sessions_paginated<br/>+tenant_id filter]
        M3[get_conversation_analytics<br/>+tenant_id filter]
        M4[get_session_messages]
        M5[delete_session]
    end

    subgraph DB["SQLAlchemy"]
        CS[ChatSession<br/>tenant_id indexed]
        CV[Conversation<br/>tenant_id indexed]
    end

    TLV -->|click tenant card| TDV
    TDV -->|click session row| MDV
    TDV -->|breadcrumb back| TLV
    MDV -->|breadcrumb back| TDV

    TLV --> EP1
    TDV --> EP2
    Stats --> EP3
    MDV --> EP4
    TDV --> EP5

    EP1 --> M1
    EP2 --> M2
    EP3 --> M3
    EP4 --> M4
    EP5 --> M5

    M1 --> CS
    M2 --> CS
    M3 --> CS & CV
    M4 --> CV
    M5 --> CS & CV
```

### Design decisions

- **No new model or table**: `tenant_id` already exists on `ChatSession` and `Conversation`; the aggregation is done with `GROUP BY` queries, so no separate Tenant table is needed.
- **View state lives in the frontend**: a JS variable `conversationCurrentTenantId` controls the current view level, avoiding a frontend routing framework. This matches the `currentTenantId` pattern in knowledge-tenant-view.
- **Stats panel is reused**: the same panel requests global or per-tenant statistics depending on whether `conversationCurrentTenantId` is `null`.
- **Search scope follows the view**: while in `Tenant_Detail_View`, search requests automatically append the `tenant_id` parameter.
- **Deletion reuses existing logic**: `delete_session()` already deletes a session and its associated messages; no change needed.
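The "no new table, just GROUP BY" decision reduces to one aggregate query over `chat_sessions`. A stdlib sqlite3 sketch of that query (the real implementation would go through SQLAlchemy; the column names follow the `ChatSession` model, with `updated_at` standing in for last activity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE chat_sessions (
    id INTEGER PRIMARY KEY, tenant_id TEXT, status TEXT,
    message_count INTEGER, updated_at TEXT)""")
conn.executemany(
    "INSERT INTO chat_sessions (tenant_id, status, message_count, updated_at) VALUES (?,?,?,?)",
    [("market_a", "active", 10, "2026-03-20T10:30:00"),
     ("market_a", "ended", 5, "2026-03-19T08:00:00"),
     ("market_b", "active", 7, "2026-03-21T09:00:00")])

# One GROUP BY produces every field of the Tenant_Summary response shape.
summary = conn.execute("""
    SELECT tenant_id,
           COUNT(*)                                          AS session_count,
           SUM(message_count)                                AS message_count,
           SUM(CASE WHEN status = 'active' THEN 1 ELSE 0 END) AS active_session_count,
           MAX(updated_at)                                   AS last_active_time
    FROM chat_sessions
    GROUP BY tenant_id
    ORDER BY last_active_time DESC
""").fetchall()
```

Here `market_b` sorts first because its latest `updated_at` is newest, matching the "sorted by last_active_time descending" requirement.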
## Components and Interfaces

### 1. ConversationHistoryManager: new and changed methods

```python
# New method
def get_tenant_summary(self) -> List[Dict[str, Any]]:
    """
    Aggregate ChatSession by tenant_id and return a tenant summary list.
    Return format: [
        {
            "tenant_id": "market_a",
            "session_count": 15,
            "message_count": 230,
            "active_session_count": 5,
            "last_active_time": "2026-03-20T10:30:00"
        }, ...
    ]
    Sorted by last_active_time in descending order.
    """

# Changed signatures
def get_sessions_paginated(
    self,
    page: int = 1,
    per_page: int = 20,
    status: Optional[str] = None,
    search: str = '',
    date_filter: str = '',
    tenant_id: Optional[str] = None  # new
) -> Dict[str, Any]

def get_conversation_analytics(
    self,
    work_order_id: Optional[int] = None,
    days: int = 7,
    tenant_id: Optional[str] = None  # new
) -> Dict[str, Any]
```

### 2. Conversations API: new and changed endpoints

| Endpoint | Method | Change | Notes |
|------|------|------|------|
| `/api/conversations/tenants` | GET | new | returns the tenant summary array |
| `/api/conversations/sessions` | GET | changed | adds the `tenant_id` query parameter |
| `/api/conversations/analytics` | GET | changed | adds the `tenant_id` query parameter |

Existing endpoints remain unchanged:
- `GET /api/conversations/sessions/<session_id>`: fetch a session's messages
- `DELETE /api/conversations/sessions/<session_id>`: delete a session

### 3. Frontend components

| Component/function | Responsibility |
|-----------|------|
| `loadConversationTenantList()` | request `/api/conversations/tenants` and render the tenant cards |
| `loadConversationTenantDetail(tenantId, page)` | request `/api/conversations/sessions?tenant_id=X` and render the session list |
| `renderConversationBreadcrumb(tenantId, sessionTitle)` | render the breadcrumb "Conversations > {tenant_id}" or "Conversations > {tenant_id} > {session_title}" |
| `loadConversationStats(tenantId)` | request global or per-tenant stats depending on whether tenantId is null |
| `searchConversationSessions()` | automatically append `conversationCurrentTenantId` to search requests |
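The contract of the new `/api/conversations/tenants` endpoint (200 with an array, even when empty; 500 with an error body on failure) can be sketched framework-free. The real endpoint is a Flask view on `conversations_bp`; `tenants_endpoint` and `FakeManager` below are hypothetical stand-ins used only to illustrate the contract:

```python
def tenants_endpoint(manager):
    """Framework-free sketch of GET /api/conversations/tenants."""
    try:
        # an empty summary list is still a 200 (Requirement 1.6)
        return manager.get_tenant_summary(), 200
    except Exception as e:
        # database errors become a 500 with a descriptive body (Requirement 1.7)
        return {"error": str(e)}, 500


class FakeManager:
    """Hypothetical stand-in for ConversationHistoryManager."""

    def __init__(self, summary=None, fail=False):
        self._summary = summary or []
        self._fail = fail

    def get_tenant_summary(self):
        if self._fail:
            raise RuntimeError("db down")
        return self._summary


body, status = tenants_endpoint(FakeManager())
body2, status2 = tenants_endpoint(FakeManager(fail=True))
```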
## Data Models

### ChatSession (existing, unchanged)

```python
class ChatSession(Base):
    __tablename__ = "chat_sessions"
    id = Column(Integer, primary_key=True)
    tenant_id = Column(String(50), nullable=False, default="default", index=True)
    session_id = Column(String(100), unique=True, nullable=False)
    user_id = Column(String(100), nullable=True)
    work_order_id = Column(Integer, ForeignKey("work_orders.id"), nullable=True)
    title = Column(String(200), nullable=True)
    status = Column(String(20), default="active")  # active, ended
    message_count = Column(Integer, default=0)
    source = Column(String(50), nullable=True)
    ip_address = Column(String(45), nullable=True)
    created_at = Column(DateTime, default=datetime.now)
    updated_at = Column(DateTime, default=datetime.now, onupdate=datetime.now)
    ended_at = Column(DateTime, nullable=True)
```

### Conversation (existing, unchanged)

```python
class Conversation(Base):
    __tablename__ = "conversations"
    id = Column(Integer, primary_key=True)
    tenant_id = Column(String(50), nullable=False, default="default", index=True)
    session_id = Column(String(100), ForeignKey("chat_sessions.session_id"), nullable=True)
    work_order_id = Column(Integer, ForeignKey("work_orders.id"))
    user_message = Column(Text, nullable=False)
    assistant_response = Column(Text, nullable=False)
    timestamp = Column(DateTime, default=datetime.now)
    confidence_score = Column(Float)
    response_time = Column(Float)
    # ... other fields
```

### Tenant Summary (API response shape, not persisted)

```json
{
    "tenant_id": "market_a",
    "session_count": 15,
    "message_count": 230,
    "active_session_count": 5,
    "last_active_time": "2026-03-20T10:30:00"
}
```

### Analytics response (extended)

The existing analytics response gains a `tenant_id` field (returned only when filtering by tenant); the rest of the structure is unchanged.
## Correctness Properties

*A property is a characteristic or behavior that should hold true across all valid executions of a system; essentially, a formal statement about what the system should do. Properties serve as the bridge between human-readable specifications and machine-verifiable correctness guarantees.*

### Property 1: Tenant summary aggregation correctness

*For any* set of `ChatSession` records with mixed `tenant_id`, `status`, and `message_count` values, calling `get_tenant_summary()` should return a list where each element's `session_count` equals the number of `ChatSession` records for that `tenant_id`, each `message_count` equals the sum of `message_count` fields for that `tenant_id`, and each `active_session_count` equals the count of `ChatSession` records with `status == 'active'` for that `tenant_id`.

**Validates: Requirements 1.1, 1.2, 1.3, 1.4**

### Property 2: Tenant summary sorted by last_active_time descending

*For any* result returned by `get_tenant_summary()`, the list should be sorted such that for every consecutive pair of elements `(a, b)`, `a.last_active_time >= b.last_active_time`.

**Validates: Requirements 1.5**

### Property 3: Session filtering by tenant, status, and search

*For any* combination of `tenant_id`, `status`, and `search` parameters, all sessions returned by `get_sessions_paginated()` should satisfy all specified filter conditions simultaneously. Specifically: every returned session's `tenant_id` matches the requested `tenant_id`, every returned session's `status` matches the `status` filter (if provided), and every returned session's `title` or `session_id` contains the `search` string (if provided).

**Validates: Requirements 2.1, 2.3**

### Property 4: Pagination consistency with tenant filter

*For any* `tenant_id` and valid `page`/`per_page` values, the sessions returned by `get_sessions_paginated(tenant_id=X, page=P, per_page=N)` should be a correct slice of the full filtered result set. The `total` field should equal the count of all matching sessions, `total_pages` should equal `ceil(total / per_page)`, and the number of returned sessions should equal `min(per_page, total - (page-1)*per_page)` when `page <= total_pages`.

**Validates: Requirements 2.2**

### Property 5: Session deletion removes session and all associated messages

*For any* `ChatSession` and its associated `Conversation` records, after calling `delete_session(session_id)`, querying for the `ChatSession` by `session_id` should return no results, and querying for `Conversation` records with that `session_id` should also return no results.

**Validates: Requirements 6.2**

### Property 6: Search results scoped to tenant

*For any* search query and `tenant_id`, all sessions returned by `get_sessions_paginated(search=Q, tenant_id=X)` should have `tenant_id == X`. The result set should be a subset of what `get_sessions_paginated(search=Q)` returns (without tenant filter).

**Validates: Requirements 7.1, 7.2**

### Property 7: Analytics scoped to tenant

*For any* `tenant_id`, the analytics returned by `get_conversation_analytics(tenant_id=X)` should reflect only `ChatSession` and `Conversation` records with `tenant_id == X`. When `tenant_id` is omitted, the analytics should aggregate across all tenants. Specifically, the conversation total count with a tenant filter should be less than or equal to the global total count.

**Validates: Requirements 8.3, 8.4**
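Property 4's arithmetic can be cross-checked against a tiny reference implementation. The `paginate` helper below is a hypothetical stand-in for `get_sessions_paginated`, not project code; it encodes exactly the invariants the property states:

```python
from math import ceil


def paginate(items, page, per_page):
    """Reference slice for checking Property 4 (hypothetical helper)."""
    total = len(items)
    start = (page - 1) * per_page
    return {
        "sessions": items[start:start + per_page],  # the page's slice
        "total": total,                             # count of all matching items
        "total_pages": ceil(total / per_page),      # ceil(total / per_page)
    }


result = paginate(list(range(45)), page=3, per_page=20)
```

For 45 matching sessions with `per_page=20`, page 3 holds the final `45 - 2*20 = 5` sessions, which is what the `min(per_page, total - (page-1)*per_page)` clause predicts.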
## Error Handling

### API layer

Every API endpoint is wrapped in try/except; on exception it returns the unified error format:

| Scenario | HTTP status | Response shape |
|----------|------------|---------|
| parameter validation failure (e.g. `page < 1`) | 400 | `{"error": "description"}` |
| database query exception | 500 | `{"error": "description"}` |
| success but no data | 200 | empty array `[]` or `{"sessions": [], "total": 0}` |

### Business-logic layer

- `get_tenant_summary()`: returns an empty list `[]` on database errors and logs at error level.
- `get_sessions_paginated()`: returns the empty structure `{"sessions": [], "total": 0, ...}` on errors (existing behavior unchanged).
- `get_conversation_analytics()`: returns an empty dict `{}` on errors (existing behavior unchanged).
- `delete_session()`: returns `False` on errors and logs at error level (existing behavior unchanged).

### Frontend

- Failed API requests surface an error via `showNotification(message, 'error')`.
- Network timeouts or disconnects show a generic error message.
- Failed deletions show the specific failure reason.
## Testing Strategy

### Test frameworks

- **Unit tests**: `pytest`
- **Property tests**: `hypothesis` (a Python property-based testing library)
- **Each property test runs at least 100 iterations**

### Property-Based Tests

Each correctness property maps to one property test that generates random inputs with hypothesis's `@given` decorator.

Test label format: `Feature: conversation-tenant-view, Property {number}: {property_text}`

| Property | Test description | Generation strategy |
|----------|---------|---------|
| Property 1 | generate random ChatSession lists (mixed tenant_id, status, message_count) and verify `get_tenant_summary()` aggregation correctness | `st.lists(st.builds(ChatSession, tenant_id=st.sampled_from([...]), status=st.sampled_from(['active','ended']), message_count=st.integers(min_value=0, max_value=100)))` |
| Property 2 | verify that `get_tenant_summary()` returns a list sorted by last_active_time descending | reuse Property 1's generation strategy |
| Property 3 | generate random tenant_id + status + search combinations and verify filter consistency | `st.sampled_from(tenant_ids)`, `st.sampled_from(['active','ended',''])`, `st.text(min_size=0, max_size=20)` |
| Property 4 | generate random page/per_page values and verify pagination slicing | `st.integers(min_value=1, max_value=10)` for page/per_page |
| Property 5 | create random sessions with associated messages, delete, and verify both are gone | `st.text(min_size=1, max_size=50)` for session_id, `st.integers(min_value=1, max_value=10)` for message count |
| Property 6 | generate random search terms and tenant_id and verify search scoping | `st.text()` for query, `st.sampled_from(tenant_ids)` |
| Property 7 | generate random tenant_id values and verify analytics matches a manual aggregation | `st.sampled_from(tenant_ids)` + `st.none()` |

### Unit Tests

Unit tests focus on edge cases and concrete examples:

- **Edge**: `get_tenant_summary()` returns an empty array when no ChatSession records exist
- **Edge**: querying a non-existent `tenant_id` returns an empty list with `total=0`
- **Example**: the API returns 500 on a database exception
- **Example**: deleting a tenant's last session removes the tenant from the summary
- **Integration**: the full `loadConversationTenantList()` → API → Manager path

### Test configuration

```python
from hypothesis import settings

@settings(max_examples=100)
```

Each property-test function carries a header comment referencing the property number in this design document, e.g.:

```python
# Feature: conversation-tenant-view, Property 1: Tenant summary aggregation correctness
@given(sessions=st.lists(chat_session_strategy(), min_size=0, max_size=50))
def test_tenant_summary_aggregation(sessions):
    ...
```
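A stdlib stand-in for the hypothesis tests sketched above (no hypothesis dependency; `reference_summary` is a hypothetical oracle written from Properties 1-2, not the production `get_tenant_summary`, and sessions are plain tuples rather than ChatSession rows):

```python
import random
from collections import defaultdict


def reference_summary(sessions):
    """Oracle for Properties 1-2: aggregate (tenant, status, msgs, ts) tuples."""
    agg = defaultdict(lambda: {"session_count": 0, "message_count": 0,
                               "active_session_count": 0, "last_active_time": ""})
    for tenant, status, msgs, ts in sessions:
        a = agg[tenant]
        a["session_count"] += 1
        a["message_count"] += msgs
        a["active_session_count"] += status == "active"
        a["last_active_time"] = max(a["last_active_time"], ts)  # ISO strings sort lexicographically
    return sorted(({"tenant_id": t, **a} for t, a in agg.items()),
                  key=lambda x: x["last_active_time"], reverse=True)


random.seed(0)
for _ in range(100):  # mirrors @settings(max_examples=100)
    sessions = [(random.choice(["market_a", "market_b", "default"]),
                 random.choice(["active", "ended"]),
                 random.randint(0, 100),
                 f"2026-03-{random.randint(10, 28):02d}T00:00:00")
                for _ in range(random.randint(0, 50))]
    out = reference_summary(sessions)
    # Property 2: sorted by last_active_time descending
    assert all(x["last_active_time"] >= y["last_active_time"]
               for x, y in zip(out, out[1:]))
    # Property 1 (partial): session counts partition the input
    assert sum(x["session_count"] for x in out) == len(sessions)
```

In the real suite the loop body would be a `@given`-decorated test comparing `get_tenant_summary()` against this kind of oracle instead of checking the oracle against itself.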
116
.kiro/specs/conversation-tenant-view/requirements.md
Normal file
116
.kiro/specs/conversation-tenant-view/requirements.md
Normal file
@@ -0,0 +1,116 @@
|
|||||||
|
# Requirements Document
|
||||||
|
|
||||||
|
## Introduction
|
||||||
|
|
||||||
|
对话历史租户分组展示功能。当前对话历史页面以扁平的会话列表展示所有 `ChatSession` 记录,缺乏租户(市场)维度的组织结构。本功能将对话历史页面改造为两层结构:第一层按租户分组展示汇总信息(会话总数、消息总数、最近活跃时间等),第二层展示某个租户下的具体会话列表。点击具体会话仍可查看消息详情(保留现有功能)。交互模式与知识库租户分组视图保持一致,包括卡片视图、面包屑导航、搜索范围限定和统计面板上下文切换。
|
||||||
|
|
||||||
|
## Glossary
|
||||||
|
|
||||||
|
- **Dashboard**: Flask + Jinja2 + Bootstrap 5 构建的 Web 管理后台主页面(`dashboard.html`)
|
||||||
|
- **Conversation_Tab**: Dashboard 中 `#conversation-history-tab` 区域,用于展示和管理对话历史
|
||||||
|
- **Conversation_API**: Flask Blueprint `conversations_bp`,提供对话相关的 REST API(`/api/conversations/*`)
|
||||||
|
- **History_Manager**: `ConversationHistoryManager` 类,封装对话历史的数据库查询与业务逻辑
|
||||||
|
- **Tenant**: 租户,即市场标识(如 `market_a`、`market_b`),通过 `ChatSession.tenant_id` 字段区分
|
||||||
|
- **Tenant_Summary**: 租户汇总信息,包含租户 ID、会话总数、消息总数、活跃会话数、最近活跃时间等聚合数据
|
||||||
|
- **Tenant_List_View**: 第一层视图,以卡片形式展示所有租户的对话汇总信息
|
||||||
|
- **Tenant_Detail_View**: 第二层视图,展示某个租户下的具体会话列表(含分页、筛选)
|
||||||
|
- **ChatSession**: SQLAlchemy 数据模型,包含 `tenant_id`、`session_id`、`title`、`status`、`message_count`、`source`、`created_at`、`updated_at` 等字段
|
||||||
|
- **Conversation**: SQLAlchemy 数据模型,表示单条对话消息,包含 `tenant_id`、`session_id`、`user_message`、`assistant_response` 等字段
|
||||||
|
|
||||||
|
## Requirements

### Requirement 1: Tenant Summary API

**User Story:** As an administrator, I want the backend to provide a tenant-grouped conversation session summary endpoint, so that the frontend can display per-tenant conversation statistics.

#### Acceptance Criteria

1. WHEN a GET request is sent to `/api/conversations/tenants`, THE Conversation_API SHALL return a JSON array of Tenant_Summary objects, each containing `tenant_id`, `session_count`, `message_count`, `active_session_count`, and `last_active_time`
2. THE Conversation_API SHALL compute `session_count` by counting all ChatSession records for each Tenant
3. THE Conversation_API SHALL compute `message_count` by summing the `message_count` field of all ChatSession records for each Tenant
4. THE Conversation_API SHALL compute `active_session_count` by counting ChatSession records with `status == 'active'` for each Tenant
5. THE Conversation_API SHALL sort the Tenant_Summary array by `last_active_time` in descending order
6. WHEN no ChatSession records exist, THE Conversation_API SHALL return an empty JSON array with HTTP status 200
7. IF a database query error occurs, THEN THE Conversation_API SHALL return an error response with HTTP status 500 and a descriptive error message
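The aggregation in criteria 2–5 reduces to a single GROUP BY query. As a minimal sketch, the raw SQL below (run against an in-memory SQLite table shaped like ChatSession) stands in for the SQLAlchemy query the implementation would build; the column set and sample data are illustrative, not from the repository:

```python
import sqlite3

# In-memory table mimicking the ChatSession columns the summary needs.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE chat_sessions (
        tenant_id TEXT, status TEXT, message_count INTEGER, updated_at TEXT
    )
""")
conn.executemany(
    "INSERT INTO chat_sessions VALUES (?, ?, ?, ?)",
    [
        ("market_a", "active", 5, "2024-06-01T10:00:00"),
        ("market_a", "closed", 3, "2024-06-02T09:00:00"),
        ("market_b", "active", 7, "2024-06-03T08:00:00"),
    ],
)

def get_tenant_summary(conn):
    """One GROUP BY covering criteria 2-5: count, sum, conditional count, max."""
    rows = conn.execute("""
        SELECT tenant_id,
               COUNT(*)                                           AS session_count,
               COALESCE(SUM(message_count), 0)                    AS message_count,
               SUM(CASE WHEN status = 'active' THEN 1 ELSE 0 END) AS active_session_count,
               MAX(updated_at)                                    AS last_active_time
        FROM chat_sessions
        GROUP BY tenant_id
        ORDER BY last_active_time DESC
    """).fetchall()
    keys = ["tenant_id", "session_count", "message_count",
            "active_session_count", "last_active_time"]
    return [dict(zip(keys, r)) for r in rows]

summary = get_tenant_summary(conn)
```

With no rows inserted, the same query returns an empty list, matching criterion 6.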
### Requirement 2: Tenant Session List API

**User Story:** As an administrator, I want the backend to provide a tenant-filtered, paginated session endpoint, so that after clicking a tenant I can view that tenant's session list.

#### Acceptance Criteria

1. WHEN a GET request with query parameter `tenant_id` is sent to `/api/conversations/sessions`, THE Conversation_API SHALL return only the ChatSession records belonging to the specified Tenant
2. THE Conversation_API SHALL support pagination via `page` and `per_page` query parameters when filtering by `tenant_id`
3. THE Conversation_API SHALL support `status` and `search` query parameters for further filtering within a Tenant
4. WHEN the `tenant_id` parameter value does not match any existing ChatSession records, THE Conversation_API SHALL return an empty session list with `total` equal to 0 and HTTP status 200
5. THE History_Manager SHALL accept `tenant_id` as a filter parameter in the `get_sessions_paginated` method and return paginated results scoped to the specified Tenant
### Requirement 3: Tenant List View (first level)

**User Story:** As an administrator, I want the conversation history page to first show tenant-grouped summary cards, so that I can quickly gauge conversation activity per market.

#### Acceptance Criteria

1. WHEN the Conversation_Tab is activated, THE Dashboard SHALL display a Tenant_List_View showing one card per Tenant
2. THE Tenant_List_View SHALL display the following information for each Tenant: tenant_id (tenant name), session_count (total sessions), message_count (total messages), active_session_count (active sessions), last_active_time (last active time)
3. WHEN the Tenant_List_View is loading data, THE Dashboard SHALL display a loading spinner in the Conversation_Tab area
4. WHEN no tenants exist, THE Dashboard SHALL display a placeholder message indicating that no conversation sessions are available
5. THE Tenant_List_View SHALL refresh its data when the user clicks a refresh button
### Requirement 4: Tenant Detail View (second level)

**User Story:** As an administrator, I want clicking a tenant card to show that tenant's session list, so that I can manage and review conversation content.

#### Acceptance Criteria

1. WHEN a user clicks on a Tenant card in the Tenant_List_View, THE Dashboard SHALL transition to the Tenant_Detail_View showing ChatSession records for the selected Tenant
2. THE Tenant_Detail_View SHALL display each ChatSession with the following fields: title (session title), message_count (message count), status, source, created_at (creation time), updated_at (last update time)
3. THE Tenant_Detail_View SHALL provide a breadcrumb navigation showing "对话历史 > {tenant_id}" to indicate the current context
4. WHEN the user clicks the breadcrumb "对话历史" link, THE Dashboard SHALL navigate back to the Tenant_List_View
5. THE Tenant_Detail_View SHALL support pagination with configurable page size
6. THE Tenant_Detail_View SHALL support filtering by session status and date range
### Requirement 5: Session Detail View (third level, retained)

**User Story:** As an administrator, I want clicking a session in the tenant detail view to show that session's message details, so that I can review the conversation content.

#### Acceptance Criteria

1. WHEN a user clicks on a ChatSession row in the Tenant_Detail_View, THE Dashboard SHALL display the message detail view showing all Conversation records for the selected ChatSession
2. THE Dashboard SHALL retain the existing message detail display logic and UI layout
3. THE Dashboard SHALL provide a breadcrumb navigation showing "对话历史 > {tenant_id} > {session_title}" in the message detail view
4. WHEN the user clicks the breadcrumb "{tenant_id}" link, THE Dashboard SHALL navigate back to the Tenant_Detail_View for the corresponding Tenant
### Requirement 6: Session Management Operations

**User Story:** As an administrator, I want to delete sessions from the tenant detail view, so that I can maintain the conversation history data.

#### Acceptance Criteria

1. WHILE viewing the Tenant_Detail_View, THE Dashboard SHALL provide a delete button for each ChatSession row
2. WHEN a user deletes a ChatSession in the Tenant_Detail_View, THE Conversation_API SHALL delete the ChatSession and all associated Conversation records
3. WHEN a user deletes a ChatSession, THE Dashboard SHALL refresh the Tenant_Detail_View to reflect the updated data
4. WHEN a user deletes all ChatSession records for a Tenant, THE Dashboard SHALL navigate back to the Tenant_List_View and remove the empty Tenant card
5. IF a ChatSession deletion fails, THEN THE Dashboard SHALL display an error notification with the failure reason
### Requirement 7: Search Adaptation

**User Story:** As an administrator, I want session search in the tenant detail view to be scoped to the current tenant, so that lookups are precise.

#### Acceptance Criteria

1. WHILE viewing the Tenant_Detail_View, THE Dashboard SHALL scope the session search to the currently selected Tenant
2. WHEN a search query is submitted in the Tenant_Detail_View, THE Conversation_API SHALL filter search results by the specified `tenant_id`
3. WHEN the search query is cleared, THE Dashboard SHALL restore the full paginated session list for the current Tenant
4. THE History_Manager search method SHALL accept an optional `tenant_id` parameter to limit search scope
### Requirement 8: Statistics Adaptation

**User Story:** As an administrator, I want the conversation statistics panel to show global statistics in the tenant list view and tenant-scoped statistics in the tenant detail view, so that the numbers match the current context.

#### Acceptance Criteria

1. WHILE the Tenant_List_View is displayed, THE Dashboard SHALL show global conversation statistics (total sessions across all tenants, total messages, total active sessions)
2. WHILE the Tenant_Detail_View is displayed, THE Dashboard SHALL show statistics scoped to the selected Tenant
3. WHEN a GET request with query parameter `tenant_id` is sent to `/api/conversations/analytics`, THE Conversation_API SHALL return analytics data filtered by the specified Tenant
4. WHEN the `tenant_id` parameter is omitted from the analytics request, THE Conversation_API SHALL return global analytics across all tenants
`.kiro/specs/conversation-tenant-view/tasks.md` (new file, 142 lines)
# Implementation Plan: Tenant-Grouped Conversation History (conversation-tenant-view)

## Overview

Rework the conversation history page from a flat session list into a two-level structure: the first level shows tenant summary cards grouped by `tenant_id`; the second level shows the sessions of one tenant. The change spans three layers: the ConversationHistoryManager business logic, the Flask API, and the frontend dashboard.js. The interaction model stays consistent with the knowledge-base tenant view.

## Tasks

- [x] 1. Add a get_tenant_summary method to ConversationHistoryManager
- [x] 1.1 Add a `get_tenant_summary()` method in `src/dialogue/conversation_history.py`
  - Aggregate all ChatSession records with SQLAlchemy `GROUP BY ChatSession.tenant_id`
  - Compute per tenant: `session_count` (total sessions), `message_count` (total messages, sum of message_count), `active_session_count` (sessions with status=='active'), `last_active_time` (max of updated_at)
  - Sort by `last_active_time` descending
  - On database errors, return an empty list `[]` and log at error level
  - When no ChatSession records exist, return an empty list `[]`
  - _Requirements: 1.1, 1.2, 1.3, 1.4, 1.5, 1.6_

- [ ]* 1.2 Write property tests for get_tenant_summary
  - **Property 1: Tenant summary aggregation correctness**
  - **Property 2: Tenant summary sorted by last_active_time descending**
  - Use `hypothesis` to generate random ChatSession lists (mixed tenant_id, status, message_count) and verify aggregation correctness and ordering
  - **Validates: Requirements 1.1, 1.2, 1.3, 1.4, 1.5**
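The two properties in task 1.2 can be sketched with a plain stdlib-`random` loop over a pure-Python reference aggregation; the real test would use hypothesis `@given` with shrinking. `reference_summary` here is a hypothetical stand-in, not the production method:

```python
import random
from collections import defaultdict

def reference_summary(sessions):
    """Pure-Python reference of the aggregation under test."""
    agg = defaultdict(lambda: {"session_count": 0, "message_count": 0,
                               "active_session_count": 0, "last_active_time": None})
    for s in sessions:
        a = agg[s["tenant_id"]]
        a["session_count"] += 1
        a["message_count"] += s["message_count"]
        a["active_session_count"] += s["status"] == "active"
        if a["last_active_time"] is None or s["updated_at"] > a["last_active_time"]:
            a["last_active_time"] = s["updated_at"]
    out = [{"tenant_id": t, **v} for t, v in agg.items()]
    out.sort(key=lambda x: x["last_active_time"], reverse=True)
    return out

random.seed(0)
for _ in range(100):  # hypothesis would drive this loop and shrink failures
    sessions = [{"tenant_id": random.choice("abc"),
                 "status": random.choice(["active", "closed"]),
                 "message_count": random.randint(0, 9),
                 "updated_at": random.randint(0, 1000)}
                for _ in range(random.randint(0, 30))]
    summary = reference_summary(sessions)
    # Property 1: every row matches an independent recount for that tenant
    for row in summary:
        subset = [s for s in sessions if s["tenant_id"] == row["tenant_id"]]
        assert row["session_count"] == len(subset)
        assert row["message_count"] == sum(s["message_count"] for s in subset)
        assert row["active_session_count"] == sum(s["status"] == "active" for s in subset)
    # Property 2: sorted by last_active_time descending
    times = [r["last_active_time"] for r in summary]
    assert times == sorted(times, reverse=True)
```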
- [x] 2. Add tenant_id filtering to existing ConversationHistoryManager methods
- [x] 2.1 Add an optional `tenant_id` parameter to `get_sessions_paginated()`
  - In `src/dialogue/conversation_history.py`, extend the method signature with `tenant_id: Optional[str] = None`
  - When `tenant_id` is not None, add a `ChatSession.tenant_id == tenant_id` filter to the query
  - Return structure unchanged; only the filter scope narrows
  - _Requirements: 2.1, 2.2, 2.3, 2.5_

- [ ]* 2.2 Write property tests for the tenant_id filter of get_sessions_paginated
  - **Property 3: Session filtering by tenant, status, and search**
  - **Property 4: Pagination consistency with tenant filter**
  - **Validates: Requirements 2.1, 2.2, 2.3**

- [x] 2.3 Add an optional `tenant_id` parameter to `get_conversation_analytics()`
  - When `tenant_id` is not None, add `ChatSession.tenant_id == tenant_id` and `Conversation.tenant_id == tenant_id` filters to all statistics queries
  - Return structure unchanged
  - _Requirements: 8.3, 8.4_

- [ ]* 2.4 Write property tests for the tenant_id filter of get_conversation_analytics
  - **Property 7: Analytics scoped to tenant**
  - **Validates: Requirements 8.3, 8.4**

- [x] 3. Checkpoint - ensure the backend business logic layer is complete
  - Ensure all tests pass, ask the user if questions arise.

- [x] 4. Add and modify Conversations API endpoints
- [x] 4.1 Add a `GET /api/conversations/tenants` endpoint in `src/web/blueprints/conversations.py`
  - Call `history_manager.get_tenant_summary()` and return the tenant summary JSON array
  - Wrap in try/except and return HTTP 500 on exceptions
  - _Requirements: 1.1, 1.5, 1.6, 1.7_
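The endpoint logic in task 4.1 can be sketched framework-free; the Flask view would just wrap this handler and `jsonify` the result. `handle_tenants` and `FakeManager` are illustrative names under the `get_tenant_summary()` contract above, not code from the repository:

```python
def handle_tenants(history_manager):
    """Return (payload, http_status) for GET /api/conversations/tenants."""
    try:
        return history_manager.get_tenant_summary(), 200
    except Exception as exc:  # Requirement 1.7: HTTP 500 + descriptive message
        return {"error": f"failed to load tenant summary: {exc}"}, 500

class FakeManager:
    """Test double standing in for ConversationHistoryManager."""
    def __init__(self, result=None, fail=False):
        self._result, self._fail = result or [], fail
    def get_tenant_summary(self):
        if self._fail:
            raise RuntimeError("db unavailable")
        return self._result

payload, status = handle_tenants(FakeManager([{"tenant_id": "market_a"}]))
err_payload, err_status = handle_tenants(FakeManager(fail=True))
```

The empty-database case (Requirement 1.6) falls out for free: `FakeManager()` yields `([], 200)`.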
- [x] 4.2 Modify the `GET /api/conversations/sessions` endpoint to accept a `tenant_id` query parameter
  - Read `tenant_id` from `request.args` and pass it to `history_manager.get_sessions_paginated()`
  - _Requirements: 2.1, 2.2, 2.3, 2.4_

- [x] 4.3 Modify the `GET /api/conversations/analytics` endpoint to accept a `tenant_id` query parameter
  - Read `tenant_id` from `request.args` and pass it to `history_manager.get_conversation_analytics()`
  - _Requirements: 8.3, 8.4_

- [ ]* 4.4 Write unit tests for the new and modified API endpoints
  - Test that `/api/conversations/tenants` returns correct summary data
  - Test the `tenant_id` filtering behavior of each endpoint
  - Test empty-data and error cases
  - _Requirements: 1.1, 1.6, 1.7, 2.4_

- [x] 5. Checkpoint - ensure the backend API layer is complete
  - Ensure all tests pass, ask the user if questions arise.

- [x] 6. Frontend Tenant_List_View (tenant list view)
- [x] 6.1 Implement a `loadConversationTenantList()` function in `src/web/static/js/dashboard.js`
  - Fetch tenant summaries from `GET /api/conversations/tenants`
  - Render one card per tenant showing `tenant_id`, `session_count`, `message_count`, `active_session_count`, `last_active_time`
  - Add a loading spinner state
  - Show an empty-state placeholder when there are no tenants
  - Bind card click events to `loadConversationTenantDetail(tenantId)`
  - _Requirements: 3.1, 3.2, 3.3, 3.4_

- [x] 6.2 Implement the refresh button
  - Add a refresh button in the conversation history tab that re-invokes `loadConversationTenantList()`
  - _Requirements: 3.5_

- [x] 7. Frontend Tenant_Detail_View (tenant detail view)
- [x] 7.1 Implement a `loadConversationTenantDetail(tenantId, page)` function
  - Fetch sessions from `GET /api/conversations/sessions?tenant_id=X&page=P&per_page=N`
  - Render the session table showing title, message_count, status, source, created_at, updated_at
  - Implement the pagination controls
  - Support status and date_filter filters
  - _Requirements: 4.1, 4.2, 4.5, 4.6_

- [x] 7.2 Implement breadcrumb navigation `renderConversationBreadcrumb(tenantId, sessionTitle)`
  - Show the "对话历史 > {tenant_id}" breadcrumb (tenant detail view)
  - Show the "对话历史 > {tenant_id} > {session_title}" breadcrumb (message detail view)
  - Clicking the "对话历史" link calls `loadConversationTenantList()` to return to the tenant list view
  - Clicking the "{tenant_id}" link calls `loadConversationTenantDetail(tenantId)` to return to the tenant detail view
  - Track the view level in a `conversationCurrentTenantId` state variable
  - _Requirements: 4.3, 4.4, 5.3, 5.4_

- [x] 7.3 Integrate session management operations into the Tenant_Detail_View
  - Provide a delete button on each session row, calling `DELETE /api/conversations/sessions/<session_id>`
  - Refresh the current tenant detail view after a successful delete
  - After the last session of a tenant is deleted, return to the tenant list view and remove the empty tenant card
  - Show an error via `showNotification` when an operation fails
  - _Requirements: 6.1, 6.2, 6.3, 6.4, 6.5_

- [ ]* 7.4 Write property tests for the delete operation
  - **Property 5: Session deletion removes session and all associated messages**
  - **Validates: Requirements 6.2**
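The cascade that Property 5 (task 7.4) must assert can be illustrated with an in-memory store; `SessionStore` is a hypothetical stand-in for the database layer, not the repository's code:

```python
class SessionStore:
    """In-memory stand-in: sessions plus their Conversation records."""
    def __init__(self):
        self.sessions = {}   # session_id -> session dict
        self.messages = []   # Conversation records

    def add_session(self, session_id, tenant_id):
        self.sessions[session_id] = {"session_id": session_id, "tenant_id": tenant_id}

    def add_message(self, session_id, text):
        self.messages.append({"session_id": session_id, "user_message": text})

    def delete_session(self, session_id):
        """Cascade delete, mirroring Requirement 6.2."""
        self.sessions.pop(session_id, None)
        self.messages = [m for m in self.messages if m["session_id"] != session_id]

store = SessionStore()
store.add_session("s1", "market_a")
store.add_session("s2", "market_a")
store.add_message("s1", "hi")
store.add_message("s1", "help")
store.add_message("s2", "hello")
store.delete_session("s1")
```

The property then states: after `delete_session(x)`, no session and no message with `session_id == x` remains, while everything else is untouched.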
- [x] 8. Adapt the frontend search and statistics panel
- [x] 8.1 Modify the search function `searchConversationSessions()`
  - Automatically append the `tenant_id` parameter when searching in the Tenant_Detail_View
  - Restore the current tenant's full paginated list when the search is cleared
  - _Requirements: 7.1, 7.2, 7.3_

- [ ]* 8.2 Write property tests for the search scoping
  - **Property 6: Search results scoped to tenant**
  - **Validates: Requirements 7.1, 7.2**

- [x] 8.3 Modify the `loadConversationStats(tenantId)` function
  - Request global statistics when `conversationCurrentTenantId` is null
  - Request `GET /api/conversations/analytics?tenant_id=X` when `conversationCurrentTenantId` is set
  - _Requirements: 8.1, 8.2_

- [x] 9. Frontend HTML template updates
- [x] 9.1 Add the required DOM containers to the `#conversation-history-tab` area of `src/web/templates/dashboard.html`
  - Add containers for the breadcrumb, the tenant card list, and the tenant detail view
  - Keep styling consistent with the existing Bootstrap 5 look and the knowledge-base tenant view
  - _Requirements: 3.1, 4.3_

- [x] 10. Final checkpoint - ensure all features are integrated
  - Ensure all tests pass, ask the user if questions arise.

## Notes

- Tasks marked with `*` are optional and can be skipped for faster MVP
- Each task references specific requirements for traceability
- Checkpoints ensure incremental validation
- Property tests validate universal correctness properties from the design document
- The `ChatSession` and `Conversation` models already have an indexed `tenant_id` field, so no database migration is needed
- The interaction model stays consistent with the knowledge-base tenant view (knowledge-tenant-view)
`.kiro/specs/knowledge-tenant-view/.config.kiro` (new file, 1 line)

{"specId": "0d6981a4-ab44-429e-966d-0874ce82383c", "workflowType": "requirements-first", "specType": "feature"}
`.kiro/specs/knowledge-tenant-view/design.md` (new file, 310 lines)
# Design Document: Tenant-Grouped Knowledge Base View (knowledge-tenant-view)

## Overview

This design reworks the knowledge base management page from a flat list into a two-level structure: the first level shows tenant summary cards grouped by `tenant_id`; the second level shows the knowledge entries of one tenant. The change spans three layers:

1. **Backend API layer** — add a tenant summary endpoint `/api/knowledge/tenants` to `knowledge_bp`, and add `tenant_id` query parameter support to the existing `/api/knowledge` and `/api/knowledge/stats` endpoints.
2. **Business logic layer** — add a `get_tenant_summary()` method to `KnowledgeManager`, and add a `tenant_id` filter parameter to `get_knowledge_paginated()`, `search_knowledge()`, and `get_knowledge_stats()`. `add_knowledge_entry()` also needs to accept a `tenant_id` parameter.
3. **Frontend layer** — implement the `Tenant_List_View` / `Tenant_Detail_View` state switching in `dashboard.js`, including breadcrumb navigation, statistics panel context switching, and tenant-scoped search.

The `KnowledgeEntry` model already has a `tenant_id` field (`String(50)`, indexed), so no database migration is needed.

## Architecture

```mermaid
graph TD
    subgraph Frontend["Frontend (dashboard.js)"]
        TLV[Tenant_List_View<br/>tenant card list]
        TDV[Tenant_Detail_View<br/>tenant knowledge entries]
        Stats[Statistics panel<br/>global/tenant toggle]
        Breadcrumb[Breadcrumb navigation]
    end

    subgraph API["Flask Blueprint (knowledge_bp)"]
        EP1["GET /api/knowledge/tenants"]
        EP2["GET /api/knowledge?tenant_id=X"]
        EP3["GET /api/knowledge/stats?tenant_id=X"]
        EP4["GET /api/knowledge/search?q=...&tenant_id=X"]
        EP5["POST /api/knowledge (with tenant_id)"]
    end

    subgraph Service["KnowledgeManager"]
        M1[get_tenant_summary]
        M2[get_knowledge_paginated<br/>+tenant_id filter]
        M3[get_knowledge_stats<br/>+tenant_id filter]
        M4[search_knowledge<br/>+tenant_id filter]
        M5[add_knowledge_entry<br/>+tenant_id param]
    end

    subgraph DB["SQLAlchemy"]
        KE[KnowledgeEntry<br/>tenant_id indexed]
    end

    TLV -->|click tenant card| TDV
    TDV -->|breadcrumb back| TLV
    TLV --> EP1
    TDV --> EP2
    TDV --> EP4
    Stats --> EP3
    TDV --> EP5

    EP1 --> M1
    EP2 --> M2
    EP3 --> M3
    EP4 --> M4
    EP5 --> M5

    M1 --> KE
    M2 --> KE
    M3 --> KE
    M4 --> KE
    M5 --> KE
```
### Design Decisions

- **No new model/table**: `tenant_id` already exists on `KnowledgeEntry`; aggregation is implemented with `GROUP BY`, so no separate Tenant table is needed.
- **View state managed in the frontend**: a JS variable `currentTenantId` tracks the current view level, avoiding a frontend routing framework.
- **Statistics panel reuse**: the same statistics panel requests global or tenant-level statistics depending on whether `currentTenantId` is `null`.
- **Search scoped automatically**: while in the `Tenant_Detail_View`, search requests automatically append the `tenant_id` parameter.
## Components and Interfaces

### 1. KnowledgeManager: new and modified methods

```python
# New method
def get_tenant_summary(self) -> List[Dict[str, Any]]:
    """
    Aggregate active knowledge entries by tenant_id and return the tenant summary list.
    Return format: [
        {
            "tenant_id": "market_a",
            "entry_count": 42,
            "verified_count": 30,
            "category_distribution": {"FAQ": 20, "故障排查": 22}
        }, ...
    ]
    Sorted by entry_count descending.
    """

# Modified signatures
def get_knowledge_paginated(
    self, page=1, per_page=10,
    category_filter='', verified_filter='',
    tenant_id: Optional[str] = None  # new
) -> Dict[str, Any]

def search_knowledge(
    self, query: str, top_k=3,
    verified_only=True,
    tenant_id: Optional[str] = None  # new
) -> List[Dict[str, Any]]

def get_knowledge_stats(
    self,
    tenant_id: Optional[str] = None  # new
) -> Dict[str, Any]

def add_knowledge_entry(
    self, question, answer, category,
    confidence_score=0.5, is_verified=False,
    tenant_id: Optional[str] = None  # new; defaults to config
) -> bool
```
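A pure-Python reference makes the `get_tenant_summary()` contract concrete; the production method would express the same computation in SQLAlchemy. Entries are plain dicts here and the sample data is illustrative:

```python
from collections import Counter, defaultdict

def tenant_summary(entries):
    """Reference aggregation: only active entries, sorted by entry_count desc."""
    by_tenant = defaultdict(list)
    for e in entries:
        if e["is_active"]:                      # inactive entries never count
            by_tenant[e["tenant_id"]].append(e)
    summary = [
        {
            "tenant_id": tenant,
            "entry_count": len(group),
            "verified_count": sum(e["is_verified"] for e in group),
            "category_distribution": dict(Counter(e["category"] for e in group)),
        }
        for tenant, group in by_tenant.items()
    ]
    summary.sort(key=lambda s: s["entry_count"], reverse=True)  # Requirement 1.3
    return summary

entries = [
    {"tenant_id": "market_a", "is_active": True,  "is_verified": True,  "category": "FAQ"},
    {"tenant_id": "market_a", "is_active": True,  "is_verified": False, "category": "FAQ"},
    {"tenant_id": "market_a", "is_active": False, "is_verified": True,  "category": "FAQ"},
    {"tenant_id": "market_b", "is_active": True,  "is_verified": True,  "category": "FAQ"},
]
result = tenant_summary(entries)
```

This is also exactly the oracle that Properties 1 and 2 compare the SQLAlchemy implementation against.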
### 2. Knowledge API: new and modified endpoints

| Endpoint | Method | Change | Notes |
|------|------|------|------|
| `/api/knowledge/tenants` | GET | new | returns the tenant summary array |
| `/api/knowledge` | GET | modified | adds a `tenant_id` query parameter |
| `/api/knowledge/stats` | GET | modified | adds a `tenant_id` query parameter |
| `/api/knowledge/search` | GET | modified | adds a `tenant_id` query parameter |
| `/api/knowledge` | POST | modified | request body gains a `tenant_id` field |

### 3. Frontend components

| Component | Responsibility |
|------|------|
| `loadTenantList()` | fetch `/api/knowledge/tenants` and render the tenant cards |
| `loadTenantDetail(tenantId, page)` | fetch `/api/knowledge?tenant_id=X` and render the knowledge entry list |
| `renderBreadcrumb(tenantId)` | render the "知识库 > {tenant_id}" breadcrumb |
| `loadKnowledgeStats(tenantId)` | request global or tenant statistics depending on whether tenantId is null |
| `searchKnowledge()` | automatically append `currentTenantId` when searching |
## Data Models

### KnowledgeEntry (existing, unchanged)

```python
class KnowledgeEntry(Base):
    __tablename__ = "knowledge_entries"
    id = Column(Integer, primary_key=True)
    tenant_id = Column(String(50), nullable=False, default="default", index=True)
    question = Column(Text, nullable=False)
    answer = Column(Text, nullable=False)
    category = Column(String(100), nullable=False)
    confidence_score = Column(Float, default=0.0)
    usage_count = Column(Integer, default=0)
    created_at = Column(DateTime, default=datetime.now)
    updated_at = Column(DateTime, default=datetime.now, onupdate=datetime.now)
    is_active = Column(Boolean, default=True)
    is_verified = Column(Boolean, default=False)
    verified_by = Column(String(100))
    verified_at = Column(DateTime)
    vector_embedding = Column(Text)
    search_frequency = Column(Integer, default=0)
    last_accessed = Column(DateTime)
    relevance_score = Column(Float)
```

### Tenant Summary (API response structure, not persisted)

```json
{
  "tenant_id": "market_a",
  "entry_count": 42,
  "verified_count": 30,
  "category_distribution": {
    "FAQ": 20,
    "故障排查": 22
  }
}
```

### Stats response structure (extended)

```json
{
  "total_entries": 100,
  "active_entries": 80,
  "category_distribution": {"FAQ": 40, "故障排查": 60},
  "average_confidence": 0.85,
  "tenant_id": "market_a" // new; only present when filtering by tenant
}
```
## Correctness Properties

*A property is a characteristic or behavior that should hold true across all valid executions of a system — essentially, a formal statement about what the system should do. Properties serve as the bridge between human-readable specifications and machine-verifiable correctness guarantees.*

### Property 1: Tenant summary correctly aggregates active entries

*For any* set of `KnowledgeEntry` records with mixed `is_active` and `tenant_id` values, calling `get_tenant_summary()` should return a list where each element's `entry_count` equals the number of active entries for that `tenant_id`, each `verified_count` equals the number of active+verified entries for that `tenant_id`, and each `category_distribution` correctly reflects the category counts of active entries for that `tenant_id`.

**Validates: Requirements 1.1, 1.2**

### Property 2: Tenant summary sorted by entry_count descending

*For any* result returned by `get_tenant_summary()`, the list should be sorted such that for every consecutive pair of elements `(a, b)`, `a.entry_count >= b.entry_count`.

**Validates: Requirements 1.3**

### Property 3: Knowledge entry filtering by tenant, category, and verified status

*For any* combination of `tenant_id`, `category_filter`, and `verified_filter` parameters, all entries returned by `get_knowledge_paginated()` should satisfy all specified filter conditions simultaneously. Specifically: every returned entry's `tenant_id` matches the requested `tenant_id`, every returned entry's `category` matches `category_filter` (if provided), and every returned entry's `is_verified` matches `verified_filter` (if provided).

**Validates: Requirements 2.1, 2.3**

### Property 4: Pagination consistency with tenant filter

*For any* `tenant_id` and valid `page`/`per_page` values, the entries returned by `get_knowledge_paginated(tenant_id=X, page=P, per_page=N)` should be a correct slice of the full filtered result set. The `total` field should equal the count of all matching entries, `total_pages` should equal `ceil(total / per_page)`, and the number of returned entries should equal `min(per_page, total - (page-1)*per_page)` when `page <= total_pages`.

**Validates: Requirements 2.2**
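Property 4 can also be checked exhaustively for small sizes rather than with random inputs. This sketch verifies the slice and bookkeeping formulas directly; `paginate` is an illustrative reference, not the production method:

```python
import math

def paginate(items, page, per_page):
    """Reference pagination over an already-filtered result set."""
    total = len(items)
    total_pages = math.ceil(total / per_page)
    start = (page - 1) * per_page
    return {
        "items": items[start:start + per_page],
        "total": total,
        "total_pages": total_pages,
    }

# Exhaustive check of Property 4 for all small (total, per_page, page) combos.
for total in range(0, 25):
    items = list(range(total))
    for per_page in range(1, 8):
        expected_pages = math.ceil(total / per_page)
        for page in range(1, expected_pages + 1):
            r = paginate(items, page, per_page)
            assert r["total"] == total and r["total_pages"] == expected_pages
            # returned slice size matches the formula stated in Property 4
            assert len(r["items"]) == min(per_page, total - (page - 1) * per_page)
```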
### Property 5: New entry tenant association

*For any* valid `tenant_id` and valid entry data (question, answer, category), calling `add_knowledge_entry(tenant_id=X, ...)` should result in the newly created `KnowledgeEntry` record having `tenant_id == X`. If `tenant_id` is not provided, it should default to the configured `get_config().server.tenant_id`.

**Validates: Requirements 5.2**

### Property 6: Search results scoped to tenant

*For any* search query and `tenant_id`, all results returned by `search_knowledge(query=Q, tenant_id=X)` should have `tenant_id == X`. The result set should be a subset of what `search_knowledge(query=Q)` returns (without tenant filter).

**Validates: Requirements 6.2**

### Property 7: Stats scoped to tenant

*For any* `tenant_id`, the statistics returned by `get_knowledge_stats(tenant_id=X)` should reflect only entries with `tenant_id == X`. Specifically, `total_entries` should equal the count of active entries for that tenant, and `average_confidence` should equal the mean confidence of those entries. When `tenant_id` is omitted, the stats should aggregate across all tenants.

**Validates: Requirements 7.3, 7.4**
## Error Handling

### API-layer error handling

All API endpoints already use the `@handle_api_errors` decorator, which catches the following exceptions:

| Exception | HTTP status | Notes |
|----------|------------|------|
| `ValueError` | 400 | parameter validation failure (e.g. `page < 1`) |
| `PermissionError` | 403 | insufficient permissions |
| `Exception` | 500 | unexpected errors such as database query failures |

### Business-logic-layer error handling

- `get_tenant_summary()` — on database errors, return an empty list `[]` and log at error level.
- `get_knowledge_paginated()` — on errors, return the empty structure `{"knowledge": [], "total": 0, ...}` (existing behavior unchanged).
- `get_knowledge_stats()` — on errors, return an empty dict `{}` (existing behavior unchanged).
- `add_knowledge_entry()` — on errors, return `False` and log at error level.

### Frontend error handling

- Show an error via `showNotification(message, 'error')` when an API request fails.
- Show a generic error message on network timeout or disconnect.
- Show success/failure counts when a batch operation partially fails.
## Testing Strategy

### Test frameworks

- **Unit tests**: `pytest`
- **Property tests**: `hypothesis` (a Python property-based testing library)
- **Each property test runs at least 100 iterations**

### Property-Based Tests

Each Correctness Property maps to one property test, using `hypothesis`'s `@given` decorator to generate random inputs.

Test label format: `Feature: knowledge-tenant-view, Property {number}: {property_text}`

| Property | Test description | Generation strategy |
|----------|---------|---------|
| Property 1 | generate random KnowledgeEntry lists (mixed tenant_id, is_active) and verify the aggregation of `get_tenant_summary()` | `st.lists(st.builds(KnowledgeEntry, tenant_id=st.sampled_from([...]), is_active=st.booleans()))` |
| Property 2 | verify `get_tenant_summary()` returns a list sorted by entry_count descending | reuse the Property 1 strategy |
| Property 3 | generate random tenant_id + category + verified combinations and verify filter consistency | `st.sampled_from(tenant_ids)`, `st.sampled_from(categories)`, `st.sampled_from(['true','false',''])` |
| Property 4 | generate random page/per_page values and verify pagination slicing | `st.integers(min_value=1, max_value=10)` for page/per_page |
| Property 5 | generate random tenant_id and entry data and verify the new entry's tenant_id | `st.text(min_size=1, max_size=50)` for tenant_id |
| Property 6 | generate random search terms and tenant_id and verify the search result scope | `st.text()` for query, `st.sampled_from(tenant_ids)` |
| Property 7 | generate random tenant_id and verify the statistics match a manual aggregation | `st.sampled_from(tenant_ids)` + `st.none()` |

### Unit Tests

Unit tests focus on boundary cases and concrete examples:

- **Boundary**: `get_tenant_summary()` returns an empty array when there are no active entries
- **Boundary**: querying a nonexistent `tenant_id` returns an empty list with `total=0`
- **Example**: the API returns 500 on a database error
- **Example**: `add_knowledge_entry` falls back to the configured default when `tenant_id` is omitted
- **Integration**: the full frontend `loadTenantList()` → API → Manager chain

### Test configuration

```python
from hypothesis import settings

@settings(max_examples=100)
```

Each property test function starts with a comment referencing the Property number in the design document, for example:

```python
# Feature: knowledge-tenant-view, Property 1: Tenant summary correctly aggregates active entries
@given(entries=st.lists(knowledge_entry_strategy(), min_size=0, max_size=50))
def test_tenant_summary_aggregation(entries):
    ...
```
`.kiro/specs/knowledge-tenant-view/requirements.md` (new file, 102 lines)
# Requirements Document

## Introduction

Tenant-grouped knowledge base view. The knowledge base management page currently shows all knowledge entries as a flat list, with no tenant (market) dimension. This feature reworks the page into a two-level structure: the first level shows per-tenant summaries; the second level shows the knowledge entries of a selected tenant. The `KnowledgeEntry` data model already has a `tenant_id` field; the backend needs a new per-tenant aggregation API, and the frontend needs the grouped view and drill-down interaction.

## Glossary

- **Dashboard**: the web admin console main page (`dashboard.html`), built with Flask + Jinja2 + Bootstrap 5
- **Knowledge_Tab**: the `#knowledge-tab` area of the Dashboard, used to display and manage knowledge entries
- **Knowledge_API**: Flask Blueprint `knowledge_bp`, providing the knowledge base REST API (`/api/knowledge/*`)
- **Knowledge_Manager**: the `KnowledgeManager` class, encapsulating database queries and business logic for the knowledge base
- **Tenant**: a tenant, i.e. a market identifier (such as `market_a`, `market_b`), distinguished by the `KnowledgeEntry.tenant_id` field
- **Tenant_Summary**: aggregated per-tenant data such as tenant ID and total entry count
- **Tenant_List_View**: the first-level view, showing per-tenant summaries as cards or a list
- **Tenant_Detail_View**: the second-level view, showing the knowledge entries of one tenant (with pagination and filters)
- **KnowledgeEntry**: SQLAlchemy data model with fields such as `tenant_id`, `question`, `answer`, `category`, `confidence_score`, `usage_count`, `is_verified`
## Requirements

### Requirement 1: Tenant Summary API

**User Story:** As an administrator, I want the backend to provide a tenant-grouped knowledge base summary endpoint, so that the frontend can display per-tenant knowledge entry statistics.

#### Acceptance Criteria

1. WHEN a GET request is sent to `/api/knowledge/tenants`, THE Knowledge_API SHALL return a JSON array of Tenant_Summary objects, each containing `tenant_id`, `entry_count`, `verified_count`, and `category_distribution`
2. THE Knowledge_API SHALL only count active knowledge entries (`is_active == True`) in the Tenant_Summary aggregation
3. THE Knowledge_API SHALL sort the Tenant_Summary array by `entry_count` in descending order
4. WHEN no active knowledge entries exist, THE Knowledge_API SHALL return an empty JSON array with HTTP status 200
5. IF a database query error occurs, THEN THE Knowledge_API SHALL return an error response with HTTP status 500 and a descriptive error message
### Requirement 2: Tenant Entry List API

**User Story:** As an administrator, I want the backend to provide a tenant-filtered, paginated knowledge entry endpoint, so that after clicking a tenant I can view that tenant's knowledge entries.

#### Acceptance Criteria

1. WHEN a GET request with query parameter `tenant_id` is sent to `/api/knowledge`, THE Knowledge_API SHALL return only the knowledge entries belonging to the specified Tenant
2. THE Knowledge_API SHALL support pagination via `page` and `per_page` query parameters when filtering by `tenant_id`
3. THE Knowledge_API SHALL support `category` and `verified` query parameters for further filtering within a Tenant
4. WHEN the `tenant_id` parameter value does not match any existing entries, THE Knowledge_API SHALL return an empty knowledge list with `total` equal to 0 and HTTP status 200
5. THE Knowledge_Manager SHALL provide a method that accepts `tenant_id` as a filter parameter and returns paginated results
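The combined filtering in criteria 1–4 (Property 3 of the design) can be sketched as a pure-Python reference; `filter_entries` is illustrative, with entries as plain dicts standing in for KnowledgeEntry rows:

```python
def filter_entries(entries, tenant_id=None, category=None, verified=None):
    """Every returned entry must satisfy all provided filters simultaneously."""
    result = entries
    if tenant_id is not None:
        result = [e for e in result if e["tenant_id"] == tenant_id]
    if category:
        result = [e for e in result if e["category"] == category]
    if verified is not None:
        result = [e for e in result if e["is_verified"] == verified]
    return result

entries = [
    {"tenant_id": "market_a", "category": "FAQ", "is_verified": True},
    {"tenant_id": "market_a", "category": "FAQ", "is_verified": False},
    {"tenant_id": "market_b", "category": "FAQ", "is_verified": True},
]
scoped = filter_entries(entries, tenant_id="market_a", verified=True)
missing = filter_entries(entries, tenant_id="market_c")  # unknown tenant
```

An unknown `tenant_id` simply yields an empty list, which is what criterion 4 maps to a 200 response with `total` 0.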
### Requirement 3: Tenant List View (first level)

**User Story:** As an administrator, I want the knowledge base page to first show tenant-grouped summary cards, so that I can quickly gauge the knowledge base size per market.

#### Acceptance Criteria

1. WHEN the Knowledge_Tab is activated, THE Dashboard SHALL display a Tenant_List_View showing one card per Tenant
2. THE Tenant_List_View SHALL display the following information for each Tenant: tenant_id (tenant name), entry_count (total knowledge entries), verified_count (verified entries)
3. WHEN the Tenant_List_View is loading data, THE Dashboard SHALL display a loading spinner in the Knowledge_Tab area
4. WHEN no tenants exist, THE Dashboard SHALL display a placeholder message indicating that no knowledge entries are available
5. THE Tenant_List_View SHALL refresh its data when the user clicks a refresh button
### Requirement 4: Tenant Detail View (Level 2)

**User Story:** As an administrator, I want clicking a tenant card to show the list of knowledge entries under that tenant, so that I can manage and review knowledge content.

#### Acceptance Criteria

1. WHEN a user clicks on a Tenant card in the Tenant_List_View, THE Dashboard SHALL transition to the Tenant_Detail_View showing knowledge entries for the selected Tenant
2. THE Tenant_Detail_View SHALL display each knowledge entry with the following fields: question, answer, category, confidence_score, usage_count, is_verified status
3. THE Tenant_Detail_View SHALL provide a breadcrumb navigation showing "知识库 > {tenant_id}" to indicate the current context
4. WHEN the user clicks the breadcrumb "知识库" link, THE Dashboard SHALL navigate back to the Tenant_List_View
5. THE Tenant_Detail_View SHALL support pagination with configurable page size
6. THE Tenant_Detail_View SHALL support filtering by category and verification status
### Requirement 5: Knowledge Entry Operations in the Tenant Detail View

**User Story:** As an administrator, I want to add, delete, and verify knowledge entries from the tenant detail view, so that I can maintain knowledge-base content.

#### Acceptance Criteria

1. WHILE viewing the Tenant_Detail_View, THE Dashboard SHALL provide buttons for adding, deleting, verifying, and unverifying knowledge entries
2. WHEN a user adds a new knowledge entry in the Tenant_Detail_View, THE Knowledge_API SHALL associate the new entry with the currently selected Tenant by setting the `tenant_id` field
3. WHEN a user performs a batch operation (batch delete, batch verify, batch unverify) in the Tenant_Detail_View, THE Dashboard SHALL refresh the Tenant_Detail_View to reflect the updated data
4. WHEN a user deletes all entries for a Tenant, THE Dashboard SHALL navigate back to the Tenant_List_View and remove the empty Tenant card
5. IF a knowledge entry operation fails, THEN THE Dashboard SHALL display an error notification with the failure reason
### Requirement 6: Search Adaptation

**User Story:** As an administrator, when I search knowledge entries in the tenant detail view, I want the search scoped to the current tenant, so that I can find entries precisely.

#### Acceptance Criteria

1. WHILE viewing the Tenant_Detail_View, THE Dashboard SHALL scope the knowledge search to the currently selected Tenant
2. WHEN a search query is submitted in the Tenant_Detail_View, THE Knowledge_API SHALL filter search results by the specified `tenant_id`
3. WHEN the search query is cleared, THE Dashboard SHALL restore the full paginated list for the current Tenant
4. THE Knowledge_Manager search method SHALL accept an optional `tenant_id` parameter to limit search scope
### Requirement 7: Statistics Adaptation

**User Story:** As an administrator, I want the knowledge-base statistics panel to show global statistics in the tenant list view and the current tenant's statistics in the tenant detail view, so that I get accurate contextual information.

#### Acceptance Criteria

1. WHILE the Tenant_List_View is displayed, THE Dashboard SHALL show global knowledge statistics (total entries across all tenants, total verified entries, average confidence)
2. WHILE the Tenant_Detail_View is displayed, THE Dashboard SHALL show statistics scoped to the selected Tenant
3. WHEN a GET request with query parameter `tenant_id` is sent to `/api/knowledge/stats`, THE Knowledge_API SHALL return statistics filtered by the specified Tenant
4. WHEN the `tenant_id` parameter is omitted from the stats request, THE Knowledge_API SHALL return global statistics across all tenants
157 .kiro/specs/knowledge-tenant-view/tasks.md Normal file
@@ -0,0 +1,157 @@
# Implementation Plan: Knowledge Base Tenant-Grouped View (knowledge-tenant-view)

## Overview

Restructure the knowledge-base management page from a flat list into a two-level hierarchy: the first level shows summary cards grouped by tenant; the second level lists the knowledge entries under a tenant. The change spans three layers: the KnowledgeManager business-logic layer, the Flask API layer, and the frontend dashboard.js.

## Tasks
- [x] 1. Add a get_tenant_summary method to KnowledgeManager
- [x] 1.1 Add a `get_tenant_summary()` method in `src/knowledge_base/knowledge_manager.py`
  - Aggregate knowledge entries with `is_active == True` using SQLAlchemy `GROUP BY tenant_id`
  - Return a list of dicts containing `tenant_id`, `entry_count`, `verified_count`, and `category_distribution`
  - Sort by `entry_count` in descending order
  - On database errors, return an empty list `[]` and log at error level
  - _Requirements: 1.1, 1.2, 1.3, 1.4, 1.5_
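The aggregation described in task 1.1 can be sketched as below. The `KnowledgeEntry` stand-in model and the column set are illustrative (the real model lives in `src/core/models.py` and the real method also builds `category_distribution`); only the GROUP BY / filter / ordering pattern is the point:

```python
# Sketch of the GROUP BY aggregation behind get_tenant_summary(),
# run against an in-memory SQLite DB with a minimal stand-in model.
from sqlalchemy import Boolean, Column, Integer, String, case, create_engine, func
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class KnowledgeEntry(Base):  # illustrative subset of the real model
    __tablename__ = "knowledge_entries"
    id = Column(Integer, primary_key=True)
    tenant_id = Column(String, index=True)
    is_verified = Column(Boolean, default=False)
    is_active = Column(Boolean, default=True)

def get_tenant_summary(session):
    rows = (
        session.query(
            KnowledgeEntry.tenant_id,
            func.count().label("entry_count"),
            func.sum(case((KnowledgeEntry.is_verified.is_(True), 1), else_=0)),
        )
        .filter(KnowledgeEntry.is_active.is_(True))   # only active entries
        .group_by(KnowledgeEntry.tenant_id)
        .order_by(func.count().desc())                # entry_count descending
        .all()
    )
    return [
        {"tenant_id": t, "entry_count": n, "verified_count": int(v or 0)}
        for t, n, v in rows
    ]

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as s:
    s.add_all([
        KnowledgeEntry(tenant_id="eu", is_verified=True),
        KnowledgeEntry(tenant_id="eu", is_verified=False),
        KnowledgeEntry(tenant_id="cn", is_verified=True),
        KnowledgeEntry(tenant_id="cn", is_verified=True, is_active=False),  # excluded
    ])
    s.commit()
    summary = get_tenant_summary(s)
```

With this data, `summary` lists `eu` (2 active entries, 1 verified) before `cn` (1 active entry); the inactive `cn` entry is excluded per requirement 1.2.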
- [ ]* 1.2 Write property tests for get_tenant_summary
  - **Property 1: Tenant summary correctly aggregates active entries**
  - **Property 2: Tenant summary sorted by entry_count descending**
  - Use `hypothesis` to generate random KnowledgeEntry lists and verify aggregation correctness and ordering
  - **Validates: Requirements 1.1, 1.2, 1.3**
- [x] 2. Add tenant_id filtering to existing KnowledgeManager methods
- [x] 2.1 Add an optional `tenant_id` parameter to `get_knowledge_paginated()`
  - In `src/knowledge_base/knowledge_manager.py`, change the method signature to add `tenant_id: Optional[str] = None`
  - When `tenant_id` is not None, add a `KnowledgeEntry.tenant_id == tenant_id` filter to the query
  - The return structure is unchanged; only the filter scope narrows
  - _Requirements: 2.1, 2.2, 2.3, 2.4, 2.5_

- [ ]* 2.2 Write property tests for tenant_id filtering in get_knowledge_paginated
  - **Property 3: Knowledge entry filtering by tenant, category, and verified status**
  - **Property 4: Pagination consistency with tenant filter**
  - **Validates: Requirements 2.1, 2.2, 2.3**
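The optional-parameter convention in tasks 2.1–2.2 can be sketched with an in-memory stand-in. The real method builds a SQLAlchemy query and conditionally appends `.filter(...)`; this list-based version only mirrors the semantics (field names assumed for illustration):

```python
from typing import Optional

# In-memory stand-in for the paginated query: passing tenant_id=None
# preserves the old behavior; a value narrows the scope.
def get_knowledge_paginated(entries, page=1, per_page=10,
                            tenant_id: Optional[str] = None,
                            category: Optional[str] = None,
                            verified: Optional[bool] = None):
    rows = [e for e in entries if e.get("is_active", True)]
    if tenant_id is not None:          # new optional filter
        rows = [e for e in rows if e["tenant_id"] == tenant_id]
    if category is not None:
        rows = [e for e in rows if e["category"] == category]
    if verified is not None:
        rows = [e for e in rows if e["is_verified"] == verified]
    start = (page - 1) * per_page
    return {"total": len(rows), "page": page, "items": rows[start:start + per_page]}

entries = [
    {"tenant_id": "eu", "category": "billing", "is_verified": True},
    {"tenant_id": "cn", "category": "billing", "is_verified": False},
]
scoped = get_knowledge_paginated(entries, tenant_id="eu")
unscoped = get_knowledge_paginated(entries)
```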
- [x] 2.3 Add an optional `tenant_id` parameter to `search_knowledge()`
  - Change the signatures of `search_knowledge()`, `_search_by_embedding()`, and `_search_by_keyword()`
  - When `tenant_id` is not None, add a tenant_id filter to the query
  - _Requirements: 6.2, 6.4_

- [ ]* 2.4 Write property tests for tenant_id filtering in search_knowledge
  - **Property 6: Search results scoped to tenant**
  - **Validates: Requirements 6.2**

- [x] 2.5 Add an optional `tenant_id` parameter to `get_knowledge_stats()`
  - When `tenant_id` is not None, apply the tenant_id filter to all statistics queries
  - Add a `tenant_id` field to the return structure (only when filtering by tenant)
  - _Requirements: 7.3, 7.4_

- [ ]* 2.6 Write property tests for tenant_id filtering in get_knowledge_stats
  - **Property 7: Stats scoped to tenant**
  - **Validates: Requirements 7.3, 7.4**

- [x] 2.7 Add an optional `tenant_id` parameter to `add_knowledge_entry()`
  - When `tenant_id` is not None, set the new entry's `tenant_id` to that value
  - When `tenant_id` is None, fall back to `get_config().server.tenant_id` as the default
  - _Requirements: 5.2_

- [ ]* 2.8 Write property tests for tenant_id association in add_knowledge_entry
  - **Property 5: New entry tenant association**
  - **Validates: Requirements 5.2**

- [x] 3. Checkpoint - ensure the backend business-logic layer is complete
  - Ensure all tests pass, ask the user if questions arise.
- [x] 4. Add and modify Knowledge API endpoints
- [x] 4.1 Add a `GET /api/knowledge/tenants` endpoint in `src/web/blueprints/knowledge.py`
  - Call `knowledge_manager.get_tenant_summary()` and return the tenant summary as a JSON array
  - Handle exceptions with the `@handle_api_errors` decorator
  - _Requirements: 1.1, 1.2, 1.3, 1.4, 1.5_
- [x] 4.2 Modify the `GET /api/knowledge` endpoint to support a `tenant_id` query parameter
  - Read `tenant_id` from `request.args` and pass it to `get_knowledge_paginated()`
  - _Requirements: 2.1, 2.2, 2.3, 2.4_

- [x] 4.3 Modify the `GET /api/knowledge/stats` endpoint to support a `tenant_id` query parameter
  - Read `tenant_id` from `request.args` and pass it to `get_knowledge_stats()`
  - _Requirements: 7.3, 7.4_

- [x] 4.4 Modify the `GET /api/knowledge/search` endpoint to support a `tenant_id` query parameter
  - Read `tenant_id` from `request.args` and pass it to `search_knowledge()`
  - _Requirements: 6.2_

- [x] 4.5 Modify the `POST /api/knowledge` endpoint to read a `tenant_id` field from the request body
  - Pass `tenant_id` through to `add_knowledge_entry()`
  - _Requirements: 5.2_

- [ ]* 4.6 Write unit tests for the new and modified API endpoints
  - Test that `/api/knowledge/tenants` returns correct summary data
  - Test the `tenant_id` filtering behavior of each endpoint
  - Test empty-data and error cases
  - _Requirements: 1.1, 1.4, 1.5, 2.4_

- [x] 5. Checkpoint - ensure the backend API layer is complete
  - Ensure all tests pass, ask the user if questions arise.
- [x] 6. Frontend Tenant_List_View (tenant list view)
- [x] 6.1 Implement a `loadTenantList()` function in `src/web/static/js/dashboard.js`
  - Request `GET /api/knowledge/tenants` to fetch the tenant summary data
  - Render the tenant card list; each card shows `tenant_id`, `entry_count`, and `verified_count`
  - Add a loading spinner state
  - Show an empty-state placeholder when there are no tenants
  - Bind a card click handler that calls `loadTenantDetail(tenantId)`
  - _Requirements: 3.1, 3.2, 3.3, 3.4_

- [x] 6.2 Implement the refresh button
  - Add a refresh button to the knowledge tab area that re-invokes `loadTenantList()` on click
  - _Requirements: 3.5_

- [x] 7. Frontend Tenant_Detail_View (tenant detail view)
- [x] 7.1 Implement a `loadTenantDetail(tenantId, page)` function
  - Request `GET /api/knowledge?tenant_id=X&page=P&per_page=N` to fetch knowledge entries
  - Render the knowledge entry table showing question, answer, category, confidence_score, usage_count, and is_verified
  - Implement pagination controls
  - Support category and verified filter dropdowns
  - _Requirements: 4.1, 4.2, 4.5, 4.6_

- [x] 7.2 Implement breadcrumb navigation `renderBreadcrumb(tenantId)`
  - Show a "知识库 > {tenant_id}" breadcrumb
  - Clicking the "知识库" (knowledge base) link calls `loadTenantList()` to return to the tenant list view
  - Manage a `currentTenantId` state variable to control the view level
  - _Requirements: 4.3, 4.4_

- [x] 7.3 Integrate knowledge entry action buttons into the Tenant_Detail_View
  - Reuse the existing add, delete, verify, and unverify button logic
  - When adding a knowledge entry, automatically set `tenant_id` to the currently selected tenant
  - Refresh the current view after batch operations (batch delete, batch verify, batch unverify)
  - Navigate back to the tenant list view automatically after all entries are deleted
  - Show an error notification via `showNotification` when an operation fails
  - _Requirements: 5.1, 5.2, 5.3, 5.4, 5.5_

- [x] 8. Frontend search and stats panel adaptation
- [x] 8.1 Modify search to automatically append the `tenant_id` parameter in the Tenant_Detail_View
  - Append `&tenant_id=currentTenantId` to search requests
  - Restore the current tenant's full paginated list when the search box is cleared
  - _Requirements: 6.1, 6.2, 6.3_

- [x] 8.2 Modify the `loadKnowledgeStats()` function to request different statistics per view level
  - Request global statistics when `currentTenantId` is null
  - Request `GET /api/knowledge/stats?tenant_id=X` when `currentTenantId` is set
  - _Requirements: 7.1, 7.2_

- [x] 9. Frontend HTML template update
- [x] 9.1 Add the required DOM containers to the `#knowledge-tab` area of `src/web/templates/dashboard.html`
  - Add the breadcrumb container, tenant card list container, and tenant detail container
  - Keep styling consistent with the existing Bootstrap 5 theme
  - _Requirements: 3.1, 4.3_

- [x] 10. Final checkpoint - ensure all features are integrated
  - Ensure all tests pass, ask the user if questions arise.

## Notes

- Tasks marked with `*` are optional and can be skipped for faster MVP
- Each task references specific requirements for traceability
- Checkpoints ensure incremental validation
- Property tests validate universal correctness properties from the design document
- The `KnowledgeEntry` data model already has an indexed `tenant_id` field, so no database migration is needed
25 .kiro/steering/product.md Normal file
@@ -0,0 +1,25 @@
# Product Overview

TSP Assistant (TSP智能助手) is an AI-powered customer service and work order management system built for TSP (Telematics Service Provider) vehicle service providers.

## What It Does

- Intelligent dialogue with customers via WebSocket real-time chat and Feishu (Lark) bot integration
- Work order lifecycle management with AI-generated resolution suggestions
- Knowledge base with semantic search (TF-IDF + cosine similarity, optional local embedding model)
- Vehicle data querying by VIN
- Analytics dashboard with alerts, performance monitoring, and reporting
- Multi-tenant architecture — data is isolated by `tenant_id` across all core tables
- Feishu multi-dimensional table (多维表格) bidirectional sync for work orders

## Key Domain Concepts

- **Work Order (工单)**: A support ticket tied to a vehicle issue. Can be dispatched to module owners, tracked through statuses, and enriched with AI suggestions.
- **Knowledge Entry (知识库条目)**: Q&A pairs used for retrieval-augmented responses. Verified entries have higher confidence.
- **Tenant (租户)**: Logical isolation unit (e.g., a market or region). All major entities carry a `tenant_id`.
- **Agent**: A ReAct-style LLM agent with registered tools (knowledge search, vehicle query, analytics, Feishu messaging).
- **Chat Session (对话会话)**: Groups multi-turn conversations; tracks source (websocket, API, feishu_bot).

## Primary Language

The codebase, comments, log messages, and UI are predominantly in **Chinese (Simplified)**. Variable names and code structure follow English conventions.
79 .kiro/steering/structure.md Normal file
@@ -0,0 +1,79 @@
# Project Structure

```
├── src/                          # Main application source
│   ├── main.py                   # TSPAssistant facade class (orchestrates all managers)
│   ├── agent_assistant.py        # Agent-enhanced assistant variant
│   ├── agent/                    # ReAct LLM agent
│   │   ├── react_agent.py        # Agent loop with tool dispatch
│   │   └── llm_client.py         # Agent-specific LLM client
│   ├── core/                     # Core infrastructure
│   │   ├── models.py             # SQLAlchemy ORM models (all entities)
│   │   ├── database.py           # DatabaseManager singleton, session management
│   │   ├── llm_client.py         # QwenClient (OpenAI-compatible LLM calls)
│   │   ├── cache_manager.py      # In-memory + Redis caching
│   │   ├── redis_manager.py      # Redis connection pool
│   │   ├── vector_store.py       # Vector storage for embeddings
│   │   ├── embedding_client.py   # Local embedding model client
│   │   ├── auth_manager.py       # Authentication logic
│   │   └── ...                   # Performance, backup, query optimizer
│   ├── config/
│   │   └── unified_config.py     # UnifiedConfig singleton (env → dataclasses)
│   ├── dialogue/                 # Conversation management
│   │   ├── dialogue_manager.py   # Message processing, work order creation
│   │   ├── conversation_history.py
│   │   └── realtime_chat.py      # Real-time chat manager
│   ├── knowledge_base/
│   │   └── knowledge_manager.py  # Knowledge CRUD, search, import
│   ├── analytics/                # Monitoring & analytics
│   │   ├── analytics_manager.py
│   │   ├── alert_system.py
│   │   ├── monitor_service.py
│   │   ├── token_monitor.py
│   │   └── ai_success_monitor.py
│   ├── integrations/             # External service integrations
│   │   ├── feishu_client.py      # Feishu API client
│   │   ├── feishu_service.py     # Feishu business logic
│   │   ├── feishu_longconn_service.py  # Feishu event subscription (long-conn)
│   │   ├── workorder_sync.py     # Feishu ↔ local work order sync
│   │   └── flexible_field_mapper.py    # Feishu field mapping
│   ├── vehicle/
│   │   └── vehicle_data_manager.py
│   ├── utils/                    # Shared helpers
│   │   ├── helpers.py
│   │   ├── encoding_helper.py
│   │   └── semantic_similarity.py
│   └── web/                      # Web layer
│       ├── app.py                # Flask app factory, middleware, blueprint registration
│       ├── service_manager.py    # Lazy-loading service singleton registry
│       ├── decorators.py         # @handle_errors, @require_json, @resolve_tenant_id, @rate_limit
│       ├── error_handlers.py     # Unified API response helpers
│       ├── websocket_server.py   # Standalone WebSocket server
│       ├── blueprints/           # Flask blueprints (one per domain)
│       │   ├── alerts.py, workorders.py, conversations.py, knowledge.py
│       │   ├── auth.py, tenants.py, chat.py, agent.py, vehicle.py
│       │   ├── analytics.py, monitoring.py, system.py
│       │   ├── feishu_sync.py, feishu_bot.py
│       │   └── test.py, core.py
│       ├── static/               # Frontend assets (JS, CSS)
│       └── templates/            # Jinja2 HTML templates
├── config/                       # Runtime config files (field mappings)
├── data/                         # SQLite DB file, system settings JSON
├── logs/                         # Log files (per-startup subdirectories)
├── scripts/                      # Migration and utility scripts
├── start_dashboard.py            # Main entry point (Flask + WS + Feishu)
├── start_feishu_bot.py           # Standalone Feishu bot entry point
├── init_database.py              # DB initialization script
├── requirements.txt              # Python dependencies
├── nginx.conf                    # Nginx reverse proxy config
└── .env / .env.example           # Environment configuration
```

## Key Patterns

- **Singleton managers**: `db_manager`, `service_manager`, `get_config()` — instantiated once, imported globally.
- **Blueprint-per-domain**: Each functional area (workorders, alerts, knowledge, etc.) has its own Flask blueprint under `src/web/blueprints/`.
- **Service manager with lazy loading**: `ServiceManager` in `src/web/service_manager.py` provides thread-safe lazy initialization of all service instances. Blueprints access services through it.
- **Decorator-driven API patterns**: Common decorators in `src/web/decorators.py` handle error wrapping, JSON validation, tenant resolution, and rate limiting.
- **Multi-tenant by convention**: All DB queries should filter by `tenant_id`. The `@resolve_tenant_id` decorator extracts it from request body, query params, or session.
- **Config from env**: No hardcoded secrets. All configuration flows through `UnifiedConfig` which reads from `.env` via `python-dotenv`.
67 .kiro/steering/tech.md Normal file
@@ -0,0 +1,67 @@
# Tech Stack & Build

## Language & Runtime

- Python 3.11+

## Core Frameworks & Libraries

| Layer | Technology |
|---|---|
| Web framework | Flask 3.x + Flask-CORS |
| ORM / Database | SQLAlchemy 2.x (MySQL via PyMySQL, SQLite for dev) |
| Real-time comms | `websockets` library (standalone server on port 8765) |
| Caching | Redis 5.x client + hiredis |
| LLM integration | OpenAI-compatible API (default provider: Qwen/通义千问 via DashScope) |
| Embedding | `sentence-transformers` with `BAAI/bge-small-zh-v1.5` (local, optional) |
| NLP | jieba (Chinese word segmentation), scikit-learn (TF-IDF) |
| Feishu SDK | `lark-oapi` 1.3.x (event subscription 2.0, long-connection mode) |
| Data validation | pydantic 2.x, marshmallow |
| Auth | JWT (`pyjwt`), SHA-256 password hashing |
| Monitoring | psutil (in-process), Prometheus + Grafana (Docker) |

## Configuration

- All config loaded from environment variables via `python-dotenv` → `src/config/unified_config.py`
- Singleton `UnifiedConfig` with typed dataclasses (`DatabaseConfig`, `LLMConfig`, `ServerConfig`, etc.)
- `.env` file at project root (see `.env.example` for all keys)
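The env-to-typed-dataclass pattern can be sketched as follows. The field names and defaults are illustrative, not the real `UnifiedConfig` schema (see `src/config/unified_config.py` for that):

```python
# Sketch of the env -> typed dataclass pattern behind UnifiedConfig.
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class ServerConfig:          # illustrative subset
    host: str
    port: int

def load_server_config() -> ServerConfig:
    # python-dotenv would have populated os.environ from .env before this runs.
    return ServerConfig(
        host=os.environ.get("SERVER_HOST", "0.0.0.0"),
        port=int(os.environ.get("SERVER_PORT", "5000")),
    )

os.environ["SERVER_PORT"] = "8080"   # normally supplied via .env
cfg = load_server_config()
```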
## Common Commands

```bash
# Install dependencies
pip install -r requirements.txt

# Initialize / migrate database
python init_database.py

# Start the full application (Flask + WebSocket + Feishu long-conn)
python start_dashboard.py

# Start only the Feishu bot long-connection client
python start_feishu_bot.py

# Run tests
pytest

# Code formatting
black .
isort .

# Linting
flake8
mypy .
```

## Deployment

- Docker + docker-compose (MySQL 8, Redis 7, Nginx, Prometheus, Grafana)
- Nginx reverse proxy in front of Flask (port 80/443 → 5000)
- Default ports: Flask 5000, WebSocket 8765, Redis 6379, MySQL 3306

## Code Quality Tools

- `black` for formatting (PEP 8)
- `isort` for import sorting
- `flake8` for linting
- `mypy` for type checking
96 AGENTS.md Normal file
@@ -0,0 +1,96 @@
# AGENTS.md — TSP Assistant

> Code navigation guide for AI assistants. See `.agents/summary/index.md` for detailed docs.

## Project Overview

TSP Assistant is an AI-powered, multi-tenant customer service and work order management system. Python 3.11+ / Flask 3.x / SQLAlchemy 2.x. Entry point: `start_dashboard.py`.

## Directory Map

```
src/
├── config/unified_config.py    # UnifiedConfig singleton; all config loaded from .env
├── core/
│   ├── models.py               # All SQLAlchemy ORM models (11 tables)
│   ├── database.py             # DatabaseManager singleton, session management
│   ├── llm_client.py           # LLMClient — OpenAI-compatible API (Qwen)
│   ├── auth_manager.py         # JWT + SHA-256 auth
│   ├── cache_manager.py        # Redis caching
│   └── vector_store.py         # Embedding vector store
├── dialogue/
│   ├── dialogue_manager.py     # Core dialogue handling; calls LLM + knowledge base
│   └── realtime_chat.py        # WebSocket real-time chat management
├── knowledge_base/
│   └── knowledge_manager.py    # Knowledge CRUD + TF-IDF/embedding search
├── integrations/
│   ├── feishu_service.py       # Feishu API client
│   ├── feishu_longconn_service.py  # Feishu long-connection event subscription
│   └── workorder_sync.py       # Work order ↔ Feishu multi-dimensional table bidirectional sync
├── agent/react_agent.py        # ReAct agent (tool dispatch loop)
├── analytics/
│   ├── analytics_manager.py    # Data analytics
│   └── alert_system.py         # Alert rules engine
└── web/
    ├── app.py                  # Flask app factory + routes
    ├── service_manager.py      # Lazy-loading service registry
    ├── decorators.py           # @handle_errors, @require_json, @resolve_tenant_id, @rate_limit
    ├── websocket_server.py     # Standalone WebSocket server (port 8765)
    └── blueprints/             # 16 Flask blueprints (one file per domain)
```

## Key Patterns

### Multi-tenancy
All core tables carry a `tenant_id` field. The `@resolve_tenant_id` decorator extracts it from the request. Queries must filter by `tenant_id`.

### Service access
Blueprints obtain service instances lazily through `ServiceManager`; do not instantiate business classes directly.
### API decorator stack
Typical API endpoint pattern:

```python
@bp.route('/api/xxx', methods=['POST'])
@handle_errors()
@require_json(['field1', 'field2'])
def create_xxx():
    ...
```

### Configuration
All configuration is obtained via `get_config()`, loaded from `.env`. Do not hardcode config values. See `.env.example` for all available variables.

## Entry Points & Startup

- `start_dashboard.py` — main entry; starts Flask + WebSocket + Feishu long connection (3 threads)
- `start_feishu_bot.py` — standalone Feishu bot entry point
- `init_database.py` — database initialization

## Database

- Development: SQLite (`data/tsp_assistant.db`)
- Production: MySQL via PyMySQL
- ORM: SQLAlchemy 2.x; models defined in `src/core/models.py`
- Core tables: `Tenant`, `WorkOrder`, `ChatSession`, `Conversation`, `KnowledgeEntry`, `Alert`, `Analytics`, `VehicleData`, `User`

## External Integrations

- **LLM**: Qwen/DashScope (OpenAI-compatible API), called through `LLMClient`
- **Feishu**: `lark-oapi` SDK; long-connection mode for receiving messages, API mode for sending messages and operating multi-dimensional tables
- **Redis**: optional caching layer, controlled by `REDIS_ENABLED`

## Detailed Docs

The full documentation set lives under `.agents/summary/`:
- `index.md` — documentation index (recommended AI context entry point)
- `architecture.md` — architecture diagrams and design patterns
- `components.md` — component responsibilities
- `interfaces.md` — API list
- `data_models.md` — data model ER diagrams
- `workflows.md` — sequence diagrams for key flows
- `dependencies.md` — dependency notes

## Custom Instructions

<!-- This section is for human and agent-maintained operational knowledge.
Add repo-specific conventions, gotchas, and workflow rules here.
This section is preserved exactly as-is when re-running codebase-summary. -->
88 CLAUDE.md
@@ -1,88 +0,0 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## High-Level Architecture

This project is a Python Flask-based web application called "TSP Assistant". It's an intelligent customer service system designed for Telematics Service Providers (TSP).

The backend is built with Flask and utilizes a modular structure with Blueprints. The core application logic resides in the `src/` directory.

Key components of the architecture include:

* **Web Framework**: The web interface and APIs are built using **Flask**. The main Flask app is likely configured in `src/web/app.py`.
* **Modular Routing**: The application uses Flask **Blueprints** for organizing routes. These are located in `src/web/blueprints/`. Each file in this directory corresponds to a feature area (e.g., `agent.py`, `workorders.py`, `analytics.py`).
* **Intelligent Agent**: A core feature is the AI agent. Its logic is contained within the `src/agent/` directory, which includes components for planning (`planner.py`), tool management (`tool_manager.py`), and execution (`executor.py`).
* **Database**: The application uses a relational database (likely MySQL) with **SQLAlchemy** as the ORM. Models are defined in `src/core/models.py`.
* **Configuration**: A unified configuration center (`src/config/unified_config.py`) manages all settings via environment variables and `.env` files.
* **Real-time Communication**: **WebSockets** are used for real-time features like the intelligent chat. The server logic is in `src/web/websocket_server.py`.
* **Data Analytics**: The system has a dedicated data analysis module located in `src/analytics/`.
* **Frontend**: The frontend is built with Bootstrap 5, Chart.js, and vanilla JavaScript (ES6+). Frontend assets are in `src/web/static/` and templates are in `src/web/templates/`.

## Common Commands

### Environment Setup

The project can be run using Docker (recommended) or locally.

**1. Install Dependencies:**
```bash
pip install -r requirements.txt
```

**2. Initialize the Database:**
This script sets up the necessary database tables.
```bash
python init_database.py
```

### Running the Application

**Local Development:**
To start the Flask development server:
```bash
python start_dashboard.py
```
The application will be available at `http://localhost:5000`.

**Docker Deployment:**
The project includes a `docker-compose.yml` for easy setup of all services (application, database, cache, monitoring).

To start all services:
```bash
docker-compose up -d
```
Or use the provided script:
```bash
chmod +x scripts/docker_deploy.sh
./scripts/docker_deploy.sh start
```

To stop services:
```bash
./scripts/docker_deploy.sh stop
```

### Running Tests

The project uses `pytest` for testing.
```bash
pytest
```
To run tests with coverage:
```bash
pytest --cov
```

## Key File Locations

* **Main Application Entry Point**: `start_dashboard.py` (local) or `src/web/app.py` (via WSGI in production).
* **Flask Blueprints (Routes)**: `src/web/blueprints/`
* **Agent Core Logic**: `src/agent/`
* **Database Models**: `src/core/models.py`
* **Frontend Static Assets**: `src/web/static/` (JS, CSS, images)
* **Frontend HTML Templates**: `src/web/templates/`
* **WebSocket Server**: `src/web/websocket_server.py`
* **Configuration Files**: `config/`
* **Deployment Scripts**: `scripts/`
* **Database Initialization**: `init_database.py`
597 README.md
@@ -1,521 +1,130 @@
# TSP智能助手 (TSP Assistant)
# TSP 智能助手

[](version.json)
AI-powered multi-tenant customer service and work order management system, with a Feishu bot, WebSocket real-time chat, and knowledge-base semantic search.
[](requirements.txt)
[](Dockerfile)
[](LICENSE)

> An LLM-based intelligent customer service system built for TSP (Telematics Service Provider) vehicle service providers
## Feature Overview

## 🚀 Highlights
- **Intelligent dialogue** — WebSocket real-time chat + Feishu bot (long-connection mode), with per-tenant knowledge isolation
- **Work order management** — create, edit, delete, bidirectional Feishu multi-dimensional table sync, AI-generated handling suggestions
- **Knowledge base** — TF-IDF + optional embedding semantic search, with file import and manual verification
- **Multi-tenancy** — data isolated by tenant_id; each tenant has its own system prompt and Feishu group binding
- **Analytics** — work order trends, alert statistics, satisfaction analysis, filterable by tenant
- **Alert system** — custom rules, multi-level alerts, batch management
- **Agent mode** — ReAct-style LLM agent with tool dispatch (knowledge search, vehicle query, Feishu messaging, etc.)
- **System administration** — module permission control, traffic/cost/security configuration, token monitoring

### 🧠 Intelligent Agent Architecture
## Quick Start

- **Multi-tool integration**: 10+ tools covering knowledge search, work order management, analytics, notifications, etc.
- **Intelligent planning**: goal-driven task planning and execution
- **Self-learning**: continuously improves response quality from user feedback
- **Real-time monitoring**: proactively monitors system status and anomalies
- **Modular refactor**: backend services (agent, vehicle data, analytics, test) moved into blueprints for maintainability and extensibility
- **Frontend modularization**: ES6 module architecture; improved UI components, state management, and API services

### 💬 Intelligent Dialogue System
- **Real-time communication**: WebSocket support with millisecond-level responses; connection stability issues fixed
- **Context understanding**: multi-turn conversation memory and context linking
- **VIN recognition**: automatically recognizes vehicle VINs and fetches real-time data
- **Knowledge base integration**: intelligent retrieval based on TF-IDF and cosine similarity
- **Custom prompts**: separate LLM prompts for the Feishu sync and real-time chat scenarios

### 📊 Data-Driven Analytics
- **Real data**: performance trend analysis based on real database data
- **Multi-dimensional statistics**: work orders, alerts, satisfaction, performance metrics
- **Visualization**: Chart.js charts for intuitive data presentation
- **System monitoring**: real-time CPU, memory, and health status monitoring
- **Dedicated blueprint**: standalone analytics API module with professional report export

### 🔧 Enterprise Management
- **Multi-environment deployment**: isolated dev, test, and production environments
- **Version control**: complete version management and changelog
- **Hot update**: frontend files can be hot-updated without restarting the service
- **Automatic backup**: automatic backup before updates, with one-click rollback
- **Feishu integration**: Feishu multi-dimensional table data sync and management
- **Unified error handling**: unified backend API exception handling for robustness

## 🏗️ System Architecture
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│    前端界面     │     │    后端服务     │     │    数据存储     │
│                 │     │                 │     │                 │
│ • 仪表板        │◄──►│ • Flask API     │◄──►│ • MySQL DB      │
│ • 智能对话      │     │ • WebSocket     │     │ • Redis缓存     │
│ • Agent管理     │     │ • Agent核心     │     │ • 知识库        │
│ • 数据分析      │     │ • LLM集成       │     │ • 工单系统      │
│ • 备份管理      │     │ • 备份系统      │     │ • 车辆数据      │
└─────────────────┘     └─────────────────┘     └─────────────────┘
         │
         ▼
┌─────────────────┐
│    监控系统     │
│                 │
│ • Prometheus    │
│ • Grafana       │
│ • Nginx代理     │
└─────────────────┘
```
## 🎯 Core Features

### 1. Dialogue 💬
- **Multi-turn conversation**: continuous dialogue with context linking
- **VIN recognition**: vehicle VINs detected automatically, with live data lookup
- **Knowledge retrieval**: relevant technical documents and solutions matched automatically
- **Work-order creation**: create and link work orders directly from a conversation
- **Bug fix**: resolved a WebSocket connection TypeError

### 2. Agent Administration 🤖
- **Tool management**: 10+ built-in tools, plus custom tool registration
- **Execution monitoring**: real-time status of agent tasks
- **Performance statistics**: tool usage frequency and success-rate analysis
- **Planning**: goal-based task decomposition and execution
- **Dedicated blueprint**: agent APIs split into their own blueprint

### 3. Work Orders 📋
- **AI suggestions**: handling suggestions generated from the knowledge base
- **Human review**: side-by-side comparison of human input and AI suggestions
- **Similarity scoring**: similarity between AI and human suggestions computed automatically
- **Knowledge-base updates**: high-similarity suggestions ingested automatically
- **Feishu AI prompt**: a more detailed suggestion prompt for the Feishu-sync scenario

### 4. Knowledge Base 📚
- **Multiple formats**: TXT, PDF, DOC, DOCX, and MD files
- **Smart extraction**: Q&A pairs extracted from documents automatically
- **Vector retrieval**: TF-IDF plus cosine-similarity search
- **Quality control**: entry verification and confidence settings

### 5. Analytics 📊
- **Live trends**: performance-trend analysis on real data
- **Multi-dimensional statistics**: work orders, alerts, satisfaction, and other key metrics
- **System health**: CPU, memory, and response-time monitoring
- **Visualization**: rich charts and dashboards
- **Dedicated blueprint**: analytics APIs in their own blueprint, with Excel report export

### 6. System Settings ⚙️
- **API management**: multiple LLM providers supported
- **Model parameters**: temperature, max tokens, and other tunables
- **Port configuration**: web-service and WebSocket port management
- **Log level**: flexible log-level control
- **Database robustness**: improved connection configuration and error handling

### 7. Feishu Integration 📱
- **Bitable sync**: automatic sync of Feishu Bitable data
- **Field mapping**: Feishu fields mapped intelligently to the local database
- **Live updates**: both incremental and full sync
- **Data preview**: preview before syncing to verify accuracy
- **Unified management**: Feishu features integrated into the main dashboard
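The work-order flow above scores the similarity between the AI suggestion and the human solution before deciding whether to ingest the pair. A hedged sketch of such a gate using stdlib `difflib` (the scoring function and the 0.8 threshold are illustrative stand-ins, not the project's actual algorithm):

```python
from difflib import SequenceMatcher

def similarity(ai_suggestion: str, human_solution: str) -> float:
    # character-level similarity ratio in [0.0, 1.0]
    return SequenceMatcher(None, ai_suggestion, human_solution).ratio()

def should_ingest(ai_suggestion: str, human_solution: str, threshold: float = 0.8) -> bool:
    # only near-identical pairs are written back to the knowledge base
    return similarity(ai_suggestion, human_solution) >= threshold

print(should_ingest("restart the TBox and re-register",
                    "restart the TBox and re-register it"))  # → True
```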
## 🛠️ Tech Stack

### Backend
- **Python 3.11+**: core language
- **Flask 2.3+**: web framework and API service
- **SQLAlchemy 2.0+**: ORM for database access
- **WebSocket**: real-time communication
- **psutil**: system-resource monitoring
- **Redis**: caching and session management

### Frontend
- **Bootstrap 5**: UI framework
- **Chart.js**: data visualization
- **JavaScript ES6+**: frontend logic
- **WebSocket**: real-time client

### AI / ML
- **Large language models**: OpenAI, Qwen (Tongyi Qianwen), and others
- **TF-IDF**: text vectorization
- **Cosine similarity**: semantic-similarity scoring
- **Agent framework**: intelligent task planning
- **Transformers**: pretrained-model support

### Deployment & Operations
- **Docker**: containerized deployment
- **Docker Compose**: multi-service orchestration
- **Nginx**: reverse proxy and static files
- **Prometheus**: metrics collection
- **Grafana**: monitoring dashboards
- **MySQL 8.0**: primary database
- **Redis 7**: cache
## 🚀 Quick Start

### Requirements

#### Docker deployment (recommended)
- Docker 20.10+
- Docker Compose 2.0+
- 4 GB+ free memory
- 10 GB+ free disk space

#### Local deployment
- Python 3.11+
- Node.js 16+ (optional, for frontend builds)
- MySQL 8.0+ or SQLite
- Redis 7+ (optional)
- Git
### 🐳 Docker Deployment (recommended)

1. **Clone the project**
```bash
git clone http://jeason.online:3000/zhaojie/assist.git
cd assist
```

2. **Start all services**
```bash
# via the deploy script
chmod +x scripts/docker_deploy.sh
./scripts/docker_deploy.sh start

# or directly with docker-compose
docker-compose up -d
```

3. **Access the system**
   - **TSP Assistant**: http://localhost:5000
   - **Nginx proxy**: http://localhost
   - **Prometheus**: http://localhost:9090
   - **Grafana**: http://localhost:3000 (admin/admin123456)

4. **Manage services**
```bash
# status
./scripts/docker_deploy.sh status

# logs
./scripts/docker_deploy.sh logs tsp-assistant

# stop
./scripts/docker_deploy.sh stop

# restart
./scripts/docker_deploy.sh restart
```
### 💻 Local Deployment

1. **Clone the project**
```bash
git clone http://jeason.online:3000/zhaojie/assist.git
cd assist
```

2. **Install dependencies and configure**
```bash
pip install -r requirements.txt
cp .env.example .env  # fill in your LLM API key, Feishu credentials, etc.
```

3. **Initialize the database**
```bash
python init_database.py
```

4. **Start the service**
```bash
python start_dashboard.py
```

5. **Access the system**
   - Open http://localhost:5000 and sign in with the default account `admin` / `admin123`
   - Default port: 5000 (changeable in system settings)
### Windows Quick Start
```cmd
# double-click to run
快速启动.bat
```

## System Architecture

```mermaid
graph TB
    subgraph Clients["Clients"]
        Browser["Browser dashboard"]
        FeishuBot["Feishu bot"]
        WSClient["WebSocket"]
    end

    subgraph App["Application layer"]
        Flask["Flask :5000"]
        WS["WebSocket :8765"]
        FeishuLC["Feishu long connection"]
    end

    subgraph Services["Business layer"]
        DM["Dialogue manager"]
        KM["Knowledge base"]
        WOS["Work-order sync"]
        Agent["ReAct Agent"]
    end

    subgraph Infra["Infrastructure"]
        DB["SQLAlchemy"]
        LLM["Qwen API"]
        Cache["Redis"]
    end

    Clients --> App
    App --> Services
    Services --> Infra
```
## Project Structure

```
src/
├── config/              # configuration (unified_config)
├── core/                # infrastructure (database, LLM, cache, auth, ORM models)
├── dialogue/            # dialogue management (realtime_chat, dialogue_manager)
├── knowledge_base/      # knowledge base (search, import, verification)
├── analytics/           # monitoring & analytics (alerts, tokens, AI success rate)
├── integrations/        # external integrations (Feishu client, work-order sync)
├── agent/               # ReAct agent (tool dispatch)
├── vehicle/             # vehicle-data management
├── utils/               # shared utilities
└── web/                 # web layer
    ├── app.py               # Flask application
    ├── blueprints/          # API blueprints (16, one file per domain)
    ├── service_manager.py   # lazy-loading service registry
    ├── static/              # frontend assets (JS/CSS)
    └── templates/           # Jinja2 templates
```

## 📖 Usage Guide

### Basics

1. **Chat**
   - Ask a question on the "Chat" page
   - The system searches the knowledge base and generates an answer
   - VIN recognition and vehicle-data lookup are supported

2. **Work orders**
   - Create a work order and request an AI suggestion
   - Enter the human solution
   - The system scores the similarity and updates the knowledge base

3. **Knowledge maintenance**
   - Add Q&A pairs manually
   - Upload documents for automatic knowledge extraction
   - Set confidence and verification status

4. **System monitoring**
   - Watch live performance trends
   - Monitor system health
   - Manage alerts and notifications

### Advanced

1. **Agent tools**
   - Review tool-usage statistics
   - Register custom tools
   - Inspect execution history

2. **Analytics**
   - Multi-dimensional statistics
   - Custom time ranges
   - Report export

3. **System configuration**
   - API and model parameters
   - Ports and log levels
   - Environment variables
## 🔄 Deployment & Updates

### Version Management
```bash
# bump the version
python version.py increment --type minor

# add a changelog entry
python version.py changelog --message "description of the change"

# tag a release
python version.py tag --message "Release v1.3.0"
```
## 📊 System Monitoring

### Health Checks
- **API status**: `/api/health`
- **Service monitoring**: automatic health checks and failure recovery
- **Performance metrics**: response time, throughput, error rate

### Logs
- **Application log**: `logs/tsp_assistant.log`
- **Access log**: Nginx access log
- **Error tracking**: full stack traces

## Environment Variables

| Variable | Description | Default |
|------|------|--------|
| `SECRET_KEY` | Flask session key | randomly generated |
| `DATABASE_URL` | database connection string | SQLite |
| `LLM_BASE_URL` | LLM API endpoint | DashScope |
| `LLM_API_KEY` | LLM API key | — |
| `LLM_MODEL` | model name | qwen-plus-latest |
| `FEISHU_APP_ID` | Feishu app ID | — |
| `FEISHU_APP_SECRET` | Feishu app secret | — |
| `FEISHU_APP_TOKEN` | Feishu Bitable app token | — |
| `FEISHU_TABLE_ID` | Feishu Bitable table ID | — |
| `REDIS_HOST` | Redis host | localhost |
| `REDIS_ENABLED` | enable the Redis cache | True |
| `EMBEDDING_ENABLED` | enable embedding semantic search | True |
| `EMBEDDING_MODEL` | embedding model | BAAI/bge-small-zh-v1.5 |

See `.env.example` for the full list.
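Variables like these are typically read with typed defaults. A hedged sketch of such a loader (the helper name and casting rules are illustrative, not the project's `unified_config` implementation; the variable names and defaults mirror the table above):

```python
import os

def env(name: str, default=None, cast=str):
    # read an environment variable, falling back to a default and
    # coercing "true"/"1"/"yes"/"on" style strings for booleans
    raw = os.getenv(name)
    if raw is None:
        return default
    if cast is bool:
        return raw.strip().lower() in ("1", "true", "yes", "on")
    return cast(raw)

LLM_MODEL = env("LLM_MODEL", "qwen-plus-latest")
REDIS_ENABLED = env("REDIS_ENABLED", True, cast=bool)
EMBEDDING_MODEL = env("EMBEDDING_MODEL", "BAAI/bge-small-zh-v1.5")
```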
## 🔧 Configuration

### Docker Environment Variables
```bash
# database
DATABASE_URL=mysql+pymysql://tsp_user:tsp_password@mysql:3306/tsp_assistant?charset=utf8mb4
REDIS_URL=redis://redis:6379/0

# LLM
LLM_PROVIDER=openai
LLM_API_KEY=your_api_key
LLM_MODEL=gpt-3.5-turbo

# service
SERVER_PORT=5000
WEBSOCKET_PORT=8765
LOG_LEVEL=INFO
TZ=Asia/Shanghai
```

### Docker Services

#### Core services
- **tsp-assistant**: main application (ports 5000, 8765)
- **mysql**: MySQL database (port 3306)
- **redis**: Redis cache (port 6379)
- **nginx**: reverse proxy (ports 80, 443)

#### Monitoring services
- **prometheus**: metrics collection (port 9090)
- **grafana**: dashboards (port 3000)

#### Volumes
- `mysql_data`: MySQL persistence
- `redis_data`: Redis persistence
- `prometheus_data`: Prometheus persistence
- `grafana_data`: Grafana configuration and data

### Configuration Files
- `config/llm_config.py`: LLM client configuration
- `config/integrations_config.json`: Feishu integration configuration
- `nginx.conf`: Nginx reverse-proxy configuration
- `monitoring/prometheus.yml`: Prometheus configuration
- `init.sql`: database bootstrap script
- `docker-compose.yml`: service orchestration
- `Dockerfile`: application image build

## Tech Stack at a Glance

| Layer | Technology |
|---|---|
| Web framework | Flask 3.x + Flask-CORS |
| ORM | SQLAlchemy 2.x (MySQL / SQLite) |
| Real-time | websockets (port 8765) |
| Cache | Redis 5.x + hiredis |
| LLM | Qwen/DashScope (OpenAI-compatible) |
| Embedding | sentence-transformers + BAAI/bge-small-zh-v1.5 (optional) |
| NLP | jieba + scikit-learn (TF-IDF) |
| Feishu | lark-oapi 1.3.x |
| Auth | JWT + SHA-256 |

## Database

- **Development**: SQLite (`data/tsp_assistant.db`)
- **Production**: MySQL via PyMySQL
- **Core tables**: Tenant, WorkOrder, ChatSession, Conversation, KnowledgeEntry, Alert, Analytics, VehicleData, User

## Deployment Summary

- Docker + docker-compose (MySQL 8, Redis 7, Nginx, Prometheus, Grafana)
- Nginx reverse proxy (80/443 → 5000)
- Default ports: Flask 5000, WebSocket 8765, Redis 6379, MySQL 3306

## Detailed Documentation

The full technical documentation lives in the `.agents/summary/` directory; start with `index.md`.
## 🤝 Contributing

### Workflow
1. Fork the repository
2. Create a feature branch: `git checkout -b feature/new-feature`
3. Commit your changes: `git commit -m "Add new feature"`
4. Push the branch: `git push origin feature/new-feature`
5. Open a pull request

### Conventions
- Python code follows PEP 8
- JavaScript uses ES6+ syntax
- Commit messages follow the Conventional Commits format
- New features require tests
## 📝 Changelog

### v2.1.0 (2025-12-08) - architecture overhaul and fixes
- ⚙️ **Backend restructuring**:
  - Agent, vehicle-data, analytics, and API-test routes split into standalone blueprints.
  - `app.py` slimmed down for modularity and maintainability.
  - Unified error-handling decorators and dependency injection introduced.
- 🎨 **Frontend restructuring**:
  - Modular JavaScript architecture with `core`, `services`, and `components` directories.
  - Unified state management (`store.js`) and API service (`api.js`).
  - Improved notification and alert-display components.
- 🛠️ **Key fixes**:
  - Fixed the WebSocket `TypeError: missing 1 required positional argument: 'path'` error.
  - Hardened database connectivity: tuned MySQL pooling, stronger exception handling and reconnects.
  - Resolved the `generator didn't stop` error so database sessions close correctly.
  - Stronger alert-system exception handling; a system alert is raised when a rule check fails.
  - API error responses now carry more detail.
- ✨ **Enhancements**:
  - Separate LLM prompts for Feishu sync and real-time chat, for more targeted AI suggestions.
  - Mapping added for the `Analysising` work-order status.

### v2.0.0 (2025-09-22) - Docker environment upgrade
- 🐳 **Docker rebuild**: upgraded to Python 3.11, optimized image builds
- 🐳 **Multi-service orchestration**: MySQL 8.0 + Redis 7 + Nginx + Prometheus + Grafana
- 🐳 **Monitoring**: Prometheus metrics and Grafana dashboards
- 🐳 **Security**: non-root container user, isolated data volumes
- 🐳 **Deploy script**: one-command start/stop/restart/cleanup
- 🔧 **Knowledge-search fix**: simplified search algorithm, better retrieval accuracy
- 🔧 **Batch-delete fix**: foreign-key constraint and cache issues resolved
- 🔧 **Log-encoding fix**: garbled Chinese log output resolved
- 📊 **Visualization**: fixed alert, performance, and satisfaction charts
- 📚 **Docs**: complete Docker deployment and usage guides

### v1.4.0 (2025-09-19)
- ✅ Feishu integration: Bitable data sync
- ✅ Page consolidation: Feishu sync merged into the main dashboard
- ✅ Schema changes: work-order table extended for Feishu data
- ✅ Refactoring: large files split to reduce operational risk
- ✅ Field mapping: Feishu fields mapped to the local database
- ✅ Database init: field migrations folded into initialization

### v1.3.0 (2025-09-17)
- ✅ Database architecture: MySQL primary plus SQLite backup
- ✅ Work-order detail API fix: session-management issue resolved
- ✅ Backup management: automatic MySQL-to-SQLite backups
- ✅ Database status monitoring for MySQL and SQLite
- ✅ Backup APIs for backup and restore

### v1.2.0 (2025-09-16)
- ✅ System settings: API management, model parameters, port management
- ✅ Real-data analytics: performance-trend chart fixed
- ✅ Work-order AI suggestions
- ✅ Knowledge-search tuning: better retrieval accuracy
- ✅ Agent improvements: usage statistics and custom tools

### v1.1.0 (2025-09-16)
- ✅ Work-order AI suggestions
- ✅ Knowledge-search tuning
- ✅ Agent improvements

### v1.0.0 (2024-01-01)
- ✅ Initial release
## 📄 License

This project is released under the MIT License; see [LICENSE](LICENSE) for details.
## 🔧 Troubleshooting

### Docker Issues

#### Common problems
1. **Port conflicts**
```bash
# check what holds the port
netstat -tulpn | grep :5000
# then adjust the port mapping in docker-compose.yml
```

2. **Out of memory**
```bash
# check Docker resource usage
docker stats
# raise the Docker memory limit or stop other services
```

3. **Database connection failures**
```bash
# check the MySQL service
docker-compose logs mysql
# wait for the database to finish starting (about 30 s)
```

4. **Permission problems**
```bash
# make the script executable
chmod +x scripts/docker_deploy.sh
# inspect file permissions
ls -la scripts/
```

#### Viewing logs
```bash
# all services
docker-compose logs -f

# a specific service
docker-compose logs -f tsp-assistant
docker-compose logs -f mysql
docker-compose logs -f redis
```

#### Restarting services
```bash
# one service
docker-compose restart tsp-assistant

# everything
docker-compose down && docker-compose up -d
```

### Performance Tuning

#### Docker resource limits
```yaml
# add resource limits in docker-compose.yml
services:
  tsp-assistant:
    deploy:
      resources:
        limits:
          memory: 2G
          cpus: '1.0'
```

#### Database tuning
```sql
-- MySQL performance tuning (SET GLOBAL takes a byte count, not "1G")
SET GLOBAL innodb_buffer_pool_size = 1073741824;
SET GLOBAL max_connections = 200;
```

## 📞 Support

- **Repository**: http://jeason.online:3000/zhaojie/assist
- **Bug reports**: please open an issue
- **Feature requests**: welcome
- **Docker problems**: include your `docker-compose logs` output

## 🙏 Acknowledgements

Thanks to every developer and user who has contributed to the project!

---

**TSP Assistant** - smarter vehicle service, better customer experience! 🚗✨
@@ -1,16 +1,37 @@
{
  "agent_mode": true,
  "api_base_url": "",
  "api_key": "",
  "api_provider": "openai",
  "api_timeout": 30,
  "auto_monitoring": true,
  "cpu_usage_percent": 0,
  "current_server_port": 5000,
  "current_websocket_port": 8765,
  "log_level": "INFO",
  "max_history": 10,
  "memory_usage_percent": 70.9,
  "model_max_tokens": 1000,
  "model_name": "qwen-turbo",
  "model_temperature": 0.7,
  "refresh_interval": 10,
  "server_port": 5000,
  "uptime_seconds": 0,
  "websocket_port": 8765,
  "modules": {
    "dashboard": true,
    "chat": true,
    "knowledge": true,
    "workorders": true,
    "conversation-history": true,
    "alerts": true,
    "feishu-sync": true,
    "agent": false,
    "token-monitor": true,
    "ai-monitor": true,
    "analytics": true,
    "system-optimizer": true,
    "settings": true,
    "tenant-management": true
  }
}
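The new `modules` map gates which dashboard modules a deployment exposes. A hedged sketch of how such a flag could be checked (the loader and helper are illustrative, not the project's settings code):

```python
import json

# Trimmed-down settings blob mirroring the config file above
settings = json.loads("""
{
  "agent_mode": true,
  "modules": {"dashboard": true, "chat": true, "agent": false}
}
""")

def module_enabled(name: str) -> bool:
    # unknown modules default to disabled
    return bool(settings.get("modules", {}).get(name, False))

print(module_enabled("chat"), module_enabled("agent"))  # → True False
```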
@@ -1,6 +1,6 @@
# Feishu Long-Connection Mode Guide

> **Verified working** - 2026-02-11
>
> This document explains how to use the long-connection mode of the official Feishu SDK: a Feishu bot can be connected without any public domain.

@@ -8,14 +8,14 @@

The official Feishu SDK provides **event subscription 2.0 (long-connection mode)**, which has the following advantages over the traditional webhook mode:

| Feature | Webhook mode (old) | Long-connection mode (new) |
|------|-------------------|-------------------|
| Public domain | required | not needed |
| SSL certificate | required | not needed |
| Callback configuration | required | not needed |
| Intranet deployment | unsupported | supported |
| Responsiveness | medium (HTTP polling) | high (WebSocket) |
| Stability | medium | high (auto-reconnect) |

---

@@ -61,12 +61,12 @@ python init_database.py
Open the [Feishu Open Platform](https://open.feishu.cn/app) and grant your app the following permissions:

**Required:**
- `im:message` - read and send direct and group messages
- `im:message:send_as_bot` - send messages as the app
- `im:chat` - read group information

**Optional (for work-order sync):**
- `bitable:app` - view, comment on, edit, and manage Bitable

**Note:** republish the app version after adding permissions.

@@ -94,7 +94,7 @@ python start_feishu_bot.py
- 日志级别: INFO

🔌 启动模式: 事件订阅 2.0(长连接)
优势:
  - 无需公网域名
  - 无需配置 webhook
  - 自动重连
@@ -107,7 +107,7 @@ python start_feishu_bot.py
- App ID: cli_xxxxxxxxxxxxx
- 模式: 事件订阅 2.0(长连接)
- 无需公网域名和 webhook 配置
飞书长连接服务初始化成功
```

### Option 2: integrate in code
@@ -167,18 +167,18 @@ app.run(...)
While running, the service logs in real time:

```
[Feishu LongConn] 收到消息
- 消息ID: om_xxxxxxxxxxxxx
- 群聊ID: oc_xxxxxxxxxxxxx
- 发送者: ou_xxxxxxxxxxxxx
- 消息类型: text
- 原始内容: @TSP助手 车辆无法连接网络
- 清理后内容: 车辆无法连接网络
会话用户标识: feishu_oc_xxxxxxxxxxxxx_ou_xxxxxxxxxxxxx
为用户 ou_xxxxxxxxxxxxx 在群聊 oc_xxxxxxxxxxxxx 创建新会话: session_xxxxxxxxxxxxx
🤖 调用 TSP Assistant 处理消息...
准备发送回复 (长度: 156)
成功回复消息: om_xxxxxxxxxxxxx
```

---

@@ -195,12 +195,12 @@ app.run(...)
```

**Drawbacks:**
- requires a public domain
- requires SSL configuration
- cannot be deployed on an intranet
- requires a callback URL configured in the Feishu console

### New mode (long connection) - recommended

**File:** `src/integrations/feishu_longconn_service.py`

@@ -210,16 +210,16 @@
```

**Advantages:**
- no public domain needed
- no SSL certificate needed
- deployable on an intranet
- no callback URL to configure
- automatic reconnection
- more stable

---

## Architecture

```
┌─────────────────────────────────────────────────────────────┐
@@ -332,7 +332,7 @@ python3 -m pip install --upgrade certifi

3. **Verify the fix**:
   ```bash
   python3 -c "import urllib.request; urllib.request.urlopen('https://open.feishu.cn', timeout=5); print('SSL 验证成功')"
   ```

**Fix (Linux/Windows):**
@@ -343,11 +343,11 @@ pip3 install --upgrade certifi
### Q2: No messages after startup?

**Checklist:**
1. Confirm the app permissions are configured (`im:message`)
2. Confirm the latest app version is published
3. Confirm the bot has been added to the group chat
4. Check the logs for a successful-connection message
5. Confirm there is no SSL certificate error (see Q1)

### Q3: "Insufficient permissions"?

@@ -379,7 +379,7 @@ pip3 install --upgrade certifi

Your TSP Assistant now supports the **long-connection mode of the official Feishu SDK**!

**Key advantages:**
- no public domain
- no webhook configuration
- deployable on an intranet
@@ -387,7 +387,7 @@ pip3 install --upgrade certifi
- per-group-chat isolation
- full work-order management and AI analysis

**How to run:**
```bash
python start_feishu_bot.py
```
feishu_config_2026-04-02.json (new file, 14 lines)
@@ -0,0 +1,14 @@
{
  "feishu": {
    "app_id": "cli_a8b50ec0eed1500d",
    "table_id": "tblnl3vJPpgMTSiP",
    "status": "active",
    "app_secret": "***"
  },
  "system": {
    "sync_limit": 10,
    "ai_suggestions_enabled": true,
    "auto_sync_interval": 0,
    "last_sync_time": null
  }
}
@@ -20,7 +20,7 @@ from src.utils.helpers import setup_logging
from src.core.database import db_manager
from src.core.models import (
    Base, WorkOrder, KnowledgeEntry, Conversation, Analytics, Alert, VehicleData,
    WorkOrderSuggestion, WorkOrderProcessHistory, User, ChatSession
)

class DatabaseInitializer:
@@ -197,7 +197,9 @@ class DatabaseInitializer:
            self._migrate_workorder_dispatch_fields,
            self._migrate_workorder_process_history_table,
            self._migrate_analytics_enhancements,
            self._migrate_system_optimization_fields,
            self._migrate_chat_sessions_table,
            self._migrate_tenant_id_fields,
        ]

        success_count = 0
@@ -445,6 +447,61 @@ class DatabaseInitializer:

        return success

    def _migrate_chat_sessions_table(self) -> bool:
        """Migration: create the chat_sessions table and add session_id to conversations"""
        print("  检查会话管理表...")

        try:
            inspector = inspect(db_manager.engine)

            # 1. Create the chat_sessions table
            if 'chat_sessions' not in inspector.get_table_names():
                print("  创建 chat_sessions 表...")
                ChatSession.__table__.create(db_manager.engine, checkfirst=True)
                print("  chat_sessions 表创建成功")
            else:
                print("  chat_sessions 表已存在")

            # 2. Add the session_id column to conversations
            if not self._column_exists('conversations', 'session_id'):
                print("  添加 conversations.session_id 字段...")
                self._add_table_columns('conversations', [
                    ('session_id', 'VARCHAR(100)')
                ])
                print("  session_id 字段添加成功")
            else:
                print("  conversations.session_id 字段已存在")

            return True

        except Exception as e:
            print(f"  会话管理表迁移失败: {e}")
            return False

    def _migrate_tenant_id_fields(self) -> bool:
        """Migration: add the multi-tenant tenant_id column to the core tables"""
        print("  检查多租户 tenant_id 字段...")
        tables = [
            "work_orders", "chat_sessions", "conversations",
            "knowledge_entries", "analytics", "alerts", "users",
        ]
        try:
            added = 0
            for table in tables:
                if not self._column_exists(table, 'tenant_id'):
                    print(f"  添加 {table}.tenant_id ...")
                    self._add_table_columns(table, [
                        ('tenant_id', "VARCHAR(50) DEFAULT 'default' NOT NULL")
                    ])
                    added += 1
                else:
                    print(f"  {table}.tenant_id 已存在")
            print(f"  tenant_id 迁移完成,新增 {added} 个表")
            return True
        except Exception as e:
            print(f"  tenant_id 迁移失败: {e}")
            return False

    def _add_table_columns(self, table_name: str, fields: List[tuple]) -> bool:
        """Add columns to a table"""
        try:
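Both migrations above lean on a `_column_exists` helper that is outside this hunk. A minimal sketch of the idea using stdlib sqlite3 (the real helper goes through SQLAlchemy's inspector; the table here is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conversations (id INTEGER PRIMARY KEY, content TEXT)")

def column_exists(table: str, column: str) -> bool:
    # PRAGMA table_info returns one row per column; index 1 is the name
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    return column in cols

if not column_exists("conversations", "session_id"):
    # idempotent migration step: only add the column when it is missing
    conn.execute("ALTER TABLE conversations ADD COLUMN session_id VARCHAR(100)")

print(column_exists("conversations", "session_id"))  # → True
```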
nginx.conf
@@ -57,6 +57,19 @@ http {
            proxy_read_timeout 30s;
        }

        # SSE streaming endpoint - buffering off so tokens are pushed as they arrive
        location /api/chat/message/stream {
            proxy_pass http://tsp_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_buffering off;
            proxy_cache off;
            proxy_read_timeout 120s;
            chunked_transfer_encoding on;
        }

        # WebSocket proxy
        location /ws/ {
            proxy_pass http://tsp_backend;
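For the `proxy_buffering off` directive above to matter, the backend must emit Server-Sent-Events frames incrementally. A stdlib sketch of the SSE framing such a handler would yield (the token list and `[DONE]` sentinel are illustrative, not the project's actual stream protocol):

```python
def sse_events(tokens):
    # each SSE frame is "data: <payload>\n\n"; with Nginx buffering off,
    # every frame reaches the browser as soon as it is yielded
    for tok in tokens:
        yield f"data: {tok}\n\n"
    yield "data: [DONE]\n\n"

frames = list(sse_events(["车辆", "无法", "联网"]))
print("".join(frames))
```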
@@ -69,3 +69,6 @@ marshmallow==3.23.3

# Official Feishu SDK (event subscription 2.0 - long-connection mode)
lark-oapi==1.3.5

# Local embedding model (optional; required when EMBEDDING_ENABLED=True)
# pip install sentence-transformers torch
scripts/migrate_embeddings.py (new file, 81 lines)
@@ -0,0 +1,81 @@
|
|||||||
|
#!/usr/bin/env python
|
||||||
|
# -*- coding: utf-8 -*-
|
||||||
|
"""
|
||||||
|
批量为已有知识库条目生成 Embedding 向量(本地模型)
|
||||||
|
运行方式: python scripts/migrate_embeddings.py
|
||||||
|
|
||||||
|
首次运行会自动下载模型(~95MB),之后走本地缓存
|
||||||
|
"""
|
||||||
|
|
||||||
|
import sys
|
||||||
|
import os
|
||||||
|
import sys
import os
import json
import logging

sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from src.config.unified_config import get_config
from src.core.database import db_manager
from src.core.models import KnowledgeEntry
from src.core.embedding_client import EmbeddingClient
from src.core.vector_store import vector_store

logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] %(message)s")
logger = logging.getLogger(__name__)


def migrate():
    config = get_config()
    if not config.embedding.enabled:
        logger.warning("Embedding is disabled; set EMBEDDING_ENABLED=True in .env")
        return

    client = EmbeddingClient()

    # Verify the model loads
    logger.info("Loading local embedding model (first run downloads it)...")
    if not client.test_connection():
        logger.error("Failed to load embedding model; check that sentence-transformers is installed")
        return
    logger.info("Model loaded successfully")

    # Fetch all entries that need an embedding
    with db_manager.get_session() as session:
        entries = session.query(KnowledgeEntry).filter(
            KnowledgeEntry.is_active == True
        ).all()

        # Keep only entries without an embedding
        to_process = []
        for entry in entries:
            if not entry.vector_embedding or entry.vector_embedding.strip() == '':
                to_process.append(entry)

        logger.info(f"{len(entries)} active knowledge entries, {len(to_process)} need embeddings")

        if not to_process:
            logger.info("All entries already have embeddings; nothing to migrate")
            return

        # Generate in batch
        texts = [e.question + " " + e.answer for e in to_process]
        logger.info("Generating embeddings in batch...")
        vectors = client.embed_batch(texts)

        success_count = 0
        for i, entry in enumerate(to_process):
            vec = vectors[i]
            if vec:
                entry.vector_embedding = json.dumps(vec)
                success_count += 1

        session.commit()
        logger.info(f"Embedding generation finished: {success_count}/{len(to_process)} succeeded")

    # Rebuild the vector index
    vector_store.load_from_db()
    logger.info(f"Vector index rebuilt: {vector_store.size} entries")


if __name__ == "__main__":
    migrate()
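The migration above stores each vector as a JSON array in the `vector_embedding` column. As a minimal illustration of how such JSON-stored vectors can later be compared (plain Python, independent of the repo's actual `vector_store` API, which is not shown in this diff):

```python
import json
import math

def cosine_similarity(a, b):
    # Plain-Python cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Vectors round-trip through JSON exactly as the migration stores them
stored = json.dumps([0.1, 0.2, 0.3])
query = [0.1, 0.2, 0.3]
score = cosine_similarity(json.loads(stored), query)
print(round(score, 6))  # identical vectors -> 1.0
```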
58
scripts/migrate_tenant.py
Normal file
@@ -0,0 +1,58 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Multi-tenant migration script.
Adds a tenant_id column to existing tables; existing rows are filled with 'default'.
"""

import sys
import os
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from src.core.database import db_manager
from sqlalchemy import text, inspect

TABLES_TO_MIGRATE = [
    "work_orders",
    "chat_sessions",
    "conversations",
    "knowledge_entries",
    "analytics",
    "alerts",
    "users",
]


def migrate():
    print("=" * 50)
    print("Multi-tenant migration: adding tenant_id column")
    print("=" * 50)

    with db_manager.get_session() as session:
        inspector = inspect(session.bind)

        for table in TABLES_TO_MIGRATE:
            # Skip tables that do not exist
            if table not in inspector.get_table_names():
                print(f"  [skip] table {table} does not exist")
                continue

            # Skip tables that already have the column
            columns = [col["name"] for col in inspector.get_columns(table)]
            if "tenant_id" in columns:
                print(f"  [ok] table {table} already has tenant_id")
                continue

            # Add the column
            print(f"  [migrate] adding tenant_id to table {table} ...")
            session.execute(text(
                f"ALTER TABLE {table} ADD COLUMN tenant_id VARCHAR(50) DEFAULT 'default' NOT NULL"
            ))
            session.commit()
            print(f"  [done] table {table} migrated")

    print("\nMigration complete!")


if __name__ == "__main__":
    migrate()
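The ALTER TABLE pattern above (constant NOT NULL default so existing rows are backfilled, guarded by a column-existence check for idempotency) can be exercised against a throwaway SQLite database; table and column contents here are illustrative, while the real script goes through SQLAlchemy's inspector:

```python
import sqlite3

# In-memory database standing in for the real one
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE work_orders (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO work_orders (title) VALUES ('existing row')")

# Same statement shape the migration issues: add the column with a
# constant NOT NULL default so existing rows are backfilled
columns = [row[1] for row in conn.execute("PRAGMA table_info(work_orders)")]
if "tenant_id" not in columns:
    conn.execute(
        "ALTER TABLE work_orders ADD COLUMN tenant_id VARCHAR(50) DEFAULT 'default' NOT NULL"
    )

tenant = conn.execute("SELECT tenant_id FROM work_orders").fetchone()[0]
print(tenant)  # -> default
```

Running the script twice is safe: the second pass hits the "already has tenant_id" branch and does nothing.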
Binary file not shown.
Binary file not shown.
@@ -1,22 +1,8 @@
 # -*- coding: utf-8 -*-
 """
-Agent module init file
+Agent module
 """

-from .agent_core import AgentCore, AgentState
-from .planner import TaskPlanner
-from .executor import TaskExecutor
-from .tool_manager import ToolManager
-from .reasoning_engine import ReasoningEngine
-from .goal_manager import GoalManager
+from .react_agent import ReactAgent

-__all__ = [
-    'AgentCore',
-    'AgentState',
-    'TaskPlanner',
-    'TaskExecutor',
-    'ToolManager',
-    'ReasoningEngine',
-    'GoalManager'
-]
+__all__ = ['ReactAgent']
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -1,255 +0,0 @@
# -*- coding: utf-8 -*-
"""
Agent action executor - runs concrete agent actions.
"""

import logging
import asyncio
from typing import Dict, Any, List, Optional
from datetime import datetime
import json

from .intelligent_agent import AgentAction, ActionType, AlertContext, KnowledgeContext

logger = logging.getLogger(__name__)


class ActionExecutor:
    """Action executor."""

    def __init__(self, tsp_assistant=None):
        self.tsp_assistant = tsp_assistant
        self.execution_history = []
        self.action_handlers = {
            ActionType.ALERT_RESPONSE: self._handle_alert_response,
            ActionType.KNOWLEDGE_UPDATE: self._handle_knowledge_update,
            ActionType.WORKORDER_CREATE: self._handle_workorder_create,
            ActionType.SYSTEM_OPTIMIZE: self._handle_system_optimize,
            ActionType.USER_NOTIFY: self._handle_user_notify
        }

    async def execute_action(self, action: AgentAction) -> Dict[str, Any]:
        """Execute an action."""
        try:
            logger.info(f"Executing action: {action.action_type.value}")
            start_time = datetime.now()

            # Look up the handler
            handler = self.action_handlers.get(action.action_type)
            if not handler:
                return {"success": False, "error": f"No handler found for action: {action.action_type}"}

            # Run the handler
            result = await handler(action)

            # Record execution history
            execution_record = {
                "action_id": f"{action.action_type.value}_{datetime.now().timestamp()}",
                "action_type": action.action_type.value,
                "description": action.description,
                "priority": action.priority,
                "confidence": action.confidence,
                "start_time": start_time.isoformat(),
                "end_time": datetime.now().isoformat(),
                "success": result.get("success", False),
                "result": result
            }
            self.execution_history.append(execution_record)

            logger.info(f"Action finished: {action.action_type.value}, success: {result.get('success', False)}")
            return result

        except Exception as e:
            logger.error(f"Action execution failed: {e}")
            return {"success": False, "error": str(e)}

    async def _handle_alert_response(self, action: AgentAction) -> Dict[str, Any]:
        """Handle an alert response."""
        try:
            alert_id = action.parameters.get("alert_id")
            service = action.parameters.get("service")

            # Dispatch on keywords in the (Chinese) action description:
            # "重启" = restart, "检查" = check, "通知" = notify
            if "重启" in action.description:
                return await self._restart_service(service)
            elif "检查" in action.description:
                return await self._check_system_status(alert_id)
            elif "通知" in action.description:
                return await self._notify_alert(alert_id, action.description)
            else:
                return await self._generic_alert_response(action)

        except Exception as e:
            logger.error(f"Alert response failed: {e}")
            return {"success": False, "error": str(e)}

    async def _handle_knowledge_update(self, action: AgentAction) -> Dict[str, Any]:
        """Handle a knowledge-base update."""
        try:
            question = action.parameters.get("question")
            enhanced_answer = action.parameters.get("enhanced_answer")

            if enhanced_answer:
                # Update the knowledge entry
                if self.tsp_assistant:
                    # Delegate to the TSP assistant's knowledge-base update
                    result = await self._update_knowledge_entry(question, enhanced_answer)
                    return result
                else:
                    return {"success": True, "message": "Knowledge entry flagged for update"}
            else:
                # Flag a low-confidence entry
                return await self._mark_low_confidence_knowledge(question)

        except Exception as e:
            logger.error(f"Knowledge update failed: {e}")
            return {"success": False, "error": str(e)}

    async def _handle_workorder_create(self, action: AgentAction) -> Dict[str, Any]:
        """Handle work-order creation."""
        try:
            title = action.parameters.get("title", "Agent-created work order")
            description = action.description
            category = action.parameters.get("category", "system issue")
            priority = action.parameters.get("priority", "medium")

            if self.tsp_assistant:
                # Create the work order via the TSP assistant
                workorder = self.tsp_assistant.create_work_order(
                    title=title,
                    description=description,
                    category=category,
                    priority=priority
                )
                return {"success": True, "workorder": workorder}
            else:
                return {"success": True, "message": "Work-order creation request recorded"}

        except Exception as e:
            logger.error(f"Work-order creation failed: {e}")
            return {"success": False, "error": str(e)}

    async def _handle_system_optimize(self, action: AgentAction) -> Dict[str, Any]:
        """Handle system optimization."""
        try:
            optimization_type = action.parameters.get("type", "general")

            if optimization_type == "performance":
                return await self._optimize_performance(action)
            elif optimization_type == "memory":
                return await self._optimize_memory(action)
            elif optimization_type == "database":
                return await self._optimize_database(action)
            else:
                return await self._general_optimization(action)

        except Exception as e:
            logger.error(f"System optimization failed: {e}")
            return {"success": False, "error": str(e)}

    async def _handle_user_notify(self, action: AgentAction) -> Dict[str, Any]:
        """Handle a user notification."""
        try:
            message = action.description
            user_id = action.parameters.get("user_id", "admin")
            notification_type = action.parameters.get("type", "info")

            # Concrete delivery logic goes here
            # (could be email, SMS, in-app notification, ...)
            return await self._send_notification(user_id, message, notification_type)

        except Exception as e:
            logger.error(f"User notification failed: {e}")
            return {"success": False, "error": str(e)}

    # Concrete implementations
    async def _restart_service(self, service: str) -> Dict[str, Any]:
        """Restart a service."""
        logger.info(f"Restarting service: {service}")
        # Real service-restart logic goes here
        await asyncio.sleep(2)  # simulate restart time
        return {"success": True, "message": f"Service {service} restarted"}

    async def _check_system_status(self, alert_id: str) -> Dict[str, Any]:
        """Check system status."""
        logger.info(f"Checking system status: {alert_id}")
        # Real status-check logic goes here
        await asyncio.sleep(1)
        return {"success": True, "status": "ok", "alert_id": alert_id}

    async def _notify_alert(self, alert_id: str, message: str) -> Dict[str, Any]:
        """Send an alert notification."""
        logger.info(f"Alert notification: {alert_id} - {message}")
        # Real notification logic goes here
        return {"success": True, "message": "Alert notification sent"}

    async def _generic_alert_response(self, action: AgentAction) -> Dict[str, Any]:
        """Generic alert response."""
        logger.info(f"Running generic alert response: {action.description}")
        return {"success": True, "message": "Alert response executed"}

    async def _update_knowledge_entry(self, question: str, enhanced_answer: str) -> Dict[str, Any]:
        """Update a knowledge entry."""
        logger.info(f"Updating knowledge entry: {question}")
        # Real knowledge-base update logic goes here
        return {"success": True, "message": "Knowledge entry updated"}

    async def _mark_low_confidence_knowledge(self, question: str) -> Dict[str, Any]:
        """Flag low-confidence knowledge."""
        logger.info(f"Flagging low-confidence knowledge: {question}")
        # Real flagging logic goes here
        return {"success": True, "message": "Low-confidence knowledge flagged"}

    async def _optimize_performance(self, action: AgentAction) -> Dict[str, Any]:
        """Performance optimization."""
        logger.info("Running performance optimization")
        # Real performance-optimization logic goes here
        return {"success": True, "message": "Performance optimization executed"}

    async def _optimize_memory(self, action: AgentAction) -> Dict[str, Any]:
        """Memory optimization."""
        logger.info("Running memory optimization")
        # Real memory-optimization logic goes here
        return {"success": True, "message": "Memory optimization executed"}

    async def _optimize_database(self, action: AgentAction) -> Dict[str, Any]:
        """Database optimization."""
        logger.info("Running database optimization")
        # Real database-optimization logic goes here
        return {"success": True, "message": "Database optimization executed"}

    async def _general_optimization(self, action: AgentAction) -> Dict[str, Any]:
        """General optimization."""
        logger.info(f"Running general optimization: {action.description}")
        return {"success": True, "message": "System optimization executed"}

    async def _send_notification(self, user_id: str, message: str, notification_type: str) -> Dict[str, Any]:
        """Send a notification."""
        logger.info(f"Sending notification to {user_id}: {message}")
        # Real delivery logic goes here
        return {"success": True, "message": "Notification sent"}

    def get_execution_history(self, limit: int = 100) -> List[Dict[str, Any]]:
        """Return recent execution history."""
        return self.execution_history[-limit:]

    def get_action_statistics(self) -> Dict[str, Any]:
        """Return action statistics."""
        total_actions = len(self.execution_history)
        successful_actions = sum(1 for record in self.execution_history if record["success"])

        action_types = {}
        for record in self.execution_history:
            action_type = record["action_type"]
            if action_type not in action_types:
                action_types[action_type] = {"total": 0, "successful": 0}
            action_types[action_type]["total"] += 1
            if record["success"]:
                action_types[action_type]["successful"] += 1

        return {
            "total_actions": total_actions,
            "successful_actions": successful_actions,
            "success_rate": successful_actions / total_actions if total_actions > 0 else 0,
            "action_types": action_types
        }
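The `get_action_statistics` method in the deleted executor folds the history into per-type success counts plus an overall rate; the same fold can be checked standalone with illustrative records:

```python
# Illustrative history records mirroring the executor's execution_record shape
history = [
    {"action_type": "alert_response", "success": True},
    {"action_type": "alert_response", "success": False},
    {"action_type": "user_notify", "success": True},
]

# Same aggregation the deleted get_action_statistics performed
action_types = {}
for record in history:
    t = record["action_type"]
    action_types.setdefault(t, {"total": 0, "successful": 0})
    action_types[t]["total"] += 1
    if record["success"]:
        action_types[t]["successful"] += 1

success_rate = sum(1 for r in history if r["success"]) / len(history)
print(action_types["alert_response"], round(success_rate, 2))
```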
@@ -1,268 +0,0 @@
# -*- coding: utf-8 -*-
"""
TSP agent assistant core module.
Contains the assistant's core functionality and base classes.
"""

import logging
import asyncio
from typing import Dict, Any, List, Optional
from datetime import datetime
import json

from src.main import TSPAssistant
from src.agent import AgentCore, AgentState
from src.agent.auto_monitor import AutoMonitorService
from src.agent.intelligent_agent import IntelligentAgent, AlertContext, KnowledgeContext
from src.agent.llm_client import LLMManager, LLMConfig
from src.agent.action_executor import ActionExecutor

logger = logging.getLogger(__name__)


class TSPAgentAssistantCore(TSPAssistant):
    """TSP agent assistant core - base functionality."""

    def __init__(self, llm_config: Optional[LLMConfig] = None):
        # Initialize the base TSP assistant
        super().__init__()

        # Agent core
        self.agent_core = AgentCore()

        # Auto-monitoring service
        self.auto_monitor = AutoMonitorService(self)

        # LLM client
        self._init_llm_manager(llm_config)

        # Intelligent agent
        self.intelligent_agent = IntelligentAgent(
            llm_client=self.llm_manager
        )

        # Action executor
        self.action_executor = ActionExecutor(self)

        # Agent state
        self.agent_state = AgentState.IDLE
        self.is_agent_mode = True
        self.proactive_monitoring_enabled = False

        # Execution history
        self.execution_history = []
        self.max_history_size = 1000

        logger.info("TSP agent assistant core initialized")

    def _init_llm_manager(self, llm_config: Optional[LLMConfig] = None):
        """Initialize the LLM manager."""
        if llm_config:
            self.llm_manager = LLMManager(llm_config)
        else:
            # Load the LLM config from the unified config manager
            try:
                from src.config.unified_config import get_config
                unified_llm = get_config().llm
                # Convert the unified LLMConfig into the LLMConfig the agent needs
                agent_llm_config = LLMConfig(
                    provider=unified_llm.provider,
                    api_key=unified_llm.api_key,
                    base_url=unified_llm.base_url,
                    model=unified_llm.model,
                    temperature=unified_llm.temperature,
                    max_tokens=unified_llm.max_tokens
                )
                self.llm_manager = LLMManager(agent_llm_config)
            except Exception as e:
                logger.warning(f"Could not load LLM config from unified config, falling back to config/llm_config.py: {e}")
                try:
                    from config.llm_config import DEFAULT_CONFIG
                    self.llm_manager = LLMManager(DEFAULT_CONFIG)
                except ImportError:
                    # Last-resort fallback
                    default_config = LLMConfig(
                        provider="qwen",
                        api_key="",
                        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
                        model="qwen-turbo",
                        temperature=0.7,
                        max_tokens=2000
                    )
                    self.llm_manager = LLMManager(default_config)

    def get_agent_status(self) -> Dict[str, Any]:
        """Return the agent status."""
        return {
            "agent_state": self.agent_state.value,
            "is_agent_mode": self.is_agent_mode,
            "proactive_monitoring": self.proactive_monitoring_enabled,
            "execution_count": len(self.execution_history),
            "llm_status": self.llm_manager.get_status(),
            "agent_core_status": self.agent_core.get_status(),
            "last_activity": self.execution_history[-1]["timestamp"] if self.execution_history else None
        }

    def toggle_agent_mode(self, enabled: bool) -> bool:
        """Toggle agent mode."""
        try:
            self.is_agent_mode = enabled
            if enabled:
                self.agent_state = AgentState.IDLE
                logger.info("Agent mode enabled")
            else:
                self.agent_state = AgentState.DISABLED
                logger.info("Agent mode disabled")
            return True
        except Exception as e:
            logger.error(f"Failed to toggle agent mode: {e}")
            return False

    def start_proactive_monitoring(self) -> bool:
        """Start proactive monitoring."""
        try:
            if not self.proactive_monitoring_enabled:
                self.proactive_monitoring_enabled = True
                self.auto_monitor.start_monitoring()
                logger.info("Proactive monitoring started")
                return True
            return False
        except Exception as e:
            logger.error(f"Failed to start proactive monitoring: {e}")
            return False

    def stop_proactive_monitoring(self) -> bool:
        """Stop proactive monitoring."""
        try:
            if self.proactive_monitoring_enabled:
                self.proactive_monitoring_enabled = False
                self.auto_monitor.stop_monitoring()
                logger.info("Proactive monitoring stopped")
                return True
            return False
        except Exception as e:
            logger.error(f"Failed to stop proactive monitoring: {e}")
            return False

    def run_proactive_monitoring(self) -> Dict[str, Any]:
        """Run one proactive monitoring pass."""
        try:
            if not self.proactive_monitoring_enabled:
                return {"success": False, "message": "Proactive monitoring is not enabled"}

            # System state
            system_health = self.get_system_health()

            # Alerts
            alerts = self.check_alerts()

            # Work-order status
            workorders_status = self._check_workorders_status()

            # Intelligent analysis
            analysis = self.intelligent_agent.analyze_system_state(
                system_health=system_health,
                alerts=alerts,
                workorders=workorders_status
            )

            # Execute recommended actions
            actions_taken = []
            if analysis.get("recommended_actions"):
                for action in analysis["recommended_actions"]:
                    result = self.action_executor.execute_action(action)
                    actions_taken.append(result)

            return {
                "success": True,
                "analysis": analysis,
                "actions_taken": actions_taken,
                "timestamp": datetime.now().isoformat()
            }
        except Exception as e:
            logger.error(f"Proactive monitoring pass failed: {e}")
            return {"success": False, "error": str(e)}

    def _check_workorders_status(self) -> Dict[str, Any]:
        """Check work-order status."""
        try:
            from src.core.database import db_manager
            from src.core.models import WorkOrder

            with db_manager.get_session() as session:
                total_workorders = session.query(WorkOrder).count()
                open_workorders = session.query(WorkOrder).filter(WorkOrder.status == 'open').count()
                resolved_workorders = session.query(WorkOrder).filter(WorkOrder.status == 'resolved').count()

                return {
                    "total": total_workorders,
                    "open": open_workorders,
                    "resolved": resolved_workorders,
                    "resolution_rate": resolved_workorders / total_workorders if total_workorders > 0 else 0
                }
        except Exception as e:
            logger.error(f"Failed to check work-order status: {e}")
            return {"error": str(e)}

    def run_intelligent_analysis(self) -> Dict[str, Any]:
        """Run intelligent analysis."""
        try:
            # Gather system data
            system_health = self.get_system_health()
            alerts = self.check_alerts()
            workorders = self._check_workorders_status()

            # Build the analysis context
            context = {
                "system_health": system_health,
                "alerts": alerts,
                "workorders": workorders,
                "timestamp": datetime.now().isoformat()
            }

            # Run the analysis
            analysis = self.intelligent_agent.comprehensive_analysis(context)

            # Record the result
            self._record_execution("intelligent_analysis", analysis)

            return analysis
        except Exception as e:
            logger.error(f"Intelligent analysis failed: {e}")
            return {"error": str(e)}

    def _record_execution(self, action_type: str, result: Any):
        """Record execution history."""
        execution_record = {
            "timestamp": datetime.now().isoformat(),
            "action_type": action_type,
            "result": result,
            "agent_state": self.agent_state.value
        }

        self.execution_history.append(execution_record)

        # Cap the history size
        if len(self.execution_history) > self.max_history_size:
            self.execution_history = self.execution_history[-self.max_history_size:]

    def get_action_history(self, limit: int = 50) -> List[Dict[str, Any]]:
        """Return action execution history."""
        return self.execution_history[-limit:] if self.execution_history else []

    def clear_execution_history(self) -> Dict[str, Any]:
        """Clear execution history."""
        try:
            self.execution_history.clear()
            logger.info("Execution history cleared")
            return {"success": True, "message": "Execution history cleared"}
        except Exception as e:
            logger.error(f"Failed to clear execution history: {e}")
            return {"success": False, "error": str(e)}

    def get_llm_usage_stats(self) -> Dict[str, Any]:
        """Return LLM usage statistics."""
        try:
            return self.llm_manager.get_usage_stats()
        except Exception as e:
            logger.error(f"Failed to get LLM usage stats: {e}")
            return {"error": str(e)}
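The deleted `_init_llm_manager` falls back through three config sources: explicit argument, preferred config module, then a hard-coded default. That try/except-ImportError chain can be sketched generically (names here are illustrative, not the repo's real modules):

```python
def load_config(primary=None):
    # Mirrors the fallback order: explicit arg -> preferred module -> built-in default
    if primary is not None:
        return primary
    try:
        # Stands in for `from config.llm_config import DEFAULT_CONFIG`;
        # this module does not exist, so the fallback branch runs
        import nonexistent_llm_config_module
        return nonexistent_llm_config_module.DEFAULT_CONFIG
    except ImportError:
        return {"provider": "qwen", "model": "qwen-turbo"}

print(load_config()["model"])                       # -> qwen-turbo
print(load_config({"model": "override"})["model"])  # -> override
```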
@@ -1,312 +0,0 @@
|
|||||||
|
|
||||||
# -*- coding: utf-8 -*-
|
|
||||||
"""
|
|
||||||
Agent核心模块
|
|
||||||
实现智能体的核心逻辑和决策机制
|
|
||||||
"""
|
|
||||||
|
|
||||||
import logging
|
|
||||||
import asyncio
|
|
||||||
from typing import Dict, List, Any, Optional, Callable
|
|
||||||
from datetime import datetime
|
|
||||||
from enum import Enum
|
|
||||||
import json
|
|
||||||
|
|
||||||
from ..core.database import db_manager
|
|
||||||
from ..core.llm_client import QwenClient
|
|
||||||
from .planner import TaskPlanner
|
|
||||||
from .executor import TaskExecutor
|
|
||||||
from .tool_manager import ToolManager
|
|
||||||
from .reasoning_engine import ReasoningEngine
|
|
||||||
from .goal_manager import GoalManager
|
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
class AgentState(Enum):
|
|
||||||
"""Agent状态枚举"""
|
|
||||||
IDLE = "idle"
|
|
||||||
THINKING = "thinking"
|
|
||||||
PLANNING = "planning"
|
|
||||||
EXECUTING = "executing"
|
|
||||||
LEARNING = "learning"
|
|
||||||
ERROR = "error"
|
|
||||||
|
|
||||||
class AgentCore:
|
|
||||||
"""Agent核心类"""
|
|
||||||
|
|
||||||
def __init__(self):
|
|
||||||
self.state = AgentState.IDLE
|
|
||||||
self.llm_client = QwenClient()
|
|
||||||
self.planner = TaskPlanner()
|
|
||||||
self.executor = TaskExecutor()
|
|
||||||
self.tool_manager = ToolManager()
|
|
||||||
self.reasoning_engine = ReasoningEngine()
|
|
||||||
self.goal_manager = GoalManager()
|
|
||||||
|
|
||||||
# Agent记忆和上下文
|
|
||||||
self.memory = {}
|
|
||||||
self.current_goal = None
|
|
||||||
self.active_tasks = []
|
|
||||||
self.execution_history = []
|
|
||||||
|
|
||||||
# 配置参数
|
|
||||||
self.max_iterations = 10
|
|
||||||
self.confidence_threshold = 0.7
|
|
||||||
|
|
||||||
logger.info("Agent核心初始化完成")
|
|
||||||
|
|
||||||
async def process_request(self, request: Dict[str, Any]) -> Dict[str, Any]:
|
|
||||||
"""处理用户请求的主入口"""
|
|
||||||
try:
|
|
||||||
self.state = AgentState.THINKING
|
|
||||||
|
|
||||||
# 1. 理解用户意图
|
|
||||||
intent = await self._understand_intent(request)
|
|
||||||
|
|
||||||
# 2. 设定目标
|
|
||||||
goal = await self._set_goal(intent, request)
|
|
||||||
|
|
||||||
# 3. 制定计划
|
|
||||||
plan = await self._create_plan(goal)
|
|
||||||
|
|
||||||
# 4. 执行计划
|
|
||||||
result = await self._execute_plan(plan)
|
|
||||||
|
|
||||||
# 5. 学习和反思
|
|
||||||
await self._learn_from_execution(result)
|
|
||||||
|
|
||||||
self.state = AgentState.IDLE
|
|
||||||
return result
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"处理请求失败: {e}")
|
|
||||||
self.state = AgentState.ERROR
|
|
||||||
return {"error": f"处理失败: {str(e)}"}
|
|
||||||
|
|
||||||
async def _understand_intent(self, request: Dict[str, Any]) -> Dict[str, Any]:
|
|
||||||
"""理解用户意图"""
|
|
||||||
user_message = request.get("message", "")
|
|
||||||
context = request.get("context", {})
|
|
||||||
|
|
||||||
# 使用推理引擎分析意图
|
|
||||||
intent_analysis = await self.reasoning_engine.analyze_intent(
|
|
||||||
message=user_message,
|
|
||||||
context=context,
|
|
||||||
history=self.execution_history[-5:] # 最近5次执行历史
|
|
||||||
)
|
|
||||||
|
|
||||||
return intent_analysis
|
|
||||||
|
|
||||||
async def _set_goal(self, intent: Dict[str, Any], request: Dict[str, Any]) -> Dict[str, Any]:
|
|
||||||
"""设定目标"""
|
|
||||||
goal = await self.goal_manager.create_goal(
|
|
||||||
intent=intent,
|
|
||||||
request=request,
|
|
||||||
current_state=self.state
|
|
||||||
)
|
|
||||||
|
|
||||||
self.current_goal = goal
|
|
||||||
return goal
|
|
||||||
|
|
||||||
async def _create_plan(self, goal: Dict[str, Any]) -> List[Dict[str, Any]]:
|
|
||||||
"""制定执行计划"""
|
|
||||||
self.state = AgentState.PLANNING
|
|
||||||
|
|
||||||
plan = await self.planner.create_plan(
|
|
||||||
goal=goal,
|
|
||||||
available_tools=self.tool_manager.get_available_tools(),
|
|
||||||
constraints=self._get_constraints()
|
|
||||||
)
|
|
||||||
|
|
||||||
return plan
|
|
||||||
|
|
||||||
async def _execute_plan(self, plan: List[Dict[str, Any]]) -> Dict[str, Any]:
|
|
||||||
"""执行计划"""
|
|
||||||
self.state = AgentState.EXECUTING
|
|
||||||
|
|
||||||
execution_result = await self.executor.execute_plan(
|
|
||||||
plan=plan,
|
|
||||||
tool_manager=self.tool_manager,
|
|
||||||
context=self.memory
|
|
||||||
)
|
|
||||||
|
|
||||||
# 记录执行历史
|
|
||||||
self.execution_history.append({
|
|
||||||
"timestamp": datetime.now().isoformat(),
|
|
||||||
"plan": plan,
|
|
||||||
"result": execution_result
|
|
||||||
})
|
|
||||||
|
|
||||||
return execution_result
|
|
||||||
|
|
||||||
async def _learn_from_execution(self, result: Dict[str, Any]):
|
|
||||||
"""从执行结果中学习"""
|
|
||||||
self.state = AgentState.LEARNING
|
|
||||||
|
|
||||||
# 分析执行效果
|
|
||||||
learning_insights = await self.reasoning_engine.extract_insights(
|
|
||||||
execution_result=result,
|
|
||||||
goal=self.current_goal
|
|
||||||
)
|
|
||||||
|
|
||||||
# 更新记忆
|
|
||||||
self._update_memory(learning_insights)
|
|
||||||
|
|
||||||
# 更新工具使用统计
|
|
||||||
self.tool_manager.update_usage_stats(result.get("tool_usage", []))
|
|
||||||
|
|
||||||
def _get_constraints(self) -> Dict[str, Any]:
|
|
||||||
"""获取执行约束"""
|
|
||||||
return {
|
|
||||||
"max_iterations": self.max_iterations,
|
|
||||||
"confidence_threshold": self.confidence_threshold,
|
|
||||||
"timeout": 300, # 5分钟超时
|
|
||||||
"memory_limit": 1000 # 内存限制
|
|
||||||
}
|
|
||||||
|
|
||||||
def _update_memory(self, insights: Dict[str, Any]):
|
|
||||||
"""更新Agent记忆"""
|
|
||||||
timestamp = datetime.now().isoformat()
|
|
||||||
|
|
||||||
# 更新成功模式
|
|
||||||
if insights.get("success_patterns"):
|
|
||||||
if "success_patterns" not in self.memory:
|
|
||||||
self.memory["success_patterns"] = []
|
|
||||||
self.memory["success_patterns"].extend(insights["success_patterns"])
|
|
||||||
|
|
||||||
# 更新失败模式
|
|
||||||
if insights.get("failure_patterns"):
|
|
||||||
if "failure_patterns" not in self.memory:
|
|
||||||
self.memory["failure_patterns"] = []
|
|
||||||
self.memory["failure_patterns"].extend(insights["failure_patterns"])
|
|
||||||
|
|
||||||
# 更新知识
|
|
||||||
if insights.get("new_knowledge"):
|
|
||||||
if "knowledge" not in self.memory:
|
|
||||||
self.memory["knowledge"] = []
|
|
||||||
self.memory["knowledge"].extend(insights["new_knowledge"])
|
|
||||||
|
|
||||||
# 限制记忆大小
|
|
||||||
for key in self.memory:
|
|
||||||
if isinstance(self.memory[key], list) and len(self.memory[key]) > 100:
|
|
||||||
self.memory[key] = self.memory[key][-100:]
|
|
||||||
|
|
||||||
    async def proactive_action(self) -> Optional[Dict[str, Any]]:
        """Proactive action - behavior initiated by the agent itself."""
        try:
            # Check for tasks that need proactive handling
            proactive_tasks = await self._identify_proactive_tasks()

            if proactive_tasks:
                # Pick the highest-priority task
                priority_task = max(proactive_tasks, key=lambda x: x.get("priority", 0))

                # Execute the proactive task
                result = await self.process_request(priority_task)
                return result

            return None

        except Exception as e:
            logger.error(f"Proactive action failed: {e}")
            return None

    async def _identify_proactive_tasks(self) -> List[Dict[str, Any]]:
        """Identify tasks that require proactive handling."""
        tasks = []

        # Check the alert system
        alerts = await self._check_alerts()
        if alerts:
            tasks.extend([{
                "type": "alert_response",
                "message": f"Handle alert: {alert['message']}",
                "priority": self._calculate_alert_priority(alert),
                "context": {"alert": alert}
            } for alert in alerts])

        # Check whether the knowledge base needs updating
        knowledge_gaps = await self._identify_knowledge_gaps()
        if knowledge_gaps:
            tasks.append({
                "type": "knowledge_update",
                "message": "Update knowledge base",
                "priority": 0.6,
                "context": {"gaps": knowledge_gaps}
            })

        # Check system health
        health_issues = await self._check_system_health()
        if health_issues:
            tasks.append({
                "type": "system_maintenance",
                "message": "System maintenance",
                "priority": 0.8,
                "context": {"issues": health_issues}
            })

        return tasks

    async def _check_alerts(self) -> List[Dict[str, Any]]:
        """Check alerts."""
        # Delegates to the existing alert system
        from ..analytics.alert_system import AlertSystem
        alert_system = AlertSystem()
        return alert_system.get_active_alerts()

    def _calculate_alert_priority(self, alert: Dict[str, Any]) -> float:
        """Calculate an alert's priority."""
        severity_map = {
            "low": 0.3,
            "medium": 0.6,
            "high": 0.8,
            "critical": 1.0
        }
        return severity_map.get(alert.get("severity", "medium"), 0.5)

    async def _identify_knowledge_gaps(self) -> List[Dict[str, Any]]:
        """Identify gaps in the knowledge base."""
        # Analyze unresolved issues to find knowledge gaps
        gaps = []

        # Concrete gap-detection logic would go here,
        # e.g. analyzing low-confidence replies and unresolved issues

        return gaps

    async def _check_system_health(self) -> List[Dict[str, Any]]:
        """Check system health."""
        issues = []

        # Check the health of individual components
        if not self.llm_client.test_connection():
            issues.append({"component": "llm_client", "issue": "connection failed"})

        # Check memory usage
        import psutil
        memory_percent = psutil.virtual_memory().percent
        if memory_percent > 80:
            issues.append({"component": "memory", "issue": f"memory usage too high: {memory_percent}%"})

        return issues

    def get_status(self) -> Dict[str, Any]:
        """Get the agent's status."""
        return {
            "state": self.state.value,
            "current_goal": self.current_goal,
            "active_tasks": len(self.active_tasks),
            "execution_history_count": len(self.execution_history),
            "memory_size": len(str(self.memory)),
            "available_tools": len(self.tool_manager.get_available_tools()),
            "timestamp": datetime.now().isoformat()
        }

    def reset(self):
        """Reset the agent's state."""
        self.state = AgentState.IDLE
        self.current_goal = None
        self.active_tasks = []
        self.execution_history = []
        self.memory = {}
        logger.info("Agent state reset")
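The severity-to-priority mapping in `_calculate_alert_priority` can be exercised standalone; a minimal sketch (the `severity_map` values mirror the method above, and unknown severities fall back to 0.5):

```python
# Standalone sketch of the severity-to-priority mapping used by
# _calculate_alert_priority; missing severity defaults to "medium".
def alert_priority(alert: dict) -> float:
    severity_map = {"low": 0.3, "medium": 0.6, "high": 0.8, "critical": 1.0}
    return severity_map.get(alert.get("severity", "medium"), 0.5)

print(alert_priority({"severity": "critical"}))  # 1.0
print(alert_priority({}))                        # 0.6 (defaults to "medium")
print(alert_priority({"severity": "bogus"}))     # 0.5 (unknown severity)
```

Because `max(..., key=lambda x: x.get("priority", 0))` in `proactive_action` compares these values directly, a critical alert always outranks the fixed 0.8 used for system maintenance.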
@@ -1,243 +0,0 @@
# -*- coding: utf-8 -*-
"""
TSP agent message-handling module
Handles the agent's message processing and conversation features
"""

import logging
import asyncio
from typing import Dict, Any, List, Optional
from datetime import datetime

from .agent_assistant_core import TSPAgentAssistantCore
from .intelligent_agent import IntelligentAgent

logger = logging.getLogger(__name__)

class AgentMessageHandler:
    """Agent message handler."""

    def __init__(self, agent_core: TSPAgentAssistantCore):
        self.agent_core = agent_core
        self.intelligent_agent = agent_core.intelligent_agent
        self.action_executor = agent_core.action_executor

    async def process_message_agent(self, message: str, user_id: str = "admin",
                                    work_order_id: Optional[int] = None,
                                    enable_proactive: bool = True) -> Dict[str, Any]:
        """Process a message with the agent."""
        try:
            # Update the agent's state
            self.agent_core.agent_state = self.agent_core.agent_core.AgentState.PROCESSING

            # Build the conversation context
            context = {
                "message": message,
                "user_id": user_id,
                "work_order_id": work_order_id,
                "timestamp": datetime.now().isoformat(),
                "enable_proactive": enable_proactive
            }

            # Let the intelligent agent process the message
            agent_response = await self.intelligent_agent.process_message(context)

            # Execute the recommended actions
            actions_taken = []
            if agent_response.get("recommended_actions"):
                for action in agent_response["recommended_actions"]:
                    action_result = self.action_executor.execute_action(action)
                    actions_taken.append(action_result)

            # Build the response
            response = {
                "response": agent_response.get("response", "The agent has handled your request"),
                "actions": actions_taken,
                "status": "completed",
                "confidence": agent_response.get("confidence", 0.8),
                "context": context
            }

            # Record execution history
            self.agent_core._record_execution("message_processing", response)

            # Update the agent's state
            self.agent_core.agent_state = self.agent_core.agent_core.AgentState.IDLE

            return response

        except Exception as e:
            logger.error(f"Agent message processing failed: {e}")
            self.agent_core.agent_state = self.agent_core.agent_core.AgentState.ERROR

            return {
                "response": f"An error occurred while processing the message: {str(e)}",
                "actions": [],
                "status": "error",
                "error": str(e)
            }

    async def process_conversation_agent(self, conversation_data: Dict[str, Any]) -> Dict[str, Any]:
        """Process a conversation with the agent."""
        try:
            # Extract conversation info
            user_message = conversation_data.get("message", "")
            user_id = conversation_data.get("user_id", "anonymous")
            session_id = conversation_data.get("session_id")

            # Build the conversation context
            context = {
                "message": user_message,
                "user_id": user_id,
                "session_id": session_id,
                "conversation_history": conversation_data.get("history", []),
                "timestamp": datetime.now().isoformat()
            }

            # Let the intelligent agent process the conversation
            agent_response = await self.intelligent_agent.process_conversation(context)

            # Execute the recommended actions
            actions_taken = []
            if agent_response.get("recommended_actions"):
                for action in agent_response["recommended_actions"]:
                    action_result = self.action_executor.execute_action(action)
                    actions_taken.append(action_result)

            # Build the response
            response = {
                "response": agent_response.get("response", "The agent has handled your conversation"),
                "actions": actions_taken,
                "status": "completed",
                "confidence": agent_response.get("confidence", 0.8),
                "context": context,
                "session_id": session_id
            }

            # Record execution history
            self.agent_core._record_execution("conversation_processing", response)

            return response

        except Exception as e:
            logger.error(f"Agent conversation processing failed: {e}")
            return {
                "response": f"An error occurred while processing the conversation: {str(e)}",
                "actions": [],
                "status": "error",
                "error": str(e)
            }

    async def process_workorder_agent(self, workorder_data: Dict[str, Any]) -> Dict[str, Any]:
        """Process a work order with the agent."""
        try:
            # Extract work-order info
            workorder_id = workorder_data.get("workorder_id")
            action_type = workorder_data.get("action_type", "analyze")

            # Build the work-order context
            context = {
                "workorder_id": workorder_id,
                "action_type": action_type,
                "workorder_data": workorder_data,
                "timestamp": datetime.now().isoformat()
            }

            # Let the intelligent agent process the work order
            agent_response = await self.intelligent_agent.process_workorder(context)

            # Execute the recommended actions
            actions_taken = []
            if agent_response.get("recommended_actions"):
                for action in agent_response["recommended_actions"]:
                    action_result = self.action_executor.execute_action(action)
                    actions_taken.append(action_result)

            # Build the response
            response = {
                "response": agent_response.get("response", "The agent has handled the work order"),
                "actions": actions_taken,
                "status": "completed",
                "confidence": agent_response.get("confidence", 0.8),
                "context": context
            }

            # Record execution history
            self.agent_core._record_execution("workorder_processing", response)

            return response

        except Exception as e:
            logger.error(f"Agent work-order processing failed: {e}")
            return {
                "response": f"An error occurred while processing the work order: {str(e)}",
                "actions": [],
                "status": "error",
                "error": str(e)
            }

    async def process_alert_agent(self, alert_data: Dict[str, Any]) -> Dict[str, Any]:
        """Process an alert with the agent."""
        try:
            # Build the alert context
            context = {
                "alert_data": alert_data,
                "timestamp": datetime.now().isoformat()
            }

            # Let the intelligent agent process the alert
            agent_response = await self.intelligent_agent.process_alert(context)

            # Execute the recommended actions
            actions_taken = []
            if agent_response.get("recommended_actions"):
                for action in agent_response["recommended_actions"]:
                    action_result = self.action_executor.execute_action(action)
                    actions_taken.append(action_result)

            # Build the response
            response = {
                "response": agent_response.get("response", "The agent has handled the alert"),
                "actions": actions_taken,
                "status": "completed",
                "confidence": agent_response.get("confidence", 0.8),
                "context": context
            }

            # Record execution history
            self.agent_core._record_execution("alert_processing", response)

            return response

        except Exception as e:
            logger.error(f"Agent alert processing failed: {e}")
            return {
                "response": f"An error occurred while processing the alert: {str(e)}",
                "actions": [],
                "status": "error",
                "error": str(e)
            }

    def get_conversation_suggestions(self, context: Dict[str, Any]) -> List[str]:
        """Get conversation suggestions."""
        try:
            return self.intelligent_agent.get_conversation_suggestions(context)
        except Exception as e:
            logger.error(f"Failed to get conversation suggestions: {e}")
            return []

    def get_workorder_suggestions(self, workorder_data: Dict[str, Any]) -> List[str]:
        """Get work-order suggestions."""
        try:
            return self.intelligent_agent.get_workorder_suggestions(workorder_data)
        except Exception as e:
            logger.error(f"Failed to get work-order suggestions: {e}")
            return []

    def get_alert_suggestions(self, alert_data: Dict[str, Any]) -> List[str]:
        """Get alert suggestions."""
        try:
            return self.intelligent_agent.get_alert_suggestions(alert_data)
        except Exception as e:
            logger.error(f"Failed to get alert suggestions: {e}")
            return []
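All four `process_*_agent` methods above share one pattern: build a context dict, run `execute_action` over the agent's `recommended_actions`, and wrap the results in a uniform response envelope. A minimal self-contained sketch of that pattern, with a hypothetical stub in place of the real executor (`run_recommended_actions` and the lambda are illustrative names, not part of the codebase):

```python
from datetime import datetime

def run_recommended_actions(agent_response: dict, execute_action) -> dict:
    # Execute each recommended action, then wrap the results in the
    # uniform response envelope used by every process_*_agent method.
    actions_taken = [execute_action(a)
                     for a in agent_response.get("recommended_actions", [])]
    return {
        "response": agent_response.get("response", ""),
        "actions": actions_taken,
        "status": "completed",
        "confidence": agent_response.get("confidence", 0.8),
        "timestamp": datetime.now().isoformat(),
    }

resp = run_recommended_actions(
    {"response": "ok", "recommended_actions": [{"type": "noop"}]},
    lambda a: {"action": a["type"], "success": True},  # stub executor
)
print(resp["status"], len(resp["actions"]))  # completed 1
```

Factoring the envelope out this way would remove the four near-identical copies, at the cost of threading the per-method context keys through as a parameter.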
@@ -1,405 +0,0 @@
# -*- coding: utf-8 -*-
"""
TSP agent sample-actions module
Contains the agent's sample actions and test functionality
"""

import logging
import asyncio
from typing import Dict, Any, List
from datetime import datetime, timedelta

from .agent_assistant_core import TSPAgentAssistantCore

logger = logging.getLogger(__name__)

class AgentSampleActions:
    """Agent sample-action handler."""

    def __init__(self, agent_core: TSPAgentAssistantCore):
        self.agent_core = agent_core

    async def trigger_sample_actions(self) -> Dict[str, Any]:
        """Trigger the sample actions."""
        try:
            logger.info("Starting sample actions")

            # Run several sample actions
            actions_results = []

            # 1. System health check
            health_result = await self._sample_health_check()
            actions_results.append(health_result)

            # 2. Alert analysis
            alert_result = await self._sample_alert_analysis()
            actions_results.append(alert_result)

            # 3. Work-order processing
            workorder_result = await self._sample_workorder_processing()
            actions_results.append(workorder_result)

            # 4. Knowledge-base update
            knowledge_result = await self._sample_knowledge_update()
            actions_results.append(knowledge_result)

            # 5. Performance optimization
            optimization_result = await self._sample_performance_optimization()
            actions_results.append(optimization_result)

            # Record execution history
            self.agent_core._record_execution("sample_actions", {
                "actions_count": len(actions_results),
                "results": actions_results
            })

            return {
                "success": True,
                "message": f"Successfully executed {len(actions_results)} sample actions",
                "actions_results": actions_results,
                "timestamp": datetime.now().isoformat()
            }

        except Exception as e:
            logger.error(f"Sample actions failed: {e}")
            return {
                "success": False,
                "error": str(e),
                "timestamp": datetime.now().isoformat()
            }

    async def _sample_health_check(self) -> Dict[str, Any]:
        """Sample: system health check."""
        try:
            # Fetch system health data
            health_data = self.agent_core.get_system_health()

            # Simulated health-check logic
            health_score = health_data.get("health_score", 0)

            if health_score > 80:
                status = "excellent"
                message = "System is running well"
            elif health_score > 60:
                status = "good"
                message = "System is running normally"
            elif health_score > 40:
                status = "fair"
                message = "System status is fair; worth keeping an eye on"
            else:
                status = "poor"
                message = "System status is poor; optimization needed"

            return {
                "action_type": "health_check",
                "status": status,
                "message": message,
                "health_score": health_score,
                "timestamp": datetime.now().isoformat()
            }

        except Exception as e:
            logger.error(f"Health check failed: {e}")
            return {
                "action_type": "health_check",
                "status": "error",
                "error": str(e)
            }

    async def _sample_alert_analysis(self) -> Dict[str, Any]:
        """Sample: alert analysis."""
        try:
            # Fetch alert data
            alerts = self.agent_core.check_alerts()

            # Analyze the alerts
            alert_count = len(alerts)
            critical_alerts = [a for a in alerts if a.get("level") == "critical"]
            warning_alerts = [a for a in alerts if a.get("level") == "warning"]

            # Build the analysis result
            if alert_count == 0:
                status = "no_alerts"
                message = "No active alerts"
            elif len(critical_alerts) > 0:
                status = "critical"
                message = f"Found {len(critical_alerts)} critical alerts that need immediate attention"
            elif len(warning_alerts) > 0:
                status = "warning"
                message = f"Found {len(warning_alerts)} warning alerts worth watching"
            else:
                status = "info"
                message = f"Found {alert_count} informational alerts"

            return {
                "action_type": "alert_analysis",
                "status": status,
                "message": message,
                "alert_count": alert_count,
                "critical_count": len(critical_alerts),
                "warning_count": len(warning_alerts),
                "timestamp": datetime.now().isoformat()
            }

        except Exception as e:
            logger.error(f"Alert analysis failed: {e}")
            return {
                "action_type": "alert_analysis",
                "status": "error",
                "error": str(e)
            }

    async def _sample_workorder_processing(self) -> Dict[str, Any]:
        """Sample: work-order processing."""
        try:
            # Fetch work-order status
            workorders_status = self.agent_core._check_workorders_status()

            total = workorders_status.get("total", 0)
            open_count = workorders_status.get("open", 0)
            resolved_count = workorders_status.get("resolved", 0)
            resolution_rate = workorders_status.get("resolution_rate", 0)

            # Analyze work-order status
            if total == 0:
                status = "no_workorders"
                message = "No work orders at the moment"
            elif open_count > 10:
                status = "high_backlog"
                message = f"Severe work-order backlog: {open_count} pending"
            elif resolution_rate > 0.8:
                status = "good_resolution"
                message = f"Work orders are handled efficiently; resolution rate {resolution_rate:.1%}"
            else:
                status = "normal"
                message = f"Work-order handling is normal; {open_count} pending"

            return {
                "action_type": "workorder_processing",
                "status": status,
                "message": message,
                "total_workorders": total,
                "open_workorders": open_count,
                "resolved_workorders": resolved_count,
                "resolution_rate": resolution_rate,
                "timestamp": datetime.now().isoformat()
            }

        except Exception as e:
            logger.error(f"Work-order analysis failed: {e}")
            return {
                "action_type": "workorder_processing",
                "status": "error",
                "error": str(e)
            }

    async def _sample_knowledge_update(self) -> Dict[str, Any]:
        """Sample: knowledge-base update."""
        try:
            from src.core.database import db_manager
            from src.core.models import KnowledgeEntry

            with db_manager.get_session() as session:
                # Knowledge-base statistics
                total_knowledge = session.query(KnowledgeEntry).count()
                verified_knowledge = session.query(KnowledgeEntry).filter(
                    KnowledgeEntry.is_verified == True
                ).count()
                unverified_knowledge = total_knowledge - verified_knowledge

            # Analyze knowledge-base status
            if total_knowledge == 0:
                status = "empty"
                message = "Knowledge base is empty; consider adding entries"
            elif unverified_knowledge > 0:
                status = "needs_verification"
                message = f"{unverified_knowledge} knowledge entries need verification"
            else:
                status = "up_to_date"
                message = "Knowledge base is in good shape; all entries verified"

            return {
                "action_type": "knowledge_update",
                "status": status,
                "message": message,
                "total_knowledge": total_knowledge,
                "verified_knowledge": verified_knowledge,
                "unverified_knowledge": unverified_knowledge,
                "timestamp": datetime.now().isoformat()
            }

        except Exception as e:
            logger.error(f"Knowledge-base analysis failed: {e}")
            return {
                "action_type": "knowledge_update",
                "status": "error",
                "error": str(e)
            }

    async def _sample_performance_optimization(self) -> Dict[str, Any]:
        """Sample: performance optimization."""
        try:
            # Fetch system performance data
            system_health = self.agent_core.get_system_health()

            # Analyze performance metrics
            cpu_usage = system_health.get("cpu_usage", 0)
            memory_usage = system_health.get("memory_usage", 0)
            disk_usage = system_health.get("disk_usage", 0)

            # Build optimization suggestions
            optimization_suggestions = []

            if cpu_usage > 80:
                optimization_suggestions.append("CPU usage is high; consider optimizing compute-heavy tasks")
            if memory_usage > 80:
                optimization_suggestions.append("Memory usage is high; consider clearing caches or adding RAM")
            if disk_usage > 90:
                optimization_suggestions.append("Disk space is low; consider cleaning log files or expanding storage")

            if not optimization_suggestions:
                status = "optimal"
                message = "System performance is good; no optimization needed"
            else:
                status = "needs_optimization"
                message = f"Found {len(optimization_suggestions)} performance optimization points"

            return {
                "action_type": "performance_optimization",
                "status": status,
                "message": message,
                "cpu_usage": cpu_usage,
                "memory_usage": memory_usage,
                "disk_usage": disk_usage,
                "optimization_suggestions": optimization_suggestions,
                "timestamp": datetime.now().isoformat()
            }

        except Exception as e:
            logger.error(f"Performance-optimization analysis failed: {e}")
            return {
                "action_type": "performance_optimization",
                "status": "error",
                "error": str(e)
            }

    async def run_performance_test(self) -> Dict[str, Any]:
        """Run the performance tests."""
        try:
            start_time = datetime.now()

            # Run several tests
            test_results = []

            # 1. Response-time test
            response_time = await self._test_response_time()
            test_results.append(response_time)

            # 2. Concurrency test
            concurrency_test = await self._test_concurrency()
            test_results.append(concurrency_test)

            # 3. Memory-usage test
            memory_test = await self._test_memory_usage()
            test_results.append(memory_test)

            end_time = datetime.now()
            total_time = (end_time - start_time).total_seconds()

            return {
                "success": True,
                "message": "Performance tests completed",
                "total_time": total_time,
                "test_results": test_results,
                "timestamp": datetime.now().isoformat()
            }

        except Exception as e:
            logger.error(f"Performance tests failed: {e}")
            return {
                "success": False,
                "error": str(e),
                "timestamp": datetime.now().isoformat()
            }

    async def _test_response_time(self) -> Dict[str, Any]:
        """Measure response time."""
        start_time = datetime.now()

        # Simulate a processing task
        await asyncio.sleep(0.1)

        end_time = datetime.now()
        response_time = (end_time - start_time).total_seconds()

        return {
            "test_type": "response_time",
            "response_time": response_time,
            "status": "good" if response_time < 0.5 else "slow"
        }

    async def _test_concurrency(self) -> Dict[str, Any]:
        """Test concurrent processing."""
        try:
            # Spawn several concurrent tasks
            tasks = []
            for i in range(5):
                task = asyncio.create_task(self._simulate_task(i))
                tasks.append(task)

            # Wait for all tasks to finish
            results = await asyncio.gather(*tasks)

            return {
                "test_type": "concurrency",
                "concurrent_tasks": len(tasks),
                "successful_tasks": len([r for r in results if r.get("success")]),
                "status": "good"
            }

        except Exception as e:
            return {
                "test_type": "concurrency",
                "status": "error",
                "error": str(e)
            }

    async def _simulate_task(self, task_id: int) -> Dict[str, Any]:
        """Simulate a task."""
        try:
            await asyncio.sleep(0.05)  # simulated processing time
            return {
                "task_id": task_id,
                "success": True,
                "result": f"Task {task_id} completed"
            }
        except Exception as e:
            return {
                "task_id": task_id,
                "success": False,
                "error": str(e)
            }

    async def _test_memory_usage(self) -> Dict[str, Any]:
        """Measure memory usage."""
        try:
            import psutil

            # Current memory usage
            memory_info = psutil.virtual_memory()

            return {
                "test_type": "memory_usage",
                "total_memory": memory_info.total,
                "available_memory": memory_info.available,
                "used_memory": memory_info.used,
                "memory_percentage": memory_info.percent,
                "status": "good" if memory_info.percent < 80 else "high"
            }

        except Exception as e:
            return {
                "test_type": "memory_usage",
                "status": "error",
                "error": str(e)
            }
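The `_test_concurrency` pattern above (spawn tasks with `asyncio.create_task`, collect them with `asyncio.gather`, count successes) can be reduced to a self-contained sketch; `simulate_task` and `concurrency_test` here are standalone stand-ins for the methods in the class:

```python
import asyncio

async def simulate_task(task_id: int) -> dict:
    # Stand-in for AgentSampleActions._simulate_task
    await asyncio.sleep(0.05)  # simulated processing time
    return {"task_id": task_id, "success": True}

async def concurrency_test(n: int = 5) -> dict:
    # Run n tasks concurrently and count how many report success
    results = await asyncio.gather(*(simulate_task(i) for i in range(n)))
    return {
        "concurrent_tasks": n,
        "successful_tasks": sum(1 for r in results if r.get("success")),
    }

print(asyncio.run(concurrency_test()))
# {'concurrent_tasks': 5, 'successful_tasks': 5}
```

Because the sleeps overlap, the whole run takes roughly one task's duration (~0.05 s) rather than n times that, which is what the test is implicitly verifying.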
@@ -5,7 +5,7 @@
 Implements the agent's proactive invocation features
 """
 
 import asyncio
 import logging
 import threading
 import time
@@ -22,7 +22,7 @@ class AutoMonitorService:
         self.agent_assistant = agent_assistant
         self.is_running = False
         self.monitor_thread = None
-        self.check_interval = 300  # check every 5 minutes
+        self.check_interval = 900  # check every 15 minutes
         self.last_check_time = None
         self.monitoring_stats = {
             "total_checks": 0,
@@ -1,589 +0,0 @@
|
|||||||
|
|
||||||
# -*- coding: utf-8 -*-
|
|
||||||
"""
|
|
||||||
任务执行器
|
|
||||||
负责执行计划中的具体任务
|
|
||||||
"""
|
|
||||||
|
|
||||||
import logging
|
|
||||||
import asyncio
|
|
||||||
from typing import Dict, List, Any, Optional
|
|
||||||
from datetime import datetime
|
|
||||||
import json
|
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
class TaskExecutor:
|
|
||||||
"""任务执行器"""
|
|
||||||
|
|
||||||
def __init__(self):
|
|
||||||
self.execution_strategies = {
|
|
||||||
"sequential": self._execute_sequential,
|
|
||||||
"parallel": self._execute_parallel,
|
|
||||||
"conditional": self._execute_conditional,
|
|
||||||
"iterative": self._execute_iterative
|
|
||||||
}
|
|
||||||
self.active_executions = {}
|
|
||||||
|
|
||||||
async def execute_plan(
|
|
||||||
self,
|
|
||||||
plan: List[Dict[str, Any]],
|
|
||||||
tool_manager: Any,
|
|
||||||
context: Dict[str, Any]
|
|
||||||
) -> Dict[str, Any]:
|
|
||||||
"""执行计划"""
|
|
||||||
try:
|
|
||||||
execution_id = f"exec_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
|
|
||||||
self.active_executions[execution_id] = {
|
|
||||||
"start_time": datetime.now(),
|
|
||||||
"status": "running",
|
|
||||||
"plan": plan
|
|
||||||
}
|
|
||||||
|
|
||||||
# 根据计划类型选择执行策略
|
|
||||||
execution_strategy = self._determine_execution_strategy(plan)
|
|
||||||
|
|
||||||
# 执行计划
|
|
||||||
result = await self.execution_strategies[execution_strategy](
|
|
||||||
plan=plan,
|
|
||||||
tool_manager=tool_manager,
|
|
||||||
context=context,
|
|
||||||
execution_id=execution_id
|
|
||||||
)
|
|
||||||
|
|
||||||
# 更新执行状态
|
|
||||||
self.active_executions[execution_id]["status"] = "completed"
|
|
||||||
self.active_executions[execution_id]["end_time"] = datetime.now()
|
|
||||||
self.active_executions[execution_id]["result"] = result
|
|
||||||
|
|
||||||
logger.info(f"计划执行完成: {execution_id}")
|
|
||||||
return result
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"执行计划失败: {e}")
|
|
||||||
if execution_id in self.active_executions:
|
|
||||||
self.active_executions[execution_id]["status"] = "failed"
|
|
||||||
self.active_executions[execution_id]["error"] = str(e)
|
|
||||||
|
|
||||||
return {
|
|
||||||
"success": False,
|
|
||||||
"error": str(e),
|
|
||||||
"execution_id": execution_id
|
|
||||||
}
|
|
||||||
|
|
||||||
def _determine_execution_strategy(self, plan: List[Dict[str, Any]]) -> str:
|
|
||||||
"""确定执行策略"""
|
|
||||||
if not plan:
|
|
||||||
return "sequential"
|
|
||||||
|
|
||||||
# 检查计划类型
|
|
||||||
plan_types = [task.get("type") for task in plan]
|
|
||||||
|
|
||||||
if "parallel_group" in plan_types:
|
|
||||||
return "parallel"
|
|
||||||
elif "condition" in plan_types or "branch" in plan_types:
|
|
||||||
return "conditional"
|
|
||||||
elif "iteration_control" in plan_types:
|
|
||||||
return "iterative"
|
|
||||||
else:
|
|
||||||
return "sequential"
|
|
||||||
|
|
||||||
async def _execute_sequential(
|
|
||||||
self,
|
|
||||||
plan: List[Dict[str, Any]],
|
|
||||||
tool_manager: Any,
|
|
||||||
context: Dict[str, Any],
|
|
||||||
execution_id: str
|
|
||||||
) -> Dict[str, Any]:
|
|
||||||
"""顺序执行计划"""
|
|
||||||
results = []
|
|
||||||
execution_log = []
|
|
||||||
|
|
||||||
for i, task in enumerate(plan):
|
|
||||||
try:
|
|
||||||
logger.info(f"执行任务 {i+1}/{len(plan)}: {task.get('id', 'unknown')}")
|
|
||||||
|
|
||||||
# 检查任务依赖
|
|
||||||
if not await self._check_dependencies(task, results):
|
|
||||||
logger.warning(f"任务 {task.get('id')} 的依赖未满足,跳过执行")
|
|
||||||
continue
|
|
||||||
|
|
||||||
# 执行任务
|
|
||||||
task_result = await self._execute_single_task(task, tool_manager, context)
|
|
||||||
|
|
||||||
results.append({
|
|
||||||
"task_id": task.get("id"),
|
|
||||||
"result": task_result,
|
|
||||||
"timestamp": datetime.now().isoformat()
|
|
||||||
})
|
|
||||||
|
|
||||||
execution_log.append({
|
|
||||||
"task_id": task.get("id"),
|
|
||||||
"status": "completed",
|
|
||||||
"duration": task_result.get("duration", 0)
|
|
||||||
})
|
|
||||||
|
|
||||||
# 检查是否满足成功条件
|
|
||||||
if not self._check_success_criteria(task, task_result):
|
|
||||||
logger.warning(f"任务 {task.get('id')} 未满足成功条件")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"执行任务 {task.get('id')} 失败: {e}")
|
|
||||||
execution_log.append({
|
|
||||||
"task_id": task.get("id"),
|
|
||||||
"status": "failed",
|
|
||||||
"error": str(e)
|
|
||||||
})
|
|
||||||
|
|
||||||
# 根据任务重要性决定是否继续
|
|
||||||
if task.get("critical", False):
|
|
||||||
return {
|
|
||||||
"success": False,
|
|
||||||
"error": f"关键任务失败: {task.get('id')}",
|
|
||||||
"results": results,
|
|
||||||
"execution_log": execution_log
|
|
||||||
}
|
|
||||||
|
|
||||||
return {
|
|
||||||
"success": True,
|
|
||||||
"results": results,
|
|
||||||
"execution_log": execution_log,
|
|
||||||
"execution_id": execution_id
|
|
||||||
}
|
|
||||||
|
|
||||||
async def _execute_parallel(
|
|
||||||
self,
|
|
||||||
plan: List[Dict[str, Any]],
|
|
||||||
tool_manager: Any,
|
|
||||||
context: Dict[str, Any],
|
|
||||||
execution_id: str
|
|
||||||
) -> Dict[str, Any]:
|
|
||||||
"""并行执行计划"""
|
|
||||||
results = []
|
|
||||||
execution_log = []
|
|
||||||
|
|
||||||
# 将计划分组
|
|
||||||
parallel_groups = self._group_tasks_for_parallel_execution(plan)
|
|
||||||
|
|
||||||
for group in parallel_groups:
|
|
||||||
if group["execution_mode"] == "parallel":
|
|
||||||
# 并行执行组内任务
|
|
||||||
group_results = await self._execute_tasks_parallel(
|
|
||||||
group["tasks"], tool_manager, context
|
|
||||||
)
|
|
||||||
results.extend(group_results)
|
|
||||||
else:
|
|
||||||
# 顺序执行组内任务
|
|
||||||
for task in group["tasks"]:
|
|
||||||
task_result = await self._execute_single_task(task, tool_manager, context)
|
|
||||||
results.append({
|
|
||||||
"task_id": task.get("id"),
|
|
||||||
"result": task_result,
|
|
||||||
"timestamp": datetime.now().isoformat()
|
|
||||||
})
|
|
||||||
|
|
||||||
return {
|
|
||||||
"success": True,
|
|
||||||
"results": results,
|
|
||||||
"execution_log": execution_log,
|
|
||||||
"execution_id": execution_id
|
|
||||||
}
|
|
||||||
|
|
||||||
    async def _execute_conditional(
        self,
        plan: List[Dict[str, Any]],
        tool_manager: Any,
        context: Dict[str, Any],
        execution_id: str
    ) -> Dict[str, Any]:
        """Execute a conditional plan."""
        results = []
        execution_log = []

        # Locate the condition-check task and the branch tasks
        condition_task = None
        branch_tasks = []

        for task in plan:
            if task.get("type") == "condition":
                condition_task = task
            elif task.get("type") == "branch":
                branch_tasks.append(task)

        if not condition_task:
            logger.error("Conditional plan is missing a condition-check task")
            return {"success": False, "error": "Missing condition-check task"}

        # Evaluate the condition
        condition_result = await self._execute_single_task(condition_task, tool_manager, context)
        results.append({
            "task_id": condition_task.get("id"),
            "result": condition_result,
            "timestamp": datetime.now().isoformat()
        })

        # Pick a branch based on the condition result
        selected_branch = self._select_branch(condition_result, branch_tasks)

        if selected_branch:
            # Execute the selected branch
            branch_result = await self._execute_sequential(
                selected_branch.get("tasks", []),
                tool_manager,
                context,
                execution_id
            )
            results.extend(branch_result.get("results", []))
            execution_log.extend(branch_result.get("execution_log", []))

        return {
            "success": True,
            "results": results,
            "execution_log": execution_log,
            "execution_id": execution_id,
            "selected_branch": selected_branch.get("id") if selected_branch else None
        }

    async def _execute_iterative(
        self,
        plan: List[Dict[str, Any]],
        tool_manager: Any,
        context: Dict[str, Any],
        execution_id: str
    ) -> Dict[str, Any]:
        """Execute an iterative plan."""
        # Locate the iteration-control task
        iteration_task = None
        for task in plan:
            if task.get("type") == "iteration_control":
                iteration_task = task
                break

        if not iteration_task:
            logger.error("Iterative plan is missing an iteration-control task")
            return {"success": False, "error": "Missing iteration-control task"}

        max_iterations = iteration_task.get("max_iterations", 10)
        convergence_criteria = iteration_task.get("convergence_criteria", {})
        tasks = iteration_task.get("tasks", [])

        results = []
        execution_log = []
        iteration_count = 0

        while iteration_count < max_iterations:
            iteration_count += 1
            logger.info(f"Running iteration {iteration_count}")

            # Execute the iteration's tasks
            iteration_result = await self._execute_sequential(
                tasks, tool_manager, context, f"{execution_id}_iter_{iteration_count}"
            )

            results.append({
                "iteration": iteration_count,
                "result": iteration_result,
                "timestamp": datetime.now().isoformat()
            })

            # Check the convergence criteria
            if self._check_convergence(iteration_result, convergence_criteria):
                logger.info(f"Iteration converged after {iteration_count} rounds")
                break

        return {
            "success": True,
            "results": results,
            "execution_log": execution_log,
            "execution_id": execution_id,
            "iterations": iteration_count,
            "converged": iteration_count < max_iterations
        }

    async def _execute_single_task(
        self,
        task: Dict[str, Any],
        tool_manager: Any,
        context: Dict[str, Any]
    ) -> Dict[str, Any]:
        """Execute a single task."""
        start_time = datetime.now()

        try:
            task_id = task.get("id", "unknown")
            task_type = task.get("type", "action")
            tool_name = task.get("tool", "")
            parameters = task.get("parameters", {})

            logger.info(f"Executing task: {task_id}, type: {task_type}, tool: {tool_name}")

            # Dispatch on the task type
            if task_type == "action":
                result = await self._execute_action_task(task, tool_manager, context)
            elif task_type == "condition":
                result = await self._execute_condition_task(task, tool_manager, context)
            elif task_type == "control":
                result = await self._execute_control_task(task, tool_manager, context)
            else:
                result = await self._execute_general_task(task, tool_manager, context)

            duration = (datetime.now() - start_time).total_seconds()
            result["duration"] = duration

            logger.info(f"Task {task_id} finished in {duration:.2f}s")
            return result

        except Exception as e:
            logger.error(f"Task execution failed: {e}")
            return {
                "success": False,
                "error": str(e),
                "duration": (datetime.now() - start_time).total_seconds()
            }

    async def _execute_action_task(
        self,
        task: Dict[str, Any],
        tool_manager: Any,
        context: Dict[str, Any]
    ) -> Dict[str, Any]:
        """Execute an action task."""
        tool_name = task.get("tool", "")
        parameters = task.get("parameters", {})

        # Merge in the context parameters
        full_parameters = {**parameters, **context}

        # Invoke the tool
        result = await tool_manager.execute_tool(tool_name, full_parameters)

        return {
            "success": True,
            "tool": tool_name,
            "parameters": full_parameters,
            "result": result
        }

    async def _execute_condition_task(
        self,
        task: Dict[str, Any],
        tool_manager: Any,
        context: Dict[str, Any]
    ) -> Dict[str, Any]:
        """Execute a condition task."""
        condition = task.get("condition", "")
        branches = task.get("branches", {})

        # Evaluate the condition
        condition_result = await self._evaluate_condition(condition, context)

        return {
            "success": True,
            "condition": condition,
            "result": condition_result,
            "available_branches": list(branches.keys())
        }

    async def _execute_control_task(
        self,
        task: Dict[str, Any],
        tool_manager: Any,
        context: Dict[str, Any]
    ) -> Dict[str, Any]:
        """Execute a control task."""
        control_type = task.get("control_type", "general")

        if control_type == "iteration":
            return await self._execute_iteration_control(task, context)
        elif control_type == "loop":
            return await self._execute_loop_control(task, context)
        else:
            return {
                "success": True,
                "control_type": control_type,
                "message": "Control task completed"
            }

    async def _execute_general_task(
        self,
        task: Dict[str, Any],
        tool_manager: Any,
        context: Dict[str, Any]
    ) -> Dict[str, Any]:
        """Execute a general task."""
        description = task.get("description", "")

        # General task logic can be implemented here,
        # e.g. calling the LLM to generate a response or running database operations.

        return {
            "success": True,
            "description": description,
            "message": "General task completed"
        }

    async def _execute_tasks_parallel(
        self,
        tasks: List[Dict[str, Any]],
        tool_manager: Any,
        context: Dict[str, Any]
    ) -> List[Dict[str, Any]]:
        """Execute multiple tasks concurrently."""
        async def execute_task(task):
            return await self._execute_single_task(task, tool_manager, context)

        # Create the coroutines
        parallel_tasks = [execute_task(task) for task in tasks]

        # Wait for all of them to finish
        results = await asyncio.gather(*parallel_tasks, return_exceptions=True)

        # Normalize the results
        processed_results = []
        for i, result in enumerate(results):
            if isinstance(result, Exception):
                processed_results.append({
                    "task_id": tasks[i].get("id"),
                    "result": {"success": False, "error": str(result)},
                    "timestamp": datetime.now().isoformat()
                })
            else:
                processed_results.append({
                    "task_id": tasks[i].get("id"),
                    "result": result,
                    "timestamp": datetime.now().isoformat()
                })

        return processed_results

    def _group_tasks_for_parallel_execution(self, plan: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        """Group tasks so they can be executed in parallel."""
        groups = []
        current_group = []

        for task in plan:
            if task.get("type") == "parallel_group":
                if current_group:
                    groups.append({
                        "execution_mode": "sequential",
                        "tasks": current_group
                    })
                    current_group = []
                groups.append(task)
            else:
                current_group.append(task)

        if current_group:
            groups.append({
                "execution_mode": "sequential",
                "tasks": current_group
            })

        return groups

    async def _check_dependencies(self, task: Dict[str, Any], results: List[Dict[str, Any]]) -> bool:
        """Check whether a task's dependencies are satisfied."""
        dependencies = task.get("dependencies", [])

        if not dependencies:
            return True

        # Every dependency must have completed successfully
        completed_task_ids = [r["task_id"] for r in results if r["result"].get("success", False)]

        for dep in dependencies:
            if dep not in completed_task_ids:
                return False

        return True

    def _check_success_criteria(self, task: Dict[str, Any], result: Dict[str, Any]) -> bool:
        """Check whether a task result meets its success criteria."""
        success_criteria = task.get("success_criteria", {})

        if not success_criteria:
            return result.get("success", False)

        # Check every success criterion
        for criterion, expected_value in success_criteria.items():
            actual_value = result.get(criterion)
            if actual_value != expected_value:
                return False

        return True

    def _select_branch(self, condition_result: Dict[str, Any], branch_tasks: List[Dict[str, Any]]) -> Optional[Dict[str, Any]]:
        """Select a branch based on the condition result."""
        condition_value = condition_result.get("result", "")

        for branch_task in branch_tasks:
            branch_condition = branch_task.get("condition", "")
            if branch_condition == condition_value:
                return branch_task

        return None

    def _check_convergence(self, iteration_result: Dict[str, Any], convergence_criteria: Dict[str, Any]) -> bool:
        """Check whether the iteration has converged."""
        if not convergence_criteria:
            return False

        # Evaluate each convergence criterion
        for criterion, threshold in convergence_criteria.items():
            actual_value = iteration_result.get(criterion)
            if actual_value is None:
                continue

            # More sophisticated convergence logic could be added here
            if isinstance(threshold, dict):
                if threshold.get("type") == "less_than":
                    if actual_value >= threshold.get("value"):
                        return False
                elif threshold.get("type") == "greater_than":
                    if actual_value <= threshold.get("value"):
                        return False

        return True

    async def _evaluate_condition(self, condition: str, context: Dict[str, Any]) -> str:
        """Evaluate a condition expression."""
        # Condition-evaluation logic goes here,
        # e.g. parsing the expression or querying the context.

        # A simple example evaluation:
        if "satisfaction" in condition:
            return "high" if context.get("satisfaction_score", 0) > 0.7 else "low"
        elif "priority" in condition:
            return context.get("priority", "medium")
        else:
            return "default"

    async def _execute_iteration_control(self, task: Dict[str, Any], context: Dict[str, Any]) -> Dict[str, Any]:
        """Execute iteration control."""
        max_iterations = task.get("max_iterations", 10)
        current_iteration = context.get("current_iteration", 0)

        return {
            "success": True,
            "max_iterations": max_iterations,
            "current_iteration": current_iteration,
            "continue": current_iteration < max_iterations
        }

    async def _execute_loop_control(self, task: Dict[str, Any], context: Dict[str, Any]) -> Dict[str, Any]:
        """Execute loop control."""
        loop_condition = task.get("loop_condition", "")

        return {
            "success": True,
            "loop_condition": loop_condition,
            "continue": True  # should be decided by the actual loop condition
        }

    def get_execution_status(self, execution_id: str) -> Optional[Dict[str, Any]]:
        """Get the status of one execution."""
        return self.active_executions.get(execution_id)

    def get_all_executions(self) -> Dict[str, Any]:
        """Get all execution records."""
        return self.active_executions
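The grouping performed by `_group_tasks_for_parallel_execution` above can be exercised in isolation; this is a minimal standalone sketch mirroring the deleted helper's logic (the `plan` literal is a hypothetical example, not from the repository):

```python
# Standalone sketch of _group_tasks_for_parallel_execution: consecutive
# ordinary tasks are batched into sequential groups, while tasks typed
# "parallel_group" pass through as their own group unchanged.
def group_tasks(plan):
    groups, current = [], []
    for task in plan:
        if task.get("type") == "parallel_group":
            if current:
                groups.append({"execution_mode": "sequential", "tasks": current})
                current = []
            groups.append(task)
        else:
            current.append(task)
    if current:
        groups.append({"execution_mode": "sequential", "tasks": current})
    return groups

# Hypothetical plan: two sequential actions, one parallel group, one trailing action.
plan = [
    {"id": "a", "type": "action"},
    {"id": "b", "type": "action"},
    {"id": "pg", "type": "parallel_group", "execution_mode": "parallel",
     "tasks": [{"id": "c"}, {"id": "d"}]},
    {"id": "e", "type": "action"},
]
groups = group_tasks(plan)  # [sequential(a, b), parallel_group, sequential(e)]
```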
@@ -1,573 +0,0 @@
# -*- coding: utf-8 -*-
"""
Goal manager.
Handles goal creation, tracking, and evaluation.
"""

import logging
from typing import Dict, List, Any, Optional
from datetime import datetime
import json

from ..core.llm_client import QwenClient

logger = logging.getLogger(__name__)


class GoalManager:
    """Goal manager."""

    def __init__(self):
        self.llm_client = QwenClient()
        self.active_goals = {}
        self.goal_history = []
        self.goal_templates = {
            "problem_solving": self._create_problem_solving_goal,
            "information_gathering": self._create_information_gathering_goal,
            "task_execution": self._create_task_execution_goal,
            "analysis": self._create_analysis_goal,
            "communication": self._create_communication_goal
        }

    async def create_goal(
        self,
        intent: Dict[str, Any],
        request: Dict[str, Any],
        current_state: Any
    ) -> Dict[str, Any]:
        """Create a goal."""
        try:
            goal_type = self._determine_goal_type(intent, request)

            if goal_type in self.goal_templates:
                goal = await self.goal_templates[goal_type](intent, request, current_state)
            else:
                goal = await self._create_general_goal(intent, request, current_state)

            # Generate a unique goal ID
            goal_id = f"goal_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
            goal["id"] = goal_id
            goal["created_at"] = datetime.now().isoformat()
            goal["status"] = "active"

            # Register as an active goal
            self.active_goals[goal_id] = goal

            logger.info(f"Created goal: {goal_id}, type: {goal_type}")
            return goal

        except Exception as e:
            logger.error(f"Failed to create goal: {e}")
            return self._create_fallback_goal(intent, request)

    def _determine_goal_type(self, intent: Dict[str, Any], request: Dict[str, Any]) -> str:
        """Determine the goal type."""
        main_intent = intent.get("main_intent", "general_query")

        goal_type_mapping = {
            "problem_solving": ["problem_consultation", "issue_resolution", "troubleshooting"],
            "information_gathering": ["information_query", "data_collection", "research"],
            "task_execution": ["work_order_creation", "task_assignment", "action_request"],
            "analysis": ["data_analysis", "report_generation", "performance_review"],
            "communication": ["notification", "message_delivery", "user_interaction"]
        }

        for goal_type, intents in goal_type_mapping.items():
            if main_intent in intents:
                return goal_type

        return "general"

    async def _create_problem_solving_goal(
        self,
        intent: Dict[str, Any],
        request: Dict[str, Any],
        current_state: Any
    ) -> Dict[str, Any]:
        """Create a problem-solving goal."""
        prompt = f"""
        Please create a goal for the following problem-solving request:

        User intent: {json.dumps(intent, ensure_ascii=False)}
        Request content: {json.dumps(request, ensure_ascii=False)}

        Please define:
        1. Goal description
        2. Success criteria
        3. Required steps
        4. Expected outcome
        5. Time limit
        6. Resource requirements

        Return the goal definition as JSON.
        """

        messages = [
            {"role": "system", "content": "You are a goal-setting expert who excels at defining clear goals for problem-solving tasks."},
            {"role": "user", "content": prompt}
        ]

        result = self.llm_client.chat_completion(messages, temperature=0.3)

        if "error" in result:
            return self._create_default_problem_solving_goal(intent, request)

        response_content = result["choices"][0]["message"]["content"]
        import re
        json_match = re.search(r'\{.*\}', response_content, re.DOTALL)

        if json_match:
            goal_data = json.loads(json_match.group())
            goal_data["type"] = "problem_solving"
            return goal_data
        else:
            return self._create_default_problem_solving_goal(intent, request)

    async def _create_information_gathering_goal(
        self,
        intent: Dict[str, Any],
        request: Dict[str, Any],
        current_state: Any
    ) -> Dict[str, Any]:
        """Create an information-gathering goal."""
        prompt = f"""
        Please create a goal for the following information-gathering request:

        User intent: {json.dumps(intent, ensure_ascii=False)}
        Request content: {json.dumps(request, ensure_ascii=False)}

        Please define:
        1. Scope of the information to collect
        2. Information quality requirements
        3. Collection methods
        4. Validation criteria
        5. Output format

        Return the goal definition as JSON.
        """

        messages = [
            {"role": "system", "content": "You are an information-gathering expert who excels at setting information-collection goals."},
            {"role": "user", "content": prompt}
        ]

        result = self.llm_client.chat_completion(messages, temperature=0.3)

        if "error" in result:
            return self._create_default_information_goal(intent, request)

        response_content = result["choices"][0]["message"]["content"]
        import re
        json_match = re.search(r'\{.*\}', response_content, re.DOTALL)

        if json_match:
            goal_data = json.loads(json_match.group())
            goal_data["type"] = "information_gathering"
            return goal_data
        else:
            return self._create_default_information_goal(intent, request)

    async def _create_task_execution_goal(
        self,
        intent: Dict[str, Any],
        request: Dict[str, Any],
        current_state: Any
    ) -> Dict[str, Any]:
        """Create a task-execution goal."""
        prompt = f"""
        Please create a goal for the following task-execution request:

        User intent: {json.dumps(intent, ensure_ascii=False)}
        Request content: {json.dumps(request, ensure_ascii=False)}

        Please define:
        1. Task description
        2. Execution steps
        3. Completion criteria
        4. Quality requirements
        5. Schedule

        Return the goal definition as JSON.
        """

        messages = [
            {"role": "system", "content": "You are a task-execution expert who excels at setting task-execution goals."},
            {"role": "user", "content": prompt}
        ]

        result = self.llm_client.chat_completion(messages, temperature=0.3)

        if "error" in result:
            return self._create_default_task_goal(intent, request)

        response_content = result["choices"][0]["message"]["content"]
        import re
        json_match = re.search(r'\{.*\}', response_content, re.DOTALL)

        if json_match:
            goal_data = json.loads(json_match.group())
            goal_data["type"] = "task_execution"
            return goal_data
        else:
            return self._create_default_task_goal(intent, request)

    async def _create_analysis_goal(
        self,
        intent: Dict[str, Any],
        request: Dict[str, Any],
        current_state: Any
    ) -> Dict[str, Any]:
        """Create an analysis goal."""
        prompt = f"""
        Please create a goal for the following analysis request:

        User intent: {json.dumps(intent, ensure_ascii=False)}
        Request content: {json.dumps(request, ensure_ascii=False)}

        Please define:
        1. Analysis scope
        2. Analysis methods
        3. Analysis depth
        4. Output format
        5. Quality metrics

        Return the goal definition as JSON.
        """

        messages = [
            {"role": "system", "content": "You are an analysis expert who excels at setting analysis goals."},
            {"role": "user", "content": prompt}
        ]

        result = self.llm_client.chat_completion(messages, temperature=0.3)

        if "error" in result:
            return self._create_default_analysis_goal(intent, request)

        response_content = result["choices"][0]["message"]["content"]
        import re
        json_match = re.search(r'\{.*\}', response_content, re.DOTALL)

        if json_match:
            goal_data = json.loads(json_match.group())
            goal_data["type"] = "analysis"
            return goal_data
        else:
            return self._create_default_analysis_goal(intent, request)

    async def _create_communication_goal(
        self,
        intent: Dict[str, Any],
        request: Dict[str, Any],
        current_state: Any
    ) -> Dict[str, Any]:
        """Create a communication goal."""
        prompt = f"""
        Please create a goal for the following communication request:

        User intent: {json.dumps(intent, ensure_ascii=False)}
        Request content: {json.dumps(request, ensure_ascii=False)}

        Please define:
        1. Audience
        2. Message content
        3. Communication channel
        4. Expected effect
        5. Feedback mechanism

        Return the goal definition as JSON.
        """

        messages = [
            {"role": "system", "content": "You are a communication expert who excels at setting communication goals."},
            {"role": "user", "content": prompt}
        ]

        result = self.llm_client.chat_completion(messages, temperature=0.3)

        if "error" in result:
            return self._create_default_communication_goal(intent, request)

        response_content = result["choices"][0]["message"]["content"]
        import re
        json_match = re.search(r'\{.*\}', response_content, re.DOTALL)

        if json_match:
            goal_data = json.loads(json_match.group())
            goal_data["type"] = "communication"
            return goal_data
        else:
            return self._create_default_communication_goal(intent, request)

    async def _create_general_goal(
        self,
        intent: Dict[str, Any],
        request: Dict[str, Any],
        current_state: Any
    ) -> Dict[str, Any]:
        """Create a general goal."""
        return {
            "type": "general",
            "description": intent.get("main_intent", "Handle the user request"),
            "success_criteria": {
                "completion": True,
                "user_satisfaction": 0.7
            },
            "steps": ["Understand the request", "Execute the task", "Return the result"],
            "expected_result": "The user's need is met",
            "time_limit": 300,  # 5 minutes
            "resource_requirements": ["llm_client", "knowledge_base"]
        }

    def _create_default_problem_solving_goal(self, intent: Dict[str, Any], request: Dict[str, Any]) -> Dict[str, Any]:
        """Create a default problem-solving goal."""
        return {
            "type": "problem_solving",
            "description": "Solve the user's problem",
            "success_criteria": {
                "problem_identified": True,
                "solution_provided": True,
                "user_satisfaction": 0.7
            },
            "steps": ["Analyze the problem", "Find solutions", "Provide recommendations", "Verify the outcome"],
            "expected_result": "The problem is resolved or effective advice is provided",
            "time_limit": 300,
            "resource_requirements": ["knowledge_base", "llm_client"]
        }

    def _create_default_information_goal(self, intent: Dict[str, Any], request: Dict[str, Any]) -> Dict[str, Any]:
        """Create a default information-gathering goal."""
        return {
            "type": "information_gathering",
            "description": "Collect relevant information",
            "success_criteria": {
                "information_complete": True,
                "information_accurate": True,
                "information_relevant": True
            },
            "steps": ["Identify information needs", "Search information sources", "Collect information", "Organize information"],
            "expected_result": "Provide accurate, complete, and relevant information",
            "time_limit": 180,
            "resource_requirements": ["knowledge_base", "search_tools"]
        }

    def _create_default_task_goal(self, intent: Dict[str, Any], request: Dict[str, Any]) -> Dict[str, Any]:
        """Create a default task-execution goal."""
        return {
            "type": "task_execution",
            "description": "Execute the specified task",
            "success_criteria": {
                "task_completed": True,
                "quality_met": True,
                "time_met": True
            },
            "steps": ["Understand the task", "Make a plan", "Execute the task", "Verify the result"],
            "expected_result": "The task is completed successfully",
            "time_limit": 600,
            "resource_requirements": ["task_tools", "monitoring"]
        }

    def _create_default_analysis_goal(self, intent: Dict[str, Any], request: Dict[str, Any]) -> Dict[str, Any]:
        """Create a default analysis goal."""
        return {
            "type": "analysis",
            "description": "Perform data analysis",
            "success_criteria": {
                "analysis_complete": True,
                "insights_meaningful": True,
                "report_clear": True
            },
            "steps": ["Collect data", "Analyze data", "Extract insights", "Generate a report"],
            "expected_result": "Deliver a valuable analysis report",
            "time_limit": 900,
            "resource_requirements": ["analytics_tools", "data_sources"]
        }

    def _create_default_communication_goal(self, intent: Dict[str, Any], request: Dict[str, Any]) -> Dict[str, Any]:
        """Create a default communication goal."""
        return {
            "type": "communication",
            "description": "Communicate with the user",
            "success_criteria": {
                "message_delivered": True,
                "response_received": True,
                "understanding_achieved": True
            },
            "steps": ["Prepare the message", "Send the message", "Wait for a response", "Confirm understanding"],
            "expected_result": "Communicate successfully and reach a shared understanding",
            "time_limit": 120,
            "resource_requirements": ["communication_tools"]
        }

    def _create_fallback_goal(self, intent: Dict[str, Any], request: Dict[str, Any]) -> Dict[str, Any]:
        """Create a fallback goal."""
        return {
            "type": "fallback",
            "description": "Handle the user request",
            "success_criteria": {"completion": True},
            "steps": ["Process the request"],
            "expected_result": "Return a response",
            "time_limit": 60,
            "resource_requirements": ["basic_tools"]
        }

    async def update_goal_progress(self, goal_id: str, progress_data: Dict[str, Any]) -> bool:
        """Update a goal's progress."""
        try:
            if goal_id not in self.active_goals:
                return False

            goal = self.active_goals[goal_id]
            goal["progress"] = progress_data
            goal["updated_at"] = datetime.now().isoformat()

            # Check for completion
            if self._check_goal_completion(goal):
                goal["status"] = "completed"
                goal["completed_at"] = datetime.now().isoformat()

                # Move it to the history
                self.goal_history.append(goal)
                del self.active_goals[goal_id]

                logger.info(f"Goal {goal_id} completed")

            return True

        except Exception as e:
            logger.error(f"Failed to update goal progress: {e}")
            return False

    def _check_goal_completion(self, goal: Dict[str, Any]) -> bool:
        """Check whether a goal is complete."""
        success_criteria = goal.get("success_criteria", {})

        if not success_criteria:
            return True

        progress = goal.get("progress", {})

        # Check every success criterion
        for criterion, required_value in success_criteria.items():
            actual_value = progress.get(criterion)
            if actual_value != required_value:
                return False

        return True

    async def evaluate_goal_performance(self, goal_id: str) -> Dict[str, Any]:
        """Evaluate a goal's performance."""
        try:
            if goal_id in self.active_goals:
                goal = self.active_goals[goal_id]
            elif goal_id in [g["id"] for g in self.goal_history]:
                goal = next(g for g in self.goal_history if g["id"] == goal_id)
            else:
                return {"error": "Goal does not exist"}

            evaluation = {
                "goal_id": goal_id,
                "type": goal.get("type"),
                "status": goal.get("status"),
                "created_at": goal.get("created_at"),
                "completed_at": goal.get("completed_at"),
                "duration": self._calculate_goal_duration(goal),
                "success_rate": self._calculate_success_rate(goal),
                "efficiency": self._calculate_efficiency(goal),
                "quality_score": self._calculate_quality_score(goal)
            }

            return evaluation

        except Exception as e:
            logger.error(f"Failed to evaluate goal performance: {e}")
            return {"error": str(e)}

    def _calculate_goal_duration(self, goal: Dict[str, Any]) -> float:
        """Calculate how long a goal has been running, in seconds."""
        created_at = datetime.fromisoformat(goal.get("created_at", datetime.now().isoformat()))

        if goal.get("completed_at"):
            completed_at = datetime.fromisoformat(goal["completed_at"])
            return (completed_at - created_at).total_seconds()
        else:
            return (datetime.now() - created_at).total_seconds()

    def _calculate_success_rate(self, goal: Dict[str, Any]) -> float:
        """Calculate the success rate."""
        if goal.get("status") == "completed":
            return 1.0
        elif goal.get("status") == "failed":
            return 0.0
        else:
            # Compute a partial success rate from the recorded progress
            progress = goal.get("progress", {})
            success_criteria = goal.get("success_criteria", {})

            if not success_criteria:
                return 0.5

            completed_criteria = 0
            for criterion in success_criteria:
                if progress.get(criterion) == success_criteria[criterion]:
                    completed_criteria += 1

            return completed_criteria / len(success_criteria)

def _calculate_efficiency(self, goal: Dict[str, Any]) -> float:
|
|
||||||
"""计算效率"""
|
|
||||||
duration = self._calculate_goal_duration(goal)
|
|
||||||
time_limit = goal.get("time_limit", 300)
|
|
||||||
|
|
||||||
if duration <= time_limit:
|
|
||||||
return 1.0
|
|
||||||
else:
|
|
||||||
# 超时惩罚
|
|
||||||
return max(0.0, 1.0 - (duration - time_limit) / time_limit)
|
|
||||||
|
|
||||||
def _calculate_quality_score(self, goal: Dict[str, Any]) -> float:
|
|
||||||
"""计算质量分数"""
|
|
||||||
# 这里可以根据具体的目标类型和质量指标计算
|
|
||||||
# 暂时返回一个基于成功率的简单计算
|
|
||||||
success_rate = self._calculate_success_rate(goal)
|
|
||||||
efficiency = self._calculate_efficiency(goal)
|
|
||||||
|
|
||||||
return (success_rate + efficiency) / 2
|
|
||||||
|
|
||||||
def get_active_goals(self) -> List[Dict[str, Any]]:
|
|
||||||
"""获取活跃目标"""
|
|
||||||
return list(self.active_goals.values())
|
|
||||||
|
|
||||||
def get_goal_history(self, limit: int = 10) -> List[Dict[str, Any]]:
|
|
||||||
"""获取目标历史"""
|
|
||||||
return self.goal_history[-limit:] if self.goal_history else []
|
|
||||||
|
|
||||||
def get_goal_statistics(self) -> Dict[str, Any]:
|
|
||||||
"""获取目标统计"""
|
|
||||||
total_goals = len(self.active_goals) + len(self.goal_history)
|
|
||||||
completed_goals = len([g for g in self.goal_history if g.get("status") == "completed"])
|
|
||||||
active_goals = len(self.active_goals)
|
|
||||||
|
|
||||||
return {
|
|
||||||
"total_goals": total_goals,
|
|
||||||
"active_goals": active_goals,
|
|
||||||
"completed_goals": completed_goals,
|
|
||||||
"completion_rate": completed_goals / total_goals if total_goals > 0 else 0,
|
|
||||||
"goal_types": self._get_goal_type_distribution()
|
|
||||||
}
|
|
||||||
|
|
||||||
def _get_goal_type_distribution(self) -> Dict[str, int]:
|
|
||||||
"""获取目标类型分布"""
|
|
||||||
distribution = {}
|
|
||||||
|
|
||||||
# 统计活跃目标
|
|
||||||
for goal in self.active_goals.values():
|
|
||||||
goal_type = goal.get("type", "unknown")
|
|
||||||
distribution[goal_type] = distribution.get(goal_type, 0) + 1
|
|
||||||
|
|
||||||
# 统计历史目标
|
|
||||||
for goal in self.goal_history:
|
|
||||||
goal_type = goal.get("type", "unknown")
|
|
||||||
distribution[goal_type] = distribution.get(goal_type, 0) + 1
|
|
||||||
|
|
||||||
return distribution
|
|
||||||
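The scoring logic above combines a criteria-based success rate with a timeout-penalized efficiency. A minimal standalone sketch of the same arithmetic, with the `goal` dict shape inferred from the accessors used above:

```python
from typing import Any, Dict

def success_rate(goal: Dict[str, Any]) -> float:
    """Fraction of success criteria already met in `progress`."""
    if goal.get("status") == "completed":
        return 1.0
    if goal.get("status") == "failed":
        return 0.0
    criteria = goal.get("success_criteria", {})
    if not criteria:
        return 0.5
    progress = goal.get("progress", {})
    met = sum(1 for k, v in criteria.items() if progress.get(k) == v)
    return met / len(criteria)

def efficiency(duration: float, time_limit: float) -> float:
    """1.0 within the limit, linearly penalized past it, floored at 0."""
    if duration <= time_limit:
        return 1.0
    return max(0.0, 1.0 - (duration - time_limit) / time_limit)

goal = {"status": "active",
        "success_criteria": {"a": 1, "b": 2},
        "progress": {"a": 1, "b": 0}}
print(success_rate(goal))    # 0.5: one of two criteria met
print(efficiency(450, 300))  # 0.5: 50% over the time limit
```

Note the floor in `efficiency`: anything more than 2x the time limit scores 0.0, so the quality score above is bounded by the success rate alone in that case.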
@@ -1,371 +0,0 @@
# -*- coding: utf-8 -*-
"""
Intelligent agent core - integrates the LLM and intelligent decision-making
Efficiently implements the agent's intelligent processing capabilities
"""

import logging
import asyncio
import json
from typing import Dict, Any, List, Optional, Tuple
from datetime import datetime
from dataclasses import dataclass
from enum import Enum

logger = logging.getLogger(__name__)

class ActionType(Enum):
    """Action type enum"""
    ALERT_RESPONSE = "alert_response"
    KNOWLEDGE_UPDATE = "knowledge_update"
    WORKORDER_CREATE = "workorder_create"
    SYSTEM_OPTIMIZE = "system_optimize"
    USER_NOTIFY = "user_notify"

class ConfidenceLevel(Enum):
    """Confidence levels"""
    HIGH = "high"      # high confidence (>0.8)
    MEDIUM = "medium"  # medium confidence (0.5-0.8)
    LOW = "low"        # low confidence (<0.5)

@dataclass
class AgentAction:
    """An action the agent can take"""
    action_type: ActionType
    description: str
    priority: int        # 1-5, 5 is highest
    confidence: float    # 0-1
    parameters: Dict[str, Any]
    estimated_time: int  # estimated execution time in seconds

@dataclass
class AlertContext:
    """Alert context"""
    alert_id: str
    alert_type: str
    severity: str
    description: str
    affected_systems: List[str]
    metrics: Dict[str, Any]

@dataclass
class KnowledgeContext:
    """Knowledge-base context"""
    question: str
    answer: str
    confidence: float
    source: str
    category: str

class IntelligentAgent:
    """Intelligent agent core"""

    def __init__(self, llm_client=None):
        self.llm_client = llm_client
        self.action_history = []
        self.learning_data = {}
        self.confidence_thresholds = {
            'high': 0.8,
            'medium': 0.5,
            'low': 0.3
        }

    async def process_alert(self, alert_context: AlertContext) -> List[AgentAction]:
        """Process an alert and generate intelligent actions."""
        try:
            # Build the alert-analysis prompt
            prompt = self._build_alert_analysis_prompt(alert_context)

            # Ask the LLM for an analysis
            analysis = await self._call_llm(prompt)

            # Parse actions from the analysis
            actions = self._parse_alert_actions(analysis, alert_context)

            # Sort by priority, highest first
            actions.sort(key=lambda x: x.priority, reverse=True)

            return actions

        except Exception as e:
            logger.error(f"Failed to process alert: {e}")
            return [self._create_default_alert_action(alert_context)]

    async def process_knowledge_confidence(self, knowledge_context: KnowledgeContext) -> List[AgentAction]:
        """Handle low-confidence knowledge-base entries."""
        try:
            if knowledge_context.confidence >= self.confidence_thresholds['high']:
                return []  # high confidence, nothing to do

            # Build the knowledge-enhancement prompt
            prompt = self._build_knowledge_enhancement_prompt(knowledge_context)

            # Ask the LLM to enhance the knowledge
            enhancement = await self._call_llm(prompt)

            # Generate enhancement actions
            actions = self._parse_knowledge_actions(enhancement, knowledge_context)

            return actions

        except Exception as e:
            logger.error(f"Failed to process knowledge confidence: {e}")
            return [self._create_default_knowledge_action(knowledge_context)]

    async def execute_action(self, action: AgentAction) -> Dict[str, Any]:
        """Execute an agent action."""
        try:
            logger.info(f"Executing agent action: {action.action_type.value} - {action.description}")

            if action.action_type == ActionType.ALERT_RESPONSE:
                return await self._execute_alert_response(action)
            elif action.action_type == ActionType.KNOWLEDGE_UPDATE:
                return await self._execute_knowledge_update(action)
            elif action.action_type == ActionType.WORKORDER_CREATE:
                return await self._execute_workorder_create(action)
            elif action.action_type == ActionType.SYSTEM_OPTIMIZE:
                return await self._execute_system_optimize(action)
            elif action.action_type == ActionType.USER_NOTIFY:
                return await self._execute_user_notify(action)
            else:
                return {"success": False, "error": "Unknown action type"}

        except Exception as e:
            logger.error(f"Failed to execute action: {e}")
            return {"success": False, "error": str(e)}

    def _build_alert_analysis_prompt(self, alert_context: AlertContext) -> str:
        """Build the alert-analysis prompt."""
        return f"""
As the TSP intelligent assistant, analyze the following alert and suggest how to handle it:

Alert:
- Type: {alert_context.alert_type}
- Severity: {alert_context.severity}
- Description: {alert_context.description}
- Affected systems: {', '.join(alert_context.affected_systems)}
- Metrics: {json.dumps(alert_context.metrics, ensure_ascii=False)}

Respond with JSON in the following format:
{{
    "analysis": "root-cause analysis of the alert",
    "immediate_actions": [
        {{
            "action": "action to execute immediately",
            "priority": 5,
            "confidence": 0.9,
            "parameters": {{"key": "value"}}
        }}
    ],
    "follow_up_actions": [
        {{
            "action": "follow-up action",
            "priority": 3,
            "confidence": 0.7,
            "parameters": {{"key": "value"}}
        }}
    ],
    "prevention_measures": [
        "prevention measure 1",
        "prevention measure 2"
    ]
}}
"""

    def _build_knowledge_enhancement_prompt(self, knowledge_context: KnowledgeContext) -> str:
        """Build the knowledge-enhancement prompt."""
        return f"""
As the TSP intelligent assistant, analyze the following knowledge-base entry and suggest enhancements:

Knowledge entry:
- Question: {knowledge_context.question}
- Answer: {knowledge_context.answer}
- Confidence: {knowledge_context.confidence}
- Source: {knowledge_context.source}
- Category: {knowledge_context.category}

Respond with JSON in the following format:
{{
    "confidence_analysis": "analysis of the confidence level",
    "enhancement_suggestions": [
        "enhancement suggestion 1",
        "enhancement suggestion 2"
    ],
    "actions": [
        {{
            "action": "knowledge update action",
            "priority": 4,
            "confidence": 0.8,
            "parameters": {{"enhanced_answer": "the enhanced answer"}}
        }}
    ],
    "learning_opportunities": [
        "learning opportunity 1",
        "learning opportunity 2"
    ]
}}
"""

    async def _call_llm(self, prompt: str) -> Dict[str, Any]:
        """Call the LLM."""
        try:
            if self.llm_client:
                # Use the real LLM client
                response = await self.llm_client.generate(prompt)
                return json.loads(response)
            else:
                # Fall back to a simulated response
                return self._simulate_llm_response(prompt)
        except Exception as e:
            logger.error(f"LLM call failed: {e}")
            return self._simulate_llm_response(prompt)

    def _simulate_llm_response(self, prompt: str) -> Dict[str, Any]:
        """Simulated LLM response in the style of the Qwen model."""
        if "Alert:" in prompt:
            return {
                "analysis": "[Qwen analysis] System performance is degrading and needs immediate attention. Historical data suggests this may be caused by insufficient resources or a configuration problem.",
                "immediate_actions": [
                    {
                        "action": "Restart the affected services",
                        "priority": 5,
                        "confidence": 0.9,
                        "parameters": {"service": "main_service", "reason": "service response timeout"}
                    }
                ],
                "follow_up_actions": [
                    {
                        "action": "Check the system logs",
                        "priority": 3,
                        "confidence": 0.7,
                        "parameters": {"log_level": "error", "time_range": "last_hour"}
                    }
                ],
                "prevention_measures": [
                    "Increase monitoring frequency to catch issues earlier",
                    "Optimize system configuration to improve performance",
                    "Establish an alerting mechanism to reduce failure impact"
                ]
            }
        else:
            return {
                "confidence_analysis": "[Qwen analysis] The current answer has low confidence and needs more context. Combine user feedback and historical work-order data to improve answer quality.",
                "enhancement_suggestions": [
                    "Add more real-world cases and step-by-step instructions",
                    "Provide a detailed troubleshooting guide",
                    "Explain with reference to the system architecture diagram"
                ],
                "actions": [
                    {
                        "action": "Update the knowledge-base entry",
                        "priority": 4,
                        "confidence": 0.8,
                        "parameters": {"enhanced_answer": "enhanced answer based on Qwen model analysis"}
                    }
                ],
                "learning_opportunities": [
                    "Collect user feedback and continuously refine answers",
                    "Analyze similar questions to build knowledge links",
                    "Use the Qwen model's learning ability to raise knowledge quality"
                ]
            }

    def _parse_alert_actions(self, analysis: Dict[str, Any], alert_context: AlertContext) -> List[AgentAction]:
        """Parse alert actions from the LLM analysis."""
        actions = []

        # Immediate actions
        for action_data in analysis.get("immediate_actions", []):
            action = AgentAction(
                action_type=ActionType.ALERT_RESPONSE,
                description=action_data["action"],
                priority=action_data["priority"],
                confidence=action_data["confidence"],
                parameters=action_data["parameters"],
                estimated_time=30
            )
            actions.append(action)

        # Follow-up actions
        for action_data in analysis.get("follow_up_actions", []):
            action = AgentAction(
                action_type=ActionType.SYSTEM_OPTIMIZE,
                description=action_data["action"],
                priority=action_data["priority"],
                confidence=action_data["confidence"],
                parameters=action_data["parameters"],
                estimated_time=300
            )
            actions.append(action)

        return actions

    def _parse_knowledge_actions(self, enhancement: Dict[str, Any], knowledge_context: KnowledgeContext) -> List[AgentAction]:
        """Parse knowledge-base actions from the LLM enhancement."""
        actions = []

        for action_data in enhancement.get("actions", []):
            action = AgentAction(
                action_type=ActionType.KNOWLEDGE_UPDATE,
                description=action_data["action"],
                priority=action_data["priority"],
                confidence=action_data["confidence"],
                parameters=action_data["parameters"],
                estimated_time=60
            )
            actions.append(action)

        return actions

    def _create_default_alert_action(self, alert_context: AlertContext) -> AgentAction:
        """Create the default alert action."""
        return AgentAction(
            action_type=ActionType.USER_NOTIFY,
            description=f"Notify an administrator to handle the {alert_context.alert_type} alert",
            priority=3,
            confidence=0.5,
            parameters={"alert_id": alert_context.alert_id},
            estimated_time=10
        )

    def _create_default_knowledge_action(self, knowledge_context: KnowledgeContext) -> AgentAction:
        """Create the default knowledge-base action."""
        return AgentAction(
            action_type=ActionType.KNOWLEDGE_UPDATE,
            description="Flag the low-confidence knowledge entry for manual review",
            priority=2,
            confidence=0.3,
            parameters={"question": knowledge_context.question},
            estimated_time=5
        )

    async def _execute_alert_response(self, action: AgentAction) -> Dict[str, Any]:
        """Execute an alert-response action."""
        # Concrete alert-response logic goes here
        logger.info(f"Executing alert response: {action.description}")
        return {"success": True, "message": "Alert response executed"}

    async def _execute_knowledge_update(self, action: AgentAction) -> Dict[str, Any]:
        """Execute a knowledge-base update action."""
        # Concrete knowledge-update logic goes here
        logger.info(f"Executing knowledge update: {action.description}")
        return {"success": True, "message": "Knowledge base updated"}

    async def _execute_workorder_create(self, action: AgentAction) -> Dict[str, Any]:
        """Execute a work-order creation action."""
        # Concrete work-order creation logic goes here
        logger.info(f"Executing work-order creation: {action.description}")
        return {"success": True, "message": "Work order created"}

    async def _execute_system_optimize(self, action: AgentAction) -> Dict[str, Any]:
        """Execute a system-optimization action."""
        # Concrete system-optimization logic goes here
        logger.info(f"Executing system optimization: {action.description}")
        return {"success": True, "message": "System optimization executed"}

    async def _execute_user_notify(self, action: AgentAction) -> Dict[str, Any]:
        """Execute a user-notification action."""
        # Concrete user-notification logic goes here
        logger.info(f"Executing user notification: {action.description}")
        return {"success": True, "message": "User notified"}
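The `_call_llm` method in the deleted file above assumes the whole model response is valid JSON and falls back to a simulated response on any failure. In practice models often wrap JSON in prose or code fences, so a more tolerant parse is a regex pre-pass; a minimal sketch (the helper name is illustrative, not part of the original code):

```python
import json
import re
from typing import Any, Dict, Optional

def extract_json_object(text: str) -> Optional[Dict[str, Any]]:
    """Return the first parseable {...} block found in `text`, or None.

    Greedy DOTALL match mirrors the pattern the TaskPlanner file uses
    (re.search(r'\{.*\}', ..., re.DOTALL)) before json.loads.
    """
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group())
    except json.JSONDecodeError:
        return None

reply = 'Sure, here is the analysis:\n{"analysis": "ok", "immediate_actions": []}'
print(extract_json_object(reply))  # {'analysis': 'ok', 'immediate_actions': []}
print(extract_json_object("no json here"))  # None
```

Note the greedy `.*` grabs from the first `{` to the last `}`, which is fine for a single object but fails if the reply contains two separate objects; a stricter parser would balance braces.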
@@ -1,409 +0,0 @@
# -*- coding: utf-8 -*-
"""
Task planner
Responsible for building execution plans and decomposing tasks
"""

import logging
import re
from typing import Dict, List, Any, Optional
from datetime import datetime
import json

from ..core.llm_client import QwenClient

logger = logging.getLogger(__name__)

class TaskPlanner:
    """Task planner"""

    def __init__(self):
        self.llm_client = QwenClient()
        self.planning_strategies = {
            "sequential": self._create_sequential_plan,
            "parallel": self._create_parallel_plan,
            "conditional": self._create_conditional_plan,
            "iterative": self._create_iterative_plan
        }

    async def create_plan(
        self,
        goal: Dict[str, Any],
        available_tools: List[Dict[str, Any]],
        constraints: Dict[str, Any]
    ) -> List[Dict[str, Any]]:
        """Create an execution plan."""
        try:
            # 1. Analyze goal complexity
            complexity = await self._analyze_goal_complexity(goal)

            # 2. Select a planning strategy
            strategy = self._select_planning_strategy(complexity, goal)

            # 3. Generate the plan
            plan = await self.planning_strategies[strategy](goal, available_tools, constraints)

            # 4. Optimize the plan
            optimized_plan = await self._optimize_plan(plan, constraints)

            logger.info(f"Plan created successfully with {len(optimized_plan)} tasks")
            return optimized_plan

        except Exception as e:
            logger.error(f"Failed to create plan: {e}")
            return []

    async def _analyze_goal_complexity(self, goal: Dict[str, Any]) -> Dict[str, Any]:
        """Analyze goal complexity."""
        prompt = f"""
Analyze the complexity of the following goal:

Goal: {goal.get('description', '')}
Type: {goal.get('type', '')}
Context: {goal.get('context', {})}

Score the complexity (1-10) along these dimensions:
1. Number of tasks
2. Dependency complexity
3. Number of required tools
4. Time requirements
5. Resource requirements

Return the analysis as JSON.
"""

        messages = [
            {"role": "system", "content": "You are a task-planning expert, skilled at analyzing task complexity."},
            {"role": "user", "content": prompt}
        ]

        result = self.llm_client.chat_completion(messages, temperature=0.3)

        if "error" in result:
            return {"complexity_score": 5, "strategy": "sequential"}

        try:
            response_content = result["choices"][0]["message"]["content"]
            json_match = re.search(r'\{.*\}', response_content, re.DOTALL)
            if json_match:
                analysis = json.loads(json_match.group())
                return analysis
            else:
                return {"complexity_score": 5, "strategy": "sequential"}
        except Exception as e:
            logger.error(f"Failed to parse complexity analysis: {e}")
            return {"complexity_score": 5, "strategy": "sequential"}

    def _select_planning_strategy(self, complexity: Dict[str, Any], goal: Dict[str, Any]) -> str:
        """Select a planning strategy."""
        complexity_score = complexity.get("complexity_score", 5)
        goal_type = goal.get("type", "general")

        if complexity_score <= 3:
            return "sequential"
        elif complexity_score <= 6:
            if goal_type in ["analysis", "monitoring"]:
                return "parallel"
            else:
                return "conditional"
        else:
            return "iterative"

    async def _create_sequential_plan(
        self,
        goal: Dict[str, Any],
        available_tools: List[Dict[str, Any]],
        constraints: Dict[str, Any]
    ) -> List[Dict[str, Any]]:
        """Create a sequential execution plan."""
        prompt = f"""
Create a sequential execution plan for the following goal:

Goal: {goal.get('description', '')}
Available tools: {[tool.get('name', '') for tool in available_tools]}

Break the goal into concrete steps; each step should include:
1. Task description
2. Required tool
3. Input parameters
4. Expected output
5. Success criteria

Return the plan as a JSON array.
"""

        messages = [
            {"role": "system", "content": "You are a task-planning expert, skilled at creating sequential execution plans."},
            {"role": "user", "content": prompt}
        ]

        result = self.llm_client.chat_completion(messages, temperature=0.3)

        if "error" in result:
            return self._create_fallback_plan(goal)

        try:
            response_content = result["choices"][0]["message"]["content"]
            json_match = re.search(r'\[.*\]', response_content, re.DOTALL)
            if json_match:
                plan = json.loads(json_match.group())
                return self._format_plan_tasks(plan)
            else:
                return self._create_fallback_plan(goal)
        except Exception as e:
            logger.error(f"Failed to parse sequential plan: {e}")
            return self._create_fallback_plan(goal)

    async def _create_parallel_plan(
        self,
        goal: Dict[str, Any],
        available_tools: List[Dict[str, Any]],
        constraints: Dict[str, Any]
    ) -> List[Dict[str, Any]]:
        """Create a parallel execution plan."""
        # Start from the base sequential tasks
        base_tasks = await self._create_sequential_plan(goal, available_tools, constraints)

        # Analyze inter-task dependencies and group
        parallel_groups = self._group_parallel_tasks(base_tasks)

        return parallel_groups

    async def _create_conditional_plan(
        self,
        goal: Dict[str, Any],
        available_tools: List[Dict[str, Any]],
        constraints: Dict[str, Any]
    ) -> List[Dict[str, Any]]:
        """Create a conditional execution plan."""
        prompt = f"""
Create a conditional execution plan for the following goal:

Goal: {goal.get('description', '')}
Context: {goal.get('context', {})}

The plan should include:
1. Initial condition checks
2. Branching logic
3. Concrete tasks for each branch
4. Merge conditions

Return the plan as JSON.
"""

        messages = [
            {"role": "system", "content": "You are a task-planning expert, skilled at creating conditional execution plans."},
            {"role": "user", "content": prompt}
        ]

        result = self.llm_client.chat_completion(messages, temperature=0.3)

        if "error" in result:
            return await self._create_sequential_plan(goal, available_tools, constraints)

        try:
            response_content = result["choices"][0]["message"]["content"]
            json_match = re.search(r'\{.*\}', response_content, re.DOTALL)
            if json_match:
                plan = json.loads(json_match.group())
                return self._format_conditional_plan(plan)
            else:
                return await self._create_sequential_plan(goal, available_tools, constraints)
        except Exception as e:
            logger.error(f"Failed to parse conditional plan: {e}")
            return await self._create_sequential_plan(goal, available_tools, constraints)

    async def _create_iterative_plan(
        self,
        goal: Dict[str, Any],
        available_tools: List[Dict[str, Any]],
        constraints: Dict[str, Any]
    ) -> List[Dict[str, Any]]:
        """Create an iterative execution plan."""
        # Create the base plan
        base_plan = await self._create_sequential_plan(goal, available_tools, constraints)

        # Wrap it in an iteration-control task
        iteration_control = {
            "id": "iteration_control",
            "type": "control",
            "description": "Iteration control",
            "max_iterations": constraints.get("max_iterations", 10),
            "convergence_criteria": goal.get("success_criteria", {}),
            "tasks": base_plan
        }

        return [iteration_control]

    def _group_parallel_tasks(self, tasks: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        """Group tasks into batches that can run in parallel."""
        groups = []
        current_group = []

        for task in tasks:
            # Simple grouping rule: tasks of the same type can run in parallel
            if not current_group or current_group[0].get("type") == task.get("type"):
                current_group.append(task)
            else:
                if current_group:
                    groups.append({
                        "type": "parallel_group",
                        "tasks": current_group,
                        "execution_mode": "parallel"
                    })
                current_group = [task]

        if current_group:
            groups.append({
                "type": "parallel_group",
                "tasks": current_group,
                "execution_mode": "parallel"
            })

        return groups

    def _format_plan_tasks(self, raw_plan: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        """Normalize raw plan tasks into the standard task schema."""
        formatted_tasks = []

        for i, task in enumerate(raw_plan):
            formatted_task = {
                "id": f"task_{i+1}",
                "type": task.get("type", "action"),
                "description": task.get("description", ""),
                "tool": task.get("tool", ""),
                "parameters": task.get("parameters", {}),
                "expected_output": task.get("expected_output", ""),
                "success_criteria": task.get("success_criteria", {}),
                "dependencies": task.get("dependencies", []),
                "priority": task.get("priority", 0.5),
                "timeout": task.get("timeout", 60)
            }
            formatted_tasks.append(formatted_task)

        return formatted_tasks

    def _format_conditional_plan(self, raw_plan: Dict[str, Any]) -> List[Dict[str, Any]]:
        """Format a conditional plan."""
        formatted_tasks = []

        # Add the condition-check task
        condition_task = {
            "id": "condition_check",
            "type": "condition",
            "description": "Condition check",
            "condition": raw_plan.get("condition", ""),
            "branches": raw_plan.get("branches", {})
        }
        formatted_tasks.append(condition_task)

        # Add one task per branch
        for branch_name, branch_tasks in raw_plan.get("branches", {}).items():
            branch_task = {
                "id": f"branch_{branch_name}",
                "type": "branch",
                "description": f"Execute branch: {branch_name}",
                "condition": branch_name,
                "tasks": self._format_plan_tasks(branch_tasks)
            }
            formatted_tasks.append(branch_task)

        return formatted_tasks

    async def _optimize_plan(self, plan: List[Dict[str, Any]], constraints: Dict[str, Any]) -> List[Dict[str, Any]]:
        """Optimize the plan against constraints."""
        optimized_plan = []

        for task in plan:
            # Enforce the time constraint
            if task.get("timeout", 60) > constraints.get("timeout", 300):
                task["timeout"] = constraints.get("timeout", 300)

            # Enforce the resource constraint
            if task.get("resource_usage", 0) > constraints.get("memory_limit", 1000):
                # Decompose oversized tasks
                subtasks = await self._decompose_task(task)
                optimized_plan.extend(subtasks)
            else:
                optimized_plan.append(task)

        return optimized_plan

    async def _decompose_task(self, task: Dict[str, Any]) -> List[Dict[str, Any]]:
        """Decompose a large task into smaller subtasks."""
        prompt = f"""
Decompose the following large task into smaller subtasks:

Task: {task.get('description', '')}
Type: {task.get('type', '')}
Parameters: {task.get('parameters', {})}

Return a list of subtasks; each subtask should be independent and executable.
"""

        messages = [
            {"role": "system", "content": "You are a task-decomposition expert, skilled at breaking complex tasks into simple ones."},
            {"role": "user", "content": prompt}
        ]

        result = self.llm_client.chat_completion(messages, temperature=0.3)

        if "error" in result:
            return [task]  # if decomposition fails, keep the original task

        try:
            response_content = result["choices"][0]["message"]["content"]
            json_match = re.search(r'\[.*\]', response_content, re.DOTALL)
            if json_match:
                subtasks = json.loads(json_match.group())
                return self._format_plan_tasks(subtasks)
            else:
                return [task]
        except Exception as e:
            logger.error(f"Task decomposition failed: {e}")
            return [task]

    def _create_fallback_plan(self, goal: Dict[str, Any]) -> List[Dict[str, Any]]:
        """Create a fallback plan."""
        return [{
            "id": "fallback_task",
            "type": "action",
            "description": goal.get("description", "Execute the goal"),
            "tool": "general_response",
            "parameters": {"goal": goal},
            "expected_output": "Goal completed",
            "success_criteria": {"completion": True},
            "priority": 0.5,
            "timeout": 60
        }]

    def validate_plan(self, plan: List[Dict[str, Any]]) -> Dict[str, Any]:
        """Validate the plan."""
        validation_result = {
            "valid": True,
            "issues": [],
            "warnings": []
        }

        for task in plan:
            # Check required fields
            if not task.get("id"):
                validation_result["issues"].append("Task is missing an ID")
                validation_result["valid"] = False

            if not task.get("description"):
                validation_result["warnings"].append(f"Task {task.get('id', 'unknown')} is missing a description")

            # Check dependencies
            dependencies = task.get("dependencies", [])
            task_ids = [t.get("id") for t in plan]
            for dep in dependencies:
                if dep not in task_ids:
                    validation_result["issues"].append(f"Dependency {dep} of task {task.get('id')} does not exist")
                    validation_result["valid"] = False

        return validation_result
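The strategy thresholds in `_select_planning_strategy` (≤3 sequential, ≤6 parallel/conditional by goal type, else iterative) can be exercised standalone; this sketch reproduces the same branch logic with illustrative scores and goal types:

```python
def select_strategy(complexity_score: int, goal_type: str = "general") -> str:
    """Mirror of the TaskPlanner's _select_planning_strategy branching."""
    if complexity_score <= 3:
        return "sequential"
    if complexity_score <= 6:
        # Analysis/monitoring goals can fan out; others need branch logic
        return "parallel" if goal_type in ("analysis", "monitoring") else "conditional"
    return "iterative"

print(select_strategy(2))              # sequential
print(select_strategy(5, "analysis"))  # parallel
print(select_strategy(5))              # conditional
print(select_strategy(9))              # iterative
```

Since the score comes from an LLM that defaults to 5 on parse failure, the conditional branch is effectively the fallback strategy for non-analysis goals.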
345 src/agent/react_agent.py Normal file
@@ -0,0 +1,345 @@
|
|||||||
|
# -*- coding: utf-8 -*-
|
||||||
|
"""
|
||||||
|
ReAct Agent - 基于 ReAct 模式的智能代理
|
||||||
|
用单次 LLM 调用 + 工具循环替代原有的多步流水线
|
||||||
|
"""
|
||||||
|
|
||||||
|
import logging
|
||||||
|
import json
|
||||||
|
import re
|
||||||
|
from typing import Dict, Any, List, Optional
|
||||||
|
from datetime import datetime
|
||||||
|
|
||||||
|
from src.agent.llm_client import LLMManager
|
||||||
|
from src.config.unified_config import get_config
|
||||||
|
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
# ── 工具定义(供 LLM 理解可用能力) ──────────────────────────
|
||||||
|
|
||||||
|
TOOL_DEFINITIONS = [
|
||||||
|
{
|
||||||
|
"name": "search_knowledge",
|
||||||
|
"description": "搜索知识库,根据关键词查找相关的问题和答案",
|
||||||
|
"parameters": {
|
||||||
|
"query": {"type": "string", "description": "搜索关键词", "required": True},
|
||||||
|
"top_k": {"type": "integer", "description": "返回结果数量,默认3", "required": False}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "add_knowledge",
|
||||||
|
"description": "向知识库添加新的问答条目",
|
||||||
|
"parameters": {
|
||||||
|
"question": {"type": "string", "description": "问题", "required": True},
|
||||||
|
"answer": {"type": "string", "description": "答案", "required": True},
|
||||||
|
"category": {"type": "string", "description": "分类", "required": False}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "query_vehicle",
|
||||||
|
"description": "查询车辆信息,支持按VIN码或车牌号查询",
|
||||||
|
"parameters": {
|
||||||
|
"vin": {"type": "string", "description": "VIN码", "required": False},
|
||||||
|
"plate_number": {"type": "string", "description": "车牌号", "required": False}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "get_analytics",
|
||||||
|
"description": "获取系统数据分析报告,如每日统计、分类统计等",
|
||||||
|
"parameters": {
|
||||||
|
"report_type": {
|
||||||
|
"type": "string",
|
||||||
|
"description": "报告类型: daily_analytics / summary / category_performance",
|
||||||
|
"required": True
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "send_feishu_message",
|
||||||
|
"description": "通过飞书发送消息通知",
|
||||||
|
"parameters": {
|
||||||
|
"message": {"type": "string", "description": "消息内容", "required": True},
|
||||||
|
"chat_id": {"type": "string", "description": "飞书群聊ID(可选)", "required": False}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
]
|
||||||
|
|
||||||
|
|
||||||
|
def _build_tools_prompt() -> str:
|
||||||
|
"""构建工具描述文本供 system prompt 使用"""
|
||||||
|
lines = []
|
||||||
|
for t in TOOL_DEFINITIONS:
|
||||||
|
params_desc = []
|
||||||
|
for pname, pinfo in t["parameters"].items():
|
||||||
|
req = "必填" if pinfo.get("required") else "可选"
|
||||||
|
params_desc.append(f" - {pname} ({pinfo['type']}, {req}): {pinfo['description']}")
|
||||||
|
lines.append(f"- {t['name']}: {t['description']}\n 参数:\n" + "\n".join(params_desc))
|
||||||
|
return "\n".join(lines)
|
||||||
|
|
||||||
|
|
||||||
|
SYSTEM_PROMPT = f"""你是 TSP 智能客服助手,帮助用户解决售后问题、查询知识库、管理客诉信息。
|
||||||
|
|
||||||
|
你可以使用以下工具来完成任务:
|
||||||
|
{_build_tools_prompt()}
|
||||||
|
|
||||||
|
## 回复规则
|
||||||
|
1. 如果你需要使用工具,请严格按以下 JSON 格式回复(不要包含其他内容):
|
||||||
|
```json
|
||||||
|
{{"tool": "工具名", "parameters": {{"参数名": "参数值"}}}}
|
||||||
|
```
|
||||||
|
|
||||||
|
2. 如果你不需要使用工具,可以直接用自然语言回复用户。
|
||||||
|
3. 每次只调用一个工具。
|
||||||
|
4. 根据工具返回的结果,综合生成最终回复。
|
||||||
|
5. 回复要简洁专业,使用中文。
|
||||||
|
"""
|
||||||
|
|
||||||
|
|
||||||
|
class ReactAgent:
|
||||||
|
"""基于 ReAct 模式的 Agent"""
|
||||||
|
|
||||||
|
MAX_TOOL_ROUNDS = 5 # 最多工具调用轮次,防止死循环
|
||||||
|
|
||||||
|
def __init__(self):
|
||||||
|
config = get_config()
|
||||||
|
self.llm = LLMManager(config.llm)
|
||||||
|
self._tool_handlers = self._register_tool_handlers()
|
||||||
|
self.execution_history: List[Dict[str, Any]] = []
|
||||||
|
logger.info("ReactAgent 初始化完成")
|
||||||
|
|
||||||
|
    # ── Tool handler registration ──────────────────────────
    def _register_tool_handlers(self) -> Dict[str, Any]:
        return {
            "search_knowledge": self._tool_search_knowledge,
            "add_knowledge": self._tool_add_knowledge,
            "query_vehicle": self._tool_query_vehicle,
            "get_analytics": self._tool_get_analytics,
            "send_feishu_message": self._tool_send_feishu_message,
        }

    # ── Main entry point ────────────────────────────────────
    async def chat(
        self,
        message: str,
        user_id: str = "anonymous",
        conversation_history: Optional[List[Dict[str, str]]] = None,
    ) -> Dict[str, Any]:
        """Process a user message and return the final reply."""
        messages = [{"role": "system", "content": SYSTEM_PROMPT}]

        # Include recent conversation history (last 10 turns)
        if conversation_history:
            messages.extend(conversation_history[-10:])

        messages.append({"role": "user", "content": message})

        tool_calls_log = []

        for round_idx in range(self.MAX_TOOL_ROUNDS):
            # Call the LLM
            try:
                response_text = await self.llm.chat(messages, temperature=0.3, max_tokens=2000)
            except Exception as e:
                logger.error(f"LLM 调用失败: {e}")
                return self._error_response(str(e))

            # Try to parse a tool call from the reply
            tool_call = self._parse_tool_call(response_text)

            if tool_call is None:
                # No tool call → this is the final reply
                self._record_execution(message, user_id, tool_calls_log, response_text)
                return {
                    "success": True,
                    "response": response_text,
                    "tool_calls": tool_calls_log,
                    "rounds": round_idx + 1,
                }

            # Execute the tool
            tool_name = tool_call["tool"]
            tool_params = tool_call.get("parameters", {})
            logger.info(f"[Round {round_idx+1}] 调用工具: {tool_name}, 参数: {tool_params}")

            tool_result = await self._execute_tool(tool_name, tool_params)
            tool_calls_log.append({
                "tool": tool_name,
                "parameters": tool_params,
                "result": tool_result,
                "round": round_idx + 1,
            })

            # Feed the tool call and its result back into the conversation context
            messages.append({"role": "assistant", "content": response_text})
            messages.append({
                "role": "user",
                "content": f"工具 `{tool_name}` 返回结果:\n```json\n{json.dumps(tool_result, ensure_ascii=False, default=str)}\n```\n请根据以上结果回复用户。"
            })

        # Maximum number of tool rounds exceeded
        self._record_execution(message, user_id, tool_calls_log, "[达到最大工具调用轮次]")
        return {
            "success": True,
            "response": "抱歉,处理过程较复杂,请稍后重试或换个方式描述您的问题。",
            "tool_calls": tool_calls_log,
            "rounds": self.MAX_TOOL_ROUNDS,
        }

    # ── Tool-call parsing ───────────────────────────────────
    def _parse_tool_call(self, text: str) -> Optional[Dict[str, Any]]:
        """Parse a tool-call JSON object from the LLM reply."""
        if not text:
            return None

        # Try to extract from a ```json ... ``` code block
        code_block = re.search(r'```json\s*(\{.*?\})\s*```', text, re.DOTALL)
        if code_block:
            try:
                data = json.loads(code_block.group(1))
                if "tool" in data:
                    return data
            except json.JSONDecodeError:
                pass

        # Try to parse the whole text as JSON
        try:
            data = json.loads(text.strip())
            if isinstance(data, dict) and "tool" in data:
                return data
        except json.JSONDecodeError:
            pass

        # Try to find the first JSON object containing "tool" in the text
        json_match = re.search(r'\{[^{}]*"tool"\s*:\s*"[^"]+?"[^{}]*\}', text, re.DOTALL)
        if json_match:
            try:
                data = json.loads(json_match.group())
                if "tool" in data:
                    return data
            except json.JSONDecodeError:
                pass

        return None

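The three-stage fallback above (fenced block → whole reply → first embedded object) can be tried in isolation. The sketch below is a standalone copy of that parsing chain; the sample replies are hypothetical, and the backtick fence marker is built with `chr(96)` only so this listing itself stays well-formed.

```python
import json
import re

BT = chr(96) * 3  # the literal triple-backtick fence marker

def parse_tool_call(text):
    """Three-stage fallback parse, mirroring _parse_tool_call."""
    # 1) Look for a fenced json code block.
    m = re.search(BT + r'json\s*(\{.*?\})\s*' + BT, text, re.DOTALL)
    if m:
        try:
            data = json.loads(m.group(1))
            if "tool" in data:
                return data
        except json.JSONDecodeError:
            pass
    # 2) Try the whole reply as a JSON object.
    try:
        data = json.loads(text.strip())
        if isinstance(data, dict) and "tool" in data:
            return data
    except json.JSONDecodeError:
        pass
    # 3) Fall back to the first flat JSON object mentioning "tool".
    m = re.search(r'\{[^{}]*"tool"\s*:\s*"[^"]+?"[^{}]*\}', text, re.DOTALL)
    if m:
        try:
            return json.loads(m.group())
        except json.JSONDecodeError:
            pass
    return None

fenced_reply = BT + 'json\n{"tool": "search_knowledge", "parameters": {"query": "保养"}}\n' + BT
plain_reply = "已为您查询完毕,无需调用工具。"
```

A fenced reply resolves to the parsed dict; a plain natural-language reply falls through all three stages and returns `None`, which the chat loop treats as the final answer.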
    # ── Tool execution ──────────────────────────────────────
    async def _execute_tool(self, tool_name: str, params: Dict[str, Any]) -> Dict[str, Any]:
        """Execute the named tool."""
        handler = self._tool_handlers.get(tool_name)
        if not handler:
            return {"error": f"未知工具: {tool_name}"}
        try:
            return await handler(**params)
        except Exception as e:
            logger.error(f"工具 {tool_name} 执行失败: {e}")
            return {"error": str(e)}

    # ── Concrete tool implementations ───────────────────────
    async def _tool_search_knowledge(self, query: str, top_k: int = 3, **kw) -> Dict[str, Any]:
        """Search the knowledge base."""
        try:
            from src.knowledge_base.knowledge_manager import KnowledgeManager
            km = KnowledgeManager()
            results = km.search_knowledge(query, top_k)
            return {"results": results, "count": len(results)}
        except Exception as e:
            return {"error": str(e)}

    async def _tool_add_knowledge(self, question: str, answer: str, category: str = "通用", **kw) -> Dict[str, Any]:
        """Add a knowledge-base entry."""
        try:
            from src.knowledge_base.knowledge_manager import KnowledgeManager
            km = KnowledgeManager()
            success = km.add_knowledge_entry(question=question, answer=answer, category=category)
            return {"success": success}
        except Exception as e:
            return {"error": str(e)}

    async def _tool_query_vehicle(self, vin: Optional[str] = None, plate_number: Optional[str] = None, **kw) -> Dict[str, Any]:
        """Query vehicle information."""
        try:
            from src.vehicle.vehicle_data_manager import VehicleDataManager
            vm = VehicleDataManager()
            if vin:
                result = vm.get_latest_vehicle_data_by_vin(vin)
                return {"vehicle_data": result} if result else {"error": "未找到该VIN的车辆数据"}
            elif plate_number:
                return {"error": "暂不支持按车牌号查询,请使用VIN码"}
            else:
                return {"error": "请提供 VIN 码"}
        except Exception as e:
            return {"error": str(e)}

    async def _tool_get_analytics(self, report_type: str = "summary", **kw) -> Dict[str, Any]:
        """Fetch an analytics report."""
        try:
            from src.analytics.analytics_manager import AnalyticsManager
            am = AnalyticsManager()
            if report_type == "daily_analytics":
                return am.generate_daily_analytics()
            elif report_type == "summary":
                return am.get_analytics_summary()
            elif report_type == "category_performance":
                return am.get_category_performance()
            else:
                return {"error": f"不支持的报告类型: {report_type}"}
        except Exception as e:
            return {"error": str(e)}

    async def _tool_send_feishu_message(self, message: str, chat_id: Optional[str] = None, **kw) -> Dict[str, Any]:
        """Send a Feishu message."""
        try:
            from src.integrations.feishu_service import FeishuService
            fs = FeishuService()
            if not chat_id:
                return {"error": "请提供飞书群聊 chat_id"}
            success = fs.send_message(receive_id=chat_id, content=message, receive_id_type="chat_id")
            return {"success": success}
        except Exception as e:
            return {"error": str(e)}

    # ── Helper methods ──────────────────────────────────────
    def _record_execution(self, message: str, user_id: str, tool_calls: list, response: str):
        """Record execution history."""
        record = {
            "timestamp": datetime.now().isoformat(),
            "user_id": user_id,
            "message": message,
            "tool_calls": tool_calls,
            "response": response[:500],
        }
        self.execution_history.append(record)
        if len(self.execution_history) > 500:
            self.execution_history = self.execution_history[-500:]

    def _error_response(self, error_msg: str) -> Dict[str, Any]:
        return {
            "success": False,
            "response": "抱歉,系统处理出现问题,请稍后重试。",
            "error": error_msg,
            "tool_calls": [],
            "rounds": 0,
        }

    def get_tool_definitions(self) -> List[Dict[str, Any]]:
        """Return the tool definition list (for API display)."""
        return TOOL_DEFINITIONS

    def get_execution_history(self, limit: int = 50) -> List[Dict[str, Any]]:
        """Return recent execution history."""
        return self.execution_history[-limit:]

    def get_status(self) -> Dict[str, Any]:
        """Return the agent status."""
        return {
            "status": "active",
            "available_tools": [t["name"] for t in TOOL_DEFINITIONS],
            "tool_count": len(TOOL_DEFINITIONS),
            "history_count": len(self.execution_history),
            "max_tool_rounds": self.MAX_TOOL_ROUNDS,
        }
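The bounded tool-calling loop in `ReactAgent.chat` can be sketched end to end without the real LLM or services. In the standalone sketch below, `fake_llm` and `fake_tool` are stand-ins (not part of the codebase): the stub first requests a tool as JSON, then answers in natural language, so the loop terminates after two rounds.

```python
import asyncio
import json

MAX_TOOL_ROUNDS = 5  # same bound as ReactAgent.MAX_TOOL_ROUNDS

async def fake_llm(messages):
    # Stub: request a tool on the first turn, answer plainly afterwards.
    if not any(m["role"] == "assistant" for m in messages):
        return '{"tool": "search_knowledge", "parameters": {"query": "保养"}}'
    return "已为您找到保养相关信息。"

async def fake_tool(**params):
    # Stub tool result standing in for the knowledge-base search.
    return {"results": ["每 1 万公里保养一次"], "count": 1}

def parse_tool_call(text):
    # Simplified: only the whole-reply-as-JSON stage of _parse_tool_call.
    try:
        data = json.loads(text.strip())
        return data if isinstance(data, dict) and "tool" in data else None
    except json.JSONDecodeError:
        return None

async def chat(message):
    messages = [{"role": "user", "content": message}]
    tool_calls = []
    for round_idx in range(MAX_TOOL_ROUNDS):
        reply = await fake_llm(messages)
        call = parse_tool_call(reply)
        if call is None:
            # No tool call means this is the final reply
            return {"response": reply, "tool_calls": tool_calls, "rounds": round_idx + 1}
        result = await fake_tool(**call["parameters"])
        tool_calls.append({"tool": call["tool"], "result": result})
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": json.dumps(result, ensure_ascii=False)})
    return {"response": "达到最大轮次", "tool_calls": tool_calls, "rounds": MAX_TOOL_ROUNDS}

out = asyncio.run(chat("车辆多久保养一次?"))
```

One tool round plus one answer round: the loop exits with `rounds == 2` and a single logged tool call, mirroring the happy path of the real agent.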
@@ -1,479 +0,0 @@
# -*- coding: utf-8 -*-
"""
Reasoning engine
Responsible for logical reasoning and decision making
"""

import logging
from typing import Dict, List, Any, Optional
from datetime import datetime
import json

from ..core.llm_client import QwenClient

logger = logging.getLogger(__name__)


class ReasoningEngine:
    """Reasoning engine"""

    def __init__(self):
        self.llm_client = QwenClient()
        self.reasoning_patterns = {
            "causal": self._causal_reasoning,
            "deductive": self._deductive_reasoning,
            "inductive": self._inductive_reasoning,
            "abductive": self._abductive_reasoning,
            "analogical": self._analogical_reasoning
        }
        self.reasoning_history = []

    async def analyze_intent(
        self,
        message: str,
        context: Dict[str, Any],
        history: List[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """Analyze user intent."""
        try:
            prompt = f"""
请分析以下用户消息的意图:

用户消息: {message}
上下文: {json.dumps(context, ensure_ascii=False)}
历史记录: {json.dumps(history, ensure_ascii=False)}

请从以下维度分析:
1. 主要意图(问题咨询、工单创建、系统查询等)
2. 情感倾向(积极、消极、中性)
3. 紧急程度(高、中、低)
4. 所需工具类型
5. 预期响应类型
6. 关键信息提取

请以JSON格式返回分析结果。
"""

            messages = [
                {"role": "system", "content": "你是一个意图分析专家,擅长理解用户需求和意图。"},
                {"role": "user", "content": prompt}
            ]

            result = self.llm_client.chat_completion(messages, temperature=0.3)

            if "error" in result:
                return self._create_fallback_intent(message)

            response_content = result["choices"][0]["message"]["content"]
            import re
            json_match = re.search(r'\{.*\}', response_content, re.DOTALL)

            if json_match:
                intent_analysis = json.loads(json_match.group())
                intent_analysis["timestamp"] = datetime.now().isoformat()
                return intent_analysis
            else:
                return self._create_fallback_intent(message)

        except Exception as e:
            logger.error(f"意图分析失败: {e}")
            return self._create_fallback_intent(message)

    async def make_decision(
        self,
        situation: Dict[str, Any],
        options: List[Dict[str, Any]],
        criteria: Dict[str, Any]
    ) -> Dict[str, Any]:
        """Make a decision."""
        try:
            prompt = f"""
请根据以下情况制定决策:

当前情况: {json.dumps(situation, ensure_ascii=False)}
可选方案: {json.dumps(options, ensure_ascii=False)}
决策标准: {json.dumps(criteria, ensure_ascii=False)}

请分析每个方案的优缺点,并选择最佳方案。
返回格式:
{{
    "selected_option": "方案ID",
    "reasoning": "选择理由",
    "confidence": 0.8,
    "risks": ["风险1", "风险2"],
    "mitigation": "风险缓解措施"
}}
"""

            messages = [
                {"role": "system", "content": "你是一个决策制定专家,擅长分析情况并做出最优决策。"},
                {"role": "user", "content": prompt}
            ]

            result = self.llm_client.chat_completion(messages, temperature=0.3)

            if "error" in result:
                return self._create_fallback_decision(options)

            response_content = result["choices"][0]["message"]["content"]
            import re
            json_match = re.search(r'\{.*\}', response_content, re.DOTALL)

            if json_match:
                decision = json.loads(json_match.group())
                decision["timestamp"] = datetime.now().isoformat()
                return decision
            else:
                return self._create_fallback_decision(options)

        except Exception as e:
            logger.error(f"决策制定失败: {e}")
            return self._create_fallback_decision(options)

    async def reason_about_problem(
        self,
        problem: str,
        available_information: Dict[str, Any],
        reasoning_type: str = "causal"
    ) -> Dict[str, Any]:
        """Reason about a problem."""
        try:
            if reasoning_type not in self.reasoning_patterns:
                reasoning_type = "causal"

            reasoning_func = self.reasoning_patterns[reasoning_type]
            result = await reasoning_func(problem, available_information)

            # Record reasoning history
            self.reasoning_history.append({
                "timestamp": datetime.now().isoformat(),
                "problem": problem,
                "reasoning_type": reasoning_type,
                "result": result
            })

            return result

        except Exception as e:
            logger.error(f"问题推理失败: {e}")
            return {"error": str(e)}

    async def _causal_reasoning(self, problem: str, information: Dict[str, Any]) -> Dict[str, Any]:
        """Causal reasoning."""
        prompt = f"""
请使用因果推理分析以下问题:

问题: {problem}
可用信息: {json.dumps(information, ensure_ascii=False)}

请分析:
1. 问题的根本原因
2. 可能的因果关系链
3. 影响因素分析
4. 解决方案的预期效果

请以JSON格式返回分析结果。
"""

        messages = [
            {"role": "system", "content": "你是一个因果推理专家,擅长分析问题的因果关系。"},
            {"role": "user", "content": prompt}
        ]

        result = self.llm_client.chat_completion(messages, temperature=0.3)

        if "error" in result:
            return {"reasoning_type": "causal", "error": "推理失败"}

        response_content = result["choices"][0]["message"]["content"]
        import re
        json_match = re.search(r'\{.*\}', response_content, re.DOTALL)

        if json_match:
            return json.loads(json_match.group())
        else:
            return {"reasoning_type": "causal", "analysis": response_content}

    async def _deductive_reasoning(self, problem: str, information: Dict[str, Any]) -> Dict[str, Any]:
        """Deductive reasoning."""
        prompt = f"""
请使用演绎推理分析以下问题:

问题: {problem}
可用信息: {json.dumps(information, ensure_ascii=False)}

请分析:
1. 一般性规则或原理
2. 具体事实或条件
3. 逻辑推导过程
4. 必然结论

请以JSON格式返回分析结果。
"""

        messages = [
            {"role": "system", "content": "你是一个演绎推理专家,擅长从一般原理推导具体结论。"},
            {"role": "user", "content": prompt}
        ]

        result = self.llm_client.chat_completion(messages, temperature=0.3)

        if "error" in result:
            return {"reasoning_type": "deductive", "error": "推理失败"}

        response_content = result["choices"][0]["message"]["content"]
        import re
        json_match = re.search(r'\{.*\}', response_content, re.DOTALL)

        if json_match:
            return json.loads(json_match.group())
        else:
            return {"reasoning_type": "deductive", "analysis": response_content}

    async def _inductive_reasoning(self, problem: str, information: Dict[str, Any]) -> Dict[str, Any]:
        """Inductive reasoning."""
        prompt = f"""
请使用归纳推理分析以下问题:

问题: {problem}
可用信息: {json.dumps(information, ensure_ascii=False)}

请分析:
1. 观察到的具体现象
2. 寻找共同模式
3. 形成一般性假设
4. 验证假设的合理性

请以JSON格式返回分析结果。
"""

        messages = [
            {"role": "system", "content": "你是一个归纳推理专家,擅长从具体现象归纳一般规律。"},
            {"role": "user", "content": prompt}
        ]

        result = self.llm_client.chat_completion(messages, temperature=0.3)

        if "error" in result:
            return {"reasoning_type": "inductive", "error": "推理失败"}

        response_content = result["choices"][0]["message"]["content"]
        import re
        json_match = re.search(r'\{.*\}', response_content, re.DOTALL)

        if json_match:
            return json.loads(json_match.group())
        else:
            return {"reasoning_type": "inductive", "analysis": response_content}

    async def _abductive_reasoning(self, problem: str, information: Dict[str, Any]) -> Dict[str, Any]:
        """Abductive reasoning."""
        prompt = f"""
请使用溯因推理分析以下问题:

问题: {problem}
可用信息: {json.dumps(information, ensure_ascii=False)}

请分析:
1. 观察到的现象
2. 可能的最佳解释
3. 解释的合理性评估
4. 需要进一步验证的假设

请以JSON格式返回分析结果。
"""

        messages = [
            {"role": "system", "content": "你是一个溯因推理专家,擅长寻找现象的最佳解释。"},
            {"role": "user", "content": prompt}
        ]

        result = self.llm_client.chat_completion(messages, temperature=0.3)

        if "error" in result:
            return {"reasoning_type": "abductive", "error": "推理失败"}

        response_content = result["choices"][0]["message"]["content"]
        import re
        json_match = re.search(r'\{.*\}', response_content, re.DOTALL)

        if json_match:
            return json.loads(json_match.group())
        else:
            return {"reasoning_type": "abductive", "analysis": response_content}

    async def _analogical_reasoning(self, problem: str, information: Dict[str, Any]) -> Dict[str, Any]:
        """Analogical reasoning."""
        prompt = f"""
请使用类比推理分析以下问题:

问题: {problem}
可用信息: {json.dumps(information, ensure_ascii=False)}

请分析:
1. 寻找相似的问题或情况
2. 识别相似性和差异性
3. 应用类比关系
4. 调整解决方案以适应当前情况

请以JSON格式返回分析结果。
"""

        messages = [
            {"role": "system", "content": "你是一个类比推理专家,擅长通过类比解决问题。"},
            {"role": "user", "content": prompt}
        ]

        result = self.llm_client.chat_completion(messages, temperature=0.3)

        if "error" in result:
            return {"reasoning_type": "analogical", "error": "推理失败"}

        response_content = result["choices"][0]["message"]["content"]
        import re
        json_match = re.search(r'\{.*\}', response_content, re.DOTALL)

        if json_match:
            return json.loads(json_match.group())
        else:
            return {"reasoning_type": "analogical", "analysis": response_content}

    async def extract_insights(
        self,
        execution_result: Dict[str, Any],
        goal: Dict[str, Any]
    ) -> Dict[str, Any]:
        """Extract insights from an execution result."""
        try:
            prompt = f"""
请从以下执行结果中提取洞察:

执行结果: {json.dumps(execution_result, ensure_ascii=False)}
目标: {json.dumps(goal, ensure_ascii=False)}

请分析:
1. 成功模式(什么导致了成功)
2. 失败模式(什么导致了失败)
3. 性能指标(效率、准确性等)
4. 改进建议
5. 新发现的知识

请以JSON格式返回分析结果。
"""

            messages = [
                {"role": "system", "content": "你是一个洞察提取专家,擅长从执行结果中提取有价值的洞察。"},
                {"role": "user", "content": prompt}
            ]

            result = self.llm_client.chat_completion(messages, temperature=0.3)

            if "error" in result:
                return {"error": "洞察提取失败"}

            response_content = result["choices"][0]["message"]["content"]
            import re
            json_match = re.search(r'\{.*\}', response_content, re.DOTALL)

            if json_match:
                insights = json.loads(json_match.group())
                insights["timestamp"] = datetime.now().isoformat()
                return insights
            else:
                return {"analysis": response_content, "timestamp": datetime.now().isoformat()}

        except Exception as e:
            logger.error(f"洞察提取失败: {e}")
            return {"error": str(e)}

    async def evaluate_solution(
        self,
        problem: str,
        solution: Dict[str, Any],
        criteria: Dict[str, Any]
    ) -> Dict[str, Any]:
        """Evaluate a solution."""
        try:
            prompt = f"""
请评估以下解决方案:

问题: {problem}
解决方案: {json.dumps(solution, ensure_ascii=False)}
评估标准: {json.dumps(criteria, ensure_ascii=False)}

请从以下维度评估:
1. 有效性(是否能解决问题)
2. 效率(资源消耗和时间成本)
3. 可行性(实施难度)
4. 风险(潜在问题)
5. 创新性(新颖程度)

请以JSON格式返回评估结果。
"""

            messages = [
                {"role": "system", "content": "你是一个解决方案评估专家,擅长全面评估解决方案的质量。"},
                {"role": "user", "content": prompt}
            ]

            result = self.llm_client.chat_completion(messages, temperature=0.3)

            if "error" in result:
                return {"error": "解决方案评估失败"}

            response_content = result["choices"][0]["message"]["content"]
            import re
            json_match = re.search(r'\{.*\}', response_content, re.DOTALL)

            if json_match:
                evaluation = json.loads(json_match.group())
                evaluation["timestamp"] = datetime.now().isoformat()
                return evaluation
            else:
                return {"evaluation": response_content, "timestamp": datetime.now().isoformat()}

        except Exception as e:
            logger.error(f"解决方案评估失败: {e}")
            return {"error": str(e)}

    def _create_fallback_intent(self, message: str) -> Dict[str, Any]:
        """Create a fallback intent analysis."""
        return {
            "main_intent": "general_query",
            "emotion": "neutral",
            "urgency": "medium",
            "required_tools": ["generate_response"],
            "expected_response": "text",
            "key_information": {"message": message},
            "confidence": 0.5,
            "timestamp": datetime.now().isoformat()
        }

    def _create_fallback_decision(self, options: List[Dict[str, Any]]) -> Dict[str, Any]:
        """Create a fallback decision."""
        if not options:
            return {
                "selected_option": None,
                "reasoning": "无可用选项",
                "confidence": 0.0,
                "timestamp": datetime.now().isoformat()
            }

        # Pick the first option as the default choice
        return {
            "selected_option": options[0].get("id", "option_1"),
            "reasoning": "默认选择",
            "confidence": 0.3,
            "risks": ["决策质量未知"],
            "mitigation": "需要进一步验证",
            "timestamp": datetime.now().isoformat()
        }

    def get_reasoning_history(self, limit: int = 10) -> List[Dict[str, Any]]:
        """Return recent reasoning history."""
        return self.reasoning_history[-limit:] if self.reasoning_history else []

    def clear_reasoning_history(self):
        """Clear the reasoning history."""
        self.reasoning_history = []
        logger.info("推理历史已清空")
@@ -1,435 +0,0 @@
# -*- coding: utf-8 -*-
"""
Tool manager
Responsible for managing and executing the various tools
"""

import logging
import asyncio
from typing import Dict, List, Any, Optional, Callable
from datetime import datetime
import json

logger = logging.getLogger(__name__)


class ToolManager:
    """Tool manager"""

    def __init__(self):
        self.tools = {}
        self.tool_usage_stats = {}
        self.tool_performance = {}
        self._register_default_tools()

    def _register_default_tools(self):
        """Register the default tools."""
        # Register the basic tools
        self.register_tool("search_knowledge", self._search_knowledge_tool)
        self.register_tool("create_work_order", self._create_work_order_tool)
        self.register_tool("update_work_order", self._update_work_order_tool)
        self.register_tool("generate_response", self._generate_response_tool)
        self.register_tool("analyze_data", self._analyze_data_tool)
        self.register_tool("send_notification", self._send_notification_tool)
        self.register_tool("schedule_task", self._schedule_task_tool)
        self.register_tool("web_search", self._web_search_tool)
        self.register_tool("file_operation", self._file_operation_tool)
        self.register_tool("database_query", self._database_query_tool)

        logger.info(f"已注册 {len(self.tools)} 个默认工具")

    def register_tool(self, name: str, func: Callable, metadata: Optional[Dict[str, Any]] = None):
        """Register a tool."""
        self.tools[name] = {
            "function": func,
            "metadata": metadata or {},
            "usage_count": 0,
            "last_used": None,
            "success_rate": 0.0
        }

        logger.info(f"注册工具: {name}")

    def unregister_tool(self, name: str) -> bool:
        """Unregister a tool."""
        if name in self.tools:
            del self.tools[name]
            logger.info(f"注销工具: {name}")
            return True
        return False

    async def execute_tool(self, tool_name: str, parameters: Dict[str, Any]) -> Dict[str, Any]:
        """Execute a tool."""
        if tool_name not in self.tools:
            return {
                "success": False,
                "error": f"工具 '{tool_name}' 不存在"
            }

        tool = self.tools[tool_name]
        start_time = datetime.now()

        try:
            # Update usage statistics
            tool["usage_count"] += 1
            tool["last_used"] = start_time

            # Execute the tool (await it if it is a coroutine function)
            if asyncio.iscoroutinefunction(tool["function"]):
                result = await tool["function"](**parameters)
            else:
                result = tool["function"](**parameters)

            # Update performance statistics
            execution_time = (datetime.now() - start_time).total_seconds()
            self._update_tool_performance(tool_name, True, execution_time)

            logger.info(f"工具 '{tool_name}' 执行成功,耗时: {execution_time:.2f}秒")

            return {
                "success": True,
                "result": result,
                "execution_time": execution_time,
                "tool": tool_name
            }

        except Exception as e:
            logger.error(f"工具 '{tool_name}' 执行失败: {e}")

            # Update performance statistics
            execution_time = (datetime.now() - start_time).total_seconds()
            self._update_tool_performance(tool_name, False, execution_time)

            return {
                "success": False,
                "error": str(e),
                "execution_time": execution_time,
                "tool": tool_name
            }

    def _update_tool_performance(self, tool_name: str, success: bool, execution_time: float):
        """Update tool performance statistics."""
        if tool_name not in self.tool_performance:
            self.tool_performance[tool_name] = {
                "total_executions": 0,
                "successful_executions": 0,
                "total_time": 0.0,
                "avg_execution_time": 0.0,
                "success_rate": 0.0
            }

        perf = self.tool_performance[tool_name]
        perf["total_executions"] += 1
        perf["total_time"] += execution_time
        perf["avg_execution_time"] = perf["total_time"] / perf["total_executions"]

        if success:
            perf["successful_executions"] += 1

        perf["success_rate"] = perf["successful_executions"] / perf["total_executions"]

        # Mirror the success rate onto the tool record
        self.tools[tool_name]["success_rate"] = perf["success_rate"]

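The running statistics kept per tool above reduce to a few incremental updates. The standalone sketch below replays three hypothetical executions (two successes, one failure) against a single `perf` record to show how the averages evolve.

```python
# Standalone sketch of the per-tool incremental statistics.
perf = {"total_executions": 0, "successful_executions": 0,
        "total_time": 0.0, "avg_execution_time": 0.0, "success_rate": 0.0}

def update(success, execution_time):
    """Same update rule as _update_tool_performance, for one record."""
    perf["total_executions"] += 1
    perf["total_time"] += execution_time
    perf["avg_execution_time"] = perf["total_time"] / perf["total_executions"]
    if success:
        perf["successful_executions"] += 1
    perf["success_rate"] = perf["successful_executions"] / perf["total_executions"]

update(True, 0.2)
update(True, 0.4)
update(False, 0.6)
```

After three runs the record holds a 2/3 success rate and a 0.4 s average, with no per-execution history kept: only sums and counts are stored, so the update is O(1) per call.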
    def get_available_tools(self) -> List[Dict[str, Any]]:
        """Return the list of available tools."""
        tools_info = []

        for name, tool in self.tools.items():
            tool_info = {
                "name": name,
                "metadata": tool["metadata"],
                "usage_count": tool["usage_count"],
                "last_used": tool["last_used"].isoformat() if tool["last_used"] else None,
                "success_rate": tool["success_rate"]
            }

            # Attach performance information
            if name in self.tool_performance:
                perf = self.tool_performance[name]
                tool_info.update({
                    "avg_execution_time": perf["avg_execution_time"],
                    "total_executions": perf["total_executions"]
                })

            tools_info.append(tool_info)

        return tools_info

    def get_tool_info(self, tool_name: str) -> Optional[Dict[str, Any]]:
        """Return information about a single tool."""
        if tool_name not in self.tools:
            return None

        tool = self.tools[tool_name]
        info = {
            "name": tool_name,
            "metadata": tool["metadata"],
            "usage_count": tool["usage_count"],
            "last_used": tool["last_used"].isoformat() if tool["last_used"] else None,
            "success_rate": tool["success_rate"]
        }

        if tool_name in self.tool_performance:
            info.update(self.tool_performance[tool_name])

        return info

    def update_usage_stats(self, tool_usage: List[Dict[str, Any]]):
        """Update tool usage statistics."""
        for usage in tool_usage:
            tool_name = usage.get("tool")
            if tool_name in self.tools:
                self.tools[tool_name]["usage_count"] += usage.get("count", 1)

    # Default tool implementations

    async def _search_knowledge_tool(self, query: str, top_k: int = 3, **kwargs) -> Dict[str, Any]:
        """Knowledge-base search tool."""
        try:
            from ..knowledge_base.knowledge_manager import KnowledgeManager
            knowledge_manager = KnowledgeManager()

            results = knowledge_manager.search_knowledge(query, top_k)

            return {
                "query": query,
                "results": results,
                "count": len(results)
            }
        except Exception as e:
            logger.error(f"搜索知识库失败: {e}")
            return {"error": str(e)}

    async def _create_work_order_tool(self, title: str, description: str, category: str, priority: str = "medium", **kwargs) -> Dict[str, Any]:
        """Work-order creation tool."""
        try:
            from ..dialogue.dialogue_manager import DialogueManager
            dialogue_manager = DialogueManager()

            result = dialogue_manager.create_work_order(title, description, category, priority)

            return result
        except Exception as e:
            logger.error(f"创建工单失败: {e}")
            return {"error": str(e)}

    async def _update_work_order_tool(self, work_order_id: int, **kwargs) -> Dict[str, Any]:
        """Work-order update tool."""
        try:
            from ..dialogue.dialogue_manager import DialogueManager
            dialogue_manager = DialogueManager()

            success = dialogue_manager.update_work_order(work_order_id, **kwargs)

            return {
                "success": success,
                "work_order_id": work_order_id,
                "updated_fields": list(kwargs.keys())
            }
        except Exception as e:
            logger.error(f"更新工单失败: {e}")
            return {"error": str(e)}

    async def _generate_response_tool(self, message: str, context: str = "", **kwargs) -> Dict[str, Any]:
        """Response-generation tool."""
        try:
            from ..core.llm_client import QwenClient
            llm_client = QwenClient()

            result = llm_client.generate_response(message, context)

            return result
        except Exception as e:
            logger.error(f"生成回复失败: {e}")
            return {"error": str(e)}

    async def _analyze_data_tool(self, data_type: str, date_range: str = "last_7_days", **kwargs) -> Dict[str, Any]:
        """Data-analysis tool."""
        try:
            from ..analytics.analytics_manager import AnalyticsManager
            analytics_manager = AnalyticsManager()

            if data_type == "daily_analytics":
                result = analytics_manager.generate_daily_analytics()
            elif data_type == "summary":
                result = analytics_manager.get_analytics_summary()
|
|
||||||
elif data_type == "category_performance":
|
|
||||||
result = analytics_manager.get_category_performance()
|
|
||||||
else:
|
|
||||||
result = {"error": f"不支持的数据类型: {data_type}"}
|
|
||||||
|
|
||||||
return result
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"数据分析失败: {e}")
|
|
||||||
return {"error": str(e)}
|
|
||||||
|
|
||||||
async def _send_notification_tool(self, message: str, recipients: List[str], notification_type: str = "info", **kwargs) -> Dict[str, Any]:
|
|
||||||
"""发送通知工具"""
|
|
||||||
try:
|
|
||||||
# 这里可以实现具体的通知逻辑
|
|
||||||
# 例如:发送邮件、短信、推送通知等
|
|
||||||
|
|
||||||
notification_data = {
|
|
||||||
"message": message,
|
|
||||||
"recipients": recipients,
|
|
||||||
"type": notification_type,
|
|
||||||
"timestamp": datetime.now().isoformat()
|
|
||||||
}
|
|
||||||
|
|
||||||
# 模拟发送通知
|
|
||||||
logger.info(f"发送通知: {message} 给 {recipients}")
|
|
||||||
|
|
||||||
return {
|
|
||||||
"success": True,
|
|
||||||
"notification_id": f"notif_{datetime.now().strftime('%Y%m%d_%H%M%S')}",
|
|
||||||
"data": notification_data
|
|
||||||
}
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"发送通知失败: {e}")
|
|
||||||
return {"error": str(e)}
|
|
||||||
|
|
||||||
async def _schedule_task_tool(self, task_name: str, schedule_time: str, task_data: Dict[str, Any], **kwargs) -> Dict[str, Any]:
|
|
||||||
"""调度任务工具"""
|
|
||||||
try:
|
|
||||||
# 这里可以实现任务调度逻辑
|
|
||||||
# 例如:使用APScheduler、Celery等
|
|
||||||
|
|
||||||
schedule_data = {
|
|
||||||
"task_name": task_name,
|
|
||||||
"schedule_time": schedule_time,
|
|
||||||
"task_data": task_data,
|
|
||||||
"created_at": datetime.now().isoformat()
|
|
||||||
}
|
|
||||||
|
|
||||||
logger.info(f"调度任务: {task_name} 在 {schedule_time}")
|
|
||||||
|
|
||||||
return {
|
|
||||||
"success": True,
|
|
||||||
"schedule_id": f"schedule_{datetime.now().strftime('%Y%m%d_%H%M%S')}",
|
|
||||||
"data": schedule_data
|
|
||||||
}
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"调度任务失败: {e}")
|
|
||||||
return {"error": str(e)}
|
|
||||||
|
|
||||||
async def _web_search_tool(self, query: str, max_results: int = 5, **kwargs) -> Dict[str, Any]:
|
|
||||||
"""网络搜索工具"""
|
|
||||||
try:
|
|
||||||
# 这里可以实现网络搜索逻辑
|
|
||||||
# 例如:使用Google Search API、Bing Search API等
|
|
||||||
|
|
||||||
search_results = [
|
|
||||||
{
|
|
||||||
"title": f"搜索结果 {i+1}",
|
|
||||||
"url": f"https://example.com/result{i+1}",
|
|
||||||
"snippet": f"这是关于 '{query}' 的搜索结果摘要 {i+1}"
|
|
||||||
}
|
|
||||||
for i in range(min(max_results, 3))
|
|
||||||
]
|
|
||||||
|
|
||||||
logger.info(f"网络搜索: {query}")
|
|
||||||
|
|
||||||
return {
|
|
||||||
"query": query,
|
|
||||||
"results": search_results,
|
|
||||||
"count": len(search_results)
|
|
||||||
}
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"网络搜索失败: {e}")
|
|
||||||
return {"error": str(e)}
|
|
||||||
|
|
||||||
async def _file_operation_tool(self, operation: str, file_path: str, content: str = "", **kwargs) -> Dict[str, Any]:
|
|
||||||
"""文件操作工具"""
|
|
||||||
try:
|
|
||||||
import os
|
|
||||||
|
|
||||||
if operation == "read":
|
|
||||||
with open(file_path, 'r', encoding='utf-8') as f:
|
|
||||||
content = f.read()
|
|
||||||
return {"success": True, "content": content, "operation": "read"}
|
|
||||||
|
|
||||||
elif operation == "write":
|
|
||||||
with open(file_path, 'w', encoding='utf-8') as f:
|
|
||||||
f.write(content)
|
|
||||||
return {"success": True, "operation": "write", "file_path": file_path}
|
|
||||||
|
|
||||||
elif operation == "exists":
|
|
||||||
exists = os.path.exists(file_path)
|
|
||||||
return {"success": True, "exists": exists, "file_path": file_path}
|
|
||||||
|
|
||||||
else:
|
|
||||||
return {"error": f"不支持的文件操作: {operation}"}
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"文件操作失败: {e}")
|
|
||||||
return {"error": str(e)}
|
|
||||||
|
|
||||||
async def _database_query_tool(self, query: str, query_type: str = "select", **kwargs) -> Dict[str, Any]:
|
|
||||||
"""数据库查询工具"""
|
|
||||||
try:
|
|
||||||
from ..core.database import db_manager
|
|
||||||
|
|
||||||
with db_manager.get_session() as session:
|
|
||||||
if query_type == "select":
|
|
||||||
result = session.execute(query).fetchall()
|
|
||||||
return {
|
|
||||||
"success": True,
|
|
||||||
"result": [dict(row) for row in result],
|
|
||||||
"count": len(result)
|
|
||||||
}
|
|
||||||
else:
|
|
||||||
session.execute(query)
|
|
||||||
session.commit()
|
|
||||||
return {"success": True, "operation": query_type}
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"数据库查询失败: {e}")
|
|
||||||
return {"error": str(e)}
|
|
||||||
|
|
||||||
def get_tool_performance_report(self) -> Dict[str, Any]:
|
|
||||||
"""获取工具性能报告"""
|
|
||||||
report = {
|
|
||||||
"total_tools": len(self.tools),
|
|
||||||
"tool_performance": {},
|
|
||||||
"summary": {
|
|
||||||
"most_used": None,
|
|
||||||
"most_reliable": None,
|
|
||||||
"fastest": None,
|
|
||||||
"slowest": None
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if not self.tool_performance:
|
|
||||||
return report
|
|
||||||
|
|
||||||
# 分析性能数据
|
|
||||||
most_used_count = 0
|
|
||||||
most_reliable_rate = 0
|
|
||||||
fastest_time = float('inf')
|
|
||||||
slowest_time = 0
|
|
||||||
|
|
||||||
for tool_name, perf in self.tool_performance.items():
|
|
||||||
report["tool_performance"][tool_name] = perf
|
|
||||||
|
|
||||||
# 找出最常用的工具
|
|
||||||
if perf["total_executions"] > most_used_count:
|
|
||||||
most_used_count = perf["total_executions"]
|
|
||||||
report["summary"]["most_used"] = tool_name
|
|
||||||
|
|
||||||
# 找出最可靠的工具
|
|
||||||
if perf["success_rate"] > most_reliable_rate:
|
|
||||||
most_reliable_rate = perf["success_rate"]
|
|
||||||
report["summary"]["most_reliable"] = tool_name
|
|
||||||
|
|
||||||
# 找出最快的工具
|
|
||||||
if perf["avg_execution_time"] < fastest_time:
|
|
||||||
fastest_time = perf["avg_execution_time"]
|
|
||||||
report["summary"]["fastest"] = tool_name
|
|
||||||
|
|
||||||
# 找出最慢的工具
|
|
||||||
if perf["avg_execution_time"] > slowest_time:
|
|
||||||
slowest_time = perf["avg_execution_time"]
|
|
||||||
report["summary"]["slowest"] = tool_name
|
|
||||||
|
|
||||||
return report
|
|
||||||
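The report above finds its four summary winners with running-max/min scans; the same selection can be written more compactly with `max()`/`min()` key functions. A standalone sketch with made-up performance numbers (the dict keys mirror what the report reads; this is an illustration, not the project's code):

```python
# Sketch: selecting the performance-summary winners with max()/min().
# The inner dict shape (total_executions, success_rate, avg_execution_time)
# matches what get_tool_performance_report() reads; the numbers are invented.
tool_performance = {
    "search_knowledge": {"total_executions": 40, "success_rate": 0.95, "avg_execution_time": 0.8},
    "create_work_order": {"total_executions": 12, "success_rate": 0.99, "avg_execution_time": 1.5},
    "web_search": {"total_executions": 25, "success_rate": 0.90, "avg_execution_time": 2.1},
}

summary = {
    "most_used": max(tool_performance, key=lambda t: tool_performance[t]["total_executions"]),
    "most_reliable": max(tool_performance, key=lambda t: tool_performance[t]["success_rate"]),
    "fastest": min(tool_performance, key=lambda t: tool_performance[t]["avg_execution_time"]),
    "slowest": max(tool_performance, key=lambda t: tool_performance[t]["avg_execution_time"]),
}
print(summary)  # {'most_used': 'search_knowledge', 'most_reliable': 'create_work_order', ...}
```

One pass per metric instead of one combined loop; for a handful of tools the difference is readability, not speed.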
```diff
@@ -6,10 +6,13 @@ TSP Agent助手 - 简化版本
 import logging
 import asyncio
+import json
 from typing import Dict, Any, List, Optional
 from datetime import datetime
 from src.config.unified_config import get_config
 from src.agent.llm_client import LLMManager
+from src.web.service_manager import service_manager
+from src.agent.react_agent import ReactAgent
 
 logger = logging.getLogger(__name__)
 
```
```diff
@@ -23,7 +26,10 @@ class TSPAgentAssistant:
         self.is_agent_mode = True
         self.execution_history = []
 
-        # 工具注册表
+        # ReAct Agent(核心)
+        self.react_agent = ReactAgent()
+
+        # 工具注册表(保留兼容旧 API)
         self.tools = {}
         self.tool_performance = {}
 
```
```diff
@@ -192,13 +198,15 @@ class TSPAgentAssistant:
     def get_agent_status(self) -> Dict[str, Any]:
         """获取Agent状态"""
         try:
+            react_status = self.react_agent.get_status()
             return {
                 "success": True,
                 "is_active": self.is_agent_mode,
                 "ai_monitoring_active": self.ai_monitoring_active,
-                "total_tools": len(self.tools),
-                "total_executions": len(self.execution_history),
-                "tools": self.get_available_tools(),
+                "total_tools": react_status["tool_count"],
+                "available_tools": react_status["available_tools"],
+                "total_executions": len(self.execution_history) + react_status["history_count"],
+                "react_agent": react_status,
                 "performance": self.get_tool_performance_report()
             }
         except Exception as e:
```
```diff
@@ -304,23 +312,60 @@ class TSPAgentAssistant:
             logger.error(f"获取LLM使用统计失败: {e}")
             return {}
 
+    def process_message_agent_sync(self, message: str, user_id: str = "admin",
+                                   work_order_id: Optional[int] = None,
+                                   enable_proactive: bool = True) -> Dict[str, Any]:
+        """处理消息(同步桥接)"""
+        try:
+            loop = asyncio.new_event_loop()
+            asyncio.set_event_loop(loop)
+            return loop.run_until_complete(self.process_message_agent(message, user_id, work_order_id, enable_proactive))
+        except Exception as e:
+            logger.error(f"同步处理消息失败: {e}")
+            return {"error": str(e)}
+
     async def process_message_agent(self, message: str, user_id: str = "admin",
                                     work_order_id: Optional[int] = None,
                                     enable_proactive: bool = True) -> Dict[str, Any]:
-        """处理消息"""
+        """处理消息 - 使用 ReAct Agent"""
         try:
-            # 简化的消息处理
-            return {
-                "success": True,
-                "message": f"Agent收到消息: {message}",
-                "user_id": user_id,
-                "work_order_id": work_order_id,
-                "timestamp": datetime.now().isoformat()
-            }
+            logger.info(f"Agent收到消息: {message}")
+            result = await self.react_agent.chat(
+                message=message,
+                user_id=user_id,
+            )
+            result["user_id"] = user_id
+            result["work_order_id"] = work_order_id
+            result["status"] = "completed" if result.get("success") else "error"
+            result["timestamp"] = datetime.now().isoformat()
+            # 兼容旧字段
+            result["actions"] = [
+                {"type": "tool_call", "tool": tc["tool"], "status": "executed"}
+                for tc in result.get("tool_calls", [])
+            ]
+            return result
         except Exception as e:
             logger.error(f"处理消息失败: {e}")
             return {"error": str(e)}
 
+    def execute_tool_sync(self, tool_name: str, parameters: Dict[str, Any] = None) -> Dict[str, Any]:
+        """执行工具(同步桥接)"""
+        try:
+            loop = asyncio.new_event_loop()
+            asyncio.set_event_loop(loop)
+            return loop.run_until_complete(self.execute_tool(tool_name, parameters))
+        except Exception as e:
+            return {"error": str(e)}
+
+    def trigger_sample_actions_sync(self) -> Dict[str, Any]:
+        """触发示例动作(同步桥接)"""
+        try:
+            loop = asyncio.new_event_loop()
+            asyncio.set_event_loop(loop)
+            return loop.run_until_complete(self.trigger_sample_actions())
+        except Exception as e:
+            return {"success": False, "error": str(e)}
+
     async def trigger_sample_actions(self) -> Dict[str, Any]:
         """触发示例动作"""
         try:
```
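The `*_sync` bridge methods added above create a fresh event loop per call via `asyncio.new_event_loop()` and never close it; `asyncio.run()` is the stdlib shorthand that performs the same create/run cycle and also closes the loop. A minimal sketch with a stand-in coroutine (the real `ReactAgent` call is replaced by a dummy; names here are illustrative, not the project's API):

```python
import asyncio
from typing import Any, Dict

async def process_message_agent(message: str) -> Dict[str, Any]:
    # Stand-in for the real async method; the body is invented for illustration.
    await asyncio.sleep(0)
    return {"success": True, "message": message}

def process_message_sync(message: str) -> Dict[str, Any]:
    # asyncio.run() creates a loop, runs the coroutine to completion,
    # and closes the loop afterwards (unlike a bare new_event_loop()).
    return asyncio.run(process_message_agent(message))

result = process_message_sync("hello")
print(result)  # {'success': True, 'message': 'hello'}
```

Note that `asyncio.run()` raises `RuntimeError` when a loop is already running in the current thread, which is one reason code embedded in async frameworks sometimes manages loops by hand as the diff does.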
```diff
@@ -336,7 +381,7 @@ class TSPAgentAssistant:
             logger.error(f"触发示例动作失败: {e}")
             return {"success": False, "error": str(e)}
 
-    def process_file_to_knowledge(self, file_path: str, filename: str) -> Dict[str, Any]:
+    async def process_file_to_knowledge(self, file_path: str, filename: str, tenant_id: str = None) -> Dict[str, Any]:
         """处理文件并生成知识库"""
         try:
             import os
@@ -356,20 +401,40 @@ class TSPAgentAssistant:
 
             logger.info(f"文件读取成功: (unknown), 字符数={len(content)}")
 
-            # 使用简化的知识提取
+            # 使用LLM进行知识提取 (异步调用)
             logger.info(f"正在对文件内容进行 AI 知识提取...")
-            knowledge_entries = self._extract_knowledge_from_content(content, filename)
+            knowledge_entries = await self._extract_knowledge_from_content(content, filename)
 
             logger.info(f"知识提取完成: 共提取出 {len(knowledge_entries)} 个潜在条目")
 
             # 保存到知识库
             saved_count = 0
 
+            # 获取知识库管理器
+            try:
+                knowledge_manager = service_manager.get_assistant().knowledge_manager
+            except Exception as e:
+                logger.error(f"无法获取知识库管理器: {e}")
+                knowledge_manager = None
+
             for i, entry in enumerate(knowledge_entries):
                 try:
                     logger.info(f"正在保存知识条目 [{i+1}/{len(knowledge_entries)}]: {entry.get('question', '')[:30]}...")
-                    # 这里在实际项目中应当注入知识库管理器的保存逻辑
-                    # 但在当前简化版本中仅记录日志
+                    if knowledge_manager:
+                        # 实际保存到数据库
+                        knowledge_manager.add_knowledge_entry(
+                            question=entry.get('question'),
+                            answer=entry.get('answer'),
+                            category=entry.get('category', '文档导入'),
+                            confidence_score=entry.get('confidence_score', 0.8),
+                            tenant_id=tenant_id
+                        )
+                        saved_count += 1
+                    else:
+                        # 如果无法获取管理器,仅记录日志(降级处理)
+                        logger.warning("知识库管理器不可用,跳过保存")
 
                 except Exception as save_error:
                     logger.error(f"保存知识条目 {i+1} 时出错: {save_error}")
 
```
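The save loop added above degrades gracefully: entries are written only when a knowledge manager could be obtained, and per-entry failures are logged rather than aborting the batch. A self-contained sketch of that pattern (`FakeManager` and its `add` method are stand-ins for illustration, not the project's API):

```python
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger(__name__)

def save_entries(entries, manager):
    """Count successful saves; skip with a warning when no manager is available,
    and keep going when a single entry fails."""
    saved = 0
    for i, entry in enumerate(entries):
        try:
            if manager:
                manager.add(entry)  # hypothetical save call
                saved += 1
            else:
                logger.warning("knowledge manager unavailable, skipping save")
        except Exception as e:
            logger.error(f"failed to save entry {i + 1}: {e}")
    return saved

class FakeManager:
    def __init__(self):
        self.items = []
    def add(self, entry):
        self.items.append(entry)

mgr = FakeManager()
print(save_entries([{"q": "a"}, {"q": "b"}], mgr))  # 2: both saved
print(save_entries([{"q": "a"}], None))             # 0: degraded, nothing saved
```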
````diff
@@ -402,25 +467,70 @@ class TSPAgentAssistant:
             logger.error(f"读取文件失败: {e}")
             return ""
 
-    def _extract_knowledge_from_content(self, content: str, filename: str) -> List[Dict[str, Any]]:
-        """从内容中提取知识"""
+    async def _extract_knowledge_from_content(self, content: str, filename: str) -> List[Dict[str, Any]]:
+        """从内容中提取知识 - 使用LLM"""
         try:
-            # 简化的知识提取逻辑
-            entries = []
-
-            # 按段落分割内容
-            paragraphs = content.split('\n\n')
-
-            for i, paragraph in enumerate(paragraphs[:5]):  # 最多提取5个
-                if len(paragraph.strip()) > 20:  # 过滤太短的段落
-                    entries.append({
-                        "question": f"关于(unknown)的问题{i+1}",
-                        "answer": paragraph.strip(),
-                        "category": "文档知识",
-                        "confidence_score": 0.7
+            # 限制内容长度,避免超出token限制
+            # 假设每个汉字2个token,保留前8000个字符作为上下文
+            truncated_content = content[:8000]
+            if len(content) > 8000:
+                truncated_content += "\n...(后续内容已省略)"
+
+            prompt = f"""
+你是一个专业的知识库构建助手。请分析以下文档内容,提取出关键的"问题"和"答案"对,用于构建知识库。
+
+文档文件名:(unknown)
+文档内容:
+{truncated_content}
+
+要求:
+1. 提取文档中的核心知识点,转化为"问题(question)"和"答案(answer)"的形式。
+2. "问题"应该清晰明确,方便用户搜索。
+3. "答案"应该准确、完整,直接回答问题。
+4. "分类(category)"请根据内容自动归类(如:故障排查、操作指南、系统配置、业务流程等)。
+5. 输出格式必须是合法的 JSON 数组,不要包含Markdown标记。
+
+JSON格式示例:
+[
+  {{"question": "如何重置密码?", "answer": "请访问设置页面,点击重置密码按钮...", "category": "操作指南"}},
+  {{"question": "系统支持哪些浏览器?", "answer": "支持Chrome, Edge, Firefox...", "category": "系统配置"}}
+]
+"""
+            # 调用LLM生成
+            logger.info("正在调用LLM进行知识提取...")
+            response_text = await self.llm_manager.generate(prompt, temperature=0.3)
+
+            # 清理响应中的Markdown标记(如果存在)
+            cleaned_text = response_text.strip()
+            if cleaned_text.startswith("```json"):
+                cleaned_text = cleaned_text[7:]
+            if cleaned_text.startswith("```"):
+                cleaned_text = cleaned_text[3:]
+            if cleaned_text.endswith("```"):
+                cleaned_text = cleaned_text[:-3]
+            cleaned_text = cleaned_text.strip()
+
+            # 解析JSON
+            try:
+                entries = json.loads(cleaned_text)
+            except json.JSONDecodeError:
+                # 尝试修复常见的JSON错误
+                logger.warning(f"JSON解析失败,尝试简单修复: {cleaned_text[:100]}...")
+                # 这里可以添加更复杂的修复逻辑,或者直接记录错误
+                return []
+
+            # 验证和标准化
+            valid_entries = []
+            for entry in entries:
+                if isinstance(entry, dict) and "question" in entry and "answer" in entry:
+                    valid_entries.append({
+                        "question": entry["question"],
+                        "answer": entry["answer"],
+                        "category": entry.get("category", "文档导入"),
+                        "confidence_score": 0.9  # LLM生成的置信度较高
                     })
 
-            return entries
+            return valid_entries
 
         except Exception as e:
             logger.error(f"提取知识失败: {e}")
````
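The fence-stripping block above guards `json.loads` against models that wrap their output in a Markdown code fence despite the prompt's instruction not to. The same logic as a standalone helper (a sketch for illustration, not the project's code):

````python
import json

def parse_llm_json(text: str):
    """Strip an optional Markdown code fence, then parse a JSON payload.
    Returns [] on parse failure, matching the fallback in the diff."""
    cleaned = text.strip()
    if cleaned.startswith("```json"):
        cleaned = cleaned[7:]
    elif cleaned.startswith("```"):
        cleaned = cleaned[3:]
    if cleaned.endswith("```"):
        cleaned = cleaned[:-3]
    try:
        return json.loads(cleaned.strip())
    except json.JSONDecodeError:
        return []

raw = '```json\n[{"question": "Q?", "answer": "A."}]\n```'
print(parse_llm_json(raw))        # the parsed list
print(parse_llm_json("not json"))  # []
````

An alternative worth noting: asking the model for structured output (or retrying with the parse error fed back) is usually more robust than string surgery, but the stripping above covers the common `` ```json `` wrapper cheaply.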
```diff
@@ -1,322 +0,0 @@ (file deleted)
```

```python
# -*- coding: utf-8 -*-
"""
增强版TSP助手 - 集成Agent功能
重构版本:模块化设计,降低代码复杂度
"""

import logging
import asyncio
from typing import Dict, Any, List, Optional
from datetime import datetime

from src.agent.agent_assistant_core import TSPAgentAssistantCore
from src.agent.agent_message_handler import AgentMessageHandler
from src.agent.agent_sample_actions import AgentSampleActions

logger = logging.getLogger(__name__)


class TSPAgentAssistant(TSPAgentAssistantCore):
    """TSP Agent助手 - 重构版本"""

    def __init__(self, llm_config=None):
        # 初始化核心功能
        super().__init__(llm_config)

        # 初始化消息处理器
        self.message_handler = AgentMessageHandler(self)

        # 初始化示例动作处理器
        self.sample_actions = AgentSampleActions(self)

        logger.info("TSP Agent助手初始化完成(重构版本)")

    # ==================== 消息处理功能 ====================

    async def process_message_agent(self, message: str, user_id: str = "admin",
                                    work_order_id: Optional[int] = None,
                                    enable_proactive: bool = True) -> Dict[str, Any]:
        """使用Agent处理消息"""
        return await self.message_handler.process_message_agent(
            message, user_id, work_order_id, enable_proactive
        )

    async def process_conversation_agent(self, conversation_data: Dict[str, Any]) -> Dict[str, Any]:
        """使用Agent处理对话"""
        return await self.message_handler.process_conversation_agent(conversation_data)

    async def process_workorder_agent(self, workorder_data: Dict[str, Any]) -> Dict[str, Any]:
        """使用Agent处理工单"""
        return await self.message_handler.process_workorder_agent(workorder_data)

    async def process_alert_agent(self, alert_data: Dict[str, Any]) -> Dict[str, Any]:
        """使用Agent处理预警"""
        return await self.message_handler.process_alert_agent(alert_data)

    # ==================== 建议功能 ====================

    def get_conversation_suggestions(self, context: Dict[str, Any]) -> List[str]:
        """获取对话建议"""
        return self.message_handler.get_conversation_suggestions(context)

    def get_workorder_suggestions(self, workorder_data: Dict[str, Any]) -> List[str]:
        """获取工单建议"""
        return self.message_handler.get_workorder_suggestions(workorder_data)

    def get_alert_suggestions(self, alert_data: Dict[str, Any]) -> List[str]:
        """获取预警建议"""
        return self.message_handler.get_alert_suggestions(alert_data)

    # ==================== 示例动作功能 ====================

    async def trigger_sample_actions(self) -> Dict[str, Any]:
        """触发示例动作"""
        return await self.sample_actions.trigger_sample_actions()

    async def run_performance_test(self) -> Dict[str, Any]:
        """运行性能测试"""
        return await self.sample_actions.run_performance_test()

    # ==================== 兼容性方法 ====================

    def get_agent_status(self) -> Dict[str, Any]:
        """获取Agent状态(兼容性方法)"""
        return super().get_agent_status()

    def toggle_agent_mode(self, enabled: bool) -> bool:
        """切换Agent模式(兼容性方法)"""
        return super().toggle_agent_mode(enabled)

    def start_proactive_monitoring(self) -> bool:
        """启动主动监控(兼容性方法)"""
        return super().start_proactive_monitoring()

    def stop_proactive_monitoring(self) -> bool:
        """停止主动监控(兼容性方法)"""
        return super().stop_proactive_monitoring()

    def run_proactive_monitoring(self) -> Dict[str, Any]:
        """运行主动监控检查(兼容性方法)"""
        return super().run_proactive_monitoring()

    def run_intelligent_analysis(self) -> Dict[str, Any]:
        """运行智能分析(兼容性方法)"""
        return super().run_intelligent_analysis()

    def get_action_history(self, limit: int = 50) -> List[Dict[str, Any]]:
        """获取动作执行历史(兼容性方法)"""
        return super().get_action_history(limit)

    def clear_execution_history(self) -> Dict[str, Any]:
        """清空执行历史(兼容性方法)"""
        return super().clear_execution_history()

    def get_llm_usage_stats(self) -> Dict[str, Any]:
        """获取LLM使用统计(兼容性方法)"""
        return super().get_llm_usage_stats()

    # ==================== 高级功能 ====================

    async def comprehensive_analysis(self) -> Dict[str, Any]:
        """综合分析 - 结合多个模块的分析结果"""
        try:
            # 运行智能分析
            intelligent_analysis = self.run_intelligent_analysis()

            # 运行主动监控
            proactive_monitoring = self.run_proactive_monitoring()

            # 运行性能测试
            performance_test = await self.run_performance_test()

            # 综合结果
            comprehensive_result = {
                "timestamp": self.execution_history[-1]["timestamp"] if self.execution_history else None,
                "intelligent_analysis": intelligent_analysis,
                "proactive_monitoring": proactive_monitoring,
                "performance_test": performance_test,
                "overall_status": self._determine_overall_status(
                    intelligent_analysis, proactive_monitoring, performance_test
                )
            }

            # 记录综合分析
            self._record_execution("comprehensive_analysis", comprehensive_result)

            return comprehensive_result

        except Exception as e:
            logger.error(f"综合分析失败: {e}")
            return {"error": str(e)}

    def _determine_overall_status(self, intelligent_analysis: Dict,
                                  proactive_monitoring: Dict,
                                  performance_test: Dict) -> str:
        """确定整体状态"""
        try:
            # 检查各个模块的状态
            statuses = []

            if intelligent_analysis.get("success"):
                statuses.append("intelligent_analysis_ok")
            else:
                statuses.append("intelligent_analysis_error")

            if proactive_monitoring.get("success"):
                statuses.append("proactive_monitoring_ok")
            else:
                statuses.append("proactive_monitoring_error")

            if performance_test.get("success"):
                statuses.append("performance_test_ok")
            else:
                statuses.append("performance_test_error")

            # 根据状态确定整体状态
            if all("ok" in status for status in statuses):
                return "excellent"
            elif any("error" in status for status in statuses):
                return "needs_attention"
            else:
                return "good"

        except Exception:
            return "unknown"

    async def batch_process_requests(self, requests: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        """批量处理请求"""
        try:
            results = []

            for request in requests:
                request_type = request.get("type", "message")

                if request_type == "message":
                    result = await self.process_message_agent(
                        request.get("message", ""),
                        request.get("user_id", "admin"),
                        request.get("work_order_id"),
                        request.get("enable_proactive", True)
                    )
                elif request_type == "conversation":
                    result = await self.process_conversation_agent(request)
                elif request_type == "workorder":
                    result = await self.process_workorder_agent(request)
                elif request_type == "alert":
                    result = await self.process_alert_agent(request)
                else:
                    result = {"error": f"未知请求类型: {request_type}"}

                results.append(result)

            # 记录批量处理
            self._record_execution("batch_process", {
                "request_count": len(requests),
                "results": results
            })

            return results

        except Exception as e:
            logger.error(f"批量处理请求失败: {e}")
            return [{"error": str(e)} for _ in requests]

    def get_system_summary(self) -> Dict[str, Any]:
        """获取系统摘要"""
        try:
            # 获取各种状态信息
            agent_status = self.get_agent_status()
            system_health = self.get_system_health()
            workorders_status = self._check_workorders_status()

            # 计算摘要指标
            summary = {
                "timestamp": datetime.now().isoformat(),
                "agent_status": agent_status,
                "system_health": system_health,
                "workorders_status": workorders_status,
                "execution_history_count": len(self.execution_history),
                "llm_usage_stats": self.get_llm_usage_stats(),
                "overall_health_score": system_health.get("health_score", 0)
            }

            return summary

        except Exception as e:
            logger.error(f"获取系统摘要失败: {e}")
            return {"error": str(e)}

    def export_agent_data(self) -> Dict[str, Any]:
        """导出Agent数据"""
        try:
            export_data = {
                "export_timestamp": datetime.now().isoformat(),
                "agent_status": self.get_agent_status(),
                "execution_history": self.execution_history,
                "llm_usage_stats": self.get_llm_usage_stats(),
                "system_summary": self.get_system_summary()
            }

            return {
                "success": True,
                "data": export_data,
                "message": "Agent数据导出成功"
            }

        except Exception as e:
            logger.error(f"导出Agent数据失败: {e}")
            return {
                "success": False,
                "error": str(e)
            }

    def import_agent_data(self, data: Dict[str, Any]) -> Dict[str, Any]:
        """导入Agent数据"""
        try:
            # 验证数据格式
            if not isinstance(data, dict):
                raise ValueError("数据格式不正确")

            # 导入执行历史
            if "execution_history" in data:
                self.execution_history = data["execution_history"]

            # 其他数据的导入逻辑...

            return {
                "success": True,
                "message": "Agent数据导入成功"
            }

        except Exception as e:
            logger.error(f"导入Agent数据失败: {e}")
            return {
                "success": False,
                "error": str(e)
            }


# 测试函数
async def main():
    """测试函数"""
    print("🚀 TSP Agent助手测试")

    # 创建Agent助手实例
    agent_assistant = TSPAgentAssistant()

    # 测试基本功能
    status = agent_assistant.get_agent_status()
    print("Agent状态:", status)

    # 测试消息处理
    result = await agent_assistant.process_message_agent("你好,请帮我分析系统状态")
    print("消息处理结果:", result)

    # 测试示例动作
    sample_result = await agent_assistant.trigger_sample_actions()
    print("示例动作结果:", sample_result)

    # 测试综合分析
    analysis_result = await agent_assistant.comprehensive_analysis()
    print("综合分析结果:", analysis_result)


if __name__ == "__main__":
    asyncio.run(main())
```
Binary file not shown. (×8)
76
src/config/config_service.py
Normal file
76
src/config/config_service.py
Normal file
@@ -0,0 +1,76 @@
# -*- coding: utf-8 -*-
"""
Unified configuration service.
Precedence: environment variables > system_settings.json > code defaults.
"""
import os
import json
import logging
from typing import Any

logger = logging.getLogger(__name__)
_SETTINGS_PATH = os.path.join('data', 'system_settings.json')


class ConfigService:
    def __init__(self):
        self._file_cache = None
        self._file_mtime = 0

    def _load_file(self) -> dict:
        try:
            if os.path.exists(_SETTINGS_PATH):
                mtime = os.path.getmtime(_SETTINGS_PATH)
                if mtime != self._file_mtime or self._file_cache is None:
                    with open(_SETTINGS_PATH, 'r', encoding='utf-8') as f:
                        self._file_cache = json.load(f)
                    self._file_mtime = mtime
            return self._file_cache or {}
        except Exception as e:
            logger.debug(f"加载配置文件失败: {e}")
            return {}

    def get(self, key: str, default: Any = None) -> Any:
        env_key = key.upper().replace('.', '_')
        env_val = os.environ.get(env_key)
        if env_val is not None:
            return self._cast(env_val, default)
        settings = self._load_file()
        parts = key.split('.')
        val = settings
        for part in parts:
            if isinstance(val, dict):
                val = val.get(part)
            else:
                return default
        return val if val is not None else default

    def get_section(self, section: str) -> dict:
        return self._load_file().get(section, {})

    def set(self, key: str, value: Any):
        settings = self._load_file()
        parts = key.split('.')
        target = settings
        for part in parts[:-1]:
            target = target.setdefault(part, {})
        target[parts[-1]] = value
        os.makedirs('data', exist_ok=True)
        with open(_SETTINGS_PATH, 'w', encoding='utf-8') as f:
            json.dump(settings, f, ensure_ascii=False, indent=2)
        self._file_cache = settings

    @staticmethod
    def _cast(value: str, default: Any) -> Any:
        if default is None:
            return value
        if isinstance(default, bool):
            return value.lower() in ('true', '1', 'yes')
        if isinstance(default, int):
            try:
                return int(value)
            except ValueError:
                return default
        if isinstance(default, float):
            try:
                return float(value)
            except ValueError:
                return default
        return value


config_service = ConfigService()
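The precedence chain implemented above (environment variable > `system_settings.json` > code default) can be exercised in isolation. This is a minimal sketch under stated assumptions: `get_setting` is a hypothetical stand-in for `ConfigService.get`, and it returns the env value as a raw string (the real class runs it through `_cast`):

```python
import os

def get_setting(key: str, settings: dict, default=None):
    """Resolve a dotted key: env var first, then nested dict, then default."""
    env_val = os.environ.get(key.upper().replace(".", "_"))
    if env_val is not None:
        return env_val  # raw string; ConfigService._cast would coerce it
    val = settings
    for part in key.split("."):
        if not isinstance(val, dict):
            return default
        val = val.get(part)
    return val if val is not None else default

os.environ.pop("SERVER_PORT", None)
settings = {"server": {"port": 5000}}
print(get_setting("server.port", settings))             # 5000 (from file)
print(get_setting("server.host", settings, "0.0.0.0"))  # 0.0.0.0 (default)
os.environ["SERVER_PORT"] = "8080"
print(get_setting("server.port", settings))             # 8080 (env wins)
```

Note that the env override arrives as the string `"8080"`, not the int `8080`; that is exactly why the service pairs the lookup with `_cast` against the default's type.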
@@ -49,6 +49,7 @@ class ServerConfig:
     websocket_port: int = 8765
     debug: bool = False
     log_level: str = "INFO"
+    tenant_id: str = "default"  # tenant identifier for this instance

 @dataclass
 class FeishuConfig:
@@ -71,6 +72,19 @@ class AIAccuracyConfig:
     human_resolution_confidence: float = 0.90


+@dataclass
+class EmbeddingConfig:
+    """Embedding vector configuration"""
+    enabled: bool = True
+    api_key: Optional[str] = None          # not needed in local mode
+    base_url: Optional[str] = None         # not needed in local mode
+    model: str = "BAAI/bge-small-zh-v1.5"  # lightweight local Chinese model
+    dimension: int = 512                   # output dimension of bge-small-zh
+    batch_size: int = 32
+    similarity_threshold: float = 0.5      # semantic-search similarity threshold
+    cache_ttl: int = 86400                 # embedding cache TTL in seconds (1 day)
+
+
 @dataclass
 class RedisConfig:
     """Redis cache configuration"""
@@ -92,13 +106,13 @@ class UnifiedConfig:
     """

     def __init__(self):
-        logger.info("Initializing unified configuration from environment variables...")
         self.database = self._load_database_from_env()
         self.llm = self._load_llm_from_env()
         self.server = self._load_server_from_env()
         self.feishu = self._load_feishu_from_env()
         self.ai_accuracy = self._load_ai_accuracy_from_env()
         self.redis = self._load_redis_from_env()
+        self.embedding = self._load_embedding_from_env()
         self.validate_config()

     def _load_database_from_env(self) -> DatabaseConfig:
@@ -122,7 +136,6 @@ class UnifiedConfig:
             max_tokens=int(os.getenv("LLM_MAX_TOKENS", 2000)),
             timeout=int(os.getenv("LLM_TIMEOUT", 30))
         )
-        logger.info("LLM config loaded.")
         return config

     def _load_server_from_env(self) -> ServerConfig:
@@ -131,7 +144,8 @@ class UnifiedConfig:
             port=int(os.getenv("SERVER_PORT", 5000)),
             websocket_port=int(os.getenv("WEBSOCKET_PORT", 8765)),
             debug=os.getenv("DEBUG_MODE", "False").lower() in ('true', '1', 't'),
-            log_level=os.getenv("LOG_LEVEL", "INFO").upper()
+            log_level=os.getenv("LOG_LEVEL", "INFO").upper(),
+            tenant_id=os.getenv("TENANT_ID", "default"),
         )
         logger.info("Server config loaded.")
         return config
@@ -156,7 +170,6 @@ class UnifiedConfig:
             ai_suggestion_confidence=float(os.getenv("AI_SUGGESTION_CONFIDENCE", 0.95)),
             human_resolution_confidence=float(os.getenv("AI_HUMAN_RESOLUTION_CONFIDENCE", 0.90))
         )
-        logger.info("AI Accuracy config loaded.")
         return config

     def _load_redis_from_env(self) -> RedisConfig:
@@ -172,6 +185,18 @@ class UnifiedConfig:
         logger.info("Redis config loaded.")
         return config

+    def _load_embedding_from_env(self) -> EmbeddingConfig:
+        config = EmbeddingConfig(
+            enabled=os.getenv("EMBEDDING_ENABLED", "True").lower() in ('true', '1', 't'),
+            model=os.getenv("EMBEDDING_MODEL", "BAAI/bge-small-zh-v1.5"),
+            dimension=int(os.getenv("EMBEDDING_DIMENSION", 512)),
+            batch_size=int(os.getenv("EMBEDDING_BATCH_SIZE", 32)),
+            similarity_threshold=float(os.getenv("EMBEDDING_SIMILARITY_THRESHOLD", 0.5)),
+            cache_ttl=int(os.getenv("EMBEDDING_CACHE_TTL", 86400)),
+        )
+        logger.info("Embedding config loaded.")
+        return config
+
     def validate_config(self):
         """Validate critical configuration at startup"""
         if not self.database.url:
@@ -180,7 +205,6 @@ class UnifiedConfig:
             logger.warning("LLM API key is not configured. AI features may fail.")
         if self.feishu.app_id and not self.feishu.app_secret:
             logger.warning("FEISHU_APP_ID is set, but FEISHU_APP_SECRET is missing.")
-        logger.info("Configuration validation passed (warnings may exist).")

     # --- Public Getters ---

@@ -193,6 +217,7 @@ class UnifiedConfig:
             'feishu': asdict(self.feishu),
             'ai_accuracy': asdict(self.ai_accuracy),
             'redis': asdict(self.redis),
+            'embedding': asdict(self.embedding),
         }

 # --- Global singleton ---
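All the `_load_*_from_env` loaders follow the same parse-and-cast convention: booleans accept `true`/`1`/`t`, numbers are cast from strings, and the dataclass default is the fallback. A self-contained sketch of that pattern, using an illustrative `EmbeddingSettings` stand-in for the real `EmbeddingConfig`:

```python
import os
from dataclasses import dataclass

@dataclass
class EmbeddingSettings:
    # Illustrative stand-in for the EmbeddingConfig dataclass in the diff.
    enabled: bool = True
    dimension: int = 512
    similarity_threshold: float = 0.5

def load_embedding_settings() -> EmbeddingSettings:
    # Same casting convention as the loaders above.
    return EmbeddingSettings(
        enabled=os.getenv("EMBEDDING_ENABLED", "True").lower() in ("true", "1", "t"),
        dimension=int(os.getenv("EMBEDDING_DIMENSION", 512)),
        similarity_threshold=float(os.getenv("EMBEDDING_SIMILARITY_THRESHOLD", 0.5)),
    )

# Clear any ambient values so the run is deterministic, then override one knob.
for var in ("EMBEDDING_ENABLED", "EMBEDDING_DIMENSION", "EMBEDDING_SIMILARITY_THRESHOLD"):
    os.environ.pop(var, None)
os.environ["EMBEDDING_DIMENSION"] = "768"
cfg = load_embedding_settings()
print(cfg.enabled, cfg.dimension)  # True 768
```

One quirk worth noting: `os.getenv("…", 512)` returns the int default when the variable is unset, so the `int(...)` wrapper must accept both strings and ints, which it does.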
Binary file not shown.
@@ -18,16 +18,19 @@ class AuthManager:
     """Authentication manager"""

     def __init__(self):
-        self.secret_key = "your-secret-key-change-this-in-production"  # should come from config
+        import os
+        self.secret_key = os.environ.get('SECRET_KEY', 'change-this-in-production')
         self.token_expiry = timedelta(hours=24)

     def hash_password(self, password: str) -> str:
-        """Hash a password"""
-        return hashlib.sha256(password.encode()).hexdigest()
+        import bcrypt
+        return bcrypt.hashpw(password.encode(), bcrypt.gensalt()).decode()

     def verify_password(self, password: str, password_hash: str) -> bool:
-        """Verify a password"""
-        return self.hash_password(password) == password_hash
+        import bcrypt
+        if password_hash and password_hash.startswith('$2b$'):
+            return bcrypt.checkpw(password.encode(), password_hash.encode())
+        return hashlib.sha256(password.encode()).hexdigest() == password_hash

     def generate_token(self, user_data: dict) -> str:
         """Generate a JWT token"""
@@ -67,6 +70,7 @@ class AuthManager:
         name_val = user.name
         role_val = user.role
         is_active_val = user.is_active
+        tenant_id_val = user.tenant_id
         created_at_val = user.created_at
         last_login_val = datetime.now()

@@ -86,6 +90,7 @@ class AuthManager:
             'name': name_val,
             'role': role_val,
             'is_active': is_active_val,
+            'tenant_id': tenant_id_val,
             'created_at': created_at_val,
             'last_login': last_login_val
         }
@@ -133,10 +138,12 @@ class AuthManager:
         return None

     def create_default_admin(self):
-        """Create the default admin user"""
-        admin = self.create_user('admin', 'admin123', '系统管理员', 'admin@example.com', 'admin')
+        """Create the default admin user (password read from the ADMIN_PASSWORD env var, defaults to admin123)"""
+        import os
+        admin_pwd = os.environ.get('ADMIN_PASSWORD', 'admin123')
+        admin = self.create_user('admin', admin_pwd, '系统管理员', 'admin@example.com', 'admin')
         if admin:
-            print("默认管理员用户已创建: admin/admin123")
+            print(f"默认管理员用户已创建: admin/{admin_pwd}")
         return admin
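The migration-friendly check above (bcrypt for new hashes, SHA-256 for pre-existing ones) can be sketched standalone. The bcrypt branch assumes the third-party `bcrypt` package and is only imported when a `$2b$`-prefixed hash is actually seen, so legacy accounts verify with the standard library alone:

```python
import hashlib

def verify_password(password: str, stored_hash: str) -> bool:
    """Dispatch on hash format: bcrypt hashes start with '$2b$',
    anything else is treated as a legacy SHA-256 hex digest."""
    if stored_hash and stored_hash.startswith("$2b$"):
        import bcrypt  # third-party; needed only for migrated accounts
        return bcrypt.checkpw(password.encode(), stored_hash.encode())
    return hashlib.sha256(password.encode()).hexdigest() == stored_hash

# An account created under the old sha256 scheme still verifies:
legacy_hash = hashlib.sha256(b"admin123").hexdigest()
print(verify_password("admin123", legacy_hash))  # True
print(verify_password("wrong", legacy_hash))     # False
```

The idea behind the dual path is gradual migration: since `hash_password` now only emits bcrypt hashes, any password change moves an account onto the stronger branch without invalidating old credentials.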
@@ -168,6 +168,29 @@ class CacheManager:
             )
             del self.memory_cache[oldest_key]

+    def check_and_set_message_processed(self, message_id: str, ttl: int = 300) -> bool:
+        """
+        Check whether a message has been processed; if not, mark it as processed.
+
+        Args:
+            message_id: message ID
+            ttl: expiry in seconds, 5 minutes by default
+
+        Returns:
+            bool: True if already processed (duplicate), False if new
+        """
+        key = f"msg_processed:{message_id}"
+
+        # Hold the lock so check-and-set is atomic for the in-memory cache
+        with self.cache_lock:
+            # 1. Already seen?
+            if self.get(key):
+                return True
+
+            # 2. Not seen yet: mark as processed
+            self.set(key, 1, ttl)
+            return False
+
     def get_stats(self) -> Dict[str, Any]:
         """Cache statistics"""
         with self.cache_lock:
@@ -222,7 +245,6 @@ class DatabaseCache:
             return result
         return wrapper

-
 # Global cache manager instance
 cache_manager = CacheManager()
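The check-and-set above is what makes message deduplication safe under concurrency: checking and marking happen as one atomic step under the lock. A minimal in-memory sketch of the same idea, with expiry handled by timestamps rather than the real cache's TTL machinery:

```python
import threading
import time

class MessageDeduper:
    """Minimal sketch of check_and_set_message_processed: the lock makes
    the existence check and the mark a single atomic step."""
    def __init__(self):
        self._seen = {}          # message_id -> expiry timestamp
        self._lock = threading.Lock()

    def check_and_set(self, message_id: str, ttl: float = 300.0) -> bool:
        now = time.monotonic()
        with self._lock:
            expiry = self._seen.get(message_id)
            if expiry is not None and expiry > now:
                return True      # duplicate within the TTL window
            self._seen[message_id] = now + ttl
            return False         # first time this message is seen

deduper = MessageDeduper()
print(deduper.check_and_set("msg-1"))  # False: new message, process it
print(deduper.check_and_set("msg-1"))  # True: duplicate, skip it
```

Without the lock, two threads delivering the same Feishu event could both observe "not seen" and both process the message; the window is small but real, which is why the real implementation takes `cache_lock` around both steps.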
@@ -68,10 +68,74 @@ class DatabaseManager:
             Base.metadata.create_all(bind=self.engine)
             logger.info("数据库初始化成功")
+
+            # Run schema migrations (handles column changes)
+            self._run_migrations()
+
+            # Make sure the default tenant exists
+            self._ensure_default_tenant()
+
         except Exception as e:
             logger.error(f"数据库初始化失败: {e}")
             raise

+    def _run_migrations(self):
+        """Run lightweight schema migrations (SQLite compatible)"""
+        try:
+            session = self.SessionLocal()
+            try:
+                # Check for and add missing columns (SQLite supports ADD COLUMN)
+                from sqlalchemy import inspect, text
+                inspector = inspect(self.engine)
+                migrations = [
+                    # (table, column, SQL type and default)
+                    ('conversations', 'tenant_id', "VARCHAR(50) DEFAULT 'default'"),
+                    ('chat_sessions', 'tenant_id', "VARCHAR(50) DEFAULT 'default'"),
+                    ('work_orders', 'tenant_id', "VARCHAR(50) DEFAULT 'default'"),
+                    ('knowledge_entries', 'tenant_id', "VARCHAR(50) DEFAULT 'default'"),
+                    ('users', 'tenant_id', "VARCHAR(50) DEFAULT 'default'"),
+                    ('alerts', 'tenant_id', "VARCHAR(50) DEFAULT 'default'"),
+                    ('analytics', 'tenant_id', "VARCHAR(50) DEFAULT 'default'"),
+                    ('work_order_suggestions', 'tenant_id', "VARCHAR(50) DEFAULT 'default'"),
+                    ('work_order_process_history', 'tenant_id', "VARCHAR(50) DEFAULT 'default'"),
+                    ('vehicle_data', 'tenant_id', "VARCHAR(50) DEFAULT 'default'"),
+                ]
+                for table_name, col_name, col_type in migrations:
+                    if table_name in inspector.get_table_names():
+                        existing_cols = [c['name'] for c in inspector.get_columns(table_name)]
+                        if col_name not in existing_cols:
+                            session.execute(text(f"ALTER TABLE {table_name} ADD COLUMN {col_name} {col_type}"))
+                            logger.info(f"迁移: {table_name} 添加列 {col_name}")
+                session.commit()
+            except Exception as e:
+                session.rollback()
+                logger.warning(f"Schema 迁移失败(不影响启动): {e}")
+            finally:
+                session.close()
+        except Exception as e:
+            logger.warning(f"Schema 迁移检查失败: {e}")
+
+    def _ensure_default_tenant(self):
+        """Ensure the default tenant record exists"""
+        try:
+            from .models import Tenant, DEFAULT_TENANT
+            session = self.SessionLocal()
+            try:
+                existing = session.query(Tenant).filter(Tenant.tenant_id == DEFAULT_TENANT).first()
+                if not existing:
+                    session.add(Tenant(
+                        tenant_id=DEFAULT_TENANT,
+                        name="默认租户",
+                        description="系统默认租户"
+                    ))
+                    session.commit()
+                    logger.info("默认租户已创建")
+            except Exception:
+                session.rollback()
+            finally:
+                session.close()
+        except Exception as e:
+            logger.warning(f"确保默认租户失败(不影响启动): {e}")
+
     @contextmanager
     def get_session(self) -> Generator[Session, None, None]:
         """Context manager that yields a database session"""
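The same add-missing-column check can be reproduced with nothing but the `sqlite3` stdlib module, using `PRAGMA table_info` where the diff uses SQLAlchemy's inspector. This is a sketch of the technique, not the project's actual migration code:

```python
import sqlite3

def add_column_if_missing(conn, table, column, decl):
    # PRAGMA table_info returns one row per existing column (name is field 1)
    existing = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    if column not in existing:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {decl}")
        return True
    return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conversations (id INTEGER PRIMARY KEY)")

# First run adds the column, second run is a no-op: the migration is idempotent,
# which is what lets it execute unconditionally on every startup.
print(add_column_if_missing(conn, "conversations", "tenant_id",
                            "VARCHAR(50) DEFAULT 'default'"))  # True
print(add_column_if_missing(conn, "conversations", "tenant_id",
                            "VARCHAR(50) DEFAULT 'default'"))  # False

# Rows inserted afterwards pick up the declared default:
conn.execute("INSERT INTO conversations (id) VALUES (1)")
print(conn.execute("SELECT tenant_id FROM conversations").fetchone())  # ('default',)
```

The `DEFAULT 'default'` clause matters for multi-tenant backfill: every pre-existing row lands in the default tenant instead of getting NULL.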
src/core/embedding_client.py (new file, 152 lines)
@@ -0,0 +1,152 @@
# -*- coding: utf-8 -*-
"""
Embedding vector client (local-model variant).
Runs a lightweight Chinese embedding model locally via sentence-transformers:
zero API calls, zero cost, low latency.
"""

import logging
import hashlib
import threading
from typing import List, Optional

from src.config.unified_config import get_config
from src.core.cache_manager import cache_manager

logger = logging.getLogger(__name__)


class EmbeddingClient:
    """Local embedding client"""

    def __init__(self):
        config = get_config()
        self.enabled = config.embedding.enabled
        self.model_name = config.embedding.model
        self.dimension = config.embedding.dimension
        self.cache_ttl = config.embedding.cache_ttl

        self._model = None
        self._lock = threading.Lock()

        if self.enabled:
            logger.info(f"Embedding 客户端初始化: model={self.model_name} (本地模式)")
        else:
            logger.debug("Embedding 功能已禁用,将使用关键词匹配降级")

    def _get_model(self):
        """Lazily load the model (downloaded and loaded on first call)."""
        if self._model is not None:
            return self._model

        with self._lock:
            if self._model is not None:
                return self._model

            try:
                import os
                # Use the HuggingFace mirror to work around download issues in China
                os.environ.setdefault("HF_ENDPOINT", "https://hf-mirror.com")

                from sentence_transformers import SentenceTransformer
                logger.info(f"正在加载 embedding 模型: {self.model_name} ...")
                self._model = SentenceTransformer(self.model_name)
                logger.info(f"Embedding 模型加载完成: {self.model_name}")
                return self._model
            except ImportError:
                logger.error(
                    "sentence-transformers 未安装,请运行: pip install sentence-transformers"
                )
                self.enabled = False
                return None
            except Exception as e:
                logger.error(f"加载 embedding 模型失败: {e}")
                self.enabled = False
                return None

    # ------------------------------------------------------------------
    # Public interface
    # ------------------------------------------------------------------

    def embed_text(self, text: str) -> Optional[List[float]]:
        """Embed a single text, consulting the cache first."""
        if not self.enabled or not text.strip():
            return None

        cache_key = self._cache_key(text)
        cached = cache_manager.get(cache_key)
        if cached is not None:
            return cached

        model = self._get_model()
        if model is None:
            return None

        try:
            vec = model.encode(text, normalize_embeddings=True).tolist()
            cache_manager.set(cache_key, vec, self.cache_ttl)
            return vec
        except Exception as e:
            logger.error(f"Embedding 生成失败: {e}")
            return None

    def embed_batch(self, texts: List[str]) -> List[Optional[List[float]]]:
        """Embed a batch of texts."""
        if not self.enabled:
            return [None] * len(texts)

        results: List[Optional[List[float]]] = [None] * len(texts)
        uncached_indices = []
        uncached_texts = []

        # 1. Consult the cache first
        for i, t in enumerate(texts):
            if not t.strip():
                continue
            cached = cache_manager.get(self._cache_key(t))
            if cached is not None:
                results[i] = cached
            else:
                uncached_indices.append(i)
                uncached_texts.append(t)

        if not uncached_texts:
            return results

        # 2. Batched inference for cache misses
        model = self._get_model()
        if model is None:
            return results

        try:
            vectors = model.encode(
                uncached_texts, normalize_embeddings=True, batch_size=32
            ).tolist()

            for j, vec in enumerate(vectors):
                idx = uncached_indices[j]
                results[idx] = vec
                cache_manager.set(self._cache_key(uncached_texts[j]), vec, self.cache_ttl)
        except Exception as e:
            logger.error(f"批量 embedding 生成失败: {e}")

        return results

    def test_connection(self) -> bool:
        """Check that the model is usable."""
        try:
            vec = self.embed_text("测试连接")
            return vec is not None and len(vec) > 0
        except Exception as e:
            logger.error(f"Embedding 模型测试失败: {e}")
            return False

    # ------------------------------------------------------------------
    # Internals
    # ------------------------------------------------------------------

    @staticmethod
    def _cache_key(text: str) -> str:
        """Cache key derived from the text hash."""
        h = hashlib.md5(text.encode("utf-8")).hexdigest()
        return f"emb:{h}"
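Because the client encodes with `normalize_embeddings=True`, semantic search reduces to a dot product compared against `similarity_threshold`: for unit-length vectors, the dot product equals the cosine similarity. A dependency-free sketch of that property:

```python
import math

def normalize(vec):
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

def cosine(a, b):
    # For unit-length vectors the dot product IS the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

query = normalize([1.0, 2.0, 3.0])
doc_same = normalize([2.0, 4.0, 6.0])    # same direction as the query
doc_other = normalize([3.0, -1.0, 0.0])  # nearly orthogonal to it

print(round(cosine(query, doc_same), 4))      # 1.0
threshold = 0.5
print(cosine(query, doc_same) >= threshold)   # True
print(cosine(query, doc_other) >= threshold)  # False
```

This is why the normalization flag matters: with it, ranking knowledge-base entries needs only one multiply-add per dimension, and `similarity_threshold` has the fixed, interpretable range [-1, 1].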
@@ -1,35 +1,97 @@
|
|||||||
|
# -*- coding: utf-8 -*-
|
||||||
|
"""
|
||||||
|
统一 LLM 客户端
|
||||||
|
兼容所有 OpenAI 格式 API(千问、Gemini、DeepSeek、本地 Ollama 等)
|
||||||
|
通过 .env 中 LLM_PROVIDER / LLM_BASE_URL / LLM_MODEL 切换模型
|
||||||
|
"""
|
||||||
|
|
||||||
import requests
|
import requests
|
||||||
import json
|
import json
|
||||||
import logging
|
import logging
|
||||||
from typing import Dict, List, Optional, Any
|
from typing import Dict, List, Optional, Any, Generator
|
||||||
from datetime import datetime
|
from datetime import datetime
|
||||||
|
|
||||||
from src.config.unified_config import get_config
|
from src.config.unified_config import get_config
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
class QwenClient:
|
|
||||||
"""阿里云千问API客户端"""
|
|
||||||
|
|
||||||
def __init__(self):
|
class LLMClient:
|
||||||
|
"""
|
||||||
|
统一大模型客户端
|
||||||
|
所有 OpenAI 兼容 API 都走这一个类,不再区分 provider。
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(self, base_url: str = None, api_key: str = None,
|
||||||
|
model: str = None, timeout: int = None):
|
||||||
config = get_config()
|
config = get_config()
|
||||||
self.base_url = config.llm.base_url or "https://dashscope.aliyuncs.com/compatible-mode/v1"
|
self.base_url = (base_url or config.llm.base_url or
|
||||||
self.api_key = config.llm.api_key
|
"https://dashscope.aliyuncs.com/compatible-mode/v1")
|
||||||
self.model_name = config.llm.model
|
self.api_key = api_key or config.llm.api_key
|
||||||
self.timeout = config.llm.timeout
|
self.model_name = model or config.llm.model
|
||||||
|
self.timeout = timeout or config.llm.timeout
|
||||||
self.headers = {
|
self.headers = {
|
||||||
"Authorization": f"Bearer {self.api_key}",
|
"Authorization": f"Bearer {self.api_key}",
|
||||||
"Content-Type": "application/json"
|
"Content-Type": "application/json",
|
||||||
}
|
}
|
||||||
|
|
||||||
|
# ── 普通请求 ──────────────────────────────────────────
|
||||||
|
|
||||||
def chat_completion(
|
def chat_completion(
|
||||||
self,
|
self,
|
||||||
messages: List[Dict[str, str]],
|
messages: List[Dict[str, str]],
|
||||||
temperature: float = 0.7,
|
temperature: float = 0.7,
|
||||||
max_tokens: int = 1000,
|
max_tokens: int = 1000,
|
||||||
stream: bool = False
|
max_retries: int = 2,
|
||||||
|
**kwargs,
|
||||||
) -> Dict[str, Any]:
|
) -> Dict[str, Any]:
|
||||||
"""发送聊天请求"""
|
"""标准聊天补全(非流式),支持自动重试"""
|
||||||
|
url = f"{self.base_url}/chat/completions"
|
||||||
|
payload = {
|
||||||
|
"model": self.model_name,
|
||||||
|
"messages": messages,
|
||||||
|
"temperature": temperature,
|
||||||
|
"max_tokens": max_tokens,
|
||||||
|
"stream": False,
|
||||||
|
}
|
||||||
|
|
||||||
|
last_error = None
|
||||||
|
for attempt in range(max_retries + 1):
|
||||||
|
try:
|
||||||
|
response = requests.post(
|
||||||
|
url, headers=self.headers, json=payload, timeout=self.timeout
|
||||||
|
)
|
||||||
|
if response.status_code == 200:
|
||||||
|
return response.json()
|
||||||
|
elif response.status_code >= 500:
|
||||||
|
last_error = f"API 服务端错误: {response.status_code}"
|
||||||
|
logger.warning(f"LLM API 第{attempt+1}次请求失败({response.status_code}),{'重试中...' if attempt < max_retries else '放弃'}")
|
||||||
|
continue
|
||||||
|
else:
|
||||||
|
logger.error(f"LLM API 失败: {response.status_code} - {response.text}")
|
||||||
|
return {"error": f"API请求失败: {response.status_code}"}
|
||||||
|
|
||||||
|
except requests.exceptions.Timeout:
|
||||||
|
last_error = "请求超时"
|
||||||
|
logger.warning(f"LLM API 第{attempt+1}次超时,{'重试中...' if attempt < max_retries else '放弃'}")
|
||||||
|
except requests.exceptions.RequestException as e:
|
||||||
|
last_error = str(e)
|
||||||
|
logger.warning(f"LLM API 第{attempt+1}次异常: {e}")
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"LLM 未知错误: {e}")
|
||||||
|
return {"error": f"未知错误: {str(e)}"}
|
||||||
|
|
||||||
|
return {"error": last_error or "请求失败"}
|
||||||
|
|
||||||
|
# ── 流式请求 ──────────────────────────────────────────
|
||||||
|
|
||||||
|
def chat_completion_stream(
|
||||||
|
self,
|
||||||
|
messages: List[Dict[str, str]],
|
||||||
|
temperature: float = 0.7,
|
||||||
|
max_tokens: int = 1000,
|
||||||
|
) -> Generator[str, None, None]:
|
||||||
|
"""流式聊天补全,逐 token yield 文本片段"""
|
||||||
try:
|
try:
|
||||||
url = f"{self.base_url}/chat/completions"
|
url = f"{self.base_url}/chat/completions"
|
||||||
payload = {
|
payload = {
|
||||||
@@ -37,114 +99,107 @@ class QwenClient:
|
|||||||
"messages": messages,
|
"messages": messages,
|
||||||
"temperature": temperature,
|
"temperature": temperature,
|
||||||
"max_tokens": max_tokens,
|
"max_tokens": max_tokens,
|
||||||
"stream": stream
|
"stream": True,
|
||||||
}
|
}
|
||||||
|
|
||||||
response = requests.post(
|
response = requests.post(
|
||||||
url,
|
url, headers=self.headers, json=payload,
|
||||||
headers=self.headers,
|
timeout=self.timeout, stream=True,
|
||||||
json=payload,
|
|
||||||
timeout=self.timeout
|
|
||||||
)
|
)
|
||||||
|
|
||||||
if response.status_code == 200:
|
if response.status_code != 200:
|
||||||
result = response.json()
|
logger.error(f"流式 API 失败: {response.status_code}")
|
||||||
logger.info("API请求成功")
|
return
|
||||||
return result
|
|
||||||
else:
|
for line in response.iter_lines(decode_unicode=True):
|
||||||
logger.error(f"API请求失败: {response.status_code} - {response.text}")
|
if not line or not line.startswith("data: "):
|
||||||
return {"error": f"API请求失败: {response.status_code}"}
|
continue
|
||||||
|
data_str = line[6:]
|
||||||
|
if data_str.strip() == "[DONE]":
|
||||||
|
break
|
||||||
|
try:
|
||||||
|
chunk = json.loads(data_str)
|
||||||
|
delta = chunk.get("choices", [{}])[0].get("delta", {})
|
||||||
|
content = delta.get("content", "")
|
||||||
|
if content:
|
||||||
|
yield content
|
||||||
|
except (json.JSONDecodeError, IndexError, KeyError):
|
||||||
|
continue
|
||||||
|
|
||||||
except requests.exceptions.Timeout:
|
except requests.exceptions.Timeout:
|
||||||
logger.error("API请求超时")
|
logger.error("流式 API 超时")
|
||||||
return {"error": "请求超时"}
|
|
||||||
except requests.exceptions.RequestException as e:
|
|
||||||
logger.error(f"API请求异常: {e}")
|
|
||||||
return {"error": f"请求异常: {str(e)}"}
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"未知错误: {e}")
|
logger.error(f"流式 API 异常: {e}")
|
||||||
return {"error": f"未知错误: {str(e)}"}
|
|
||||||
|
# ── 便捷方法 ──────────────────────────────────────────
|
||||||
|
|
||||||
def generate_response(
|
def generate_response(
|
||||||
self,
|
self,
|
||||||
user_message: str,
|
user_message: str,
|
||||||
context: Optional[str] = None,
|
context: Optional[str] = None,
|
||||||
knowledge_base: Optional[List[str]] = None
|
knowledge_base: Optional[List[str]] = None,
|
||||||
) -> Dict[str, Any]:
|
) -> Dict[str, Any]:
|
||||||
"""生成回复"""
|
"""快捷生成回复"""
|
||||||
messages = []
|
system_prompt = "你是一个专业的智能客服助手,请根据用户问题提供准确、有帮助的回复。"
|
||||||
|
|
||||||
# 系统提示词
|
|
||||||
system_prompt = "你是一个专业的客服助手,请根据用户问题提供准确、 helpful的回复。"
|
|
||||||
if context:
|
if context:
|
||||||
system_prompt += f"\n\n上下文信息: {context}"
|
system_prompt += f"\n\n上下文信息: {context}"
|
||||||
if knowledge_base:
|
if knowledge_base:
|
||||||
system_prompt += f"\n\n相关知识库: {' '.join(knowledge_base)}"
|
system_prompt += f"\n\n相关知识库: {' '.join(knowledge_base)}"
|
||||||
|
|
||||||
messages.append({"role": "system", "content": system_prompt})
|
messages = [
|
||||||
messages.append({"role": "user", "content": user_message})
|
{"role": "system", "content": system_prompt},
|
||||||
|
{"role": "user", "content": user_message},
|
||||||
|
]
|
||||||
|
|
||||||
result = self.chat_completion(messages)
|
result = self.chat_completion(messages)
|
||||||
|
|
||||||
if "error" in result:
|
if "error" in result:
|
||||||
return result
|
return result
|
||||||
|
|
||||||
try:
|
try:
|
||||||
response_content = result["choices"][0]["message"]["content"]
|
|
||||||
return {
|
return {
|
||||||
"response": response_content,
|
"response": result["choices"][0]["message"]["content"],
|
||||||
"usage": result.get("usage", {}),
|
"usage": result.get("usage", {}),
|
||||||
"model": result.get("model", ""),
|
"model": result.get("model", ""),
|
||||||
"timestamp": datetime.now().isoformat()
|
"timestamp": datetime.now().isoformat(),
|
||||||
}
|
}
|
||||||
except (KeyError, IndexError) as e:
|
except (KeyError, IndexError) as e:
|
||||||
logger.error(f"解析API响应失败: {e}")
|
logger.error(f"解析响应失败: {e}")
|
||||||
return {"error": f"解析响应失败: {str(e)}"}
|
return {"error": f"解析响应失败: {str(e)}"}
|
||||||
|
|
||||||
def extract_entities(self, text: str) -> Dict[str, Any]:
|
```diff
     def extract_entities(self, text: str) -> Dict[str, Any]:
         """Extract entity information from text"""
-        prompt = f"""
-请从以下文本中提取关键信息,包括:
-1. 问题类型/类别
-2. 优先级(高/中/低)
-3. 关键词
-4. 情感倾向(正面/负面/中性)
-
-文本: {text}
-
-请以JSON格式返回结果。
-"""
+        import re
+        prompt = (
+            f"请从以下文本中提取关键信息,包括:\n"
+            f"1. 问题类型/类别\n2. 优先级(高/中/低)\n"
+            f"3. 关键词\n4. 情感倾向(正面/负面/中性)\n\n"
+            f"文本: {text}\n\n请以JSON格式返回结果。"
+        )

         messages = [
             {"role": "system", "content": "你是一个信息提取专家,请准确提取文本中的关键信息。"},
-            {"role": "user", "content": prompt}
+            {"role": "user", "content": prompt},
         ]

         result = self.chat_completion(messages, temperature=0.3)

         if "error" in result:
             return result

         try:
-            response_content = result["choices"][0]["message"]["content"]
-            # try to parse JSON
-            import re
-            json_match = re.search(r'\{.*\}', response_content, re.DOTALL)
-            if json_match:
-                return json.loads(json_match.group())
-            else:
-                return {"raw_response": response_content}
+            content = result["choices"][0]["message"]["content"]
+            json_match = re.search(r'\{.*\}', content, re.DOTALL)
+            return json.loads(json_match.group()) if json_match else {"raw_response": content}
         except Exception as e:
-            logger.error(f"解析实体提取结果失败: {e}")
             return {"error": f"解析失败: {str(e)}"}

     def test_connection(self) -> bool:
-        """Test the API connection"""
+        """Test the connection"""
         try:
-            result = self.chat_completion([
-                {"role": "user", "content": "你好"}
-            ], max_tokens=10)
+            result = self.chat_completion(
+                [{"role": "user", "content": "你好"}], max_tokens=10
+            )
             return "error" not in result
-        except Exception as e:
-            logger.error(f"API连接测试失败: {e}")
+        except Exception:
             return False
+
+
+# ── Backward-compatible alias ──────────────────────────────
+# Legacy code doing `from src.core.llm_client import QwenClient` still works
+QwenClient = LLMClient
```
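The rewritten `extract_entities` pulls the first `{...}` block out of the model's reply with a greedy `re.search` before calling `json.loads`, falling back to the raw text when no object is found. A minimal standalone sketch of just that parsing step (the sample reply string is made up):

```python
import json
import re

def parse_llm_json(content: str) -> dict:
    """Extract the first {...} span from an LLM reply and parse it as JSON.

    Falls back to {"raw_response": ...} when no JSON object is found,
    mirroring the behaviour in the diff above.
    """
    match = re.search(r'\{.*\}', content, re.DOTALL)
    if match:
        return json.loads(match.group())
    return {"raw_response": content}

reply = 'Sure, here is the result:\n{"category": "billing", "priority": "high"}'
print(parse_llm_json(reply))  # {'category': 'billing', 'priority': 'high'}
```

Note the greedy `\{.*\}` spans from the first `{` to the last `}`, so a reply containing several separate JSON objects would produce a parse error and land in the `except` branch.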
```diff
@@ -1,4 +1,4 @@
-from sqlalchemy import Column, Integer, String, Text, DateTime, Float, Boolean, ForeignKey
+from sqlalchemy import Column, Integer, String, Text, DateTime, Float, Boolean, ForeignKey, Index
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import relationship
 from datetime import datetime
@@ -6,17 +6,50 @@ import hashlib

 Base = declarative_base()

+# Default tenant ID, used in single-tenant deployments
+DEFAULT_TENANT = "default"
+
+
+class Tenant(Base):
+    """Tenant model — manages multiple tenants (markets)"""
+    __tablename__ = "tenants"
+
+    id = Column(Integer, primary_key=True)
+    tenant_id = Column(String(50), unique=True, nullable=False)  # unique identifier, e.g. market_a
+    name = Column(String(100), nullable=False)  # display name, e.g. "市场A"
+    description = Column(Text, nullable=True)
+    is_active = Column(Boolean, default=True)
+    created_at = Column(DateTime, default=datetime.now)
+    updated_at = Column(DateTime, default=datetime.now, onupdate=datetime.now)
+
+    # Per-tenant config: Feishu app credentials, LLM parameters, etc. (JSON)
+    config = Column(Text, nullable=True)
+
+    def to_dict(self):
+        import json
+        return {
+            'id': self.id,
+            'tenant_id': self.tenant_id,
+            'name': self.name,
+            'description': self.description,
+            'is_active': self.is_active,
+            'config': json.loads(self.config) if self.config else {},
+            'created_at': self.created_at.isoformat() if self.created_at else None,
+            'updated_at': self.updated_at.isoformat() if self.updated_at else None,
+        }
+
+
 class WorkOrder(Base):
     """Work order model"""
     __tablename__ = "work_orders"

     id = Column(Integer, primary_key=True)
+    tenant_id = Column(String(50), nullable=False, default=DEFAULT_TENANT, index=True)
     order_id = Column(String(50), unique=True, nullable=False)
     title = Column(String(200), nullable=False)
     description = Column(Text, nullable=False)
     category = Column(String(100), nullable=False)
     priority = Column(String(20), nullable=False)
-    status = Column(String(20), nullable=False)
+    status = Column(String(20), nullable=False, index=True)
     created_at = Column(DateTime, default=datetime.now)
     updated_at = Column(DateTime, default=datetime.now, onupdate=datetime.now)
     resolution = Column(Text)
@@ -58,20 +91,60 @@ class WorkOrder(Base):
     # Related processing history records
     process_history = relationship("WorkOrderProcessHistory", back_populates="work_order", order_by="WorkOrderProcessHistory.process_time")

+
+class ChatSession(Base):
+    """Chat session model — groups multi-turn conversations into one session"""
+    __tablename__ = "chat_sessions"
+
+    id = Column(Integer, primary_key=True)
+    tenant_id = Column(String(50), nullable=False, default=DEFAULT_TENANT, index=True)
+    session_id = Column(String(100), unique=True, nullable=False)
+    user_id = Column(String(100), nullable=True, index=True)
+    work_order_id = Column(Integer, ForeignKey("work_orders.id"), nullable=True)
+    title = Column(String(200), nullable=True)  # session title (summary of the first message)
+    status = Column(String(20), default="active")  # active, ended
+    message_count = Column(Integer, default=0)  # number of message turns
+    source = Column(String(100), nullable=True)  # origin: websocket, api, feishu_bot(group), etc.
+    ip_address = Column(String(200), nullable=True)  # IP address or source identifier
+    created_at = Column(DateTime, default=datetime.now)
+    updated_at = Column(DateTime, default=datetime.now, onupdate=datetime.now)
+    ended_at = Column(DateTime, nullable=True)
+
+    # Related messages
+    messages = relationship("Conversation", back_populates="chat_session", order_by="Conversation.timestamp")
+
+    def to_dict(self):
+        return {
+            'id': self.id,
+            'session_id': self.session_id,
+            'user_id': self.user_id,
+            'work_order_id': self.work_order_id,
+            'title': self.title,
+            'status': self.status,
+            'message_count': self.message_count,
+            'source': self.source,
+            'ip_address': self.ip_address,
+            'created_at': self.created_at.isoformat() if self.created_at else None,
+            'updated_at': self.updated_at.isoformat() if self.updated_at else None,
+            'ended_at': self.ended_at.isoformat() if self.ended_at else None,
+        }
+
+
 class Conversation(Base):
     """Conversation record model"""
     __tablename__ = "conversations"

     id = Column(Integer, primary_key=True)
-    work_order_id = Column(Integer, ForeignKey("work_orders.id"))
+    tenant_id = Column(String(50), nullable=False, default=DEFAULT_TENANT, index=True)
+    session_id = Column(String(100), ForeignKey("chat_sessions.session_id"), nullable=True, index=True)
+    work_order_id = Column(Integer, ForeignKey("work_orders.id"), index=True)
     user_message = Column(Text, nullable=False)
     assistant_response = Column(Text, nullable=False)
     timestamp = Column(DateTime, default=datetime.now)
     confidence_score = Column(Float)
     knowledge_used = Column(Text)  # knowledge-base entries used
     response_time = Column(Float)  # response time (seconds)
-    ip_address = Column(String(45), nullable=True)  # IP address
-    invocation_method = Column(String(50), nullable=True)  # invocation method (websocket, api, etc.)
+    ip_address = Column(String(200), nullable=True)  # IP address or source identifier (e.g. feishu:uid:name)
+    invocation_method = Column(String(100), nullable=True)  # invocation method (websocket, api, feishu_bot(group), etc.)

     # System optimization fields
     processing_time = Column(Float)  # processing time
@@ -79,21 +152,23 @@ class Conversation(Base):
     cpu_usage = Column(Float)  # CPU usage

     work_order = relationship("WorkOrder", back_populates="conversations")
+    chat_session = relationship("ChatSession", back_populates="messages")

 class KnowledgeEntry(Base):
     """Knowledge-base entry model"""
     __tablename__ = "knowledge_entries"

     id = Column(Integer, primary_key=True)
+    tenant_id = Column(String(50), nullable=False, default=DEFAULT_TENANT, index=True)
     question = Column(Text, nullable=False)
     answer = Column(Text, nullable=False)
-    category = Column(String(100), nullable=False)
+    category = Column(String(100), nullable=False, index=True)
     confidence_score = Column(Float, default=0.0)
     usage_count = Column(Integer, default=0)
     created_at = Column(DateTime, default=datetime.now)
     updated_at = Column(DateTime, default=datetime.now, onupdate=datetime.now)
-    is_active = Column(Boolean, default=True)
-    is_verified = Column(Boolean, default=False)  # whether verified
+    is_active = Column(Boolean, default=True, index=True)
+    is_verified = Column(Boolean, default=False, index=True)
     verified_by = Column(String(100))  # verifier
     verified_at = Column(DateTime)  # verification time
     vector_embedding = Column(Text)  # JSON string of the vector embedding
@@ -108,6 +183,7 @@ class VehicleData(Base):
     __tablename__ = "vehicle_data"

     id = Column(Integer, primary_key=True)
+    tenant_id = Column(String(50), nullable=False, default=DEFAULT_TENANT, index=True)
     vehicle_id = Column(String(50), nullable=False)  # vehicle ID
     vehicle_vin = Column(String(17))  # VIN
     data_type = Column(String(50), nullable=False)  # data type (location, status, fault, etc.)
@@ -125,6 +201,7 @@ class Analytics(Base):
     __tablename__ = "analytics"

     id = Column(Integer, primary_key=True)
+    tenant_id = Column(String(50), nullable=False, default=DEFAULT_TENANT, index=True)
     date = Column(DateTime, nullable=False)
     total_orders = Column(Integer, default=0)
     resolved_orders = Column(Integer, default=0)
@@ -145,6 +222,7 @@ class Alert(Base):
     __tablename__ = "alerts"

     id = Column(Integer, primary_key=True)
+    tenant_id = Column(String(50), nullable=False, default=DEFAULT_TENANT, index=True)
     rule_name = Column(String(100), nullable=False)
     alert_type = Column(String(50), nullable=False)
     level = Column(String(20), nullable=False)  # info, warning, error, critical
@@ -160,6 +238,7 @@ class WorkOrderSuggestion(Base):
     __tablename__ = "work_order_suggestions"

     id = Column(Integer, primary_key=True)
+    tenant_id = Column(String(50), nullable=False, default=DEFAULT_TENANT, index=True)
     work_order_id = Column(Integer, ForeignKey("work_orders.id"), nullable=False)
     ai_suggestion = Column(Text)
     human_resolution = Column(Text)
@@ -174,6 +253,7 @@ class WorkOrderProcessHistory(Base):
     __tablename__ = "work_order_process_history"

     id = Column(Integer, primary_key=True)
+    tenant_id = Column(String(50), nullable=False, default=DEFAULT_TENANT, index=True)
     work_order_id = Column(Integer, ForeignKey("work_orders.id"), nullable=False)

     # Handler information
@@ -203,6 +283,7 @@ class User(Base):
     __tablename__ = "users"

     id = Column(Integer, primary_key=True)
+    tenant_id = Column(String(50), nullable=False, default=DEFAULT_TENANT, index=True)
     username = Column(String(50), unique=True, nullable=False)
     password_hash = Column(String(128), nullable=False)
     email = Column(String(120), unique=True, nullable=True)
@@ -213,12 +294,19 @@ class User(Base):
     last_login = Column(DateTime)

     def set_password(self, password):
-        """Set the password hash"""
-        self.password_hash = hashlib.sha256(password.encode()).hexdigest()
+        """Set the password hash (bcrypt)"""
+        import bcrypt
+        self.password_hash = bcrypt.hashpw(password.encode(), bcrypt.gensalt()).decode()

     def check_password(self, password):
-        """Verify the password"""
-        return self.password_hash == hashlib.sha256(password.encode()).hexdigest()
+        """Verify the password (legacy SHA-256 accepted; auto-upgraded to bcrypt on success)"""
+        import bcrypt
+        if self.password_hash and self.password_hash.startswith('$2b$'):
+            return bcrypt.checkpw(password.encode(), self.password_hash.encode())
+        if self.password_hash == hashlib.sha256(password.encode()).hexdigest():
+            self.set_password(password)
+            return True
+        return False

     def to_dict(self):
         """Convert to dict (for API responses)"""
```
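The `check_password` change above implements upgrade-on-login: bcrypt hashes (prefix `$2b$`) are checked directly, while a matching legacy unsalted SHA-256 hash is accepted once and immediately re-hashed. A dependency-free sketch of the same pattern, with stdlib `hashlib.scrypt` standing in for bcrypt (the `$scrypt$` storage format here is an invented illustration, not the project's actual format):

```python
import hashlib
import os

def new_hash(password: str) -> str:
    """Hash with a random salt (scrypt stands in for bcrypt in this sketch)."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return "$scrypt$" + salt.hex() + "$" + digest.hex()

def check_and_upgrade(stored: str, password: str):
    """Return (ok, new_stored): verify against the modern scheme when present,
    otherwise accept a legacy unsalted SHA-256 hash and re-hash on success."""
    if stored.startswith("$scrypt$"):
        _, _, salt_hex, digest_hex = stored.split("$")
        digest = hashlib.scrypt(password.encode(),
                                salt=bytes.fromhex(salt_hex), n=2**14, r=8, p=1)
        return digest.hex() == digest_hex, stored
    if stored == hashlib.sha256(password.encode()).hexdigest():
        return True, new_hash(password)  # upgrade on a successful legacy login
    return False, stored

legacy = hashlib.sha256(b"secret").hexdigest()
ok, upgraded = check_and_upgrade(legacy, "secret")
print(ok, upgraded.startswith("$scrypt$"))  # True True
```

The key property is that migration needs no mass reset: each user's hash is upgraded transparently the next time their password verifies.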
```diff
@@ -1,89 +1,65 @@
 # -*- coding: utf-8 -*-
 """
 Performance optimization config
-Centralized management of all performance-related parameters
+Read from system_settings.json; hard-coded values are only a fallback
 """
+import os
+import json
+import logging
+
+logger = logging.getLogger(__name__)
+
+# Default values
+_DEFAULTS = {
+    "database": {"pool_size": 20, "max_overflow": 30, "pool_recycle": 1800, "pool_timeout": 10},
+    "cache": {"default_ttl": 60, "max_memory_size": 2000, "conversation_ttl": 60, "workorder_ttl": 30, "monitoring_ttl": 30},
+    "query": {"default_limit": 100, "conversations_limit": 1000, "workorders_limit": 100, "monitoring_limit": 1000},
+    "api": {"timeout": 10, "retry_count": 3, "batch_size": 50},
+    "monitoring": {"interval": 60, "slow_query_threshold": 1.0, "performance_log_enabled": True}
+}
+
+def _load():
+    try:
+        path = os.path.join('data', 'system_settings.json')
+        if os.path.exists(path):
+            with open(path, 'r', encoding='utf-8') as f:
+                return json.load(f).get('performance', {})
+    except Exception as e:
+        logger.debug(f"加载性能配置失败: {e}")
+    return {}
+
 class PerformanceConfig:
-    """Performance config class"""
+    """Performance config class — prefers values from the config file"""

-    # Database connection-pool config
-    DATABASE_POOL_SIZE = 20
-    DATABASE_MAX_OVERFLOW = 30
-    DATABASE_POOL_RECYCLE = 1800
-    DATABASE_POOL_TIMEOUT = 10
-
-    # Cache config
-    CACHE_DEFAULT_TTL = 60            # default cache TTL (seconds)
-    CACHE_MAX_MEMORY_SIZE = 2000      # max in-memory cache entries
-    CACHE_CONVERSATION_TTL = 60       # conversation cache TTL
-    CACHE_WORKORDER_TTL = 30          # work-order cache TTL
-    CACHE_MONITORING_TTL = 30         # monitoring-data cache TTL
-
-    # Query optimization config
-    QUERY_LIMIT_DEFAULT = 100         # default query limit
-    QUERY_LIMIT_CONVERSATIONS = 1000  # conversation query limit
-    QUERY_LIMIT_WORKORDERS = 100      # work-order query limit
-    QUERY_LIMIT_MONITORING = 1000     # monitoring query limit
-
-    # Frontend cache config
-    FRONTEND_CACHE_TIMEOUT = 30000    # frontend cache TTL (milliseconds)
-    FRONTEND_PARALLEL_LOADING = True  # enable parallel loading
-
-    # API response optimization
-    API_TIMEOUT = 10                  # API timeout (seconds)
-    API_RETRY_COUNT = 3               # API retry count
-    API_BATCH_SIZE = 50               # batch operation size
-
-    # System monitoring config
-    MONITORING_INTERVAL = 60          # monitoring interval (seconds)
-    SLOW_QUERY_THRESHOLD = 1.0        # slow-query threshold (seconds)
-    PERFORMANCE_LOG_ENABLED = True    # enable performance logging
+    @classmethod
+    def _get(cls, section, key):
+        cfg = _load()
+        return cfg.get(section, {}).get(key, _DEFAULTS.get(section, {}).get(key))

     @classmethod
     def get_database_config(cls):
-        """Get database config"""
-        return {
-            'pool_size': cls.DATABASE_POOL_SIZE,
-            'max_overflow': cls.DATABASE_MAX_OVERFLOW,
-            'pool_recycle': cls.DATABASE_POOL_RECYCLE,
-            'pool_timeout': cls.DATABASE_POOL_TIMEOUT
-        }
+        cfg = _load().get('database', {})
+        return {**_DEFAULTS['database'], **cfg}

     @classmethod
     def get_cache_config(cls):
-        """Get cache config"""
-        return {
-            'default_ttl': cls.CACHE_DEFAULT_TTL,
-            'max_memory_size': cls.CACHE_MAX_MEMORY_SIZE,
-            'conversation_ttl': cls.CACHE_CONVERSATION_TTL,
-            'workorder_ttl': cls.CACHE_WORKORDER_TTL,
-            'monitoring_ttl': cls.CACHE_MONITORING_TTL
-        }
+        cfg = _load().get('cache', {})
+        return {**_DEFAULTS['cache'], **cfg}

     @classmethod
     def get_query_config(cls):
-        """Get query config"""
-        return {
-            'default_limit': cls.QUERY_LIMIT_DEFAULT,
-            'conversations_limit': cls.QUERY_LIMIT_CONVERSATIONS,
-            'workorders_limit': cls.QUERY_LIMIT_WORKORDERS,
-            'monitoring_limit': cls.QUERY_LIMIT_MONITORING
-        }
-
-    @classmethod
-    def get_frontend_config(cls):
-        """Get frontend config"""
-        return {
-            'cache_timeout': cls.FRONTEND_CACHE_TIMEOUT,
-            'parallel_loading': cls.FRONTEND_PARALLEL_LOADING
-        }
+        cfg = _load().get('query', {})
+        return {**_DEFAULTS['query'], **cfg}

     @classmethod
     def get_api_config(cls):
-        """Get API config"""
-        return {
-            'timeout': cls.API_TIMEOUT,
-            'retry_count': cls.API_RETRY_COUNT,
-            'batch_size': cls.API_BATCH_SIZE
-        }
+        cfg = _load().get('api', {})
+        return {**_DEFAULTS['api'], **cfg}
+
+    # Backward-compatible class attributes (resolved dynamically on read)
+    DATABASE_POOL_SIZE = property(lambda self: self._get('database', 'pool_size'))
+    CACHE_DEFAULT_TTL = property(lambda self: self._get('cache', 'default_ttl'))
+    CACHE_MAX_MEMORY_SIZE = property(lambda self: self._get('cache', 'max_memory_size'))
+    QUERY_LIMIT_DEFAULT = property(lambda self: self._get('query', 'default_limit'))
+    API_TIMEOUT = property(lambda self: self._get('api', 'timeout'))
+    MONITORING_INTERVAL = property(lambda self: self._get('monitoring', 'interval'))
```
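The config rewrite leans on the `{**defaults, **overrides}` dict-splat pattern: later entries win, so file values take precedence and defaults fill the gaps. A minimal sketch of that merge (the `pool_size` names here mirror the defaults above, but the function is illustrative only):

```python
# Shallow-merge pattern: file values win, defaults fill anything missing.
DEFAULTS = {"pool_size": 20, "max_overflow": 30, "pool_timeout": 10}

def effective_config(file_cfg: dict) -> dict:
    # In a dict literal, later keyword-splat entries override earlier ones.
    return {**DEFAULTS, **file_cfg}

print(effective_config({"pool_size": 5}))
# {'pool_size': 5, 'max_overflow': 30, 'pool_timeout': 10}
```

The merge is shallow, which is presumably why the diff merges per section (`{**_DEFAULTS['database'], **cfg}`) rather than splatting the whole nested `_DEFAULTS` dict at once: a nested section present in the file would otherwise replace the default section wholesale instead of filling it key by key.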
```diff
@@ -86,13 +86,16 @@ class QueryOptimizer:
         for conv in conversations:
             conversation_list.append({
                 'id': conv.id,
+                'session_id': conv.session_id,
                 'user_message': conv.user_message,
                 'assistant_response': conv.assistant_response,
                 'timestamp': conv.timestamp.isoformat() if conv.timestamp else None,
                 'confidence_score': conv.confidence_score,
                 'work_order_id': conv.work_order_id,
                 'ip_address': conv.ip_address,
-                'invocation_method': conv.invocation_method
+                'invocation_method': conv.invocation_method,
+                # Build the user display name: show the IP when present, otherwise anonymous
+                'user_id': f"{conv.ip_address} ({conv.invocation_method})" if conv.ip_address else "匿名"
             })

         # Record query time
@@ -201,7 +204,6 @@ class QueryOptimizer:
             # Clear related caches
             cache_manager.delete('get_conversations_paginated')

-            logger.info(f"批量插入 {len(conversations)} 条对话记录")
             return True

         except Exception as e:
@@ -233,25 +235,31 @@ class QueryOptimizer:
             logger.error(f"批量更新工单失败: {e}")
             return False

-    def get_analytics_optimized(self, days: int = 30) -> Dict[str, Any]:
-        """Optimized analytics query"""
+    def get_analytics_optimized(self, days: int = 30, tenant_id: str = None) -> Dict[str, Any]:
+        """Optimized analytics query (supports filtering by tenant)"""
         start_time = time.time()

         try:
             with db_manager.get_session() as session:
                 from datetime import datetime, timedelta

-                end_time = datetime.now()
-                start_time_query = end_time - timedelta(days=days-1)
+                # Query work orders
+                wo_query = session.query(WorkOrder)
+                if tenant_id:
+                    wo_query = wo_query.filter(WorkOrder.tenant_id == tenant_id)
+                workorders = wo_query.all()

-                # Batch-query all required data
-                # Changed: query all work orders without a time-range limit
-                workorders = session.query(WorkOrder).all()
+                # Query alerts
+                alert_query = session.query(Alert)
+                if tenant_id:
+                    alert_query = alert_query.filter(Alert.tenant_id == tenant_id)
+                alerts = alert_query.all()

-                # Changed: query all alerts and conversations without a time-range limit
-                alerts = session.query(Alert).all()
-                conversations = session.query(Conversation).all()
+                # Query conversations
+                conv_query = session.query(Conversation)
+                if tenant_id:
+                    conv_query = conv_query.filter(Conversation.tenant_id == tenant_id)
+                conversations = conv_query.all()

                 # Process the data
                 analytics = self._process_analytics_data(workorders, alerts, conversations, days)
```
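The tenant filter above is applied only when `tenant_id` is given, by reassigning the query object before finally executing it. The same build-up-then-execute shape, sketched over plain lists so it runs without a database (the sample records are made up):

```python
orders = [
    {"id": 1, "tenant_id": "market_a", "status": "open"},
    {"id": 2, "tenant_id": "market_b", "status": "open"},
    {"id": 3, "tenant_id": "market_a", "status": "closed"},
]

def query_orders(tenant_id=None, status=None):
    # Start with everything, then narrow conditionally — mirroring the
    # `wo_query = wo_query.filter(...)` reassignment pattern in the diff.
    result = orders
    if tenant_id:
        result = [o for o in result if o["tenant_id"] == tenant_id]
    if status:
        result = [o for o in result if o["status"] == status]
    return result

print([o["id"] for o in query_orders(tenant_id="market_a")])  # [1, 3]
```

With SQLAlchemy the reassignment is cheap because `Query.filter` returns a new lazy query; nothing hits the database until `.all()`.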
```diff
@@ -34,10 +34,18 @@ class RedisManager:
         self.connection_lock = threading.Lock()
         self._initialized = True

-        # Redis config
-        self.host = '43.134.68.207'
-        self.port = 6379
-        self.password = '123456'
+        # Redis config (read from the unified config)
+        try:
+            from src.config.unified_config import get_config
+            redis_cfg = get_config().redis
+            self.host = redis_cfg.host
+            self.port = redis_cfg.port
+            self.password = redis_cfg.password
+        except Exception:
+            import os
+            self.host = os.environ.get('REDIS_HOST', 'localhost')
+            self.port = int(os.environ.get('REDIS_PORT', 6379))
+            self.password = os.environ.get('REDIS_PASSWORD') or None
         self.connect_timeout = 2
         self.socket_timeout = 2
```
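The diff above removes hard-coded Redis credentials in favor of the unified config, with an environment-variable fallback. The fallback branch in isolation (`REDIS_HOST`/`REDIS_PORT`/`REDIS_PASSWORD` are the variable names used in the diff):

```python
import os

def redis_settings() -> dict:
    """Environment-variable fallback for Redis connection settings."""
    return {
        "host": os.environ.get("REDIS_HOST", "localhost"),
        # Environment values are strings; the port must be converted explicitly.
        "port": int(os.environ.get("REDIS_PORT", 6379)),
        # `or None` folds both unset and empty-string into None,
        # so the client skips AUTH when no password is configured.
        "password": os.environ.get("REDIS_PASSWORD") or None,
    }

os.environ["REDIS_PORT"] = "6380"
print(redis_settings()["port"])  # 6380
```

The `int(...)` conversion matters: `os.environ` only holds strings, and passing `"6379"` where a client expects an integer port is a common source of subtle failures.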
```diff
@@ -30,44 +30,50 @@ class SystemOptimizer:
         self.request_counts = defaultdict(int)
         self.response_times = deque(maxlen=1000)

-        # Rate limiting
-        self.rate_limits = {
-            "per_minute": 60,   # max requests per minute
-            "per_hour": 1000,   # max requests per hour
-            "per_day": 10000    # max requests per day
-        }
-
-        # Cost control
-        self.cost_limits = {
-            "daily": 100.0,     # daily cost limit (CNY)
-            "hourly": 20.0,     # hourly cost limit (CNY)
-            "per_request": 0.1  # per-request cost limit (CNY)
-        }
-
-        # Security settings
-        self.security_settings = {
-            "max_input_length": 10000,   # max input length
-            "max_output_length": 5000,   # max output length
-            "blocked_keywords": ["恶意", "攻击", "病毒"],  # blocked keywords
-            "max_concurrent_users": 50   # max concurrent users (adjusted to a more reasonable value)
-        }
+        # Load config from system settings; hard-coded values are only a fallback
+        self._load_settings()

         # Start the monitoring thread with a delay (avoid blocking startup)
         threading.Timer(5.0, self._start_monitoring).start()

+    def _load_settings(self):
+        """Load config from system_settings.json; fall back to defaults when unset"""
+        import json, os
+        defaults_rate = {"per_minute": 60, "per_hour": 1000, "per_day": 10000}
+        defaults_cost = {"daily": 100.0, "hourly": 20.0, "per_request": 0.1}
+        defaults_security = {
+            "max_input_length": 10000, "max_output_length": 5000,
+            "blocked_keywords": [], "max_concurrent_users": 50
+        }
+        try:
+            settings_path = os.path.join('data', 'system_settings.json')
+            if os.path.exists(settings_path):
+                with open(settings_path, 'r', encoding='utf-8') as f:
+                    settings = json.load(f)
+                self.rate_limits = {**defaults_rate, **settings.get('rate_limits', {})}
+                self.cost_limits = {**defaults_cost, **settings.get('cost_limits', {})}
+                self.security_settings = {**defaults_security, **settings.get('security_settings', {})}
+                return
+        except Exception as e:
+            logger.warning(f"加载系统优化配置失败,使用默认值: {e}")
+        self.rate_limits = defaults_rate
+        self.cost_limits = defaults_cost
+        self.security_settings = defaults_security

     def _init_redis(self):
         """Initialize the Redis connection (lazy)"""
         self.redis_client = None
         self.redis_connected = False

     def _ensure_redis_connection(self):
-        """Ensure the Redis connection"""
+        """Ensure the Redis connection (read from the unified config)"""
         if not self.redis_connected:
             try:
+                config = get_config()
                 self.redis_client = redis.Redis(
-                    host='43.134.68.207',
-                    port=6379,
-                    password='123456',
+                    host=config.redis.host,
+                    port=config.redis.port,
+                    password=config.redis.password,
                     decode_responses=True,
                     socket_connect_timeout=2,
                     socket_timeout=2,
```
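The limits loaded by `_load_settings` (e.g. `rate_limits['per_minute']`) are typically enforced with a sliding window over recent request timestamps; the enforcement code itself is not part of this diff, so the following is an assumed sketch of how such a cap is commonly applied:

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window limiter for a per-minute cap like rate_limits['per_minute']."""

    def __init__(self, per_minute: int):
        self.per_minute = per_minute
        self.hits = deque()  # timestamps of accepted requests

    def allow(self, now: float = None) -> bool:
        now = time.time() if now is None else now
        # Drop requests that fell out of the 60-second window.
        while self.hits and now - self.hits[0] >= 60:
            self.hits.popleft()
        if len(self.hits) < self.per_minute:
            self.hits.append(now)
            return True
        return False

rl = RateLimiter(per_minute=2)
print(rl.allow(0.0), rl.allow(1.0), rl.allow(2.0), rl.allow(61.0))
# True True False True
```

Passing `now` explicitly makes the window testable without sleeping; production callers just call `allow()` with the default wall clock.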
`src/core/vector_store.py` (new file, 164 lines):

```python
# -*- coding: utf-8 -*-
"""
Vector storage and retrieval
Lightweight vector index implemented with numpy (no extra dependencies)
Supports building the index from embeddings already in the DB, with incremental updates
"""

import logging
import json
import threading
import numpy as np
from typing import List, Dict, Any, Optional, Tuple

from src.core.database import db_manager
from src.core.models import KnowledgeEntry

logger = logging.getLogger(__name__)


class VectorStore:
    """Lightweight vector store based on numpy cosine similarity"""

    def __init__(self):
        self._lock = threading.RLock()
        # Index data: entry_id -> embedding vector
        self._ids: List[int] = []
        self._matrix: Optional[np.ndarray] = None  # shape: (n, dim)
        self._loaded = False

    # ------------------------------------------------------------------
    # Index management
    # ------------------------------------------------------------------

    def load_from_db(self):
        """Build the index from all embeddings already in the database"""
        try:
            with db_manager.get_session() as session:
                entries = session.query(
                    KnowledgeEntry.id, KnowledgeEntry.vector_embedding
                ).filter(
                    KnowledgeEntry.is_active == True,
                    KnowledgeEntry.vector_embedding.isnot(None),
                    KnowledgeEntry.vector_embedding != ''
                ).all()

                ids = []
                vectors = []
                for entry_id, vec_json in entries:
                    try:
                        vec = json.loads(vec_json)
                        if isinstance(vec, list) and len(vec) > 0:
                            ids.append(entry_id)
                            vectors.append(vec)
                    except (json.JSONDecodeError, TypeError):
                        continue

            with self._lock:
                if vectors:
                    self._ids = ids
                    self._matrix = np.array(vectors, dtype=np.float32)
                    # L2-normalize so cosine similarity is a plain dot product later
                    norms = np.linalg.norm(self._matrix, axis=1, keepdims=True)
                    norms[norms == 0] = 1.0
                    self._matrix = self._matrix / norms
                else:
                    self._ids = []
                    self._matrix = None
                self._loaded = True

            logger.info(f"向量索引加载完成: {len(ids)} 条记录")

        except Exception as e:
            logger.error(f"加载向量索引失败: {e}")
            self._loaded = True  # mark as loaded to avoid repeated attempts

    def add(self, entry_id: int, vector: List[float]):
        """Incrementally add one vector"""
        with self._lock:
            vec = np.array(vector, dtype=np.float32).reshape(1, -1)
            norm = np.linalg.norm(vec)
            if norm > 0:
                vec = vec / norm

            if self._matrix is not None:
                self._ids.append(entry_id)
                self._matrix = np.vstack([self._matrix, vec])
            else:
                self._ids = [entry_id]
                self._matrix = vec

    def remove(self, entry_id: int):
        """Remove one vector"""
        with self._lock:
            if entry_id in self._ids:
                idx = self._ids.index(entry_id)
                self._ids.pop(idx)
                if self._matrix is not None and len(self._ids) > 0:
                    self._matrix = np.delete(self._matrix, idx, axis=0)
                else:
                    self._matrix = None

    def update(self, entry_id: int, vector: List[float]):
        """Update one vector"""
        self.remove(entry_id)
        self.add(entry_id, vector)

    # ------------------------------------------------------------------
    # Retrieval
    # ------------------------------------------------------------------

    def search(
        self,
        query_vector: List[float],
        top_k: int = 5,
        threshold: float = 0.0
    ) -> List[Tuple[int, float]]:
        """
        Vector similarity search

        Returns:
            [(entry_id, similarity_score), ...] in descending similarity order
        """
        if not self._loaded:
            self.load_from_db()

        with self._lock:
            if self._matrix is None or len(self._ids) == 0:
                return []

            q = np.array(query_vector, dtype=np.float32).reshape(1, -1)
            norm = np.linalg.norm(q)
            if norm > 0:
                q = q / norm

            # Cosine similarity = dot product of normalized vectors
            similarities = (self._matrix @ q.T).flatten()

            # Keep entries above the threshold
            valid_mask = similarities >= threshold
            valid_indices = np.where(valid_mask)[0]

            if len(valid_indices) == 0:
                return []

            # Take the top_k
            if len(valid_indices) > top_k:
                top_indices = valid_indices[np.argsort(-similarities[valid_indices])[:top_k]]
            else:
                top_indices = valid_indices[np.argsort(-similarities[valid_indices])]

            results = []
            for idx in top_indices:
                results.append((self._ids[idx], float(similarities[idx])))

            return results

    @property
    def size(self) -> int:
        with self._lock:
            return len(self._ids)


# Global singleton
vector_store = VectorStore()
```
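The core trick in `VectorStore.search` is that after L2-normalizing every row, cosine similarity reduces to a matrix–vector dot product, and `np.argsort` on the negated scores yields a descending top-k. The same computation in isolation (the sample vectors are made up):

```python
import numpy as np

def top_k_cosine(matrix: np.ndarray, query: np.ndarray, k: int = 2):
    """Cosine top-k via dot products over L2-normalized rows,
    as in VectorStore.search above."""
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = m @ q                    # one cosine similarity per row
    order = np.argsort(-sims)[:k]   # indices of the k most similar rows
    return [(int(i), float(sims[i])) for i in order]

docs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
result = top_k_cosine(docs, np.array([1.0, 0.2]))
print([i for i, _ in result])  # [0, 2]
```

Normalizing the matrix once at load time, as `load_from_db` does, amortizes the expensive part: each query then costs a single `(n, dim) @ (dim,)` product rather than n separate cosine computations.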
Two binary files are not shown. Some files were not shown because too many files have changed in this diff.