Compare commits

24 Commits

Author SHA1 Message Date
65d69358d7 revert 96e1cc4e70
revert docs: Update README with AI accuracy optimization and modular architecture features
2025-12-07 14:50:52 +08:00
4e5ece0829 revert e81e6bdc4e
revert Update README.md
2025-12-07 14:47:59 +08:00
13ff67a4f5 revert 96e1cc4e70
revert docs: Update README with AI accuracy optimization and modular architecture features
2025-12-07 14:44:00 +08:00
Jeason
20c5ce355a fix: fix frontend navigation and page-switching issues
- Added a unified navigation menu to all pages
- Fixed page route mapping and active-state highlighting
- Created navigation.js to manage page navigation in one place
- Added test_navigation.py as a route-testing tool
- Seamless switching among the dashboard, alert management, smart chat, and HTTP chat pages

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-07 10:12:17 +08:00
96e1cc4e70 docs: Update README with AI accuracy optimization and modular architecture features 2025-11-05 15:23:23 +08:00
148a2fc9d6 feat: test verification 2025-11-05 10:43:36 +08:00
赵杰 Jie Zhao (雄狮汽车科技)
c9d5c80f42 feat: add several new features and fixes - including user management, database migration, Git push tooling, etc. 2025-11-05 10:16:34 +08:00
赵杰 Jie Zhao (雄狮汽车科技)
a4261ef06f feat: optimize AI suggestion and workorder sync - support same-day multiple update numbering - insert new suggestions at top maintaining reverse chronological order - reference process history when generating suggestions - simplify prompts to avoid forcing log analysis - fix Chinese comment encoding issues 2025-10-27 10:34:33 +08:00
赵杰
18d59b71cb feat: quick commit - Wednesday 2025/10/08 10:49:52.83 2025-10-08 10:49:55 +01:00
e81e6bdc4e Update README.md 2025-09-25 21:53:27 +08:00
赵杰
95501736ec Optimize the push scripts 2025-09-23 15:37:59 +01:00
赵杰
63600d1bc2 feat: auto commit - Tuesday 2025/09/23 15:32:55.51 2025-09-23 15:32:55 +01:00
赵杰
6b0c03439f feat: auto commit - Tuesday 2025/09/23 14:03:10.47 2025-09-23 14:03:10 +01:00
赵杰
4da97d600a Optimize the UI layout following CRM-system conventions, adjust fonts, improve pagination display 2025-09-22 17:06:43 +01:00
赵杰
eff24947e0 feat: auto commit - Monday 2025/09/22 16:50:40.70 2025-09-22 16:50:40 +01:00
赵杰
1e4376ba56 Add a Docker deployment option, improve the current project structure, update the README 2025-09-22 16:36:50 +01:00
赵杰
d6c88d87dd feat: auto commit - Monday 2025/09/22 16:28:00.19 2025-09-22 16:28:00 +01:00
赵杰
f75176ec69 feat: auto commit - Monday 2025/09/22 15:18:57.75 2025-09-22 15:18:57 +01:00
赵杰
b635c9e7d4 feat: auto commit - Monday 2025/09/22 15:12:38.91 2025-09-22 15:12:38 +01:00
赵杰
9306e7a401 feat: auto commit - Monday 2025/09/22 14:48:02.54 2025-09-22 14:48:02 +01:00
赵杰
1f55f65fa0 feat: auto commit - Monday 2025/09/22 14:40:25.43 2025-09-22 14:40:25 +01:00
赵杰
070422cd06 Remove unnecessary modules, add Chinese/English language switching 2025-09-22 13:55:29 +01:00
赵杰
87552148fd feat: auto commit - Monday 2025/09/22 13:30:40.76 2025-09-22 13:30:40 +01:00
赵杰 Jie Zhao (雄狮汽车科技)
54a13531c4 feat: quick commit - Monday 2025/09/22 13:29:14.32 2025-09-22 13:29:14 +01:00
168 changed files with 23223 additions and 12700 deletions

57
.editorconfig Normal file
View File

@@ -0,0 +1,57 @@
# EditorConfig is awesome: https://EditorConfig.org
# Top-level config file
root = true
# All files
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
# Python files
[*.py]
charset = utf-8
indent_style = space
indent_size = 4
# JSON files
[*.json]
charset = utf-8
indent_style = space
indent_size = 2
# Markdown files
[*.md]
charset = utf-8
trim_trailing_whitespace = false
# YAML files
[*.{yml,yaml}]
charset = utf-8
indent_style = space
indent_size = 2
# JavaScript/TypeScript files
[*.{js,ts,jsx,tsx}]
charset = utf-8
indent_style = space
indent_size = 2
# HTML/CSS files
[*.{html,css}]
charset = utf-8
indent_style = space
indent_size = 2
# Batch files (Windows)
[*.bat]
charset = utf-8
end_of_line = crlf
# Shell scripts
[*.sh]
charset = utf-8
end_of_line = lf

107
.gitignore vendored
View File

@@ -1,107 +0,0 @@
# Python cache files
__pycache__/
*.py[cod]
*$py.class
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
*.manifest
*.spec
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/
# Environment variables
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# IDE files
.vscode/
.idea/
*.swp
*.swo
*~
# OS files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
# Log files
*.log
logs/
# Database files (development)
*.db
*.sqlite
*.sqlite3
# Backup files
backups/
*.backup
*.bak
# Temporary files
*.tmp
*.temp
temp/
tmp/
# Deployment
deploy_config.json
dev_deploy/
# Test files
test_*.py
*_test.py
test_sample.txt
# Doc drafts
note/
*问题修复*.md
*修复总结*.md
*使用指南*.md
# Excel files, except the template
*.xlsx
!uploads/workorder_template.xlsx
# Config files (sensitive information)
config/local_config.py
.env.local

8
.idea/.gitignore generated vendored Normal file
View File

@@ -0,0 +1,8 @@
# Default ignored files
/shelf/
/workspace.xml
# Editor-based HTTP client requests
/httpRequests/
# Datasource local storage ignored files
/dataSources/
/dataSources.local.xml

12
.idea/dataSources.xml generated Normal file
View File

@@ -0,0 +1,12 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="DataSourceManagerImpl" format="xml" multifile-model="true">
<data-source source="LOCAL" name="@43.134.68.207" uuid="715b070d-f258-43df-a066-49e825a9b04f">
<driver-ref>mysql.8</driver-ref>
<synchronize>true</synchronize>
<jdbc-driver>com.mysql.cj.jdbc.Driver</jdbc-driver>
<jdbc-url>jdbc:mysql://43.134.68.207:3306</jdbc-url>
<working-dir>$ProjectFileDir$</working-dir>
</data-source>
</component>
</project>

6
.idea/data_source_mapping.xml generated Normal file
View File

@@ -0,0 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="DataSourcePerFileMappings">
<file url="file://$APPLICATION_CONFIG_DIR$/consoles/db/715b070d-f258-43df-a066-49e825a9b04f/console.sql" value="715b070d-f258-43df-a066-49e825a9b04f" />
</component>
</project>

View File

@@ -0,0 +1,6 @@
<component name="InspectionProjectProfileManager">
<settings>
<option name="USE_PROJECT_PROFILE" value="false" />
<version value="1.0" />
</settings>
</component>

7
.idea/misc.xml generated Normal file
View File

@@ -0,0 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="Black">
<option name="sdkName" value="Python 3.11 (tsp-assistant)" />
</component>
<component name="ProjectRootManager" version="2" project-jdk-name="Python 3.11 (tsp-assistant)" project-jdk-type="Python SDK" />
</project>

8
.idea/modules.xml generated Normal file
View File

@@ -0,0 +1,8 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="ProjectModuleManager">
<modules>
<module fileurl="file://$PROJECT_DIR$/.idea/tsp-assistant.iml" filepath="$PROJECT_DIR$/.idea/tsp-assistant.iml" />
</modules>
</component>
</project>

14
.idea/tsp-assistant.iml generated Normal file
View File

@@ -0,0 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?>
<module type="PYTHON_MODULE" version="4">
<component name="NewModuleRootManager">
<content url="file://$MODULE_DIR$">
<excludeFolder url="file://$MODULE_DIR$/.venv" />
</content>
<orderEntry type="jdk" jdkName="Python 3.11 (tsp-assistant)" jdkType="Python SDK" />
<orderEntry type="sourceFolder" forTests="false" />
</component>
<component name="PyDocumentationSettings">
<option name="format" value="PLAIN" />
<option name="myDocStringFormat" value="Plain" />
</component>
</module>

6
.idea/vcs.xml generated Normal file
View File

@@ -0,0 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="VcsDirectoryMappings">
<mapping directory="" vcs="Git" />
</component>
</project>

26
.vscode/settings.json vendored Normal file
View File

@@ -0,0 +1,26 @@
{
"files.autoGuessEncoding": false,
"files.encoding": "utf8",
"files.eol": "\n",
"[python]": {
"files.encoding": "utf8",
"files.eol": "\n"
},
"[json]": {
"files.encoding": "utf8"
},
"[javascript]": {
"files.encoding": "utf8"
},
"[html]": {
"files.encoding": "utf8"
},
"[css]": {
"files.encoding": "utf8"
},
"[markdown]": {
"files.encoding": "utf8"
},
"python.defaultInterpreterPath": "${workspaceFolder}/.venv/Scripts/python.exe",
"Codegeex.RepoIndex": true
}

View File

@@ -1,218 +0,0 @@
# AI Suggestion Permission Issue: Final Resolution
## 🔍 Problem Confirmation
Confirmed with the deep-diagnostics tool:
- **Access token retrieval works**: tenant_access_token is obtained successfully
- **Table access works**: the table and its field information can be accessed
- **Record reads work**: table records can be read successfully
- **The AI suggestion field exists**: the table contains an "AI建议" (AI Suggestion) field
- **Record writes fail**: 403 Forbidden, insufficient permissions
**Root cause**: the Feishu app is missing **record write permission**
## 🛠️ Final Solution
### Step 1: Configure permissions on the Feishu Open Platform
#### 1.1 Log in to the Feishu Open Platform
1. Go to https://open.feishu.cn/
2. Log in with an administrator account
3. Open "App Management" → "My Apps"
#### 1.2 Find your app
- App ID: `cli_a8b50ec0eed1500d`
- App name: your TSP Assistant app
#### 1.3 Add the required permissions
On the "Permission Management" page, make sure the app has the following permissions:
**Core permissions**:
```
bitable:app           # Bitable app permission
bitable:app:readonly  # Bitable read-only permission
bitable:app:readwrite # Bitable read-write permission ⭐ critical
base:record:write     # Record write permission ⭐ critical
```
#### 1.4 Republish the app
- After changing permissions, click "Publish" / "Go live"
- Wait for the permissions to take effect (usually 1-5 minutes)
### Step 2: Collaborator permissions on the Feishu Base
#### 2.1 Open the Feishu Base
- Open your Feishu Base (multidimensional table) in a browser
- Make sure you have management rights on the table
#### 2.2 Add the app as a collaborator
1. Click the "Share" button in the top-right corner of the table
2. Click "Add collaborator"
3. Search for your Feishu app by name or App ID
4. Add the app as a collaborator
#### 2.3 Set the collaborator permission level
**Important**: set the permission to one of:
- **Editor** ✅ (recommended)
- **Admin** ✅ (full access)
**Do not use**:
- **Viewer** ❌ (read-only; cannot write)
### Step 3: Verify the fix
#### 3.1 Verify through the web UI
1. Open the TSP Assistant main page
2. Open the "Feishu Sync" tab
3. Click the "Permission Check" button
4. Review the results
#### 3.2 Test the AI suggestion feature
1. Click the "Sync + AI Suggestions" button
2. Check whether the 403 error is gone
3. Check whether AI suggestions show up in the Feishu table
## 📋 Detailed Steps
### 🔧 Feishu Open Platform
1. **Log in to the Feishu Open Platform**
```
URL: https://open.feishu.cn/
Account: administrator account
```
2. **Open App Management**
```
Path: App Management → My Apps
App ID: cli_a8b50ec0eed1500d
```
3. **Configure permissions**
```
Page: Permission Management
Action: add permissions
Permission list:
- bitable:app
- bitable:app:readonly
- bitable:app:readwrite ⭐
- base:record:write ⭐
```
4. **Publish the app**
```
Action: click the "Publish" button
Wait: 1-5 minutes for the permissions to take effect
```
### 🔧 Feishu Base
1. **Open the table**
```
How: open the Feishu Base in a browser
Permission: make sure you have management rights
```
2. **Add a collaborator**
```
Action: click the "Share" button in the top-right corner
Steps: Add collaborator → search for the app name
App: your TSP Assistant app
```
3. **Set the permission**
```
Permission level: Editor or Admin
Do not choose: Viewer
Save: confirm the settings
```
## 🚨 Common Problems
### Problem 1: Cannot find the permission settings
**Solution**:
- Make sure you are logged in to the Feishu Open Platform as an administrator
- Check whether the app has been published
- Contact Feishu technical support
### Problem 2: Writes still fail after adding permissions
**Solution**:
- Wait 5-10 minutes for the permissions to take effect
- Republish the app
- Clear the browser cache and retry
### Problem 3: Cannot find the app by name
**Solution**:
- Use the App ID: `cli_a8b50ec0eed1500d`
- Search for the App ID on the Feishu Open Platform
- Confirm the app status is "Published"
### Problem 4: Cannot find the table's sharing settings
**Solution**:
- Make sure you have management rights on the table
- Use the table creator's account
- Ask the table administrator for help
## 📊 Permission Configuration Checklist
### ✅ Feishu Open Platform
- [ ] The app is enabled and published
- [ ] `bitable:app` permission added
- [ ] `bitable:app:readonly` permission added
- [ ] `bitable:app:readwrite` permission added ⭐
- [ ] `base:record:write` permission added ⭐
- [ ] The app was republished after the permission changes
### ✅ Feishu Base
- [ ] The app is added as a table collaborator
- [ ] The collaborator permission is "Editor" or "Admin"
- [ ] The table is not locked or read-only
- [ ] The "AI建议" field exists and has the correct type
## 🎯 Signs of Success
After the fix, you should see:
1. **The permission check passes**:
```
✅ Access token obtained
✅ Table access permission OK
✅ Record read permission OK
✅ Record write permission OK
✅ AI suggestion field exists
```
2. **The AI suggestion feature works**:
```
- "Sync + AI Suggestions" no longer returns 403
- AI suggestions appear in the Feishu table
- The log shows "更新飞书AI建议成功" (Feishu AI suggestion updated successfully)
```
3. **The system log looks normal**:
```
2025-09-22 XX:XX:XX - INFO - 更新飞书AI建议成功
2025-09-22 XX:XX:XX - INFO - 飞书同步完成
```
## 📞 Technical Support
If the problem persists after following the steps above:
1. **Collect information**:
- Feishu App ID: `cli_a8b50ec0eed1500d`
- Table ID: `tblnl3vJPpgMTSiP`
- The complete error log
- A screenshot of the permission check results
2. **Contact support**:
- Feishu Open Platform technical support
- TSP Assistant technical support
3. **Check enterprise settings**:
- Confirm whether enterprise-level permission restrictions apply
- Contact your organization's Feishu administrator
---
**Important**: a 403 permission error usually requires Feishu administrator rights to resolve; ask the responsible engineer or admin for help with the configuration. Once fixed, the AI suggestion feature will work normally. 🎉
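The check-token-then-write flow described above can be sketched in Python. This is a minimal sketch, not the project's actual checker: the two endpoints are the documented Feishu Open Platform routes for token retrieval and Bitable record creation, but the credentials, Base app token, and probe payload are placeholders, and the write probe really creates a record in the target table.

```python
import json
import urllib.error
import urllib.request

FEISHU = "https://open.feishu.cn/open-apis"

def diagnose_write_status(status: int) -> str:
    """Map the HTTP status of a record-write attempt to the likely cause."""
    if status == 200:
        return "write permission OK"
    if status == 403:
        return ("403 Forbidden: the app lacks record write permission - add "
                "bitable:app:readwrite / base:record:write, republish the app, "
                "then add it as an Editor collaborator on the table")
    if status == 401:
        return "401 Unauthorized: tenant_access_token missing or expired"
    return f"unexpected status {status}"

def get_tenant_access_token(app_id: str, app_secret: str) -> str:
    """Step 1 of the doc: exchange app credentials for a tenant_access_token."""
    req = urllib.request.Request(
        f"{FEISHU}/auth/v3/tenant_access_token/internal",
        data=json.dumps({"app_id": app_id, "app_secret": app_secret}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["tenant_access_token"]

def probe_record_write(token: str, app_token: str, table_id: str) -> str:
    """Step 3 of the doc: attempt one record write and report the diagnosis."""
    req = urllib.request.Request(
        f"{FEISHU}/bitable/v1/apps/{app_token}/tables/{table_id}/records",
        data=json.dumps({"fields": {"AI建议": "permission probe"}}).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return diagnose_write_status(resp.status)
    except urllib.error.HTTPError as e:
        return diagnose_write_status(e.code)
```

Wiring `get_tenant_access_token` into `probe_record_write` reproduces the web UI's "Permission Check" result from the command line.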

View File

@@ -1,42 +0,0 @@
# TSP Assistant Docker image
FROM python:3.9-slim
# Set the working directory
WORKDIR /app
# Environment variables
ENV PYTHONPATH=/app
ENV PYTHONUNBUFFERED=1
# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    g++ \
    git \
    curl \
    && rm -rf /var/lib/apt/lists/*
# Copy the dependency file
COPY requirements.txt .
# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code
COPY . .
# Create the required directories
RUN mkdir -p logs data backups
# Set permissions
RUN chmod +x scripts/deploy.sh
# Expose the port
EXPOSE 5000
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:5000/api/health || exit 1
# Startup command
CMD ["python", "start_dashboard.py"]

144
LLM配置统一说明.md Normal file
View File

@@ -0,0 +1,144 @@
# Unified LLM Configuration Management
## Overview
This project manages its LLM configuration in a single place: the Qwen model is configured once, and every consumer reads from that unified configuration.
## Configuration Architecture
### 1. Core config file: `config/llm_config.py`
This is the **only** source of LLM configuration; it defines every Qwen setting:
```python
QWEN_CONFIG = LLMConfig(
    provider="qwen",
    api_key="sk-c0dbefa1718d46eaa897199135066f00",
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    model="qwen-plus-latest",
    temperature=0.7,
    max_tokens=2000
)
# Qwen is the default model
DEFAULT_CONFIG = QWEN_CONFIG
```
### 2. Unified config manager: `src/config/unified_config.py`
The unified config manager loads the configuration from `config/llm_config.py` at initialization:
```python
def _load_default_llm_config(self) -> LLMConfig:
    """Load the default LLM configuration"""
    try:
        from config.llm_config import DEFAULT_CONFIG
        # Convert between config formats
        return LLMConfig(...)
    except Exception as e:
        logger.warning(f"Could not load the default LLM config; using built-in defaults: {e}")
        return LLMConfig()
```
### 3. Global config instance
Get the global config instance through `get_config()`:
```python
from src.config.unified_config import get_config
config = get_config()
llm_config = config.llm  # the LLM configuration
```
## Usage
### Anywhere that needs an LLM
```python
from src.config.unified_config import get_config
# Get the LLM configuration
llm_config = get_config().llm
# Use it
print(f"Provider: {llm_config.provider}")
print(f"Model: {llm_config.model}")
print(f"API Key: {llm_config.api_key}")
```
### Example: the AI suggestion service
```python
class AISuggestionService:
    def __init__(self):
        # Read the LLM config from the unified config manager
        self.llm_config = get_config().llm
        logger.info(f"Using LLM config: {self.llm_config.provider} - {self.llm_config.model}")
```
## Configuration Priority
1. **First**: settings in the unified config manager (set via config file or environment variables)
2. **Second**: `DEFAULT_CONFIG` in `config/llm_config.py`
3. **Fallback**: built-in defaults
## Changing the Configuration
### Option 1: edit the config file (recommended)
Edit `config/llm_config.py` directly to change the API key or model:
```python
QWEN_CONFIG = LLMConfig(
    provider="qwen",
    api_key="your new API key",  # change this
    model="qwen-max",            # or change the model
    ...
)
```
### Option 2: the unified config file
Edit `config/unified_config.json` (if it exists):
```json
{
  "llm": {
    "provider": "qwen",
    "api_key": "your new API key",
    "model": "qwen-plus-latest",
    ...
  }
}
```
### Option 3: environment variables (optional)
```bash
export LLM_API_KEY="your API key"
export LLM_MODEL="qwen-plus-latest"
```
## Benefits
1. **Single source of truth**: configure once, in `config/llm_config.py`
2. **Unified management**: every module reads through the unified config manager
3. **Easy maintenance**: changing the config requires no code changes elsewhere
4. **Automatic propagation**: every consumer picks up config changes automatically
5. **Backward compatible**: the fallback chain keeps the system running
## Updated Files
- `config/llm_config.py` - added the `get_default_llm_config()` function
- `src/config/unified_config.py` - loads its defaults from `config/llm_config.py`
- `src/integrations/ai_suggestion_service.py` - uses the unified config
- `src/agent/agent_assistant_core.py` - uses the unified config
## Notes
- **Do not** hard-code OpenAI or other model configurations in code
- **Do not** import directly from `config/llm_config.py`, except as a fallback
- **Always** read the configuration through `get_config().llm`
- Restart the application after changing the configuration
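The three-level priority above fits in a few lines. A minimal sketch (the names `LLMConfig` fields and `load_llm_config` are illustrative, not the project's actual API): explicit overrides win, then the file-level default, then built-in values.

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class LLMConfig:
    # Built-in defaults: the last resort in the priority chain
    provider: str = "qwen"
    model: str = "qwen-plus-latest"
    api_key: str = ""
    temperature: float = 0.7
    max_tokens: int = 2000

def load_llm_config(file_default: Optional[LLMConfig] = None,
                    **overrides) -> LLMConfig:
    """Priority: explicit overrides > file default > built-in defaults."""
    base = file_default if file_default is not None else LLMConfig()
    # dataclasses.replace keeps unspecified fields from the lower-priority source
    return replace(base, **overrides) if overrides else base
```

This mirrors `_load_default_llm_config`: the manager starts from whatever `config/llm_config.py` provides and lets higher-priority sources override individual fields.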

209
README.md
View File

@@ -1,7 +1,8 @@
 # TSP智能助手 (TSP Assistant)
-[![Version](https://img.shields.io/badge/version-1.3.0-blue.svg)](version.json)
+[![Version](https://img.shields.io/badge/version-2.0.0-blue.svg)](version.json)
-[![Python](https://img.shields.io/badge/python-3.8+-green.svg)](requirements.txt)
+[![Python](https://img.shields.io/badge/python-3.11+-green.svg)](requirements.txt)
+[![Docker](https://img.shields.io/badge/docker-supported-blue.svg)](Dockerfile)
 [![License](https://img.shields.io/badge/license-MIT-yellow.svg)](LICENSE)
 [![Status](https://img.shields.io/badge/status-production-ready-brightgreen.svg)]()
@@ -42,11 +43,20 @@
 │ 前端界面 │ │ 后端服务 │ │ 数据存储 │
 │ │ │ │ │ │
 │ • 仪表板 │◄──►│ • Flask API │◄──►│ • MySQL DB │
-│ • 智能对话 │ │ • WebSocket │ │ • 知识库 │
+│ • 智能对话 │ │ • WebSocket │ │ • Redis缓存 │
-│ • Agent管理 │ │ • Agent核心 │ │ • 工单系统 │
+│ • Agent管理 │ │ • Agent核心 │ │ • 知识库 │
-│ • 数据分析 │ │ • LLM集成 │ │ • 车辆数据 │
+│ • 数据分析 │ │ • LLM集成 │ │ • 工单系统 │
-│ • 备份管理 │ │ • 备份系统 │ │ • SQLite备份 │
+│ • 备份管理 │ │ • 备份系统 │ │ • 车辆数据 │
 └─────────────────┘ └─────────────────┘ └─────────────────┘
+┌─────────────────┐
+│ 监控系统 │
+│ │
+│ • Prometheus │
+│ • Grafana │
+│ • Nginx代理 │
+└─────────────────┘
 ```
 ## 🎯 核心功能
@@ -97,11 +107,12 @@
 ## 🛠️ 技术栈
 ### 后端技术
-- **Python 3.8+**: 核心开发语言
+- **Python 3.11+**: 核心开发语言
-- **Flask**: Web框架和API服务
+- **Flask 2.3+**: Web框架和API服务
-- **SQLAlchemy**: ORM数据库操作
+- **SQLAlchemy 2.0+**: ORM数据库操作
 - **WebSocket**: 实时通信支持
 - **psutil**: 系统资源监控
+- **Redis**: 缓存和会话管理
 ### 前端技术
 - **Bootstrap 5**: UI框架
@@ -114,21 +125,74 @@
 - **TF-IDF**: 文本向量化
 - **余弦相似度**: 语义相似度计算
 - **Agent框架**: 智能任务规划
+- **Transformers**: 预训练模型支持
 ### 部署运维
 - **Docker**: 容器化部署
+- **Docker Compose**: 多服务编排
 - **Nginx**: 反向代理和静态文件服务
-- **Systemd**: 服务管理
+- **Prometheus**: 监控数据收集
-- **Git**: 版本控制
+- **Grafana**: 监控仪表板
+- **MySQL 8.0**: 主数据库
+- **Redis 7**: 缓存服务
 ## 🚀 快速开始
 ### 环境要求
-- Python 3.8+
+#### Docker部署(推荐)
+- Docker 20.10+
+- Docker Compose 2.0+
+- 4GB+ 可用内存
+- 10GB+ 可用磁盘空间
+#### 本地部署
+- Python 3.11+
 - Node.js 16+ (可选,用于前端构建)
+- MySQL 8.0+ 或 SQLite
+- Redis 7+ (可选)
 - Git
-### 安装步骤
+### 🐳 Docker部署(推荐)
+1. **克隆项目**
+```bash
+git clone http://jeason.online:3000/zhaojie/assist.git
+cd assist
+```
+2. **一键启动所有服务**
+```bash
+# 使用部署脚本
+chmod +x scripts/docker_deploy.sh
+./scripts/docker_deploy.sh start
+# 或直接使用docker-compose
+docker-compose up -d
+```
+3. **访问系统**
+- **TSP助手**: http://localhost:5000
+- **Nginx代理**: http://localhost
+- **Prometheus监控**: http://localhost:9090
+- **Grafana仪表板**: http://localhost:3000 (admin/admin123456)
+4. **服务管理**
+```bash
+# 查看服务状态
+./scripts/docker_deploy.sh status
+# 查看日志
+./scripts/docker_deploy.sh logs tsp-assistant
+# 停止服务
+./scripts/docker_deploy.sh stop
+# 重启服务
+./scripts/docker_deploy.sh restart
+```
+### 💻 本地部署
 1. **克隆项目**
 ```bash
@@ -251,10 +315,11 @@ python scripts/update_manager.py auto-update --source ./new_version --environmen
 ## 🔧 配置说明
-### 环境变量
+### Docker环境变量
 ```bash
 # 数据库配置
-DATABASE_URL=sqlite:///tsp_assistant.db
+DATABASE_URL=mysql+pymysql://tsp_user:tsp_password@mysql:3306/tsp_assistant?charset=utf8mb4
+REDIS_URL=redis://redis:6379/0
 # LLM配置
 LLM_PROVIDER=openai
@@ -265,13 +330,35 @@ LLM_MODEL=gpt-3.5-turbo
 SERVER_PORT=5000
 WEBSOCKET_PORT=8765
 LOG_LEVEL=INFO
+TZ=Asia/Shanghai
 ```
+### Docker服务配置
+#### 主要服务
+- **tsp-assistant**: 主应用服务 (端口: 5000, 8765)
+- **mysql**: MySQL数据库 (端口: 3306)
+- **redis**: Redis缓存 (端口: 6379)
+- **nginx**: 反向代理 (端口: 80, 443)
+#### 监控服务
+- **prometheus**: 监控数据收集 (端口: 9090)
+- **grafana**: 监控仪表板 (端口: 3000)
+#### 数据卷
+- `mysql_data`: MySQL数据持久化
+- `redis_data`: Redis数据持久化
+- `prometheus_data`: Prometheus数据持久化
+- `grafana_data`: Grafana配置和数据持久化
 ### 配置文件
 - `config/llm_config.py`: LLM客户端配置
 - `config/integrations_config.json`: 飞书集成配置
-- `update_config.json`: 更新管理器配置
+- `nginx.conf`: Nginx反向代理配置
-- `version.json`: 版本信息配置
+- `monitoring/prometheus.yml`: Prometheus监控配置
+- `init.sql`: 数据库初始化脚本
+- `docker-compose.yml`: Docker服务编排配置
+- `Dockerfile`: 应用镜像构建配置
 ## 🤝 贡献指南
@@ -290,6 +377,18 @@ LOG_LEVEL=INFO
 ## 📝 更新日志
+### v2.0.0 (2025-09-22) - Docker环境全面升级
+- 🐳 **Docker环境重构**: 升级到Python 3.11,优化镜像构建
+- 🐳 **多服务编排**: MySQL 8.0 + Redis 7 + Nginx + Prometheus + Grafana
+- 🐳 **监控系统**: 集成Prometheus监控和Grafana仪表板
+- 🐳 **安全增强**: 非root用户运行,数据卷隔离
+- 🐳 **部署脚本**: 一键部署脚本,支持启动/停止/重启/清理
+- 🔧 **知识库搜索修复**: 简化搜索算法,提升检索准确率
+- 🔧 **批量删除优化**: 修复外键约束和缓存问题
+- 🔧 **日志编码修复**: 解决中文乱码问题
+- 📊 **可视化增强**: 修复预警、性能、满意度图表显示
+- 📚 **文档更新**: 完整的Docker部署和使用指南
 ### v1.4.0 (2025-09-19)
 - ✅ 飞书集成功能:支持飞书多维表格数据同步
 - ✅ 页面功能合并:飞书同步页面合并到主仪表板
@@ -324,11 +423,87 @@ LOG_LEVEL=INFO
 本项目采用 MIT 许可证 - 查看 [LICENSE](LICENSE) 文件了解详情
+## 🔧 故障排除
+### Docker部署问题
+#### 常见问题
+1. **端口冲突**
+```bash
+# 检查端口占用
+netstat -tulpn | grep :5000
+# 修改docker-compose.yml中的端口映射
+```
+2. **内存不足**
+```bash
+# 检查Docker资源使用
+docker stats
+# 增加Docker内存限制或关闭其他服务
+```
+3. **数据库连接失败**
+```bash
+# 检查MySQL服务状态
+docker-compose logs mysql
+# 等待数据库完全启动(约30秒)
+```
+4. **权限问题**
+```bash
+# 给脚本添加执行权限
+chmod +x scripts/docker_deploy.sh
+# 检查文件权限
+ls -la scripts/
+```
+#### 日志查看
+```bash
+# 查看所有服务日志
+docker-compose logs -f
+# 查看特定服务日志
+docker-compose logs -f tsp-assistant
+docker-compose logs -f mysql
+docker-compose logs -f redis
+```
+#### 服务重启
+```bash
+# 重启特定服务
+docker-compose restart tsp-assistant
+# 重启所有服务
+docker-compose down && docker-compose up -d
+```
+### 性能优化
+#### Docker资源限制
+```yaml
+# 在docker-compose.yml中添加资源限制
+services:
+  tsp-assistant:
+    deploy:
+      resources:
+        limits:
+          memory: 2G
+          cpus: '1.0'
+```
+#### 数据库优化
+```sql
+-- MySQL性能优化
+SET GLOBAL innodb_buffer_pool_size = 1G;
+SET GLOBAL max_connections = 200;
+```
 ## 📞 支持与联系
 - **项目地址**: http://jeason.online:3000/zhaojie/assist
 - **问题反馈**: 请在Issues中提交问题
 - **功能建议**: 欢迎提交Feature Request
+- **Docker问题**: 请提供docker-compose logs输出
 ## 🙏 致谢

83
UTF8_ENCODING_STANDARD.md Normal file
View File

@@ -0,0 +1,83 @@
# UTF-8 Encoding Standard
## Project encoding standard
All files in this project must use **UTF-8** encoding so that Chinese and other special characters display and process correctly.
## File encoding requirements
### 1. Python files
- **Must** declare the encoding at the top of the file:
```python
# -*- coding: utf-8 -*-
```
or:
```python
# coding: utf-8
```
### 2. Saving files
- Save every file as **UTF-8** (without BOM)
- Do not use GBK, GB2312, or other encodings
### 3. Reading and writing files
- Every file operation must specify `encoding='utf-8'` explicitly:
```python
with open('file.txt', 'r', encoding='utf-8') as f:
    content = f.read()
with open('file.txt', 'w', encoding='utf-8') as f:
    f.write(content)
```
## Cursor/VS Code configuration
The project ships `.vscode/settings.json`, which ensures:
- Default file encoding: UTF-8
- Encoding auto-detection: disabled (avoids misdetection)
- Line endings: LF (Unix style)
## Console output
### Windows
Python scripts need to set the standard output encoding:
```python
import sys
import io
if sys.platform == 'win32':
    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')
    sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8', errors='replace')
```
## Check script
Use `check_encoding.py` to check the encoding of every file:
```bash
python check_encoding.py
```
## Common problems
### 1. Garbled console output
- Make sure the file is saved as UTF-8
- Set the standard output encoding at the top of the script
- On Windows, run `chcp 65001` to switch the console code page
### 2. Garbled text when reading files
- Check the file's actual encoding (use `check_encoding.py`)
- Make sure the `encoding='utf-8'` parameter is used
### 3. Garbled text when saving files
- Check the editor's encoding settings
- Make sure Cursor/VS Code is set to UTF-8
## Verification checklist
When creating a new file, confirm:
- [ ] The file is saved as UTF-8
- [ ] Python files contain an encoding declaration
- [ ] File read/write operations specify `encoding='utf-8'`
- [ ] Console-output scripts set UTF-8 encoding
- [ ] Chinese text prints correctly in a test
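The checklist above can also be enforced without a third-party detector: UTF-8 validity is testable by decoding, and the BOM by a prefix check. A minimal sketch (the project's own `check_encoding.py` uses `chardet` instead; `inspect_encoding` is an illustrative name):

```python
def inspect_encoding(raw: bytes) -> dict:
    """Classify a file's raw bytes: BOM presence and UTF-8 validity."""
    # The UTF-8 BOM is the three-byte prefix EF BB BF
    has_bom = raw.startswith(b"\xef\xbb\xbf")
    try:
        raw.decode("utf-8")
        is_utf8 = True
    except UnicodeDecodeError:
        # GBK/GB2312 bytes for Chinese text are not valid UTF-8
        is_utf8 = False
    return {"has_bom": has_bom, "is_utf8": is_utf8}
```

A file passes the project standard only when `is_utf8` is true and `has_bom` is false.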

Binary file not shown.

View File

@@ -30,8 +30,23 @@ if /i "%confirm%" neq "y" (
 :: 检查是否有更改需要提交
 echo.
 echo [2/4] 检查更改状态...
-git diff --quiet && git diff --cached --quiet
-if %errorlevel% equ 0 (
+:: 启用延迟变量扩展
+setlocal enabledelayedexpansion
+:: 检查未暂存的更改
+git diff --quiet
+set has_unstaged=%errorlevel%
+:: 检查已暂存的更改
+git diff --cached --quiet
+set has_staged=%errorlevel%
+:: 检查未跟踪的文件
+git ls-files --others --exclude-standard >nul 2>&1
+set has_untracked=%errorlevel%
+if %has_unstaged% equ 0 if %has_staged% equ 0 if %has_untracked% neq 0 (
 echo 没有检测到任何更改,无需提交
 echo.
 echo ✅ 工作区干净,无需推送
@@ -39,6 +54,30 @@ if %errorlevel% equ 0 (
 exit /b 0
 )
+:: 显示详细状态
+echo 📊 详细状态信息:
+echo 未暂存更改:
+if %has_unstaged% neq 0 (
+git diff --name-only
+) else (
+echo
+)
+echo 已暂存更改:
+if %has_staged% neq 0 (
+git diff --cached --name-only
+) else (
+echo
+)
+echo 未跟踪文件:
+if %has_untracked% neq 0 (
+git ls-files --others --exclude-standard
+) else (
+echo
+)
+echo.
 :: 添加所有更改
 echo 添加所有更改到暂存区...
 git add .
@@ -160,22 +199,95 @@ echo ✅ 提交成功
 :: 推送到远程仓库
 echo.
 echo [4/4] 推送到远程仓库...
-git push origin main
-if %errorlevel% neq 0 (
+:: 获取当前分支名称(在延迟变量扩展内)
+set current_branch=
+for /f "tokens=*" %%b in ('git branch --show-current 2^>nul') do set current_branch=%%b
+if "!current_branch!"=="" (
+echo ❌ 无法获取当前分支名称
+echo 尝试使用默认分支 main...
+set current_branch=main
+) else (
+echo 📍 当前分支: !current_branch!
+)
+echo.
+:: 先尝试拉取最新更改
+echo 🔄 检查远程更新...
+git fetch origin !current_branch! >nul 2>&1
+set fetch_result=!errorlevel!
+if !fetch_result! neq 0 (
+echo ⚠️ 无法获取远程更新,尝试获取所有分支...
+git fetch origin >nul 2>&1
+set fetch_all_result=!errorlevel!
+if !fetch_all_result! neq 0 (
+echo ⚠️ 无法获取远程更新,继续推送...
+) else (
+echo ✅ 远程更新检查完成
+)
+) else (
+echo ✅ 远程更新检查完成
+)
+:: 检查远程分支是否存在,如果不存在则设置上游
+echo 🔍 检查远程分支状态...
+git ls-remote --heads origin !current_branch! >nul 2>&1
+set remote_exists=!errorlevel!
+set push_result=0
+if !remote_exists! equ 0 (
+echo 远程分支 !current_branch! 已存在
+:: 推送到远程(分支已存在)
+git push origin !current_branch!
+set push_result=!errorlevel!
+) else (
+echo 远程分支 !current_branch! 不存在,将创建并设置上游
+:: 推送到远程并设置上游(分支不存在)
+git push -u origin !current_branch!
+set push_result=!errorlevel!
+)
+if !push_result! neq 0 (
 echo ❌ 推送失败
+echo.
 echo 💡 可能的原因:
 echo - 网络连接问题
 echo - 远程仓库权限不足
 echo - 分支冲突
 echo - 需要先拉取远程更改
 echo.
-echo 🔧 建议解决方案:
+echo 🔧 尝试自动解决冲突...
-echo 1. 检查网络连接
+git pull origin !current_branch! --rebase
-echo 2. 运行: git pull origin main
+set pull_result=!errorlevel!
-echo 3. 重新运行推送脚本
+if !pull_result! equ 0 (
+echo ✅ 冲突已解决,重新推送...
+git push origin !current_branch!
+set final_push_result=!errorlevel!
+if !final_push_result! equ 0 (
+echo ✅ 推送成功!
+) else (
+echo ❌ 重新推送失败
+echo.
+echo 🔧 建议手动解决:
+echo 1. 运行: git pull origin !current_branch!
+echo 2. 解决冲突后运行: git push origin !current_branch!
 pause
 exit /b 1
 )
+) else (
+echo ❌ 无法自动解决冲突
+echo.
+echo 🔧 建议手动解决:
+echo 1. 运行: git pull origin !current_branch!
+echo 2. 解决冲突后运行: git push origin !current_branch!
+pause
+exit /b 1
+)
+) else (
+echo ✅ 推送成功!
+)
 echo.
 echo ========================================
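The batch script's change detection boils down to three probes plus one decision. One caveat (an observation, not from the source): `git ls-files --others --exclude-standard` exits 0 whether or not untracked files exist, so keying off its errorlevel, as the batch script does, may not actually detect untracked files; checking for non-empty output is more reliable. A Python sketch of that approach (function names are illustrative):

```python
import subprocess

def needs_commit(has_unstaged: bool, has_staged: bool, has_untracked: bool) -> bool:
    """Commit and push only when at least one kind of change exists."""
    return has_unstaged or has_staged or has_untracked

def detect_changes(repo: str = ".") -> dict:
    """Probe the three kinds of pending change the script reports."""
    def run(*args):
        return subprocess.run(["git", "-C", repo, *args],
                              capture_output=True, text=True)
    return {
        # `git diff --quiet` exits non-zero when unstaged changes exist
        "unstaged": run("diff", "--quiet").returncode != 0,
        # same for the staged side with --cached
        "staged": run("diff", "--cached", "--quiet").returncode != 0,
        # ls-files exits 0 either way, so test its output instead
        "untracked": bool(run("ls-files", "--others",
                              "--exclude-standard").stdout.strip()),
    }
```

`needs_commit(**detect_changes())` then reproduces the "工作区干净,无需推送" decision.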

View File

@@ -127,6 +127,36 @@ if ($hasChanges -eq 0) {
 exit 0
 }
+# 显示详细状态
+Write-ColorOutput "`n📊 详细状态信息:" "Yellow"
+# 检查未暂存的更改
+$unstaged = git diff --name-only 2>$null
+if ($unstaged) {
+Write-ColorOutput " 未暂存更改: $($unstaged.Count) 个文件" "Yellow"
+$unstaged | ForEach-Object { Write-ColorOutput " ~ $_" "Yellow" }
+} else {
+Write-ColorOutput " 未暂存更改: 无" "Green"
+}
+# 检查已暂存的更改
+$staged = git diff --cached --name-only 2>$null
+if ($staged) {
+Write-ColorOutput " 已暂存更改: $($staged.Count) 个文件" "Green"
+$staged | ForEach-Object { Write-ColorOutput " + $_" "Green" }
+} else {
+Write-ColorOutput " 已暂存更改: 无" "Green"
+}
+# 检查未跟踪的文件
+$untracked = git ls-files --others --exclude-standard 2>$null
+if ($untracked) {
+Write-ColorOutput " 未跟踪文件: $($untracked.Count) 个文件" "Cyan"
+$untracked | ForEach-Object { Write-ColorOutput " + $_" "Cyan" }
+} else {
+Write-ColorOutput " 未跟踪文件: 无" "Green"
+}
 # 确认操作
 if (-not $NoConfirm) {
 Write-ColorOutput "`n❓ 是否继续推送?" "Yellow"
@@ -166,15 +196,61 @@ try {
 # 推送到远程
 Write-Step 4 4 "推送到远程仓库"
+# 先尝试拉取最新更改
+Write-ColorOutput "🔄 检查远程更新..." "Cyan"
+try {
+git fetch origin main
+Write-ColorOutput "✅ 远程更新检查完成" "Green"
+} catch {
+Write-ColorOutput "⚠️ 无法获取远程更新,继续推送..." "Yellow"
+}
+# 推送到远程
 try {
 git push origin main
 Write-ColorOutput "✅ 推送成功" "Green"
 } catch {
 Write-ColorOutput "❌ 推送失败: $($_.Exception.Message)" "Red"
-Write-ColorOutput "请检查网络连接和远程仓库权限" "Yellow"
+Write-ColorOutput "`n💡 可能的原因:" "Yellow"
+Write-ColorOutput " - 网络连接问题" "White"
+Write-ColorOutput " - 远程仓库权限不足" "White"
+Write-ColorOutput " - 分支冲突" "White"
+Write-ColorOutput " - 需要先拉取远程更改" "White"
+Write-ColorOutput "`n🔧 尝试自动解决冲突..." "Cyan"
+try {
+git pull origin main --rebase
+if ($LASTEXITCODE -eq 0) {
+Write-ColorOutput "✅ 冲突已解决,重新推送..." "Green"
+git push origin main
+if ($LASTEXITCODE -eq 0) {
+Write-ColorOutput "✅ 推送成功!" "Green"
+} else {
+Write-ColorOutput "❌ 重新推送失败" "Red"
+Write-ColorOutput "`n🔧 建议手动解决:" "Yellow"
+Write-ColorOutput " 1. 运行: git pull origin main" "White"
+Write-ColorOutput " 2. 解决冲突后运行: git push origin main" "White"
 Read-Host "按任意键退出"
 exit 1
 }
+} else {
+Write-ColorOutput "❌ 无法自动解决冲突" "Red"
+Write-ColorOutput "`n🔧 建议手动解决:" "Yellow"
+Write-ColorOutput " 1. 运行: git pull origin main" "White"
+Write-ColorOutput " 2. 解决冲突后运行: git push origin main" "White"
+Read-Host "按任意键退出"
+exit 1
+}
+} catch {
+Write-ColorOutput "❌ 自动解决冲突失败: $($_.Exception.Message)" "Red"
+Write-ColorOutput "`n🔧 建议手动解决:" "Yellow"
+Write-ColorOutput " 1. 运行: git pull origin main" "White"
+Write-ColorOutput " 2. 解决冲突后运行: git push origin main" "White"
+Read-Host "按任意键退出"
+exit 1
+}
+}
 # 显示结果
 Write-ColorOutput "`n========================================" "Magenta"
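Both push scripts implement the same recovery flow: push; on failure, `pull --rebase`; if that succeeds, push once more; otherwise hand back to the user. A minimal sketch of the same flow, with the decision table factored out so it can be reasoned about separately (the names `plan` and `push_with_retry` are illustrative):

```python
import subprocess

def plan(push_ok: bool, pull_ok: bool = False, retry_ok: bool = False) -> str:
    """Pure decision table for the scripts' push/rebase/retry flow."""
    if push_ok:
        return "pushed"
    if not pull_ok:
        return "manual"          # auto-rebase failed: hand back to the user
    return "pushed-after-rebase" if retry_ok else "manual"

def push_with_retry(branch: str = "main") -> str:
    """Drive the decision table with real git commands (side-effecting part)."""
    def ok(*args):
        return subprocess.run(["git", *args]).returncode == 0
    if ok("push", "origin", branch):
        return plan(True)
    pulled = ok("pull", "origin", branch, "--rebase")
    retried = pulled and ok("push", "origin", branch)
    return plan(False, pulled, retried)
```

Keeping `plan` pure makes the four possible outcomes directly testable without touching a repository.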

147
check_and_fix_users.py Normal file
View File

@@ -0,0 +1,147 @@
# -*- coding: utf-8 -*-
import sys
import os
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from src.core.database import db_manager
from sqlalchemy import text, inspect

print("Checking users table structure...")
try:
    with db_manager.get_session() as session:
        inspector = inspect(db_manager.engine)
        cols = inspector.get_columns('users')
        print("\nUsers table columns:")
        required_fields = {}
        optional_fields = {}
        for col in cols:
            name = col['name']
            nullable = col.get('nullable', True)
            default = col.get('default', None)
            if nullable or default is not None:
                optional_fields[name] = col
                print(f" {name}: {col['type']} (nullable: {nullable}, default: {default})")
            else:
                required_fields[name] = col
                print(f" {name}: {col['type']} (REQUIRED, nullable: {nullable})")
        print(f"\nRequired fields: {list(required_fields.keys())}")
        print(f"Optional fields: {list(optional_fields.keys())}")
        # Check for existing admin user
        result = session.execute(text("SELECT * FROM users WHERE username = 'admin' LIMIT 1"))
        admin_row = result.fetchone()
        if admin_row:
            print("\nAdmin user found in database")
            # Update password
            from werkzeug.security import generate_password_hash
            password_hash = generate_password_hash('admin123')
            session.execute(text("""
                UPDATE users
                SET password_hash = :password_hash,
                    is_active = 1,
                    updated_at = NOW()
                WHERE username = 'admin'
            """), {'password_hash': password_hash})
            session.commit()
            print("Admin password updated successfully")
        else:
            print("\nAdmin user not found, creating...")
            from werkzeug.security import generate_password_hash
            password_hash = generate_password_hash('admin123')
            # Build INSERT with all required fields
            insert_fields = ['username', 'email', 'password_hash', 'role']
            insert_values = {
                'username': 'admin',
                'email': 'admin@tsp.com',
                'password_hash': password_hash,
                'role': 'admin'
            }
            # Add optional fields that exist in table
            if 'is_active' in optional_fields or 'is_active' not in required_fields:
                insert_fields.append('is_active')
                insert_values['is_active'] = True
            if 'region' in optional_fields or 'region' not in required_fields:
                insert_fields.append('region')
                insert_values['region'] = None
            # Handle full_name if it exists
            if 'full_name' in required_fields:
                insert_fields.append('full_name')
                insert_values['full_name'] = 'Administrator'
            elif 'full_name' in optional_fields:
                insert_fields.append('full_name')
                insert_values['full_name'] = 'Administrator'
            # Handle other required fields with defaults
            for field_name in required_fields:
                if field_name not in insert_fields:
                    if field_name in ['created_at', 'updated_at']:
                        insert_fields.append(field_name)
                        insert_values[field_name] = 'NOW()'
                    else:
                        # Use empty string or default value
                        insert_fields.append(field_name)
                        insert_values[field_name] = ''
            fields_str = ', '.join(insert_fields)
            values_str = ', '.join([f':{f}' for f in insert_fields])
            sql = f"""
                INSERT INTO users ({fields_str})
                VALUES ({values_str})
            """
            # Fix NOW() placeholders
            final_values = {}
            for k, v in insert_values.items():
                if v == 'NOW()':
                    # Will use SQL NOW()
                    continue
                final_values[k] = v
            # Use raw SQL with NOW()
            sql_final = f"""
                INSERT INTO users ({fields_str.replace(':created_at', 'created_at').replace(':updated_at', 'updated_at')})
                VALUES ({values_str.replace(':created_at', 'NOW()').replace(':updated_at', 'NOW()')})
            """
            # Clean up the SQL
            for k in ['created_at', 'updated_at']:
                if f':{k}' in values_str:
                    values_str = values_str.replace(f':{k}', 'NOW()')
                    if k in final_values:
                        del final_values[k]
            # Final SQL construction
            final_sql = f"INSERT INTO users ({', '.join([f if f not in ['created_at', 'updated_at'] else f for f in insert_fields])}) VALUES ({', '.join([f':{f}' if f not in ['created_at', 'updated_at'] else 'NOW()' for f in insert_fields])})"
            print(f"Executing SQL with fields: {insert_fields}")
            session.execute(text(final_sql), final_values)
            session.commit()
            print("Admin user created successfully")
        # Verify
        result = session.execute(text("SELECT username, email, role, is_active FROM users WHERE username = 'admin'"))
        admin_data = result.fetchone()
        if admin_data:
            print(f"\nVerification:")
            print(f" Username: {admin_data[0]}")
            print(f" Email: {admin_data[1]}")
            print(f" Role: {admin_data[2]}")
            print(f" Is Active: {admin_data[3]}")
            print("\nAdmin user ready for login!")
except Exception as e:
    print(f"Error: {e}")
    import traceback
    traceback.print_exc()

157
check_encoding.py Normal file

@@ -0,0 +1,157 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
文件编码检查工具
检查项目中所有文件是否使用UTF-8编码
"""
import os
import sys
import chardet
from pathlib import Path
def check_file_encoding(file_path: Path) -> dict:
"""Detect a single file's encoding."""
try:
with open(file_path, 'rb') as f:
raw_data = f.read()
result = chardet.detect(raw_data)
# chardet may report None for the encoding; normalize to 'unknown'
encoding = result.get('encoding') or 'unknown'
confidence = result.get('confidence', 0)
# Check for a UTF-8 BOM
has_bom = False
if raw_data.startswith(b'\xef\xbb\xbf'):
has_bom = True
encoding = 'utf-8-sig'
return {
'file': str(file_path),
'encoding': encoding,
'confidence': confidence,
'has_bom': has_bom,
'is_utf8': encoding.lower() in ['utf-8', 'utf-8-sig', 'ascii'],
'size': len(raw_data)
}
except Exception as e:
return {
'file': str(file_path),
'error': str(e)
}
def check_python_file_header(file_path: Path) -> bool:
"""Check whether a Python file carries an encoding declaration."""
try:
with open(file_path, 'r', encoding='utf-8') as f:
first_lines = [f.readline() for _ in range(3)]
for line in first_lines:
if 'coding' in line.lower() or 'encoding' in line.lower():
return True
return False
except (OSError, UnicodeDecodeError):
return False
def main():
"""Entry point."""
project_root = Path(__file__).parent
# File extensions to check
check_extensions = {'.py', '.json', '.md', '.txt', '.html', '.css', '.js', '.sql', '.bat', '.sh'}
# Directories to skip
exclude_dirs = {'.git', '.venv', '__pycache__', 'node_modules', '.idea', 'logs', 'data', 'dist', 'build'}
results = []
python_files_without_encoding = []
print("=" * 80)
print("File encoding checker")
print("=" * 80)
print()
# Walk every file under the project root
for root, dirs, files in os.walk(project_root):
# Prune excluded directories in place
dirs[:] = [d for d in dirs if d not in exclude_dirs]
for file in files:
file_path = Path(root) / file
# Only check whitelisted extensions
if file_path.suffix.lower() not in check_extensions:
continue
# Detect the encoding
result = check_file_encoding(file_path)
results.append(result)
# Python files should also declare their encoding
if file_path.suffix == '.py':
if not check_python_file_header(file_path):
python_files_without_encoding.append(file_path)
# Summarize
total_files = len(results)
utf8_files = sum(1 for r in results if r.get('is_utf8', False))
non_utf8_files = total_files - utf8_files
print(f"Files checked: {total_files}")
print(f"UTF-8 files: {utf8_files}")
print(f"Non-UTF-8 files: {non_utf8_files}")
print()
# List the non-UTF-8 files
if non_utf8_files > 0:
print("=" * 80)
print("⚠️ Non-UTF-8 files:")
print("=" * 80)
for result in results:
if not result.get('is_utf8', False) and 'error' not in result:
print(f" {result['file']}")
print(f" encoding: {result['encoding']} (confidence: {result['confidence']:.2%})")
if result.get('has_bom'):
print(f" ⚠️ contains a BOM")
print()
# List Python files that lack an encoding declaration
if python_files_without_encoding:
print("=" * 80)
print("⚠️ Python files missing an encoding declaration:")
print("=" * 80)
for file_path in python_files_without_encoding:
print(f" {file_path}")
print()
print("Suggestion: add '# -*- coding: utf-8 -*-' at the top of these files")
print()
# Report files that could not be read
errors = [r for r in results if 'error' in r]
if errors:
print("=" * 80)
print("❌ Files that raised errors:")
print("=" * 80)
for result in errors:
print(f" {result['file']}: {result['error']}")
print()
# Final verdict
print("=" * 80)
if non_utf8_files == 0 and not python_files_without_encoding:
print("✅ All files passed the encoding check!")
else:
print("⚠️ Encoding problems found; fix them using the report above")
print("=" * 80)
return non_utf8_files == 0 and not python_files_without_encoding
if __name__ == "__main__":
try:
import chardet
except ImportError:
print("Error: the chardet library is required")
print("Run: pip install chardet")
sys.exit(1)
success = main()
sys.exit(0 if success else 1)
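For contexts where installing chardet is not an option, the UTF-8 half of the check above can be approximated with the standard library alone. This is a sketch only; the `is_valid_utf8` helper is an illustrative assumption, not part of the tool:

```python
def is_valid_utf8(raw: bytes) -> dict:
    """Classify raw bytes as UTF-8 (optionally BOM-prefixed) via strict decoding."""
    has_bom = raw.startswith(b'\xef\xbb\xbf')
    try:
        raw.decode('utf-8')  # raises UnicodeDecodeError on any invalid byte sequence
        return {'is_utf8': True, 'has_bom': has_bom}
    except UnicodeDecodeError:
        return {'is_utf8': False, 'has_bom': has_bom}

print(is_valid_utf8('中文'.encode('utf-8')))    # a valid UTF-8 sequence
print(is_valid_utf8('中文'.encode('gb2312')))   # GB2312 bytes are not valid UTF-8
```

Unlike chardet, this yields no confidence score and cannot name the actual encoding of a non-UTF-8 file; it only answers the pass/fail question the summary report needs.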

Binary file not shown.


@@ -7,7 +7,7 @@
"TR Status": "status", "TR Status": "status",
"Source": "source", "Source": "source",
"Date creation": "created_at", "Date creation": "created_at",
"处理过程": "solution", "处理过程": "resolution",
"TR tracking": "resolution", "TR tracking": "resolution",
"Created by": "created_by", "Created by": "created_by",
"Module模块": "module", "Module模块": "module",
@@ -21,7 +21,10 @@
"Has it been updated on the same day": "has_updated_same_day", "Has it been updated on the same day": "has_updated_same_day",
"Operating time": "operating_time", "Operating time": "operating_time",
"AI建议": "ai_suggestion", "AI建议": "ai_suggestion",
"Issue Start Time": "updated_at" "Issue Start Time": "updated_at",
"Wilfulness责任人<E4BBBB>?": "wilfulness",
"父<>?<3F>录": "parent_record",
"AI建<49>??": "ai_suggestion"
}, },
"field_aliases": { "field_aliases": {
"order_id": [ "order_id": [
@@ -307,17 +310,17 @@
}, },
"field_priorities": { "field_priorities": {
"order_id": 3, "order_id": 3,
"description": 1, "description": 3,
"category": 1, "category": 3,
"priority": 1, "priority": 3,
"status": 1, "status": 3,
"created_at": 1, "created_at": 3,
"source": 2, "source": 3,
"solution": 2, "solution": 3,
"resolution": 2, "resolution": 3,
"created_by": 2, "created_by": 3,
"vehicle_type": 2, "vehicle_type": 3,
"vin_sim": 2, "vin_sim": 3,
"module": 3, "module": 3,
"wilfulness": 3, "wilfulness": 3,
"date_of_close": 3, "date_of_close": 3,
@@ -327,7 +330,7 @@
"has_updated_same_day": 3, "has_updated_same_day": 3,
"operating_time": 3, "operating_time": 3,
"ai_suggestion": 3, "ai_suggestion": 3,
"updated_at": 2 "updated_at": 3
}, },
"auto_mapping_enabled": true, "auto_mapping_enabled": true,
"similarity_threshold": 0.6 "similarity_threshold": 0.6


@@ -34,3 +34,27 @@ ANTHROPIC_CONFIG = LLMConfig(
# 默认使用千问模型 # 默认使用千问模型
DEFAULT_CONFIG = QWEN_CONFIG DEFAULT_CONFIG = QWEN_CONFIG
def get_default_llm_config() -> LLMConfig:
"""
Return the default LLM configuration.
Prefer the unified config manager; fall back to the local config if it is unavailable.
"""
try:
from src.config.unified_config import get_config
config = get_config()
llm_dict = config.get_llm_config()
# Build the LLMConfig object
return LLMConfig(
provider=llm_dict.get("provider", "qwen"),
api_key=llm_dict.get("api_key", ""),
base_url=llm_dict.get("base_url", "https://dashscope.aliyuncs.com/compatible-mode/v1"),
model=llm_dict.get("model", "qwen-plus-latest"),
temperature=llm_dict.get("temperature", 0.7),
max_tokens=llm_dict.get("max_tokens", 2000)
)
except Exception:
# Fall back to the local config when the unified config is unavailable
return DEFAULT_CONFIG


@@ -0,0 +1,52 @@
{
"database": {
"url": "mysql+pymysql://tsp_assistant:password@jeason.online/tsp_assistant?charset=utf8mb4",
"pool_size": 10,
"max_overflow": 20,
"pool_timeout": 30,
"pool_recycle": 3600
},
"llm": {
"provider": "openai",
"api_key": "",
"base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
"model": "qwen-turbo",
"temperature": 0.7,
"max_tokens": 2000,
"timeout": 30
},
"server": {
"host": "0.0.0.0",
"port": 5000,
"websocket_port": 8765,
"debug": false,
"log_level": "INFO"
},
"feishu": {
"app_id": "",
"app_secret": "",
"app_token": "",
"table_id": "",
"status": "active",
"sync_limit": 10,
"auto_sync_interval": 0
},
"ai_accuracy": {
"auto_approve_threshold": 0.95,
"use_human_resolution_threshold": 0.9,
"manual_review_threshold": 0.8,
"ai_suggestion_confidence": 0.95,
"human_resolution_confidence": 0.9,
"prefer_human_when_low_accuracy": true,
"enable_auto_approval": true,
"enable_human_fallback": true
},
"system": {
"backup_enabled": true,
"backup_interval": 24,
"max_backup_files": 7,
"cache_enabled": true,
"cache_ttl": 3600,
"monitoring_enabled": true
}
}
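How this JSON is consumed is not shown here. As an illustrative sketch only (the `load_config` helper and the section defaults below are assumptions, not project API), a section-wise merge of file values over built-in defaults could look like:

```python
import json
from pathlib import Path

# Hypothetical built-in defaults; the real project keeps these in its config manager
DEFAULTS = {
    "server": {"host": "127.0.0.1", "port": 5000, "debug": False},
    "llm": {"provider": "openai", "temperature": 0.7, "max_tokens": 2000},
}

def load_config(path: str) -> dict:
    """Merge the on-disk JSON over the defaults, one section at a time."""
    merged = {section: dict(values) for section, values in DEFAULTS.items()}
    file_path = Path(path)
    if file_path.exists():
        user_cfg = json.loads(file_path.read_text(encoding="utf-8"))
        for section, values in user_cfg.items():
            merged.setdefault(section, {}).update(values)
    return merged
```

A missing or partial file then degrades gracefully to the defaults, mirroring the fallback behaviour of `get_default_llm_config` above.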

94
convert_encoding.bat Normal file

@@ -0,0 +1,94 @@
@echo off
setlocal enabledelayedexpansion
chcp 65001 >nul 2>&1
echo ========================================
echo Batch file-encoding conversion tool
echo ========================================
echo.
:: File extensions to scan
set "extensions=*.py *.java *.js *.ts *.html *.css *.xml *.json *.md *.txt *.bat *.cmd *.ps1 *.sh *.yml *.yaml *.ini *.cfg *.conf *.properties"
:: Target directory (defaults to the current directory)
set "target_dir=%cd%"
if not "%~1"=="" set "target_dir=%~1"
echo Target directory: %target_dir%
echo.
:: Make sure the directory exists
if not exist "%target_dir%" (
echo Error: the specified directory does not exist
pause
exit /b 1
)
:: Create a temporary working directory
set "temp_dir=%temp%\encoding_convert_%random%"
mkdir "%temp_dir%"
:: Counters
set "total_files=0"
set "converted_files=0"
set "skipped_files=0"
echo Scanning files...
echo.
:: Walk every file matching the extension list
for %%e in (%extensions%) do (
for /r "%target_dir%" %%f in (%%e) do (
set /a total_files+=1
call :process_file "%%f"
)
)
echo.
echo ========================================
echo Conversion finished
echo ========================================
echo Total files: %total_files%
echo Converted: %converted_files%
echo Skipped: %skipped_files%
echo.
:: Remove the temporary directory
rd /s /q "%temp_dir%" 2>nul
pause
exit /b
:process_file
set "file=%~1"
set "is_utf8=0"
:: Check whether the file already reads as UTF-8
powershell -Command "try { $content = [System.IO.File]::ReadAllText('%file%', [System.Text.Encoding]::UTF8); $content | Out-Null; exit 0 } catch { exit 1 }" >nul 2>&1
if %errorlevel% equ 0 set "is_utf8=1"
if %is_utf8% equ 1 (
echo [skip] %file% - already UTF-8
set /a skipped_files+=1
) else (
echo [convert] %file%
rem Detect and convert the encoding - assumes the source is GB2312
powershell -Command ^
"$path = '%file%'; ^
try { ^
$bytes = [System.IO.File]::ReadAllBytes($path); ^
$encoding = [System.Text.Encoding]::GetEncoding('GB2312'); ^
$content = $encoding.GetString($bytes); ^
[System.IO.File]::WriteAllText($path, $content, [System.Text.Encoding]::UTF8); ^
exit 0 ^
} catch { ^
exit 1 ^
}"
if !errorlevel! equ 0 (
set /a converted_files+=1
) else (
echo [warning] could not convert %file%
)
)
goto :eof

173
create_admin_user.py Normal file

@@ -0,0 +1,173 @@
# -*- coding: utf-8 -*-
"""
创建或修复默认管理员用户
"""
import sys
import os
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from src.core.database import db_manager
from src.core.models import User
from werkzeug.security import generate_password_hash, check_password_hash
from sqlalchemy import text, inspect
print("=" * 60)
print("创建/修复管理员用户")
print("=" * 60)
try:
with db_manager.get_session() as session:
# 检查表结构
inspector = inspect(db_manager.engine)
if 'users' not in inspector.get_table_names():
print("错误: users表不存在请先运行 python init_database.py")
sys.exit(1)
existing_columns = [col['name'] for col in inspector.get_columns('users')]
print(f"users表字段: {existing_columns}")
# Check whether the admin user already exists
admin_user = session.query(User).filter(User.username == 'admin').first()
if admin_user:
print(f"\nFound admin user (ID: {admin_user.id})")
print(f" Email: {admin_user.email}")
print(f" Role: {admin_user.role}")
print(f" Active: {admin_user.is_active}")
# Verify the password
password_ok = check_password_hash(admin_user.password_hash, 'admin123')
print(f" Password check: {'OK' if password_ok else 'FAIL'}")
if not password_ok:
print("\nPassword mismatch, updating...")
admin_user.password_hash = generate_password_hash('admin123')
admin_user.is_active = True
if hasattr(admin_user, 'region'):
admin_user.region = None
session.commit()
print("Password reset to: admin123")
if not admin_user.is_active:
print("User inactive, activating...")
admin_user.is_active = True
session.commit()
print("User activated")
# Final verification
test_password = check_password_hash(admin_user.password_hash, 'admin123')
if test_password and admin_user.is_active:
print("\nAdmin user is ready!")
print(" Username: admin")
print(" Password: admin123")
print(" Status: active")
else:
print("\nWarning: user is in an inconsistent state")
print(f" Password OK: {test_password}")
print(f" Active: {admin_user.is_active}")
else:
print("\nAdmin user not found, creating...")
# Prepare the password hash
password_hash = generate_password_hash('admin123')
# Insert via the model first; fall back to raw SQL on schema mismatch
try:
# Try the ORM model first
new_admin = User(
username='admin',
email='admin@tsp.com',
password_hash=password_hash,
role='admin',
is_active=True
)
if 'region' in existing_columns:
new_admin.region = None
session.add(new_admin)
session.commit()
print("Created via the ORM model")
except Exception as model_error:
print(f"Model creation failed: {model_error}")
print("Falling back to a raw SQL insert...")
session.rollback()
# Insert with raw SQL
insert_fields = ['username', 'email', 'password_hash', 'role']
insert_values = {
'username': 'admin',
'email': 'admin@tsp.com',
'password_hash': password_hash,
'role': 'admin'
}
if 'is_active' in existing_columns:
insert_fields.append('is_active')
insert_values['is_active'] = True
if 'region' in existing_columns:
insert_fields.append('region')
insert_values['region'] = None
fields_str = ', '.join(insert_fields)
values_str = ', '.join([f":{k}" for k in insert_fields])
sql = f"""
INSERT INTO users ({fields_str})
VALUES ({values_str})
"""
session.execute(text(sql), insert_values)
session.commit()
print("Created via raw SQL")
# Verify the result
verify_user = session.query(User).filter(User.username == 'admin').first()
if verify_user:
test_password = check_password_hash(verify_user.password_hash, 'admin123')
print(f"\nVerification:")
print(f" User ID: {verify_user.id}")
print(f" Password OK: {test_password}")
print(f" Active: {verify_user.is_active}")
if test_password and verify_user.is_active:
print("\nAdmin user created successfully!")
print(" Username: admin")
print(" Password: admin123")
else:
print("\nWarning: user created but in an inconsistent state")
# Create the other sample users
for username, email, password, role, region in [
('overseas_ops', 'overseas@tsp.com', 'ops123', 'overseas_ops', 'overseas'),
('domestic_ops', 'domestic@tsp.com', 'ops123', 'domestic_ops', 'domestic')
]:
ops_user = session.query(User).filter(User.username == username).first()
if not ops_user:
print(f"\nCreating user {username}...")
try:
new_user = User(
username=username,
email=email,
password_hash=generate_password_hash(password),
role=role,
is_active=True
)
if 'region' in existing_columns:
new_user.region = region
session.add(new_user)
session.commit()
print(f" User {username} created")
except Exception as e:
print(f" Failed to create user {username}: {e}")
session.rollback()
print("\n" + "=" * 60)
print("Done!")
print("=" * 60)
except Exception as e:
print(f"Error: {e}")
import traceback
traceback.print_exc()
sys.exit(1)


@@ -3,5 +3,14 @@
"max_history": 10, "max_history": 10,
"refresh_interval": 10, "refresh_interval": 10,
"auto_monitoring": true, "auto_monitoring": true,
"agent_mode": true "agent_mode": true,
"api_provider": "openai",
"api_base_url": "",
"api_key": "",
"model_name": "qwen-turbo",
"model_temperature": 0.7,
"model_max_tokens": 1000,
"server_port": 5000,
"websocket_port": 8765,
"log_level": "INFO"
} }


@@ -1,9 +1,9 @@
{ {
"init_time": "2025-09-19T18:57:01.015501", "init_time": "2025-10-31T12:46:01.637890",
"database_version": "MySQL 8.4.6", "database_version": "MySQL 8.4.6",
"database_url": "mysql+pymysql://tsp_assistant:***@43.134.68.207/tsp_assistant?charset=utf8mb4", "database_url": "mysql+pymysql://tsp_assistant:***@43.134.68.207/tsp_assistant?charset=utf8mb4",
"migrations_applied": 0, "migrations_applied": 0,
"tables_created": 15, "tables_created": 16,
"initial_data_inserted": true, "initial_data_inserted": true,
"verification_passed": true "verification_passed": true
} }

119
database_migration_notes.md Normal file

@@ -0,0 +1,119 @@
# Database migration notes
## Work-order process-history migration
### New table: `work_order_process_history`
```sql
CREATE TABLE IF NOT EXISTS work_order_process_history (
id INTEGER PRIMARY KEY AUTOINCREMENT,
work_order_id INTEGER NOT NULL,
processor_name VARCHAR(100) NOT NULL,
processor_role VARCHAR(50),
processor_region VARCHAR(50),
process_content TEXT NOT NULL,
action_type VARCHAR(50) NOT NULL,
previous_status VARCHAR(50),
new_status VARCHAR(50),
assigned_module VARCHAR(50),
process_time DATETIME NOT NULL,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (work_order_id) REFERENCES work_orders(id)
);
CREATE INDEX idx_process_history_workorder ON work_order_process_history(work_order_id);
CREATE INDEX idx_process_history_time ON work_order_process_history(process_time);
```
### New columns on the `work_orders` table
With SQLite, the columns can be added with the following SQL:
```sql
-- Note: SQLite cannot add multiple columns in a single ALTER TABLE; add them one at a time
ALTER TABLE work_orders ADD COLUMN assigned_module VARCHAR(50);
ALTER TABLE work_orders ADD COLUMN module_owner VARCHAR(100);
ALTER TABLE work_orders ADD COLUMN dispatcher VARCHAR(100);
ALTER TABLE work_orders ADD COLUMN dispatch_time DATETIME;
ALTER TABLE work_orders ADD COLUMN region VARCHAR(50);
```
### Migrating with a Python script (recommended)
Create a migration script `migrate_process_history.py`:
```python
# -*- coding: utf-8 -*-
"""
Database migration script: add the work-order process-history table and related columns
"""
from src.core.database import db_manager
from src.core.models import Base, WorkOrderProcessHistory
from sqlalchemy import text
def migrate_database():
"""Run the database migration"""
try:
with db_manager.get_session() as session:
# Create the new table
WorkOrderProcessHistory.__table__.create(db_manager.engine, checkfirst=True)
# Check for and add the new columns (SQLite needs them added one at a time)
try:
# Adding a column that already exists raises an error, which can be ignored
session.execute(text("ALTER TABLE work_orders ADD COLUMN assigned_module VARCHAR(50)"))
except Exception as e:
print(f"字段 assigned_module 可能已存在: {e}")
try:
session.execute(text("ALTER TABLE work_orders ADD COLUMN module_owner VARCHAR(100)"))
except Exception as e:
print(f"字段 module_owner 可能已存在: {e}")
try:
session.execute(text("ALTER TABLE work_orders ADD COLUMN dispatcher VARCHAR(100)"))
except Exception as e:
print(f"字段 dispatcher 可能已存在: {e}")
try:
session.execute(text("ALTER TABLE work_orders ADD COLUMN dispatch_time DATETIME"))
except Exception as e:
print(f"字段 dispatch_time 可能已存在: {e}")
try:
session.execute(text("ALTER TABLE work_orders ADD COLUMN region VARCHAR(50)"))
except Exception as e:
print(f"字段 region 可能已存在: {e}")
session.commit()
print("数据库迁移完成!")
except Exception as e:
print(f"数据库迁移失败: {e}")
raise
if __name__ == "__main__":
migrate_database()
```
### Running the migration
Run the migration script:
```bash
python migrate_process_history.py
```
Or execute it from an interactive Python session:
```python
from migrate_process_history import migrate_database
migrate_database()
```
## Notes
1. **Back up the database**: always back up the existing database before migrating
2. **SQLite limitation**: with SQLite, ALTER TABLE ADD COLUMN fails if the column already exists
3. **Data migration**: process history stored in the existing `resolution` field is not migrated to the new table automatically and must be handled manually
4. **Indexes**: the new table already includes the necessary indexes; add more if the data volume grows
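The SQLite limitation in point 2 can also be avoided up front: instead of catching the ALTER TABLE error, check the existing columns first. A sketch using the stdlib sqlite3 module (the project itself goes through SQLAlchemy, so the `add_column_if_missing` helper is illustrative only):

```python
import sqlite3

def add_column_if_missing(conn: sqlite3.Connection, table: str, column: str, ddl_type: str) -> bool:
    """Add a column only if PRAGMA table_info does not already list it."""
    existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    if column in existing:
        return False  # already present, nothing to do
    conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl_type}")
    return True

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE work_orders (id INTEGER PRIMARY KEY)")
print(add_column_if_missing(conn, "work_orders", "region", "VARCHAR(50)"))  # True: column added
print(add_column_if_missing(conn, "work_orders", "region", "VARCHAR(50)"))  # False: already there
```

This makes the migration idempotent without relying on exception text, at the cost of an extra PRAGMA query per column.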


@@ -1,58 +0,0 @@
version: '3.8'
services:
tsp-assistant:
build: .
container_name: tsp_assistant
ports:
- "5000:5000"
environment:
- PYTHONPATH=/app
- DATABASE_URL=sqlite:///tsp_assistant.db
volumes:
- ./data:/app/data
- ./logs:/app/logs
- ./backups:/app/backups
- tsp_db:/app
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5000/api/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
# MySQL database service (optional)
mysql:
image: mysql:8.0
container_name: tsp_mysql
environment:
MYSQL_ROOT_PASSWORD: root123456
MYSQL_DATABASE: tsp_assistant
MYSQL_USER: tsp_user
MYSQL_PASSWORD: tsp_password
ports:
- "3306:3306"
volumes:
- mysql_data:/var/lib/mysql
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
restart: unless-stopped
command: --default-authentication-plugin=mysql_native_password
# Nginx reverse proxy (optional)
nginx:
image: nginx:alpine
container_name: tsp_nginx
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
- ./ssl:/etc/nginx/ssl
depends_on:
- tsp-assistant
restart: unless-stopped
volumes:
tsp_db:
mysql_data:

86
fix_admin_user.py Normal file

@@ -0,0 +1,86 @@
# -*- coding: utf-8 -*-
"""
修复管理员用户
"""
import sys
import os
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from src.core.database import db_manager
from sqlalchemy import text
from werkzeug.security import generate_password_hash
print("Fixing admin user...")
try:
with db_manager.get_session() as session:
# Check if admin user exists
result = session.execute(text("SELECT id FROM users WHERE username = 'admin'"))
admin_row = result.fetchone()
password_hash = generate_password_hash('admin123')
if admin_row:
print("Admin user exists, updating password...")
session.execute(text("""
UPDATE users
SET password_hash = :password_hash,
is_active = 1,
updated_at = NOW()
WHERE username = 'admin'
"""), {'password_hash': password_hash})
session.commit()
print("Admin password updated successfully")
else:
print("Admin user not found, creating...")
session.execute(text("""
INSERT INTO users (
username, email, password_hash, role, full_name,
is_active, region, created_at, updated_at
) VALUES (
'admin', 'admin@tsp.com', :password_hash, 'admin', 'Administrator',
1, NULL, NOW(), NOW()
)
"""), {'password_hash': password_hash})
session.commit()
print("Admin user created successfully")
# Verify
result = session.execute(text("""
SELECT username, email, role, is_active, full_name
FROM users
WHERE username = 'admin'
"""))
admin_data = result.fetchone()
if admin_data:
print("\nVerification:")
print(f" Username: {admin_data[0]}")
print(f" Email: {admin_data[1]}")
print(f" Role: {admin_data[2]}")
print(f" Is Active: {admin_data[3]}")
print(f" Full Name: {admin_data[4]}")
# Test password verification
result = session.execute(text("SELECT password_hash FROM users WHERE username = 'admin'"))
stored_hash = result.fetchone()[0]
from werkzeug.security import check_password_hash
password_ok = check_password_hash(stored_hash, 'admin123')
print(f" Password Check: {'PASS' if password_ok else 'FAIL'}")
if password_ok and admin_data[3]:
print("\n[SUCCESS] Admin user is ready for login!")
print(" Username: admin")
print(" Password: admin123")
else:
print("\n[WARNING] User exists but password or status issue")
else:
print("\n[ERROR] User not found after creation")
except Exception as e:
print(f"Error: {e}")
import traceback
traceback.print_exc()
sys.exit(1)

161
fix_git_push.bat Normal file

@@ -0,0 +1,161 @@
@echo off
setlocal enabledelayedexpansion
chcp 65001 >nul 2>&1
echo ========================================
echo Git push diagnostics and repair tool
echo ========================================
echo.
:: 1. Check Git status
echo [1] Checking Git status...
git status >nul 2>&1
if %errorlevel% neq 0 (
echo [FAIL] Git is not initialized or not available
echo Make sure that:
echo 1. Git is installed
echo 2. The current directory is a Git repository
pause
exit /b 1
)
echo [OK] Git status normal
echo.
:: 2. Check the remote configuration
echo [2] Checking the remote repository...
git remote -v >nul 2>&1
if %errorlevel% neq 0 (
echo [FAIL] No remote repository configured
echo Run first: git remote add origin ^<repository URL^>
pause
exit /b 1
)
echo [OK] Remote repository configured
git remote -v
echo.
:: 3. Determine the current branch
echo [3] Checking the current branch...
for /f "tokens=*" %%b in ('git branch --show-current 2^>nul') do set current_branch=%%b
if "!current_branch!"=="" (
echo [FAIL] Could not determine the current branch
pause
exit /b 1
)
echo Current branch: !current_branch!
echo.
:: 4. Check for uncommitted changes
echo [4] Checking for uncommitted changes...
git status --porcelain >nul 2>&1
if %errorlevel% equ 0 (
git status --porcelain | findstr /r "." >nul
if !errorlevel! equ 0 (
echo [WARN] There are uncommitted changes
set /p commit="Commit them first? (y/n): "
if /i "!commit!"=="y" (
git add .
set /p msg="Commit message: "
if "!msg!"=="" set msg=auto commit
git commit -m "!msg!"
if !errorlevel! neq 0 (
echo [FAIL] Commit failed
pause
exit /b 1
)
)
)
)
echo [OK] Working tree handled
echo.
:: 5. Fetch the remote branch info
echo [5] Fetching remote branch info...
git fetch origin >nul 2>&1
if %errorlevel% neq 0 (
echo [FAIL] Could not reach the remote repository
echo.
echo Possible causes:
echo 1. Network connectivity problems
echo 2. Wrong remote URL
echo 3. Authentication required - check that an SSH key or token is configured
echo.
echo Remote URL:
git config --get remote.origin.url
echo.
echo Suggestions:
echo 1. Check the network connection
echo 2. Verify the remote URL
echo 3. Configure an SSH key or access token
pause
exit /b 1
)
echo [OK] Remote repository reachable
echo.
:: 6. Show branch tracking
echo [6] Checking branch tracking...
git branch -vv
echo.
:: 7. Push to the remote
echo [7] Pushing...
echo Current branch: !current_branch!
echo.
:: Does this branch exist on the remote?
git ls-remote --heads origin !current_branch! >nul 2>&1
if %errorlevel% equ 0 (
echo Remote branch !current_branch! exists
echo.
echo Pushing with the current branch name...
git push origin !current_branch!
if !errorlevel! neq 0 (
echo.
echo [FAIL] Push failed, pulling latest changes...
git pull origin !current_branch! --rebase
if !errorlevel! equ 0 (
echo Retrying the push...
git push origin !current_branch!
) else (
echo [FAIL] Pull failed, resolve the conflicts first
pause
exit /b 1
)
)
) else (
echo Remote branch !current_branch! does not exist
echo.
echo Setting upstream and pushing...
git push -u origin !current_branch!
)
if %errorlevel% equ 0 (
echo.
echo [OK] Push succeeded!
) else (
echo.
echo [FAIL] Push failed
echo.
echo Common problems and fixes:
echo.
echo 1. Authentication problems:
echo - Test your SSH key: ssh -T git@github.com for GitHub, ssh -T git@gitee.com for Gitee
echo - Or use HTTPS with an access token
echo.
echo 2. Branch conflicts:
echo - Run: git pull origin !current_branch! --rebase
echo - After resolving: git push origin !current_branch!
echo.
echo 3. Different remote branch name:
echo - List remote branches: git branch -r
echo - You may need: git push origin main or git push origin master
echo.
pause
exit /b 1
)
echo.
echo ========================================
echo Diagnostics finished!
echo ========================================
pause

211
init.sql Normal file

@@ -0,0 +1,211 @@
-- TSP Assistant database initialization script
-- Create the database if it does not exist
CREATE DATABASE IF NOT EXISTS tsp_assistant CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
-- Switch to the database
USE tsp_assistant;
-- Users table
CREATE TABLE IF NOT EXISTS users (
id INT AUTO_INCREMENT PRIMARY KEY,
username VARCHAR(50) UNIQUE NOT NULL,
email VARCHAR(100) UNIQUE NOT NULL,
password_hash VARCHAR(255) NOT NULL,
role ENUM('admin', 'user', 'operator') DEFAULT 'user',
is_active BOOLEAN DEFAULT TRUE,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
-- Work orders table
CREATE TABLE IF NOT EXISTS work_orders (
id INT AUTO_INCREMENT PRIMARY KEY,
order_id VARCHAR(50) UNIQUE NOT NULL,
title VARCHAR(200) NOT NULL,
description TEXT NOT NULL,
category VARCHAR(100) NOT NULL,
priority VARCHAR(20) NOT NULL DEFAULT 'medium',
status VARCHAR(20) NOT NULL DEFAULT 'pending',
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
resolution TEXT,
satisfaction_score FLOAT,
-- Feishu integration fields
feishu_record_id VARCHAR(100) UNIQUE,
assignee VARCHAR(100),
solution TEXT,
ai_suggestion TEXT,
-- Extended Feishu fields
source VARCHAR(50),
module VARCHAR(100),
created_by VARCHAR(100),
wilfulness VARCHAR(100),
date_of_close TIMESTAMP NULL,
vehicle_type VARCHAR(100),
vin_sim VARCHAR(50),
app_remote_control_version VARCHAR(100),
hmi_sw VARCHAR(100),
parent_record VARCHAR(100),
has_updated_same_day VARCHAR(50),
operating_time VARCHAR(100),
-- Work-order dispatch and permission fields
assigned_module VARCHAR(50),
module_owner VARCHAR(100),
dispatcher VARCHAR(100),
dispatch_time TIMESTAMP NULL,
region VARCHAR(50),
INDEX idx_order_id (order_id),
INDEX idx_status (status),
INDEX idx_priority (priority),
INDEX idx_created_at (created_at),
INDEX idx_assigned_module (assigned_module),
INDEX idx_region (region),
INDEX idx_feishu_record_id (feishu_record_id)
);
-- Alerts table
CREATE TABLE IF NOT EXISTS alerts (
id INT AUTO_INCREMENT PRIMARY KEY,
rule_name VARCHAR(100) NOT NULL,
alert_type VARCHAR(50) NOT NULL,
level VARCHAR(20) NOT NULL DEFAULT 'info',
severity VARCHAR(20) NOT NULL DEFAULT 'medium',
message TEXT NOT NULL,
data TEXT,
is_active BOOLEAN DEFAULT TRUE,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
resolved_at TIMESTAMP NULL,
INDEX idx_level (level),
INDEX idx_alert_type (alert_type),
INDEX idx_severity (severity),
INDEX idx_is_active (is_active),
INDEX idx_created_at (created_at)
);
-- Conversations table
CREATE TABLE IF NOT EXISTS conversations (
id INT AUTO_INCREMENT PRIMARY KEY,
work_order_id INT,
user_message TEXT NOT NULL,
assistant_response TEXT NOT NULL,
timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
confidence_score FLOAT,
knowledge_used TEXT,
response_time FLOAT,
FOREIGN KEY (work_order_id) REFERENCES work_orders(id) ON DELETE CASCADE,
INDEX idx_work_order_id (work_order_id),
INDEX idx_timestamp (timestamp)
);
-- Knowledge base table
CREATE TABLE IF NOT EXISTS knowledge_entries (
id INT AUTO_INCREMENT PRIMARY KEY,
question TEXT NOT NULL,
answer TEXT NOT NULL,
category VARCHAR(100) NOT NULL,
confidence_score FLOAT DEFAULT 0.0,
usage_count INT DEFAULT 0,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
is_active BOOLEAN DEFAULT TRUE,
is_verified BOOLEAN DEFAULT FALSE,
verified_by VARCHAR(100),
verified_at TIMESTAMP NULL,
vector_embedding TEXT,
INDEX idx_category (category),
INDEX idx_is_active (is_active),
INDEX idx_is_verified (is_verified)
);
-- Work-order suggestions table
CREATE TABLE IF NOT EXISTS work_order_suggestions (
id INT AUTO_INCREMENT PRIMARY KEY,
work_order_id INT NOT NULL,
ai_suggestion TEXT,
human_resolution TEXT,
ai_similarity FLOAT,
approved BOOLEAN DEFAULT FALSE,
use_human_resolution BOOLEAN DEFAULT FALSE,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
FOREIGN KEY (work_order_id) REFERENCES work_orders(id) ON DELETE CASCADE,
INDEX idx_work_order_id (work_order_id),
INDEX idx_approved (approved)
);
-- Work-order process-history table
CREATE TABLE IF NOT EXISTS work_order_process_history (
id INT AUTO_INCREMENT PRIMARY KEY,
work_order_id INT NOT NULL,
processor_name VARCHAR(100) NOT NULL,
processor_role VARCHAR(50),
processor_region VARCHAR(50),
process_content TEXT NOT NULL,
action_type VARCHAR(50) NOT NULL,
previous_status VARCHAR(50),
new_status VARCHAR(50),
assigned_module VARCHAR(50),
process_time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (work_order_id) REFERENCES work_orders(id) ON DELETE CASCADE,
INDEX idx_work_order_id (work_order_id),
INDEX idx_process_time (process_time),
INDEX idx_action_type (action_type),
INDEX idx_processor_name (processor_name)
);
-- System settings table
CREATE TABLE IF NOT EXISTS system_settings (
id INT AUTO_INCREMENT PRIMARY KEY,
key_name VARCHAR(100) UNIQUE NOT NULL,
value TEXT,
description TEXT,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
-- Insert the default admin user
INSERT IGNORE INTO users (username, email, password_hash, role) VALUES
('admin', 'admin@tsp.com', '$2b$12$LQv3c1yqBWVHxkd0LHAkCOYz6TtxMQJqhN8/LewdBPj4J/8K8K8K8', 'admin');
-- Insert the default system settings
INSERT IGNORE INTO system_settings (key_name, value, description) VALUES
('system_name', 'TSP智能助手', '系统名称'),
('version', '2.0.0', '系统版本'),
('maintenance_mode', 'false', '维护模式'),
('max_concurrent_users', '100', '最大并发用户数'),
('session_timeout', '3600', '会话超时时间(秒)');
-- Analytics table
CREATE TABLE IF NOT EXISTS analytics (
id INT AUTO_INCREMENT PRIMARY KEY,
date TIMESTAMP NOT NULL,
total_orders INT DEFAULT 0,
resolved_orders INT DEFAULT 0,
avg_resolution_time FLOAT DEFAULT 0.0,
satisfaction_avg FLOAT DEFAULT 0.0,
knowledge_hit_rate FLOAT DEFAULT 0.0,
category_distribution TEXT,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
INDEX idx_date (date)
);
-- Vehicle realtime data table (if it does not exist)
CREATE TABLE IF NOT EXISTS vehicle_data (
id INT AUTO_INCREMENT PRIMARY KEY,
vehicle_id VARCHAR(50) NOT NULL,
vehicle_vin VARCHAR(17),
data_type VARCHAR(50) NOT NULL,
data_value TEXT NOT NULL,
timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
is_active BOOLEAN DEFAULT TRUE,
INDEX idx_vehicle_id (vehicle_id),
INDEX idx_vehicle_vin (vehicle_vin),
INDEX idx_data_type (data_type),
INDEX idx_timestamp (timestamp)
);


@@ -1,7 +1,6 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
""" """
TSP助手数据库初始化脚本 - 重构版本 TSP助手数据库初始化脚本
结合项目新特性,提供更高效的数据库初始化和管理功能
""" """
import sys import sys
@@ -19,10 +18,12 @@ sys.path.append(os.path.dirname(os.path.abspath(__file__)))
from src.config.config import Config from src.config.config import Config
from src.utils.helpers import setup_logging from src.utils.helpers import setup_logging
from src.core.database import db_manager from src.core.database import db_manager
from src.core.models import Base, WorkOrder, KnowledgeEntry, Conversation, Analytics, Alert, VehicleData from src.core.models import (
Base, WorkOrder, KnowledgeEntry, Conversation, Analytics, Alert, VehicleData,
WorkOrderSuggestion, WorkOrderProcessHistory
)
class DatabaseInitializer: class DatabaseInitializer:
"""数据库初始化器 - 重构版本"""
def __init__(self):
self.logger = logging.getLogger(__name__)
@@ -57,11 +58,11 @@ class DatabaseInitializer:
def initialize_database(self, force_reset: bool = False) -> bool:
"""初始化数据库 - 主入口函数"""
print("=" * 80)
print("TSP智能助手数据库初始化系统")
print("=" * 80)
print(f"数据库类型: {self.db_version}")
print(f"连接地址: {self.db_url}")
print(f"初始化时间: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
print("=" * 80)
try:
@@ -97,49 +98,49 @@ class DatabaseInitializer:
self._generate_init_report()
print("\n" + "=" * 80)
print("数据库初始化完成!")
print("=" * 80)
return True
except Exception as e:
print(f"\n数据库初始化失败: {e}")
self.logger.error(f"数据库初始化失败: {e}", exc_info=True)
return False
def _test_connection(self) -> bool:
"""测试数据库连接"""
print("\n测试数据库连接...")
try:
if db_manager.test_connection():
print("数据库连接成功")
return True
else:
print("数据库连接失败")
return False
except Exception as e:
print(f"数据库连接测试异常: {e}")
return False
def _reset_database(self) -> bool:
"""重置数据库(谨慎使用)"""
print("\n重置数据库...")
try:
# 删除所有表
Base.metadata.drop_all(bind=db_manager.engine)
print("数据库表删除成功")
# 重新创建所有表
Base.metadata.create_all(bind=db_manager.engine)
print("数据库表重新创建成功")
return True
except Exception as e:
print(f"数据库重置失败: {e}")
return False
def _create_tables(self) -> bool:
"""创建数据库表"""
print("\n创建数据库表...")
try:
# 获取现有表信息
inspector = inspect(db_manager.engine)
@@ -153,18 +154,18 @@ class DatabaseInitializer:
created_tables = set(new_tables) - set(existing_tables)
if created_tables:
print(f"新创建表: {', '.join(created_tables)}")
else:
print("所有表已存在")
return True
except Exception as e:
print(f"创建数据库表失败: {e}")
return False
def _run_migrations(self) -> bool:
"""执行数据库迁移"""
print("\n执行数据库迁移...")
migrations = [
self._migrate_knowledge_verification_fields,
@@ -173,6 +174,8 @@ class DatabaseInitializer:
self._migrate_conversation_enhancements,
self._migrate_workorder_enhancements,
self._migrate_workorder_suggestions_enhancements,
self._migrate_workorder_dispatch_fields,
self._migrate_workorder_process_history_table,
self._migrate_analytics_enhancements,
self._migrate_system_optimization_fields
]
@@ -184,14 +187,14 @@ class DatabaseInitializer:
success_count += 1
except Exception as e:
self.logger.error(f"迁移失败: {migration.__name__}: {e}")
print(f"迁移 {migration.__name__} 失败: {e}")
print(f"完成 {success_count}/{len(migrations)} 个迁移")
return success_count > 0
def _migrate_knowledge_verification_fields(self) -> bool:
"""迁移知识库验证字段"""
print(" 检查知识库验证字段...")
fields_to_add = [
('is_verified', 'BOOLEAN DEFAULT FALSE'),
@@ -203,7 +206,7 @@ class DatabaseInitializer:
def _migrate_alert_severity_field(self) -> bool:
"""迁移预警严重程度字段"""
print(" 检查预警严重程度字段...")
fields_to_add = [
('severity', 'VARCHAR(20) DEFAULT \'medium\'')
@@ -213,50 +216,50 @@ class DatabaseInitializer:
def _migrate_vehicle_data_table(self) -> bool:
"""迁移车辆数据表"""
print(" 检查车辆数据表...")
try:
with db_manager.get_session() as session:
# 检查表是否存在
inspector = inspect(db_manager.engine)
if 'vehicle_data' not in inspector.get_table_names():
print(" 创建vehicle_data表...")
VehicleData.__table__.create(session.bind, checkfirst=True)
print(" vehicle_data表创建成功")
else:
print(" vehicle_data表已存在")
session.commit()
return True
except Exception as e:
print(f" 车辆数据表迁移失败: {e}")
return False
def _migrate_conversation_enhancements(self) -> bool:
"""迁移对话增强字段"""
print(" 检查对话增强字段...")
fields_to_add = [
('timestamp', 'TIMESTAMP DEFAULT CURRENT_TIMESTAMP'),
('knowledge_used', 'TEXT'),
('response_time', 'FLOAT')
]
return self._add_table_columns('conversations', fields_to_add)
def _migrate_workorder_enhancements(self) -> bool:
"""迁移工单增强字段"""
print(" 检查工单增强字段...")
fields_to_add = [
('resolution', 'TEXT'),
('satisfaction_score', 'FLOAT'),
# 飞书集成字段
('feishu_record_id', 'VARCHAR(100)'),
('assignee', 'VARCHAR(100)'),
('solution', 'TEXT'),
('ai_suggestion', 'TEXT'),
# 扩展飞书字段
('source', 'VARCHAR(50)'),
('module', 'VARCHAR(100)'),
('created_by', 'VARCHAR(100)'),
@@ -275,17 +278,69 @@ class DatabaseInitializer:
def _migrate_workorder_suggestions_enhancements(self) -> bool:
"""迁移工单建议表增强字段"""
print(" 检查工单建议表增强字段...")
fields_to_add = [
('ai_similarity', 'FLOAT'),
('approved', 'BOOLEAN DEFAULT FALSE'),
('use_human_resolution', 'BOOLEAN DEFAULT FALSE') # 是否使用人工描述入库
]
return self._add_table_columns('work_order_suggestions', fields_to_add)
def _migrate_workorder_dispatch_fields(self) -> bool:
"""迁移工单分发和权限管理字段"""
print(" 检查工单分发和权限管理字段...")
fields_to_add = [
('assigned_module', 'VARCHAR(50)'),
('module_owner', 'VARCHAR(100)'),
('dispatcher', 'VARCHAR(100)'),
('dispatch_time', 'DATETIME'),
('region', 'VARCHAR(50)')
]
return self._add_table_columns('work_orders', fields_to_add)
def _migrate_workorder_process_history_table(self) -> bool:
"""迁移工单处理过程记录表"""
print(" 检查工单处理过程记录表...")
try:
with db_manager.get_session() as session:
# 检查表是否存在
inspector = inspect(db_manager.engine)
if 'work_order_process_history' not in inspector.get_table_names():
print(" 创建work_order_process_history表...")
WorkOrderProcessHistory.__table__.create(session.bind, checkfirst=True)
print(" work_order_process_history表创建成功")
else:
print(" work_order_process_history表已存在")
# 检查字段是否完整
existing_columns = [col['name'] for col in inspector.get_columns('work_order_process_history')]
required_columns = [
'processor_name', 'processor_role', 'processor_region',
'process_content', 'action_type', 'previous_status',
'new_status', 'assigned_module', 'process_time'
]
missing_columns = [col for col in required_columns if col not in existing_columns]
if missing_columns:
print(f" 缺少字段: {', '.join(missing_columns)}")
# 这里可以选择性地添加缺失字段,但通常表已经完整创建
else:
print(" 所有必需字段已存在")
session.commit()
return True
except Exception as e:
print(f" 工单处理过程记录表迁移失败: {e}")
return False
def _migrate_analytics_enhancements(self) -> bool:
"""迁移分析增强字段"""
print(" 检查分析增强字段...")
fields_to_add = [
('performance_score', 'FLOAT'),
@@ -298,7 +353,7 @@ class DatabaseInitializer:
def _migrate_system_optimization_fields(self) -> bool:
"""迁移系统优化字段"""
print(" 检查系统优化字段...")
# 为各个表添加系统优化相关字段
tables_and_fields = {
@@ -337,7 +392,7 @@ class DatabaseInitializer:
skipped_count += 1
continue
print(f" 添加字段 {table_name}.{field_name}...")
# 使用单独的会话添加每个字段,避免长时间锁定
with db_manager.get_session() as session:
@@ -345,22 +400,22 @@ class DatabaseInitializer:
session.execute(text(alter_sql))
session.commit()
print(f" 字段 {field_name} 添加成功")
added_count += 1
except Exception as field_error:
print(f" 字段 {field_name} 添加失败: {field_error}")
# 继续处理其他字段,不中断整个过程
if added_count > 0:
print(f" 成功添加 {added_count} 个字段,跳过 {skipped_count} 个已存在字段")
else:
print(f" 所有字段都已存在,跳过 {skipped_count} 个字段")
return True
except Exception as e:
print(f" 添加字段过程失败: {e}")
return False
def _column_exists(self, table_name: str, column_name: str) -> bool:
@@ -395,14 +450,14 @@ class DatabaseInitializer:
def _insert_initial_data(self) -> bool:
"""插入初始数据"""
print("\n插入初始数据...")
try:
with db_manager.get_session() as session:
# 检查是否已有数据
existing_count = session.query(KnowledgeEntry).count()
if existing_count > 0:
print(f" 数据库中已有 {existing_count} 条知识库条目,跳过初始数据插入")
return True
# 插入初始知识库数据
@@ -412,7 +467,7 @@ class DatabaseInitializer:
session.add(entry)
session.commit()
print(f" 成功插入 {len(initial_data)} 条知识库条目")
# 添加示例车辆数据
self._add_sample_vehicle_data()
@@ -422,7 +477,7 @@ class DatabaseInitializer:
return True
except Exception as e:
print(f" 插入初始数据失败: {e}")
return False
def _get_initial_knowledge_data(self) -> List[Dict[str, Any]]:
@@ -549,13 +604,13 @@ class DatabaseInitializer:
success = vehicle_manager.add_sample_vehicle_data()
if success:
print(" 示例车辆数据添加成功")
else:
print(" 示例车辆数据添加失败")
return success
except Exception as e:
print(f" 添加示例车辆数据失败: {e}")
return False
def _verify_existing_knowledge(self) -> bool:
@@ -568,7 +623,7 @@ class DatabaseInitializer:
).all()
if unverified_entries:
print(f" 发现 {len(unverified_entries)} 条未验证的知识库条目")
# 将现有的知识库条目标记为已验证
for entry in unverified_entries:
@@ -581,18 +636,18 @@ class DatabaseInitializer:
entry.relevance_score = 0.7
session.commit()
print(f" 成功验证 {len(unverified_entries)} 条知识库条目")
else:
print(" 所有知识库条目已验证")
return True
except Exception as e:
print(f" 验证知识库条目失败: {e}")
return False
def _verify_database_integrity(self) -> bool:
"""验证数据库完整性"""
print("\n验证数据库完整性...")
try:
with db_manager.get_session() as session:
@@ -603,7 +658,9 @@ class DatabaseInitializer:
'knowledge_entries': KnowledgeEntry,
'analytics': Analytics,
'alerts': Alert,
'vehicle_data': VehicleData,
'work_order_suggestions': WorkOrderSuggestion,
'work_order_process_history': WorkOrderProcessHistory
}
total_records = 0
@@ -611,19 +668,19 @@ class DatabaseInitializer:
try:
count = session.query(model_class).count()
total_records += count
print(f" {table_name}: {count} 条记录")
except Exception as e:
print(f" {table_name}: 检查失败 - {e}")
print(f" 总记录数: {total_records}")
# 检查关键字段
self._check_critical_fields()
print(" 数据库完整性验证通过")
return True
except Exception as e:
print(f" 数据库完整性验证失败: {e}")
return False
def _check_critical_fields(self):
@@ -632,19 +689,23 @@ class DatabaseInitializer:
("knowledge_entries", "is_verified"), ("knowledge_entries", "is_verified"),
("alerts", "severity"), ("alerts", "severity"),
("vehicle_data", "vehicle_id"), ("vehicle_data", "vehicle_id"),
("conversations", "timestamp"),
("conversations", "response_time"), ("conversations", "response_time"),
("work_orders", "ai_suggestion") ("work_orders", "ai_suggestion"),
("work_orders", "assigned_module"),
("work_order_process_history", "processor_name"),
("work_order_suggestions", "ai_similarity")
]
for table_name, field_name in critical_checks:
if self._column_exists(table_name, field_name):
print(f" {table_name}.{field_name} 字段存在")
else:
print(f" {table_name}.{field_name} 字段缺失")
def _generate_init_report(self):
"""生成初始化报告"""
print("\n生成初始化报告...")
try:
report = {
@@ -662,10 +723,10 @@ class DatabaseInitializer:
with open(report_path, 'w', encoding='utf-8') as f:
json.dump(report, f, indent=2, ensure_ascii=False)
print(f" 初始化报告已保存到: {report_path}")
except Exception as e:
print(f" 生成初始化报告失败: {e}")
def _get_table_count(self) -> int:
"""获取表数量"""
@@ -678,7 +739,7 @@ class DatabaseInitializer:
def check_database_status(self) -> Dict[str, Any]:
"""检查数据库状态"""
print("\n" + "=" * 80)
print("数据库状态检查")
print("=" * 80)
try:
@@ -690,7 +751,9 @@ class DatabaseInitializer:
'knowledge_entries': KnowledgeEntry,
'analytics': Analytics,
'alerts': Alert,
'vehicle_data': VehicleData,
'work_order_suggestions': WorkOrderSuggestion,
'work_order_process_history': WorkOrderProcessHistory
}
status = {
@@ -706,10 +769,10 @@ class DatabaseInitializer:
count = session.query(model_class).count()
status["tables"][table_name] = count
status["total_records"] += count
print(f"{table_name}: {count} 条记录")
except Exception as e:
status["tables"][table_name] = f"错误: {e}"
print(f"{table_name}: 检查失败 - {e}")
# 检查车辆数据详情
if 'vehicle_data' in status["tables"] and isinstance(status["tables"]['vehicle_data'], int):
@@ -736,18 +799,18 @@ class DatabaseInitializer:
"unverified": unverified_count
}
print(f"\n总记录数: {status['total_records']}")
print("\n数据库状态检查完成")
return status
except Exception as e:
print(f"数据库状态检查失败: {e}")
return {"error": str(e)}
def main():
"""主函数"""
print("TSP智能助手数据库初始化工具")
print("=" * 80)
# 创建初始化器
@@ -757,7 +820,7 @@ def main():
force_reset = '--reset' in sys.argv or '--force' in sys.argv
if force_reset:
print("警告:将重置数据库,所有数据将被删除!")
try:
confirm = input("确定要继续吗?(y/N): ")
if confirm.lower() != 'y':
@@ -772,39 +835,13 @@ def main():
initializer.check_database_status()
print("\n" + "=" * 80)
print("数据库初始化成功!")
print("=" * 80)
print("✅ 已完成的操作:")
print(" - 创建所有数据库表")
print(" - 执行数据库迁移")
print(" - 添加知识库验证字段")
print(" - 创建车辆数据表")
print(" - 插入初始知识库数据")
print(" - 添加示例车辆数据")
print(" - 验证所有知识库条目")
print(" - 生成初始化报告")
print("\n🚀 现在您可以运行以下命令启动系统:")
print(" python start_dashboard.py")
print("\n🧪 或运行功能测试:")
print(" python test_new_features.py")
print("\n📋 新功能包括:")
print(" - 知识库分页显示")
print(" - 知识库验证机制")
print(" - 车辆实时数据管理")
print(" - 文件上传生成知识库")
print(" - 智能对话结合车辆数据")
print(" - 飞书同步功能")
print(" - 系统性能优化")
else:
print("\n" + "=" * 80)
print("数据库初始化失败!")
print("=" * 80)
print("请检查:")
print("1. 数据库文件权限")
print("2. 数据库服务是否运行")
print("3. 磁盘空间是否充足")
print("4. Python依赖库是否完整")
print("5. 配置文件是否正确")
if __name__ == "__main__": if __name__ == "__main__":
main() main()
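The migrations above all funnel through the `_column_exists` / `_add_table_columns` pair so that re-running the initializer is a no-op for columns that already exist. The bodies of those helpers are elided by the diff hunks, so the following is only a minimal sketch of that idempotent pattern, demonstrated against an in-memory SQLite database with the stdlib `sqlite3` module (the project itself uses SQLAlchemy sessions); the helper names and the `work_orders` fields are illustrative.

```python
# Sketch of the idempotent "check column, then ALTER TABLE" migration pattern.
# Assumption: helper names mirror the script's _column_exists/_add_table_columns.
import sqlite3

def column_exists(conn, table, column):
    # PRAGMA table_info returns one row per column; row[1] is the column name
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    return column in cols

def add_table_columns(conn, table, fields):
    added, skipped = 0, 0
    for name, ddl in fields:
        if column_exists(conn, table, name):
            skipped += 1  # already migrated: re-running is a no-op
            continue
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {name} {ddl}")
        added += 1
    return added, skipped

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE work_orders (id INTEGER PRIMARY KEY)")
fields = [("assigned_module", "VARCHAR(50)"), ("region", "VARCHAR(50)")]
first = add_table_columns(conn, "work_orders", fields)   # adds both columns
second = add_table_columns(conn, "work_orders", fields)  # skips both columns
```

Running each `ALTER TABLE` independently mirrors the script's "每个字段使用单独的会话" comment: one failing column does not roll back the others.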

logs/dashboard.log Normal file

File diff suppressed because one or more lines are too long

logs/tsp_assistant.log Normal file

@@ -0,0 +1,70 @@
2025-09-19 18:26:27,748 - src.vehicle.vehicle_data_manager - INFO - 添加车辆数据成功: V001 - location
2025-09-19 18:26:27,752 - src.vehicle.vehicle_data_manager - INFO - 添加车辆数据成功: V001 - status
2025-09-19 18:26:27,756 - src.vehicle.vehicle_data_manager - INFO - 添加车辆数据成功: V001 - battery
2025-09-19 18:26:27,759 - src.vehicle.vehicle_data_manager - INFO - 添加车辆数据成功: V001 - engine
2025-09-19 18:26:27,764 - src.vehicle.vehicle_data_manager - INFO - 添加车辆数据成功: V002 - location
2025-09-19 18:26:27,768 - src.vehicle.vehicle_data_manager - INFO - 添加车辆数据成功: V002 - status
2025-09-19 18:26:27,772 - src.vehicle.vehicle_data_manager - INFO - 添加车辆数据成功: V002 - fault
2025-09-19 18:26:27,773 - src.vehicle.vehicle_data_manager - INFO - 示例车辆数据添加成功
2025-09-19 18:53:30,187 - sqlalchemy.pool.impl.QueuePool - ERROR - Exception during reset or similar
Traceback (most recent call last):
File "C:\Users\Administrator.CHERY-NOT-8217.000\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages\sqlalchemy\pool\base.py", line 985, in _finalize_fairy
fairy._reset(
~~~~~~~~~~~~^
pool,
^^^^^
...<2 lines>...
asyncio_safe=can_manipulate_connection,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "C:\Users\Administrator.CHERY-NOT-8217.000\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages\sqlalchemy\pool\base.py", line 1433, in _reset
pool._dialect.do_rollback(self)
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "C:\Users\Administrator.CHERY-NOT-8217.000\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages\sqlalchemy\engine\default.py", line 711, in do_rollback
dbapi_connection.rollback()
~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\Administrator.CHERY-NOT-8217.000\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages\pymysql\connections.py", line 505, in rollback
self._read_ok_packet()
~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\Administrator.CHERY-NOT-8217.000\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages\pymysql\connections.py", line 465, in _read_ok_packet
pkt = self._read_packet()
File "C:\Users\Administrator.CHERY-NOT-8217.000\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages\pymysql\connections.py", line 751, in _read_packet
packet_header = self._read_bytes(4)
File "C:\Users\Administrator.CHERY-NOT-8217.000\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages\pymysql\connections.py", line 789, in _read_bytes
data = self._rfile.read(num_bytes)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.13_3.13.2032.0_x64__qbz5n2kfra8p0\Lib\socket.py", line 719, in readinto
return self._sock.recv_into(b)
~~~~~~~~~~~~~~~~~~~~^^^
KeyboardInterrupt
2025-09-19 18:54:31,332 - sqlalchemy.pool.impl.QueuePool - ERROR - Exception during reset or similar
Traceback (most recent call last):
File "C:\Users\Administrator.CHERY-NOT-8217.000\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages\sqlalchemy\pool\base.py", line 985, in _finalize_fairy
fairy._reset(
~~~~~~~~~~~~^
pool,
^^^^^
...<2 lines>...
asyncio_safe=can_manipulate_connection,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "C:\Users\Administrator.CHERY-NOT-8217.000\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages\sqlalchemy\pool\base.py", line 1433, in _reset
pool._dialect.do_rollback(self)
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "C:\Users\Administrator.CHERY-NOT-8217.000\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages\sqlalchemy\engine\default.py", line 711, in do_rollback
dbapi_connection.rollback()
~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\Administrator.CHERY-NOT-8217.000\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages\pymysql\connections.py", line 505, in rollback
self._read_ok_packet()
~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\Administrator.CHERY-NOT-8217.000\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages\pymysql\connections.py", line 465, in _read_ok_packet
pkt = self._read_packet()
File "C:\Users\Administrator.CHERY-NOT-8217.000\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages\pymysql\connections.py", line 751, in _read_packet
packet_header = self._read_bytes(4)
File "C:\Users\Administrator.CHERY-NOT-8217.000\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages\pymysql\connections.py", line 789, in _read_bytes
data = self._rfile.read(num_bytes)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.13_3.13.2032.0_x64__qbz5n2kfra8p0\Lib\socket.py", line 719, in readinto
return self._sock.recv_into(b)
~~~~~~~~~~~~~~~~~~~~^^^
KeyboardInterrupt

Binary file not shown.

Binary file not shown.

nginx.conf Normal file

@@ -0,0 +1,102 @@
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
# 日志格式
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
error_log /var/log/nginx/error.log warn;
# 基本设置
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
client_max_body_size 100M;
# Gzip压缩
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css text/xml text/javascript application/javascript application/xml+rss application/json;
# 上游服务器
upstream tsp_backend {
server tsp-assistant:5000;
}
# HTTP服务器
server {
listen 80;
server_name localhost;
# 健康检查
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
# API代理
location /api/ {
proxy_pass http://tsp_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
}
# WebSocket代理
location /ws/ {
proxy_pass http://tsp_backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# 静态文件
location /static/ {
proxy_pass http://tsp_backend;
expires 1y;
add_header Cache-Control "public, immutable";
}
# 主应用
location / {
proxy_pass http://tsp_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# HTTPS服务器可选
# server {
# listen 443 ssl http2;
# server_name localhost;
#
# ssl_certificate /etc/nginx/ssl/cert.pem;
# ssl_certificate_key /etc/nginx/ssl/key.pem;
# ssl_protocols TLSv1.2 TLSv1.3;
# ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384;
# ssl_prefer_server_ciphers off;
#
# # 其他配置与HTTP相同...
# }
}
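The nginx config above exposes a plain `/health` endpoint and proxies `/api/`, `/ws/`, and `/static/` to the `tsp_backend` upstream. A small smoke test can confirm the proxy is wired up after deployment; this is a hedged sketch using only the stdlib, and `http_ok` plus the endpoint table are illustrative helpers, not part of the project.

```python
# Smoke-check the proxy endpoints defined in nginx.conf (illustrative sketch).
from urllib.request import urlopen
from urllib.error import URLError

ENDPOINTS = {
    "nginx_health": "http://localhost/health",  # nginx answers 200 "healthy" directly
    "api": "http://localhost/api/",             # proxied to tsp-assistant:5000
    "static": "http://localhost/static/",       # proxied, cached with expires 1y
}

def http_ok(url: str, timeout: float = 2.0) -> bool:
    """Return True when the URL answers with a 2xx/3xx status."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError):
        return False  # unreachable, refused, or DNS failure

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        print(f"{name}: {'ok' if http_ok(url) else 'unreachable'}")
```

Run it after `docker-compose up` to confirm nginx is routing before relying on the Grafana/Prometheus checks.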


@@ -105,9 +105,19 @@ if "%1"=="" (
echo 📝 提交信息: %commit_msg%
echo.
:: 检查是否有更改需要提交(含未跟踪文件)
setlocal enabledelayedexpansion
git diff --quiet
set has_unstaged=%errorlevel%
git diff --cached --quiet
set has_staged=%errorlevel%
set has_untracked=0
for /f "delims=" %%f in ('git ls-files --others --exclude-standard') do set has_untracked=1
if %has_unstaged% equ 0 if %has_staged% equ 0 if %has_untracked% equ 0 (
echo 没有检测到任何更改,无需提交
echo.
echo ✅ 工作区干净,无需推送
@@ -134,6 +144,7 @@ if %errorlevel% neq 0 (
exit /b 1
)
git fetch origin main
git push origin main
if %errorlevel% equ 0 (
echo.
@@ -142,11 +153,22 @@ if %errorlevel% equ 0 (
git log --oneline -1
) else (
echo.
echo ❌ 推送失败,尝试自动解决...
echo 🔄 执行: git pull origin main --rebase
git pull origin main --rebase
if %errorlevel% equ 0 (
echo ✅ 重试推送...
git push origin main
if %errorlevel% equ 0 (
echo ✅ 推送成功!
echo 📊 最新提交:
git log --oneline -1
) else (
echo ❌ 重试推送失败,请手动处理
)
) else (
echo ❌ 自动rebase失败,请手动处理冲突后重试
)
)
echo.

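The batch script's new push flow is: push, and on rejection run `git pull origin main --rebase` and push exactly once more. The control flow can be sketched in Python with an injectable `run` callable (returning a git exit code), which also makes it testable without a real repository; `push_with_retry` and `fake_run` are illustrative names, not project code.

```python
# Sketch of the push -> rebase -> retry flow from the batch script above.
def push_with_retry(run) -> bool:
    """run(cmd) executes a git command and returns its exit code."""
    if run("git push origin main") == 0:
        return True                                # first push succeeded
    if run("git pull origin main --rebase") != 0:
        return False                               # rebase conflict: fix manually
    return run("git push origin main") == 0        # one retry after rebasing

# Simulate a rejected first push that succeeds after rebasing
calls = []
def fake_run(cmd):
    calls.append(cmd)
    # fail only the first push attempt; everything else succeeds
    return 1 if cmd == "git push origin main" and calls.count(cmd) == 1 else 0

ok = push_with_retry(fake_run)
```

Capping the flow at a single retry avoids the infinite loop a naive "retry until success" would produce on a persistent conflict.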

@@ -1,50 +1,66 @@
# 核心依赖
sqlalchemy==2.0.32
requests==2.32.3
numpy==1.26.4
scikit-learn==1.4.2
# 数据库驱动
pymysql==1.1.1
cryptography==43.0.1
flask==3.0.3
flask-cors==5.0.0
websockets==15.0.1
# 中文处理
jieba==0.42.1
# 系统监控
psutil==5.9.8
# 数据处理
pandas==2.2.2
openpyxl==3.1.5
# 向量化
sentence-transformers>=2.2.0
# 日志和配置
python-dotenv==1.0.1
structlog==24.4.0
# 时间处理
python-dateutil==2.9.0
# JSON处理
ujson==5.10.0
# 异步支持
aiohttp==3.10.10
# asyncio是Python内置模块,不需要安装
# Redis缓存
redis>=4.5.0
# 测试框架
pytest==8.3.3
pytest-asyncio==0.24.0
pytest-cov==6.0.0
# 代码质量
black==24.8.0
flake8==7.1.1
mypy==1.11.1
isort==5.13.2
# 安全
bcrypt==4.2.1
pyjwt==2.9.0
# 文件处理
python-magic==0.4.27
pillow==11.0.0
# 网络工具
urllib3==2.2.3
httpx==0.27.2
# 数据验证
pydantic==2.9.2
marshmallow==3.23.3

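The requirements file now pins exact versions (`==`) instead of minimum bounds (`>=`), which makes drift between environments detectable. A small sketch of such a drift check using only the stdlib `importlib.metadata`; `parse_pin` and `verify_pins` are illustrative helpers, not part of the project.

```python
# Detect mismatches between requirements.txt pins and installed packages.
from importlib import metadata

def parse_pin(line: str):
    """Return (name, version) for a 'pkg==x.y.z' line, else None."""
    line = line.split("#")[0].strip()      # drop inline comments
    if "==" not in line:
        return None                        # blank line, comment, or unpinned
    name, version = line.split("==", 1)
    return name.strip(), version.strip()

def verify_pins(lines):
    mismatches = []
    for line in lines:
        pin = parse_pin(line)
        if pin is None:
            continue
        name, wanted = pin
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            mismatches.append((name, wanted, "not installed"))
            continue
        if installed != wanted:
            mismatches.append((name, wanted, installed))
    return mismatches

sample = ["# 核心依赖", "sqlalchemy==2.0.32", "pytest==8.3.3"]
report = verify_pins(sample)  # empty when the environment matches the pins
```

In CI this can run right after `pip install -r requirements.txt` to fail fast on a resolver substitution.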
Binary file not shown.

scripts/docker_deploy.sh Normal file

@@ -0,0 +1,204 @@
#!/bin/bash
# TSP智能助手Docker部署脚本
set -e
# 颜色定义
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# 日志函数
log_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
log_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# 检查Docker和Docker Compose
check_dependencies() {
log_info "检查依赖..."
if ! command -v docker &> /dev/null; then
log_error "Docker未安装请先安装Docker"
exit 1
fi
if ! command -v docker-compose &> /dev/null; then
log_error "Docker Compose未安装请先安装Docker Compose"
exit 1
fi
log_success "依赖检查通过"
}
# 创建必要的目录
create_directories() {
log_info "创建必要的目录..."
mkdir -p logs/nginx
mkdir -p monitoring/grafana/provisioning/datasources
mkdir -p monitoring/grafana/provisioning/dashboards
mkdir -p ssl
mkdir -p data
mkdir -p backups
mkdir -p uploads
mkdir -p config
log_success "目录创建完成"
}
# 构建镜像
build_images() {
log_info "构建Docker镜像..."
# 构建主应用镜像
docker-compose build --no-cache tsp-assistant
log_success "镜像构建完成"
}
# 启动服务
start_services() {
log_info "启动服务..."
# 启动基础服务MySQL, Redis
docker-compose up -d mysql redis
# 等待数据库启动
log_info "等待数据库启动..."
sleep 30
# 启动主应用
docker-compose up -d tsp-assistant
# 启动其他服务
docker-compose up -d nginx prometheus grafana
log_success "服务启动完成"
}
# 检查服务状态
check_services() {
log_info "检查服务状态..."
sleep 10
# 检查主应用
if curl -f http://localhost:5000/api/health &> /dev/null; then
log_success "TSP助手服务正常"
else
log_warning "TSP助手服务可能未完全启动"
fi
# 检查Nginx
if curl -f http://localhost/health &> /dev/null; then
log_success "Nginx服务正常"
else
log_warning "Nginx服务可能未完全启动"
fi
# 检查Prometheus
if curl -f http://localhost:9090 &> /dev/null; then
log_success "Prometheus服务正常"
else
log_warning "Prometheus服务可能未完全启动"
fi
# 检查Grafana
if curl -f http://localhost:3000 &> /dev/null; then
log_success "Grafana服务正常"
else
log_warning "Grafana服务可能未完全启动"
fi
}
# 显示服务信息
show_info() {
log_info "服务访问信息:"
echo " TSP助手: http://localhost:5000"
echo " Nginx代理: http://localhost"
echo " Prometheus: http://localhost:9090"
echo " Grafana: http://localhost:3000 (admin/admin123456)"
echo " MySQL: localhost:3306 (root/root123456)"
echo " Redis: localhost:6379 (密码: redis123456)"
echo ""
log_info "查看日志命令:"
echo " docker-compose logs -f tsp-assistant"
echo " docker-compose logs -f mysql"
echo " docker-compose logs -f redis"
echo " docker-compose logs -f nginx"
}
# 停止服务
stop_services() {
log_info "停止服务..."
docker-compose down
log_success "服务已停止"
}
# 清理资源
cleanup() {
log_info "清理Docker资源..."
docker system prune -f
log_success "清理完成"
}
# Main entry point
main() {
case "${1:-start}" in
"start")
check_dependencies
create_directories
build_images
start_services
check_services
show_info
;;
"stop")
stop_services
;;
"restart")
stop_services
sleep 5
start_services
check_services
show_info
;;
"cleanup")
stop_services
cleanup
;;
"logs")
docker-compose logs -f "${2:-tsp-assistant}"
;;
"status")
docker-compose ps
;;
*)
echo "用法: $0 {start|stop|restart|cleanup|logs|status}"
echo " start - 启动所有服务"
echo " stop - 停止所有服务"
echo " restart - 重启所有服务"
echo " cleanup - 清理Docker资源"
echo " logs - 查看日志 (可选指定服务名)"
echo " status - 查看服务状态"
exit 1
;;
esac
}
# Run the main function
main "$@"
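The `start_services` function above waits a fixed `sleep 30` for MySQL, which either wastes time or is not long enough. A readiness poll is usually more robust. A minimal sketch of that idea in Python (not part of this repo; `wait_for`, `probe`, and the injectable `_sleep`/`_clock` hooks are illustrative names):

```python
import time

def wait_for(probe, timeout=60.0, interval=2.0, _sleep=time.sleep, _clock=time.monotonic):
    """Poll `probe` until it returns True or `timeout` seconds elapse.

    Returns True on success, False on timeout. A loop like this can replace
    the fixed `sleep 30`, so deployment proceeds as soon as the database
    actually accepts connections. `_sleep` and `_clock` are injectable so
    the loop can be tested without real waiting.
    """
    deadline = _clock() + timeout
    while _clock() < deadline:
        if probe():
            return True
        _sleep(interval)
    return False
```

In the shell script itself the equivalent would be an `until docker-compose exec mysql mysqladmin ping ...; do sleep 2; done` loop (assuming `mysqladmin` is available in the container).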

simple_git_push.bat Normal file

@@ -0,0 +1,108 @@
@echo off
chcp 65001 >nul
setlocal enabledelayedexpansion
echo ========================================
echo 简单Git推送工具
echo ========================================
echo.
:: 1. Show git status
echo [1] Git状态:
git status --short
echo.
:: 2. Show remote repositories
echo [2] 远程仓库:
git remote -v
if %errorlevel% neq 0 (
echo 错误: 未配置远程仓库
pause
exit /b 1
)
echo.
:: 3. Show the current branch
echo [3] 当前分支:
for /f "tokens=*" %%b in ('git branch --show-current 2^>nul') do set branch=%%b
if "!branch!"=="" (
echo 警告: 无法获取分支名称尝试使用main
set branch=main
)
echo 分支: !branch!
echo.
:: 4. Check for uncommitted changes
echo [4] 检查未提交的更改...
git diff --quiet
set has_uncommitted=%errorlevel%
git diff --cached --quiet
set has_staged=%errorlevel%
if %has_uncommitted% neq 0 (
echo 有未暂存的更改
)
if %has_staged% neq 0 (
echo 有已暂存的更改
)
if %has_uncommitted% equ 0 if %has_staged% equ 0 (
echo 所有更改已提交
)
echo.
:: 5. Attempt the push
echo [5] 开始推送...
echo 命令: git push origin !branch!
echo.
:: no pipe here: piping through findstr would make errorlevel reflect findstr, not git push
git push origin !branch!
set push_error=!errorlevel!
if !push_error! equ 0 (
echo.
echo ========================================
echo 推送成功!
echo ========================================
) else (
echo.
echo ========================================
echo 推送失败!错误码: !push_error!
echo ========================================
echo.
echo 尝试设置上游并推送...
:: no pipe here: the pipe would overwrite git's errorlevel with findstr's
git push -u origin !branch!
set push_u_error=!errorlevel!
if !push_u_error! equ 0 (
echo.
echo ========================================
echo 推送成功(已设置上游)!
echo ========================================
) else (
echo.
echo ========================================
echo 推送仍然失败
echo ========================================
echo.
echo 常见问题和解决方案:
echo.
echo 1. 认证问题:
echo - 检查SSH密钥: ssh -T git@github.com (GitHub)
echo - 检查SSH密钥: ssh -T git@gitee.com (Gitee)
echo - 或使用HTTPS + Personal Access Token
echo.
echo 2. 远程仓库地址:
git config --get remote.origin.url
echo.
echo 3. 分支冲突:
echo - 先拉取: git pull origin !branch! --rebase
echo - 解决冲突后推送: git push origin !branch!
echo.
echo 4. 检查网络连接和远程仓库权限
echo.
)
)
echo.
pause
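The script's two-step strategy (plain push, then retry with `-u` to set the upstream) can be sketched language-neutrally in Python; the git command lists come from the script above, while `push_with_fallback` and the injectable `run` hook are illustrative names:

```python
import subprocess

def push_with_fallback(branch, run=None):
    """Push `branch` to origin; on failure retry with -u to set the upstream.

    Mirrors the batch script's two attempts. `run` executes a command list
    and returns its exit code; it defaults to subprocess.call and is
    injectable for testing.
    """
    run = run or (lambda cmd: subprocess.call(cmd))
    if run(["git", "push", "origin", branch]) == 0:
        return "pushed"
    if run(["git", "push", "-u", "origin", branch]) == 0:
        return "pushed-upstream-set"
    return "failed"
```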


@@ -37,8 +37,7 @@ class TSPAgentAssistantCore(TSPAssistant):
         # Initialize the intelligent agent
         self.intelligent_agent = IntelligentAgent(
-            llm_manager=self.llm_manager,
-            agent_core=self.agent_core
+            llm_client=self.llm_manager
         )

         # Initialize the action executor

@@ -60,15 +59,30 @@ class TSPAgentAssistantCore(TSPAssistant):
         if llm_config:
             self.llm_manager = LLMManager(llm_config)
         else:
-            # Use the default configuration (Qwen model)
+            # Fetch the LLM configuration from the unified config manager
+            try:
+                from src.config.unified_config import get_config
+                unified_llm = get_config().llm
+                # Convert the unified LLMConfig into the agent-side LLMConfig
+                agent_llm_config = LLMConfig(
+                    provider=unified_llm.provider,
+                    api_key=unified_llm.api_key,
+                    base_url=unified_llm.base_url,
+                    model=unified_llm.model,
+                    temperature=unified_llm.temperature,
+                    max_tokens=unified_llm.max_tokens
+                )
+                self.llm_manager = LLMManager(agent_llm_config)
+            except Exception as e:
+                logger.warning(f"无法从统一配置加载LLM配置使用config/llm_config.py: {e}")
             try:
                 from config.llm_config import DEFAULT_CONFIG
                 self.llm_manager = LLMManager(DEFAULT_CONFIG)
             except ImportError:
-                # Config file missing; use built-in defaults
+                # Final fallback
                 default_config = LLMConfig(
-                    provider="openai",
-                    api_key="sk-your-qwen-api-key-here",
+                    provider="qwen",
+                    api_key="",
                     base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
                     model="qwen-turbo",
                     temperature=0.7,

File diff suppressed because it is too large


@@ -10,7 +10,7 @@ class Config:
     ALIBABA_MODEL_NAME = "qwen-plus-latest"
     # Database configuration
-    DATABASE_URL = "mysql+pymysql://tsp_assistant:123456@43.134.68.207/tsp_assistant?charset=utf8mb4"
+    DATABASE_URL = "mysql+pymysql://tsp_assistant:123456@jeason.online/tsp_assistant?charset=utf8mb4"
     # Knowledge base configuration
     KNOWLEDGE_BASE_PATH = "data/knowledge_base"


@@ -18,7 +18,7 @@ logger = logging.getLogger(__name__)
 @dataclass
 class DatabaseConfig:
     """数据库配置"""
-    url: str = "mysql+pymysql://tsp_assistant:password@43.134.68.207/tsp_assistant?charset=utf8mb4"
+    url: str = "mysql+pymysql://tsp_assistant:password@jeason.online/tsp_assistant?charset=utf8mb4"
     pool_size: int = 10
     max_overflow: int = 20
     pool_timeout: int = 30

@@ -84,9 +84,9 @@ class UnifiedConfig:
         self.config_dir = Path(config_dir)
         self.config_file = self.config_dir / "unified_config.json"

-        # Default configuration
+        # Default configuration - load the default LLM config from config/llm_config.py
         self.database = DatabaseConfig()
-        self.llm = LLMConfig()
+        self.llm = self._load_default_llm_config()
         self.server = ServerConfig()
         self.feishu = FeishuConfig()
         self.ai_accuracy = AIAccuracyConfig()

@@ -95,6 +95,23 @@ class UnifiedConfig:
         # Load the configuration
         self.load_config()

+    def _load_default_llm_config(self) -> LLMConfig:
+        """加载默认LLM配置"""
+        try:
+            from config.llm_config import DEFAULT_CONFIG
+            # Convert the config/llm_config.py settings into the unified format
+            return LLMConfig(
+                provider=DEFAULT_CONFIG.provider,
+                api_key=DEFAULT_CONFIG.api_key,
+                base_url=DEFAULT_CONFIG.base_url,
+                model=DEFAULT_CONFIG.model,
+                temperature=DEFAULT_CONFIG.temperature,
+                max_tokens=DEFAULT_CONFIG.max_tokens
+            )
+        except Exception as e:
+            logger.warning(f"无法加载默认LLM配置使用内置默认值: {e}")
+            return LLMConfig()

     def load_config(self):
         """加载配置文件"""
         try:
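The layered fallback in these hunks (unified config manager, then `config/llm_config.py`, then built-in defaults) is a chain of sources tried in order. A generic, self-contained sketch of that pattern (`first_available` and the `(name, loader)` pairs are illustrative, not repo code):

```python
import logging

logger = logging.getLogger(__name__)

def first_available(sources):
    """Try each (name, loader) pair in order; return the first result
    whose loader does not raise.

    Mirrors the unified-config -> config file -> built-in fallback chain:
    each miss is logged as a warning and the next source is tried.
    """
    last_err = None
    for name, loader in sources:
        try:
            return loader()
        except Exception as e:  # deliberately broad, like the original
            logger.warning("config source %s unavailable: %s", name, e)
            last_err = e
    raise RuntimeError(f"no config source available: {last_err}")
```

The original inlines this chain as nested try/except blocks; extracting it keeps the priority order in one place.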


@@ -34,10 +34,13 @@ class DatabaseManager:
                 max_overflow=30,  # more overflow connections
                 pool_pre_ping=True,
                 pool_recycle=1800,  # shorter recycle interval
-                pool_timeout=10,  # connection timeout
+                pool_timeout=30,  # connection timeout (seconds)
                 connect_args={
                     "charset": "utf8mb4",
-                    "autocommit": False
+                    "autocommit": False,
+                    "connect_timeout": 30,  # connect timeout (seconds), for high-latency links
+                    "read_timeout": 30,  # read timeout (seconds)
+                    "write_timeout": 30,  # write timeout (seconds)
                 }
             )
         else:


@@ -41,8 +41,17 @@ class WorkOrder(Base):
     has_updated_same_day = Column(String(50), nullable=True)  # updated again the same day?
     operating_time = Column(String(100), nullable=True)  # operation time

+    # Work-order dispatch and permission fields
+    assigned_module = Column(String(50), nullable=True)  # assigned module (TBOX, OTA, ...)
+    module_owner = Column(String(100), nullable=True)  # business contact / module owner
+    dispatcher = Column(String(100), nullable=True)  # dispatcher (operations staff)
+    dispatch_time = Column(DateTime, nullable=True)  # dispatch time
+    region = Column(String(50), nullable=True)  # region (overseas/domestic)

     # Related conversation records
     conversations = relationship("Conversation", back_populates="work_order")

+    # Related process-history records
+    process_history = relationship("WorkOrderProcessHistory", back_populates="work_order", order_by="WorkOrderProcessHistory.process_time")

 class Conversation(Base):
     """对话记录模型"""

@@ -136,3 +145,31 @@ class WorkOrderSuggestion(Base):
     use_human_resolution = Column(Boolean, default=False)  # store the human-written resolution?
     created_at = Column(DateTime, default=datetime.now)
     updated_at = Column(DateTime, default=datetime.now, onupdate=datetime.now)

+class WorkOrderProcessHistory(Base):
+    """工单处理过程记录表"""
+    __tablename__ = "work_order_process_history"
+
+    id = Column(Integer, primary_key=True)
+    work_order_id = Column(Integer, ForeignKey("work_orders.id"), nullable=False)
+
+    # Processor information
+    processor_name = Column(String(100), nullable=False)  # processor name
+    processor_role = Column(String(50), nullable=True)  # processor role (ops, business side, ...)
+    processor_region = Column(String(50), nullable=True)  # processor region (overseas/domestic)
+
+    # Processing content
+    process_content = Column(Text, nullable=False)  # description of the action taken
+    action_type = Column(String(50), nullable=False)  # action type (dispatch, process, close, reassign, ...)
+
+    # Processing result
+    previous_status = Column(String(50), nullable=True)  # status before processing
+    new_status = Column(String(50), nullable=True)  # status after processing
+    assigned_module = Column(String(50), nullable=True)  # assigned module (for dispatch actions)
+
+    # Timestamps
+    process_time = Column(DateTime, default=datetime.now, nullable=False)  # processing time
+    created_at = Column(DateTime, default=datetime.now)
+
+    # Related work order
+    work_order = relationship("WorkOrder", back_populates="process_history")
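The new `work_order_process_history` table is an append-only audit trail: every action records who acted, what changed, and the before/after status. An in-memory sketch of that pattern (the dataclass and `record_transition` are illustrative, not the ORM model itself):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class ProcessEntry:
    """One audit record, mirroring the columns of work_order_process_history."""
    processor_name: str
    action_type: str          # dispatch / process / close / reassign ...
    previous_status: Optional[str]
    new_status: Optional[str]
    process_time: datetime = field(default_factory=datetime.now)

def record_transition(history: List[ProcessEntry], processor: str,
                      action: str, current: Optional[str], new: str) -> ProcessEntry:
    """Append an audit entry capturing the status before and after an action.

    Entries are never mutated afterwards, so `history` replays the full
    lifecycle of a work order in order.
    """
    entry = ProcessEntry(processor, action, current, new)
    history.append(entry)
    return entry
```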


@@ -243,17 +243,13 @@ class QueryOptimizer:
             start_time_query = end_time - timedelta(days=days-1)

-            # Batch-query all required data
-            workorders = session.query(WorkOrder).filter(
-                WorkOrder.created_at >= start_time_query
-            ).all()
-            alerts = session.query(Alert).filter(
-                Alert.created_at >= start_time_query
-            ).all()
-            conversations = session.query(Conversation).filter(
-                Conversation.timestamp >= start_time_query
-            ).all()
+            # Changed: query all work orders, with no time-range limit
+            workorders = session.query(WorkOrder).all()
+            # Changed: query all alerts and conversations, with no time-range limit
+            alerts = session.query(Alert).all()
+            conversations = session.query(Conversation).all()

             # Process the data
             analytics = self._process_analytics_data(workorders, alerts, conversations, days)
@@ -291,14 +287,42 @@ class QueryOptimizer:
         status_counts = Counter([wo.status for wo in workorders])
         category_counts = Counter([wo.category for wo in workorders])
         priority_counts = Counter([wo.priority for wo in workorders])
-        resolved_count = status_counts.get('resolved', 0)
+        # Status mapping (supports Chinese and English status values)
+        status_mapping = {
+            'open': ['open', '待处理', '新建', 'new'],
+            'in_progress': ['in_progress', '处理中', '进行中', 'progress', 'processing'],
+            'resolved': ['resolved', '已解决', '已完成'],
+            'closed': ['closed', '已关闭', '关闭']
+        }
+        # Count work orders per mapped status
+        mapped_counts = {'open': 0, 'in_progress': 0, 'resolved': 0, 'closed': 0}
+        for status, count in status_counts.items():
+            if status is None:
+                continue
+            status_lower = str(status).lower()
+            mapped = False
+            for mapped_status, possible_values in status_mapping.items():
+                if status_lower in [v.lower() for v in possible_values]:
+                    mapped_counts[mapped_status] += count
+                    mapped = True
+                    break
+            if not mapped:
+                logger.warning(f"未映射的状态: '{status}' (数量: {count})")
+        resolved_count = mapped_counts['resolved']

         workorders_stats = {
             'total': total,
-            'open': status_counts.get('open', 0),
-            'in_progress': status_counts.get('in_progress', 0),
-            'resolved': resolved_count,
-            'closed': status_counts.get('closed', 0),
+            'open': mapped_counts['open'],
+            'in_progress': mapped_counts['in_progress'],
+            'resolved': mapped_counts['resolved'],
+            'closed': mapped_counts['closed'],
             'by_category': dict(category_counts),
             'by_priority': dict(priority_counts)
         }
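The alias-table normalization in this hunk is easy to lift out and test on its own. A standalone restatement of the same logic (the function name is illustrative; here unmapped statuses are returned rather than logged):

```python
# Alias table copied from the hunk above (Chinese and English spellings)
STATUS_MAPPING = {
    'open': ['open', '待处理', '新建', 'new'],
    'in_progress': ['in_progress', '处理中', '进行中', 'progress', 'processing'],
    'resolved': ['resolved', '已解决', '已完成'],
    'closed': ['closed', '已关闭', '关闭'],
}

def map_status_counts(raw_counts):
    """Collapse raw status counts into the four canonical buckets.

    Returns (mapped, unmapped); `unmapped` collects statuses that matched
    no alias, which the original logs as a warning instead.
    """
    mapped = {'open': 0, 'in_progress': 0, 'resolved': 0, 'closed': 0}
    unmapped = {}
    for status, count in raw_counts.items():
        if status is None:
            continue
        low = str(status).lower()
        for target, aliases in STATUS_MAPPING.items():
            if low in (a.lower() for a in aliases):
                mapped[target] += count
                break
        else:
            unmapped[status] = count
    return mapped, unmapped
```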
@@ -365,9 +389,6 @@ class QueryOptimizer:
             if len(self.query_stats[query_name]) > 100:
                 self.query_stats[query_name] = self.query_stats[query_name][-100:]

-        # Log slow queries
-        if query_time > self.slow_query_threshold:
-            logger.warning(f"慢查询检测: {query_name} 耗时 {query_time:.2f}s")

     def get_query_performance_report(self) -> Dict[str, Any]:
         """获取查询性能报告"""
@@ -409,7 +430,6 @@ class QueryOptimizer:
                     logger.warning(f"创建索引失败: {e}")
             session.commit()
-            logger.info("数据库索引优化完成")
             return True
         except Exception as e:


@@ -75,7 +75,6 @@ class SystemOptimizer:
             )
             self.redis_client.ping()
             self.redis_connected = True
-            logger.info("系统优化Redis连接成功")
         except Exception as e:
             logger.debug(f"系统优化Redis连接失败: {e}")
             self.redis_client = None

@@ -91,7 +90,6 @@ class SystemOptimizer:
             monitor_thread = threading.Thread(target=self._monitor_system, daemon=True)
             monitor_thread.start()
-            logger.info("系统监控线程已启动")
         except Exception as e:
             logger.error(f"启动监控线程失败: {e}")


@@ -0,0 +1,231 @@
# -*- coding: utf-8 -*-
"""
工单权限管理模块
实现基于角色的访问控制RBAC和工单分发流程
"""
import logging
from typing import List, Dict, Optional, Set
from enum import Enum

logger = logging.getLogger(__name__)


class UserRole(Enum):
    """用户角色枚举"""
    # Local operations staff (overseas / domestic)
    OVERSEAS_OPS = "overseas_ops"  # overseas local operations
    DOMESTIC_OPS = "domestic_ops"  # domestic local operations
    # Business-side contacts (module owners)
    TBOX_OWNER = "tbox_owner"  # TBOX module owner
    OTA_OWNER = "ota_owner"  # OTA module owner
    DMC_OWNER = "dmc_owner"  # DMC module owner
    MES_OWNER = "mes_owner"  # MES module owner
    APP_OWNER = "app_owner"  # APP module owner
    PKI_OWNER = "pki_owner"  # PKI module owner
    TSP_OWNER = "tsp_owner"  # TSP module owner
    # System roles
    ADMIN = "admin"  # system administrator
    VIEWER = "viewer"  # read-only user


class WorkOrderModule(Enum):
    """工单模块枚举"""
    TBOX = "TBOX"
    OTA = "OTA"
    DMC = "DMC"
    MES = "MES"
    APP = "APP"
    PKI = "PKI"
    TSP = "TSP"
    LOCAL_OPS = "local_ops"  # handled by local operations
    UNASSIGNED = "unassigned"  # not yet assigned


class WorkOrderStatus:
    """工单状态常量"""
    PENDING = "pending"  # awaiting processing
    ASSIGNED = "assigned"  # assigned
    IN_PROGRESS = "in_progress"  # in progress
    RESOLVED = "resolved"  # resolved
    CLOSED = "closed"  # closed


class WorkOrderPermissionManager:
    """工单权限管理器"""

    # All modules (used by local operations staff and administrators)
    ALL_MODULES = {
        WorkOrderModule.TBOX, WorkOrderModule.OTA, WorkOrderModule.DMC,
        WorkOrderModule.MES, WorkOrderModule.APP, WorkOrderModule.PKI,
        WorkOrderModule.TSP, WorkOrderModule.LOCAL_OPS
    }

    # Role-to-modules mapping
    ROLE_MODULE_MAP = {
        UserRole.TBOX_OWNER: {WorkOrderModule.TBOX},
        UserRole.OTA_OWNER: {WorkOrderModule.OTA},
        UserRole.DMC_OWNER: {WorkOrderModule.DMC},
        UserRole.MES_OWNER: {WorkOrderModule.MES},
        UserRole.APP_OWNER: {WorkOrderModule.APP},
        UserRole.PKI_OWNER: {WorkOrderModule.PKI},
        UserRole.TSP_OWNER: {WorkOrderModule.TSP},
        UserRole.OVERSEAS_OPS: ALL_MODULES,  # may access all modules
        UserRole.DOMESTIC_OPS: ALL_MODULES,  # may access all modules
        UserRole.ADMIN: ALL_MODULES,  # administrators may access everything
        UserRole.VIEWER: set(),  # read-only; governed by other logic
    }

    @staticmethod
    def can_view_all_workorders(role: UserRole) -> bool:
        """判断角色是否可以查看所有工单(属地运维和管理员)"""
        return role in [UserRole.OVERSEAS_OPS, UserRole.DOMESTIC_OPS, UserRole.ADMIN]

    @staticmethod
    def get_accessible_modules(role: UserRole) -> Set[WorkOrderModule]:
        """获取角色可访问的模块列表"""
        return WorkOrderPermissionManager.ROLE_MODULE_MAP.get(role, set())

    @staticmethod
    def can_access_module(role: UserRole, module: WorkOrderModule) -> bool:
        """判断角色是否可以访问指定模块"""
        accessible_modules = WorkOrderPermissionManager.get_accessible_modules(role)
        # Local ops and administrators may access every module
        if WorkOrderPermissionManager.can_view_all_workorders(role):
            return True
        # Business owners may only access their own module
        return module in accessible_modules

    @staticmethod
    def can_dispatch_workorder(role: UserRole) -> bool:
        """判断角色是否可以进行工单分发(属地运维和管理员)"""
        return role in [UserRole.OVERSEAS_OPS, UserRole.DOMESTIC_OPS, UserRole.ADMIN]

    @staticmethod
    def can_update_workorder(role: UserRole, workorder_module: Optional[WorkOrderModule],
                             assigned_to_module: Optional[WorkOrderModule]) -> bool:
        """判断角色是否可以更新工单"""
        # Administrators and local ops may update any work order
        if WorkOrderPermissionManager.can_view_all_workorders(role):
            return True
        # Business owners may only update work orders assigned to their module
        if workorder_module and assigned_to_module:
            accessible_modules = WorkOrderPermissionManager.get_accessible_modules(role)
            return workorder_module in accessible_modules and workorder_module == assigned_to_module
        return False

    @staticmethod
    def filter_workorders_by_permission(role: UserRole, workorders: List[Dict]) -> List[Dict]:
        """根据权限过滤工单列表"""
        if WorkOrderPermissionManager.can_view_all_workorders(role):
            # Local ops and administrators see every work order
            return workorders
        # Business owners only see work orders for their own module
        accessible_modules = WorkOrderPermissionManager.get_accessible_modules(role)
        filtered = []
        for wo in workorders:
            module_str = wo.get("module") or wo.get("assigned_module")
            if module_str:
                try:
                    module = WorkOrderModule(module_str)
                    if module in accessible_modules:
                        filtered.append(wo)
                except ValueError:
                    # Skip values that are not in the enum
                    continue
            else:
                # Unassigned work orders are hidden from business owners
                pass
        return filtered


class WorkOrderDispatchManager:
    """工单分发管理器"""

    # Module-to-owner mapping (could be made dynamically configurable)
    MODULE_OWNER_MAP = {
        WorkOrderModule.TBOX: "TBOX业务接口人",
        WorkOrderModule.OTA: "OTA业务接口人",
        WorkOrderModule.DMC: "DMC业务接口人",
        WorkOrderModule.MES: "MES业务接口人",
        WorkOrderModule.APP: "APP业务接口人",
        WorkOrderModule.PKI: "PKI业务接口人",
        WorkOrderModule.TSP: "TSP业务接口人",
    }

    @staticmethod
    def get_module_owner(module: WorkOrderModule) -> str:
        """获取模块的业务接口人"""
        return WorkOrderDispatchManager.MODULE_OWNER_MAP.get(module, "未指定")

    @staticmethod
    def dispatch_workorder(workorder_id: int, target_module: WorkOrderModule,
                           dispatcher_role: UserRole, dispatcher_name: str) -> Dict:
        """
        分发工单到指定模块
        Args:
            workorder_id: 工单ID
            target_module: 目标模块
            dispatcher_role: 分发者角色(必须是运维或管理员)
            dispatcher_name: 分发者姓名
        Returns:
            分发结果
        """
        # Check dispatch permission
        if not WorkOrderPermissionManager.can_dispatch_workorder(dispatcher_role):
            return {
                "success": False,
                "error": "无权进行工单分发,只有属地运维和管理员可以分发工单"
            }
        # Look up the module owner
        module_owner = WorkOrderDispatchManager.get_module_owner(target_module)
        # The work-order record should be updated here;
        # the real implementation must call the database update logic
        return {
            "success": True,
            "message": f"工单已分发到{target_module.value}模块",
            "assigned_module": target_module.value,
            "module_owner": module_owner,
            "dispatcher": dispatcher_name,
            "dispatcher_role": dispatcher_role.value
        }

    @staticmethod
    def suggest_module(description: str, title: str = "") -> Optional[WorkOrderModule]:
        """
        根据工单描述建议分配模块可以使用AI分析
        Args:
            description: 工单描述
            title: 工单标题
        Returns:
            建议的模块
        """
        # Simple keyword matching; an AI classifier could replace this
        text = (title + " " + description).lower()
        keyword_module_map = {
            WorkOrderModule.TBOX: ["tbox", "telematics", "车载", "车联网"],
            WorkOrderModule.OTA: ["ota", "over-the-air", "升级", "update"],
            WorkOrderModule.DMC: ["dmc", "device management", "设备管理"],
            WorkOrderModule.MES: ["mes", "manufacturing", "制造"],
            WorkOrderModule.APP: ["app", "application", "应用", "remote control"],
            WorkOrderModule.PKI: ["pki", "certificate", "证书"],
            WorkOrderModule.TSP: ["tsp", "service", "服务"],
        }
        for module, keywords in keyword_module_map.items():
            for keyword in keywords:
                if keyword in text:
                    return module
        return WorkOrderModule.UNASSIGNED
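Putting the two managers together — filtering a work-order list by role, and suggesting a module from free text — can be demonstrated with a condensed, self-contained restatement of the logic above (the trimmed `Module` enum, `ROLE_MODULES` table, and keyword lists below are illustrative subsets, not the full module):

```python
from enum import Enum
from typing import Dict, List

class Module(Enum):
    TBOX = "TBOX"
    OTA = "OTA"
    UNASSIGNED = "unassigned"

# condensed role -> accessible-modules table (full table: ROLE_MODULE_MAP above)
ROLE_MODULES: Dict[str, set] = {
    "tbox_owner": {Module.TBOX},
}

def filter_workorders(role: str, workorders: List[Dict]) -> List[Dict]:
    """Ops/admin roles see everything; owners see only their module;
    unknown or unassigned modules are hidden from owners."""
    if role in ("overseas_ops", "domestic_ops", "admin"):
        return workorders
    allowed = ROLE_MODULES.get(role, set())
    out = []
    for wo in workorders:
        raw = wo.get("module") or wo.get("assigned_module")
        if not raw:
            continue
        try:
            if Module(raw) in allowed:
                out.append(wo)
        except ValueError:
            continue
    return out

# condensed keyword map (full map: keyword_module_map above)
KEYWORDS = {Module.TBOX: ["tbox", "车联网"], Module.OTA: ["ota", "升级"]}

def suggest_module(title: str, description: str = "") -> Module:
    """First keyword hit wins, as in WorkOrderDispatchManager.suggest_module."""
    text = (title + " " + description).lower()
    for module, words in KEYWORDS.items():
        if any(w in text for w in words):
            return module
    return Module.UNASSIGNED
```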


@@ -1,5 +1,3 @@
-# -*- coding: utf-8 -*-
-
 """
 实时对话管理器
 提供实时对话功能集成知识库搜索和LLM回复
@@ -109,7 +107,8 @@ class RealtimeChatManager:
             assistant_response = self._generate_response(
                 user_message,
                 knowledge_results,
-                session["context"]
+                session["context"],
+                session["work_order_id"]
             )

             # Create the assistant message
@@ -167,11 +166,14 @@ class RealtimeChatManager:
             logger.error(f"搜索知识库失败: {e}")
             return []

-    def _generate_response(self, user_message: str, knowledge_results: List[Dict], context: List[Dict]) -> Dict[str, Any]:
+    def _generate_response(self, user_message: str, knowledge_results: List[Dict], context: List[Dict], work_order_id: Optional[int] = None) -> Dict[str, Any]:
         """生成回复"""
         try:
+            # Check for AI suggestions attached to the related work order
+            ai_suggestions = self._get_workorder_ai_suggestions(work_order_id)

             # Build the prompt
-            prompt = self._build_chat_prompt(user_message, knowledge_results, context)
+            prompt = self._build_chat_prompt(user_message, knowledge_results, context, ai_suggestions)

             # Call the LLM
             response = self.llm_client.chat_completion(
@@ -184,24 +186,31 @@ class RealtimeChatManager:
                 content = response['choices'][0]['message']['content']
                 confidence = self._calculate_confidence(knowledge_results, content)

+                # Include the AI suggestions in the reply, if any
+                if ai_suggestions:
+                    content = self._format_response_with_ai_suggestions(content, ai_suggestions)

                 return {
                     "content": content,
-                    "confidence": confidence
+                    "confidence": confidence,
+                    "ai_suggestions": ai_suggestions
                 }
             else:
                 return {
                     "content": "抱歉,我暂时无法处理您的问题。请稍后再试或联系人工客服。",
-                    "confidence": 0.1
+                    "confidence": 0.1,
+                    "ai_suggestions": ai_suggestions
                 }
         except Exception as e:
             logger.error(f"生成回复失败: {e}")
             return {
                 "content": "抱歉,系统出现错误,请稍后再试。",
-                "confidence": 0.1
+                "confidence": 0.1,
+                "ai_suggestions": []
             }

-    def _build_chat_prompt(self, user_message: str, knowledge_results: List[Dict], context: List[Dict]) -> str:
+    def _build_chat_prompt(self, user_message: str, knowledge_results: List[Dict], context: List[Dict], ai_suggestions: List[str] = None) -> str:
         """构建聊天提示词"""
         prompt = f"""
 你是一个专业的奇瑞汽车客服助手。请根据用户的问题和提供的知识库信息,给出专业、友好的回复。
@@ -219,6 +228,12 @@ class RealtimeChatManager:
         else:
             prompt += "\n未找到相关知识库信息。\n"

+        # Add the AI-suggestion section
+        if ai_suggestions:
+            prompt += "\n相关AI建议\n"
+            for suggestion in ai_suggestions:
+                prompt += f"- {suggestion}\n"

         # Add the conversation context
         if context:
             prompt += "\n对话历史:\n"
@@ -233,13 +248,72 @@ class RealtimeChatManager:
 4. 如果问题需要进站处理,请明确说明
 5. 回复要简洁明了,避免冗长
 6. 如果涉及技术问题,要提供具体的操作步骤
+7. 始终以"您好"开头,以"如有其他问题,请随时联系"结尾

 请直接给出回复内容,不要包含其他格式:
 """
         return prompt
    def _get_workorder_ai_suggestions(self, work_order_id: Optional[int]) -> List[str]:
        """
        获取工单的AI建议
        Args:
            work_order_id: 工单ID
        Returns:
            AI建议列表
        """
        try:
            if not work_order_id:
                return []
            with db_manager.get_session() as session:
                # Query the AI suggestions stored for this work order
                from ..core.models import WorkOrderSuggestion
                suggestions = session.query(WorkOrderSuggestion).filter(
                    WorkOrderSuggestion.work_order_id == work_order_id
                ).order_by(WorkOrderSuggestion.created_at.desc()).limit(3).all()
                ai_suggestions = []
                for suggestion in suggestions:
                    if suggestion.ai_suggestion:
                        ai_suggestions.append(suggestion.ai_suggestion)
                return ai_suggestions
        except Exception as e:
            logger.error(f"获取工单AI建议失败: {e}")
            return []

    def _format_response_with_ai_suggestions(self, content: str, ai_suggestions: List[str]) -> str:
        """
        在回复中格式化AI建议
        Args:
            content: 原始回复内容
            ai_suggestions: AI建议列表
        Returns:
            包含AI建议的格式化回复
        """
        try:
            if not ai_suggestions:
                return content
            # Append the AI suggestions at the end of the reply
            formatted_content = content
            formatted_content += "\n\n📋 **相关AI建议**\n"
            for i, suggestion in enumerate(ai_suggestions, 1):
                formatted_content += f"{i}. {suggestion}\n"
            return formatted_content
        except Exception as e:
            logger.error(f"格式化AI建议失败: {e}")
            return content
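The suggestion-appending step above is a pure string transformation, so it can be exercised in isolation. A standalone sketch of the same behavior (`format_with_suggestions` is an illustrative free-function version of the method):

```python
from typing import List

def format_with_suggestions(content: str, suggestions: List[str]) -> str:
    """Append a numbered AI-suggestion block to a chat reply.

    Mirrors _format_response_with_ai_suggestions: an empty suggestion list
    returns the reply untouched.
    """
    if not suggestions:
        return content
    lines = [content, "", "📋 **相关AI建议**"]
    lines.extend(f"{i}. {s}" for i, s in enumerate(suggestions, 1))
    return "\n".join(lines)
```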
    def _extract_vin(self, text: str) -> Optional[str]:
        """从文本中提取VIN17位I/O/Q不使用常见校验"""
        try:

Some files were not shown because too many files have changed in this diff.