# True AI Data Analysis Agent - Design Document

## Overview

This design document describes a data analysis system genuinely driven by AI. Unlike traditional rule-based systems, this system lets the AI understand data, plan the analysis, execute tasks, and generate insights the way a human analyst would.

### Core Design Principles

1. **AI first**: let the AI make decisions instead of executing predefined rules
2. **Dynamic adaptation**: adjust the analysis plan dynamically based on data characteristics and findings
3. **Privacy protection**: the AI never reads raw data; it only receives summary information through tools
4. **Tool driven**: empower the AI's analytical capabilities through a dynamic tool set

### System Architecture

The system uses a five-stage pipeline architecture:

```
Data input → Data understanding → Requirement understanding → Analysis planning → Task execution → Report generation
```

Each stage is AI-driven and has autonomous decision-making capability.
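
The staged hand-off can be sketched as a thin orchestration loop. This is a toy illustration only — the stage functions below are hypothetical stand-ins for the engines specified later in this document:

```python
from typing import Any, Callable, List

def run_pipeline(stages: List[Callable[[Any], Any]], initial: Any) -> Any:
    """Feed each stage's output into the next stage, in order."""
    state = initial
    for stage in stages:
        state = stage(state)
    return state

# Hypothetical stand-ins for the five real engines.
stages = [
    lambda path: {"file": path},                   # data understanding
    lambda s: {**s, "objectives": ["overview"]},   # requirement understanding
    lambda s: {**s, "tasks": ["t1", "t2"]},        # analysis planning
    lambda s: {**s, "results": len(s["tasks"])},   # task execution
    lambda s: f"report for {s['file']}",           # report generation
]
print(run_pipeline(stages, "tickets.csv"))  # → report for tickets.csv
```

Each real stage additionally consults the LLM, but the data flow between stages is exactly this linear hand-off.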

## Architecture Design

### Overall Architecture

```mermaid
graph TB
    User[User] --> Input[Data input]
    Input --> DU[Data understanding stage]
    Input --> RU[Requirement understanding stage]

    DU --> DataProfile[Data profile]
    RU --> ReqSpec[Requirement spec]

    DataProfile --> AP[Analysis planning stage]
    ReqSpec --> AP

    AP --> Plan[Analysis plan]
    Plan --> TE[Task execution stage]

    TE --> TM[Tool manager]
    TM --> Tools[Dynamic tool set]
    Tools --> TE

    TE --> Results[Analysis results]
    Results --> RG[Report generation stage]
    RG --> Report[Analysis report]
    Report --> User
```

### Core Components

#### 1. Data Understanding Engine

Analyzes data characteristics and produces a data profile.

**Input**:
- CSV file path
- Optional data description

**Output**:
- Data profile (DataProfile)

**Responsibilities**:
- Load and parse CSV files
- Infer the data's business type (tickets, sales, users, etc.)
- Identify key fields and their business meaning
- Assess data quality
- Generate a data summary (for the AI; contains no raw data)

#### 2. Requirement Understanding Engine

Understands user requirements and turns them into an analysis specification.

**Input**:
- User requirement (natural language or a template)
- Data profile

**Output**:
- Requirement spec (RequirementSpec)

**Responsibilities**:
- Parse the user's natural-language requirement
- Turn abstract concepts into concrete metrics
- Parse and understand analysis templates
- Check whether the data supports the requirement
- Generate the list of analysis objectives

#### 3. Analysis Planning Engine

Generates a dynamic analysis plan.

**Input**:
- Data profile
- Requirement spec

**Output**:
- Analysis plan (AnalysisPlan)

**Responsibilities**:
- Generate the task list from data characteristics and requirements
- Determine task priorities and dependencies
- Choose appropriate analysis methods
- Generate the initial tool configuration

#### 4. Task Execution Engine

Executes analysis tasks using the ReAct pattern.

**Input**:
- Analysis plan
- Tool set

**Output**:
- Collection of analysis results

**Responsibilities**:
- Execute tasks in priority order
- Use the ReAct pattern (think - act - observe)
- Call tools to perform the analysis
- Validate results and handle errors
- Adjust the plan dynamically based on findings

#### 5. Tool Manager

Manages and provides the dynamic tool set.

**Input**:
- Data profile
- Current task requirements

**Output**:
- Set of available tools
- Tool descriptions (for the AI to choose from)

**Responsibilities**:
- Maintain the base tool library
- Enable/disable tools based on data characteristics
- Generate ad-hoc tools on demand
- Provide a standard tool interface
- Ensure tools return summaries rather than raw data

#### 6. Report Generation Engine

Generates the final analysis report.

**Input**:
- All analysis results
- Requirement spec
- Data profile

**Output**:
- Report in Markdown format

**Responsibilities**:
- Distill key findings
- Organize the report structure
- Generate conclusions and recommendations
- Embed charts and visualizations
- Format the output

## Components and Interfaces

### Data Models

#### DataProfile

```python
@dataclass
class ColumnInfo:
    name: str
    dtype: str  # 'numeric', 'categorical', 'datetime', 'text'
    missing_rate: float
    unique_count: int
    sample_values: List[Any]  # at most 5 sample values
    statistics: Dict[str, Any]  # statistics for numeric columns (min, max, mean, ...)

@dataclass
class DataProfile:
    file_path: str
    row_count: int
    column_count: int
    columns: List[ColumnInfo]
    inferred_type: str  # 'ticket', 'sales', 'user', 'unknown'
    key_fields: Dict[str, str]  # field name -> business meaning
    quality_score: float  # 0-100
    summary: str  # AI-generated data summary
```
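
As a concrete illustration of how a `ColumnInfo` might be filled in from a pandas column — a sketch only; `profile_column` is a hypothetical helper, not part of the specified interface:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

import pandas as pd

@dataclass
class ColumnInfo:
    name: str
    dtype: str
    missing_rate: float
    unique_count: int
    sample_values: List[Any]
    statistics: Dict[str, Any] = field(default_factory=dict)

def profile_column(df: pd.DataFrame, col: str) -> ColumnInfo:
    """Build a ColumnInfo from one column without retaining raw rows."""
    s = df[col]
    if pd.api.types.is_numeric_dtype(s):
        dtype, stats = "numeric", {"min": float(s.min()), "max": float(s.max()), "mean": float(s.mean())}
    elif pd.api.types.is_datetime64_any_dtype(s):
        dtype, stats = "datetime", {}
    else:
        dtype, stats = "categorical", {}
    return ColumnInfo(
        name=col,
        dtype=dtype,
        missing_rate=float(s.isna().mean()),
        unique_count=int(s.nunique()),
        sample_values=s.dropna().unique()[:5].tolist(),  # at most 5 sample values
        statistics=stats,
    )

df = pd.DataFrame({"status": ["open", "closed", "open"], "hours": [1.5, 2.0, None]})
info = profile_column(df, "hours")
print(info.dtype, info.missing_rate)
```

The key design point is visible here: only counts, rates, and bounded samples leave the function — never the column itself.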

#### RequirementSpec

```python
@dataclass
class AnalysisObjective:
    name: str
    description: str
    metrics: List[str]  # metrics to compute
    priority: int  # 1-5, 5 is highest

@dataclass
class RequirementSpec:
    user_input: str  # raw user input
    objectives: List[AnalysisObjective]
    constraints: List[str]  # constraints
    expected_outputs: List[str]  # expected output types
    template_path: Optional[str] = None  # set when a template is used
    template_requirements: Optional[Dict[str, Any]] = None  # template requirements
```

#### AnalysisPlan

```python
@dataclass
class AnalysisTask:
    id: str
    name: str
    description: str
    priority: int
    dependencies: List[str]  # ids of tasks this task depends on
    required_tools: List[str]  # names of required tools
    expected_output: str
    status: str  # 'pending', 'running', 'completed', 'failed', 'skipped'

@dataclass
class AnalysisPlan:
    objectives: List[AnalysisObjective]
    tasks: List[AnalysisTask]
    tool_config: Dict[str, Any]  # tool configuration
    estimated_duration: int  # estimated execution time in seconds
    created_at: datetime = field(default_factory=datetime.now)
    updated_at: datetime = field(default_factory=datetime.now)
```
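
Because each `AnalysisTask` lists its `dependencies`, the executor can derive a valid execution order with a topological sort. A minimal sketch over an id → dependency-ids mapping (Kahn's algorithm; the helper name is illustrative):

```python
from collections import deque
from typing import Dict, List

def order_tasks(dependencies: Dict[str, List[str]]) -> List[str]:
    """Order task ids so every task appears after its dependencies
    (Kahn's algorithm). Raises ValueError on a dependency cycle."""
    indegree = {task: len(deps) for task, deps in dependencies.items()}
    dependents: Dict[str, List[str]] = {task: [] for task in dependencies}
    for task, deps in dependencies.items():
        for dep in deps:
            dependents[dep].append(task)
    queue = deque(sorted(t for t, d in indegree.items() if d == 0))
    ordered: List[str] = []
    while queue:
        task = queue.popleft()
        ordered.append(task)
        for nxt in dependents[task]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(ordered) != len(dependencies):
        raise ValueError("circular dependency detected")
    return ordered

# t2 depends on t1; t3 depends on t1 and t2
print(order_tasks({"t1": [], "t2": ["t1"], "t3": ["t1", "t2"]}))  # → ['t1', 't2', 't3']
```

In the real engine, task `priority` would additionally be used to break ties among tasks whose dependencies are all satisfied.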

#### AnalysisResult

Optional fields carry defaults so that failure paths (see "Error Handling" below) can construct a result from partial information:

```python
@dataclass
class AnalysisResult:
    task_id: str
    task_name: str
    success: bool
    data: Dict[str, Any] = field(default_factory=dict)  # result data (aggregated)
    visualizations: List[str] = field(default_factory=list)  # paths of generated charts
    insights: List[str] = field(default_factory=list)  # AI-distilled insights
    error: Optional[str] = None
    execution_time: float = 0.0
```

### Tool System Design

#### Tool Interface

Every tool must implement the standard interface:

```python
class AnalysisTool(ABC):
    @property
    @abstractmethod
    def name(self) -> str:
        """Tool name"""
        pass

    @property
    @abstractmethod
    def description(self) -> str:
        """Tool description (for the AI to understand)"""
        pass

    @property
    @abstractmethod
    def parameters(self) -> Dict[str, Any]:
        """Parameter definition (JSON Schema format)"""
        pass

    @abstractmethod
    def execute(self, data: pd.DataFrame, **kwargs) -> Dict[str, Any]:
        """
        Execute the tool.

        Args:
            data: raw data (used internally by the tool, never exposed to the AI)
            **kwargs: tool parameters

        Returns:
            Aggregated result (contains no raw data)
        """
        pass

    @abstractmethod
    def is_applicable(self, data_profile: DataProfile) -> bool:
        """Decide whether the tool is applicable to the current data"""
        pass
```
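
A concrete tool might look like the following sketch of `get_value_counts` (base class abridged to two members; `description`, `parameters`, and `is_applicable` omitted for brevity). Note how `execute` returns only aggregated counts, never rows:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict

import pandas as pd

class AnalysisTool(ABC):
    """Abridged version of the interface defined above."""
    @property
    @abstractmethod
    def name(self) -> str: ...

    @abstractmethod
    def execute(self, data: pd.DataFrame, **kwargs) -> Dict[str, Any]: ...

class ValueCountsTool(AnalysisTool):
    """Counts category frequencies and returns only the aggregate."""
    @property
    def name(self) -> str:
        return "get_value_counts"

    def execute(self, data: pd.DataFrame, column: str = "", top_n: int = 10) -> Dict[str, Any]:
        counts = data[column].value_counts().head(top_n)
        return {
            "column": column,
            "total_rows": int(len(data)),
            "top_values": counts.to_dict(),  # aggregated, at most top_n entries
        }

df = pd.DataFrame({"status": ["open", "open", "closed", "pending"]})
print(ValueCountsTool().execute(df, column="status"))
```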

#### Base Tool Set

1. **Data query tools**
   - `get_column_distribution`: distribution statistics for a column
   - `get_value_counts`: value counts
   - `get_time_series`: time-series data
   - `get_correlation`: correlation analysis

2. **Statistical analysis tools**
   - `calculate_statistics`: descriptive statistics
   - `perform_groupby`: group-by aggregation
   - `detect_outliers`: outlier detection
   - `calculate_trend`: trend computation

3. **Visualization tools**
   - `create_bar_chart`: bar chart
   - `create_line_chart`: line chart
   - `create_pie_chart`: pie chart
   - `create_heatmap`: heat map

4. **Data cleaning tools**
   - `handle_missing_values`: missing-value handling
   - `remove_duplicates`: duplicate removal
   - `normalize_data`: data normalization

#### Dynamic Tool Adjustment Strategy

The tool manager adjusts the tool set dynamically based on data characteristics:

```python
class ToolManager:
    def select_tools(self, data_profile: DataProfile) -> List[AnalysisTool]:
        """Select suitable tools based on the data profile"""
        tools = []

        # Datetime columns
        if self._has_datetime_column(data_profile):
            tools.extend([
                TimeSeriesAnalysisTool(),
                TrendAnalysisTool(),
                SeasonalityTool()
            ])

        # Categorical columns
        if self._has_categorical_column(data_profile):
            tools.extend([
                DistributionAnalysisTool(),
                CategoryComparisonTool()
            ])

        # Numeric columns
        if self._has_numeric_column(data_profile):
            tools.extend([
                StatisticalAnalysisTool(),
                CorrelationAnalysisTool(),
                OutlierDetectionTool()
            ])

        # Geographic columns
        if self._has_geo_column(data_profile):
            tools.append(GeoVisualizationTool())

        return tools

    def generate_custom_tool(self, requirement: str, data_profile: DataProfile) -> AnalysisTool:
        """Generate an ad-hoc tool for a specific requirement"""
        # Use the AI to generate tool code,
        # e.g. computing the difference between two datetime fields
        pass
```

### Privacy Protection Mechanism

#### Data Access Control

The AI cannot access raw data directly; it obtains information only through:

1. **Data profile**: metadata and statistical summaries
2. **Tool results**: aggregated results returned by tools
3. **Sample values**: at most 5 sample values per column

```python
class DataAccessLayer:
    def __init__(self, data: pd.DataFrame):
        self._data = data  # private; not accessible to the AI

    def get_profile(self) -> DataProfile:
        """Return the data profile (safe)"""
        return self._generate_profile()

    def execute_tool(self, tool: AnalysisTool, **kwargs) -> Dict[str, Any]:
        """Execute a tool and return the aggregated result (safe)"""
        result = tool.execute(self._data, **kwargs)
        return self._sanitize_result(result)

    def _sanitize_result(self, result: Dict[str, Any]) -> Dict[str, Any]:
        """Ensure the result contains no raw data"""
        # Check result size and cap the amount of data returned;
        # return only aggregated data, never row-level data
        pass
```
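
One way `_sanitize_result` could be realized is the sketch below (written as a free function for illustration): drop anything DataFrame-like and truncate over-long lists to the 100-row cap this design imposes on tool output:

```python
from typing import Any, Dict

MAX_ROWS = 100  # cap on list-like values returned to the AI

def sanitize_result(result: Dict[str, Any]) -> Dict[str, Any]:
    """Keep only aggregated, bounded data; never pass row-level data on."""
    clean: Dict[str, Any] = {}
    for key, value in result.items():
        # Anything that quacks like a DataFrame is row-level data: drop it.
        if hasattr(value, "iterrows"):
            clean[key] = f"<removed: row-level data ({key})>"
        elif isinstance(value, list) and len(value) > MAX_ROWS:
            clean[key] = value[:MAX_ROWS]
        else:
            clean[key] = value
    return clean

sanitized = sanitize_result({"mean": 3.2, "rows": list(range(500))})
print(sanitized["mean"], len(sanitized["rows"]))  # → 3.2 100
```

A production version would also recurse into nested dicts and enforce an overall byte budget before the result is serialized into the prompt.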

### AI Decision Flow

#### Data Understanding Stage

```python
def understand_data(file_path: str) -> DataProfile:
    """
    AI-driven data understanding.

    Flow:
    1. Load the data and generate basic statistics
    2. The AI analyzes column names and data types
    3. The AI infers the data's business type
    4. The AI identifies the business meaning of key fields
    5. The AI assesses data quality
    """
    # Load the data
    data = load_csv(file_path)

    # Generate basic statistics (contains no raw data)
    basic_stats = generate_basic_stats(data)

    # AI analysis
    prompt = f"""
    Analyze the characteristics of the following data:

    Columns: {basic_stats['columns']}
    Row count: {basic_stats['row_count']}

    Please answer:
    1. What kind of data is this? (tickets, sales, users, ...)
    2. What is the business meaning of each field?
    3. Which fields are the key fields?
    4. How good is the data quality? (score 0-100)
    """

    ai_analysis = call_llm(prompt)

    return DataProfile(
        file_path=file_path,
        row_count=basic_stats['row_count'],
        column_count=basic_stats['column_count'],
        columns=basic_stats['columns'],
        inferred_type=ai_analysis['data_type'],
        key_fields=ai_analysis['key_fields'],
        quality_score=ai_analysis['quality_score'],
        summary=ai_analysis['summary']
    )
```

#### Requirement Understanding Stage

```python
def understand_requirement(user_input: str, data_profile: DataProfile) -> RequirementSpec:
    """
    AI-driven requirement understanding.

    Flow:
    1. Parse the user input
    2. Turn abstract concepts into concrete metrics
    3. Check whether the data supports the requirement
    4. Generate the analysis objectives
    """
    prompt = f"""
    User requirement: {user_input}

    Data characteristics:
    - Type: {data_profile.inferred_type}
    - Key fields: {data_profile.key_fields}
    - Columns: {[col.name for col in data_profile.columns]}

    Please answer:
    1. What kind of analysis does the user want?
    2. Which metrics need to be computed?
    3. Does the data support these analyses?
    4. If not, how should they be adjusted?
    """

    ai_analysis = call_llm(prompt)

    return RequirementSpec(
        user_input=user_input,
        objectives=ai_analysis['objectives'],
        constraints=ai_analysis['constraints'],
        expected_outputs=ai_analysis['expected_outputs']
    )
```

#### Analysis Planning Stage

```python
def plan_analysis(data_profile: DataProfile, requirement: RequirementSpec) -> AnalysisPlan:
    """
    AI-driven analysis planning.

    Flow:
    1. Generate the task list from the requirement and data characteristics
    2. Determine task priorities
    3. Identify task dependencies
    4. Choose suitable tools
    """
    prompt = f"""
    Data characteristics: {data_profile.summary}
    Analysis objectives: {requirement.objectives}

    Please generate an analysis plan:
    1. Which analysis tasks need to be executed?
    2. What is the priority of each task?
    3. What dependencies exist between the tasks?
    4. Which tools does each task need?

    Notes:
    - Tasks should be concrete and executable
    - Priority: 1-5, 5 is highest
    - Required analyses come first, optional ones last
    """

    ai_plan = call_llm(prompt)

    return AnalysisPlan(
        objectives=requirement.objectives,
        tasks=ai_plan['tasks'],
        tool_config=ai_plan['tool_config'],
        estimated_duration=ai_plan['estimated_duration']
    )
```

#### Task Execution Stage (ReAct Pattern)

```python
def execute_task(task: AnalysisTask, tools: List[AnalysisTool], data_access: DataAccessLayer) -> AnalysisResult:
    """
    Execute a task using the ReAct pattern.

    ReAct loop:
    1. Thought: analyze the current state and decide the next step
    2. Action: choose and call a tool
    3. Observation: inspect the tool result
    4. Repeat until the task is done
    """
    max_iterations = 10
    history = []
    action_result: Dict[str, Any] = {}

    for i in range(max_iterations):
        # Thought
        prompt = f"""
        Task: {task.description}

        Available tools: {[tool.name for tool in tools]}

        Execution history: {history}

        Think about:
        1. What is the current state?
        2. What should be done next?
        3. Which tool is needed?
        4. Is the task complete?
        """

        thought = call_llm(prompt)
        history.append({"type": "thought", "content": thought})

        if thought['is_completed']:
            break

        # Action
        tool_name = thought['selected_tool']
        tool_params = thought['tool_params']
        tool = find_tool(tools, tool_name)

        action_result = data_access.execute_tool(tool, **tool_params)
        history.append({"type": "action", "tool": tool_name, "params": tool_params})

        # Observation
        history.append({"type": "observation", "result": action_result})

    # Distill insights
    insights = extract_insights(history)

    return AnalysisResult(
        task_id=task.id,
        task_name=task.name,
        success=True,
        data=action_result,
        insights=insights
    )
```

#### Dynamic Plan Adjustment

```python
def adjust_plan(plan: AnalysisPlan, completed_results: List[AnalysisResult]) -> AnalysisPlan:
    """
    Adjust the plan dynamically based on intermediate results.

    Flow:
    1. Analyze the results of completed tasks
    2. Identify key findings and anomalies
    3. Decide whether deeper analysis is needed
    4. Generate new tasks or adjust priorities
    """
    prompt = f"""
    Original plan: {plan.tasks}

    Results of completed analyses:
    {[result.insights for result in completed_results]}

    Please analyze:
    1. Were any anomalies or key issues found?
    2. Is deeper analysis needed?
    3. Which new tasks should be added?
    4. Which tasks should be skipped?
    5. Which task priorities should change?
    """

    ai_adjustment = call_llm(prompt)

    # Update the plan
    if ai_adjustment['new_tasks']:
        plan.tasks.extend(ai_adjustment['new_tasks'])

    if ai_adjustment['skip_tasks']:
        for task_id in ai_adjustment['skip_tasks']:
            task = find_task(plan.tasks, task_id)
            task.status = 'skipped'

    if ai_adjustment['priority_changes']:
        for task_id, new_priority in ai_adjustment['priority_changes'].items():
            task = find_task(plan.tasks, task_id)
            task.priority = new_priority

    return plan
```

#### Report Generation Stage

```python
def generate_report(results: List[AnalysisResult], requirement: RequirementSpec, data_profile: DataProfile) -> str:
    """
    AI-driven report generation.

    Flow:
    1. Distill key findings from all results
    2. Organize the report structure
    3. Generate conclusions and recommendations
    4. Format the output
    """
    prompt = f"""
    Analysis objectives: {requirement.objectives}
    Data type: {data_profile.inferred_type}

    Analysis results:
    {[{"task": r.task_name, "insights": r.insights} for r in results]}

    Please generate an analysis report with:
    1. Executive summary (3-5 key findings)
    2. Detailed analysis (organized by topic)
    3. Conclusions and recommendations

    Requirements:
    - Highlight anomalies and trends
    - Provide actionable recommendations
    - Explain the basis for each recommendation
    - Use a clear structure
    """

    report_content = call_llm(prompt)

    # Format as Markdown
    markdown = format_as_markdown(report_content, results)

    return markdown
```

## Data Models

### Core Data Structures

All data models are defined in "Components and Interfaces":
- `DataProfile`: data profile
- `ColumnInfo`: column information
- `RequirementSpec`: requirement spec
- `AnalysisObjective`: analysis objective
- `AnalysisPlan`: analysis plan
- `AnalysisTask`: analysis task
- `AnalysisResult`: analysis result

### Data Flow

```
CSV file
  → DataProfile (metadata + statistical summary)
  → RequirementSpec (analysis objectives)
  → AnalysisPlan (task list)
  → List[AnalysisResult] (execution results)
  → Markdown report
```

### Persistence

- Input data: CSV file (provided by the user)
- Intermediate results: in memory (not persisted)
- Output report: Markdown file
- Generated charts: PNG/SVG files

## Correctness Properties

*A property is a characteristic or behavior that should hold across all valid executions of the system — essentially a formal statement about what the system should do. Properties bridge human-readable specifications and machine-verifiable correctness guarantees.*

### Property Reflection

Before defining the properties, I first identify possible redundancy:

**Potential redundancy analysis**:
1. Scenario 1.3 "the AI can execute the analysis and generate a report" and scenario 2.3 "the AI can execute targeted analysis" essentially test the same execution capability and can be merged into one property
2. Scenario 3.3 "the report is organized by the template structure" and scenario 3.4 "the report explains which analyses were skipped" both test report completeness and can be merged
3. Tool dynamism criteria 1 and 2 (enabling relevant tools, disabling irrelevant ones) are two sides of the same decision and can be merged into one property

After this reflection, the 23 testable criteria are consolidated into 19 independent properties.

### Data Understanding Properties

**Property 1: Data type identification**
*For any* valid CSV file, the data understanding engine should be able to infer the data's business type (e.g. tickets, sales, users), and the inference should be based on analysis of column names, data types, and value distributions.
**Validates: Scenario 1 acceptance.1**

**Property 2: Data profile completeness**
*For any* valid CSV file, the generated data profile should contain all required fields (row count, column count, column info, inferred type, key fields, quality score), and the column info should contain each column's name, type, missing rate, and statistics.
**Validates: FR-1.2, FR-1.3, FR-1.4**

### Requirement Understanding Properties

**Property 3: Abstract requirement translation**
*For any* abstract user requirement (e.g. "health", "quality analysis"), the requirement understanding engine should be able to turn it into a list of concrete analysis objectives, each with a name, description, and related metrics.
**Validates: Scenario 2 acceptance.1, Scenario 2 acceptance.2**

**Property 4: Template parsing**
*For any* valid analysis template, the requirement understanding engine should be able to parse the template structure and extract the required metrics and charts.
**Validates: Scenario 3 acceptance.1**

**Property 5: Data-requirement match checking**
*For any* requirement spec and data profile, the requirement understanding engine should be able to determine whether the data satisfies the requirement, and if not, flag the missing fields or capabilities.
**Validates: Scenario 3 acceptance.2**

### Analysis Planning Properties

**Property 6: Dynamic task generation**
*For any* data profile and requirement spec, the analysis planning engine should be able to generate a non-empty task list, where each task has a unique id, description, priority, and required tools.
**Validates: Scenario 1 acceptance.2, FR-3.1**

**Property 7: Task dependency consistency**
*For any* generated analysis plan, the task dependencies should form a directed acyclic graph (DAG) with no circular dependencies.
**Validates: FR-3.1**
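
This DAG property can be checked mechanically. A sketch of a cycle detector over an id → dependency-ids mapping, using DFS three-coloring (the helper name mirrors the `has_circular_dependency` assumed by the test examples later in this document; the exact signature is an assumption):

```python
from typing import Dict, List

def has_circular_dependency(dependencies: Dict[str, List[str]]) -> bool:
    """Detect a cycle in the task dependency graph via DFS three-coloring."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {task: WHITE for task in dependencies}

    def visit(task: str) -> bool:
        color[task] = GRAY
        for dep in dependencies.get(task, []):
            if color.get(dep, WHITE) == GRAY:
                return True  # back edge -> cycle
            if color.get(dep, WHITE) == WHITE and visit(dep):
                return True
        color[task] = BLACK
        return False

    return any(color[t] == WHITE and visit(t) for t in dependencies)

print(has_circular_dependency({"a": ["b"], "b": ["a"]}))  # → True
print(has_circular_dependency({"a": [], "b": ["a"]}))     # → False
```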

**Property 8: Dynamic plan adjustment**
*For any* analysis plan and set of intermediate results, if the results contain anomalous findings, the plan adjustment function should be able to generate new drill-down tasks or adjust the priorities of existing tasks.
**Validates: Scenario 4 acceptance.2, Scenario 4 acceptance.3, FR-3.3**

### Tool Management Properties

**Property 9: Tool selection fitness**
*For any* data profile, the tool set selected by the tool manager should match the data characteristics: time-series tools are enabled when datetime fields are present, distribution-analysis tools when categorical fields are present, statistical tools when numeric fields are present, and no geographic tools when no geographic fields are present.
**Validates: Tool dynamism acceptance.1, Tool dynamism acceptance.2, FR-4.2**

**Property 10: Tool interface consistency**
*For any* tool, it should implement the standard interface (name, description, parameters, execute, is_applicable), and the execute method should accept a DataFrame plus parameters and return an aggregated result as a dict.
**Validates: FR-4.1**

**Property 11: Tool applicability checking**
*For any* tool and data profile, the tool's is_applicable method should correctly decide whether the tool applies to the current data (e.g. a time-series tool applies only to data containing datetime fields).
**Validates: FR-4.3**

**Property 12: Tool need identification**
*For any* analysis task and available tool set, if a tool the task needs is not in the available set, the tool manager should be able to identify the missing tool and record the need.
**Validates: Tool dynamism acceptance.3, FR-4.2**

### Task Execution Properties

**Property 13: Task execution completeness**
*For any* valid analysis plan and tool set, the task execution engine should be able to execute every task not marked as skipped and produce an analysis result (success or failure) for each.
**Validates: Scenario 1 acceptance.3, FR-5.1**

**Property 14: ReAct loop termination**
*For any* analysis task, the ReAct execution loop should terminate in a finite number of steps (either completing the task or hitting the maximum iteration count); it must never loop forever.
**Validates: FR-5.1**

**Property 15: Anomaly identification**
*For any* data containing an obvious anomaly (e.g. one category exceeding 80% of all rows, or values more than 3 standard deviations outside the normal range), the task execution engine should be able to flag that anomaly in the result's insights.
**Validates: Scenario 4 acceptance.1**
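
The 3-standard-deviation rule mentioned here can be sketched with a simple z-score filter (illustrative helper, standard library only):

```python
from statistics import mean, stdev
from typing import List

def flag_outliers(values: List[float], threshold: float = 3.0) -> List[float]:
    """Return values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if sigma > 0 and abs(v - mu) > threshold * sigma]

data = [10.0] * 30 + [200.0]
print(flag_outliers(data))  # → [200.0]
```

The category-share check (one value above 80% of rows) would similarly be a one-line test over the value counts a tool returns.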

### Report Generation Properties

**Property 16: Report structure completeness**
*For any* set of analysis results and requirement spec, the generated report should contain the three main parts — executive summary, detailed analysis, and conclusions & recommendations — and, if a template was used, the report structure should follow the template's section organization.
**Validates: Scenario 3 acceptance.3, FR-6.2**

**Property 17: Report content traceability**
*For any* generated report and set of analysis results, every finding and figure mentioned in the report should be traceable to some analysis result, and if any planned analyses were skipped, the report should state why.
**Validates: Scenario 3 acceptance.4, Scenario 4 acceptance.4, FR-6.1**

### Privacy Protection Properties

**Property 18: Data access restriction**
*For any* AI call, the context passed to the LLM should contain only the data profile (metadata and statistical summaries) and tool execution results (aggregated data); it should never contain raw row-level data.
**Validates: Constraint 5.3**

**Property 19: Tool output filtering**
*For any* tool execution result, the returned data should be aggregated (statistics, group counts, chart data), any single response should contain at most 100 rows of data, and it should never include the full raw data table.
**Validates: Constraint 5.3**

## Error Handling

### Error Types

1. **Data loading errors**
   - File not found
   - Malformed file
   - Encoding problems
   - Data too large

2. **AI call errors**
   - API call failure
   - Timeout
   - Malformed response
   - Token limits

3. **Tool execution errors**
   - Tool not found
   - Invalid parameters
   - Execution exception
   - Result validation failure

4. **Task execution errors**
   - Failed dependency task
   - ReAct loop timeout
   - Insufficient resources

### Error Handling Strategies

#### Data Loading Errors

```python
def load_data_with_retry(file_path: str) -> pd.DataFrame:
    """
    Data loading with retries.

    Strategy:
    1. Try multiple encodings (UTF-8, GBK, GB2312)
    2. Handle common format issues (delimiters, quoting)
    3. If the file is too large, load a sample
    """
    encodings = ['utf-8', 'gbk', 'gb2312', 'latin1']

    for encoding in encodings:
        try:
            data = pd.read_csv(file_path, encoding=encoding)

            # Check data size
            if len(data) > 1_000_000:
                logger.warning(f"Data too large ({len(data)} rows); sampling down to 1,000,000 rows")
                data = data.sample(n=1_000_000, random_state=42)

            return data
        except Exception as e:
            logger.debug(f"Encoding {encoding} failed: {e}")
            continue

    raise DataLoadError(f"Could not load file {file_path}; all encodings were tried")
```

#### AI Call Errors

```python
def call_llm_with_fallback(prompt: str, max_retries: int = 3) -> Dict[str, Any]:
    """
    AI call with fallback.

    Strategy:
    1. Retry failed calls
    2. Use exponential backoff
    3. Fall back to rule-based analysis if failures persist
    """
    for attempt in range(max_retries):
        try:
            response = llm_client.call(prompt)
            return parse_response(response)
        except TimeoutError:
            wait_time = 2 ** attempt
            logger.warning(f"AI call timed out; retrying in {wait_time}s")
            time.sleep(wait_time)
        except APIError as e:
            logger.error(f"AI call failed: {e}")
            if attempt == max_retries - 1:
                # Last attempt failed: fall back to rules
                return fallback_rule_based_analysis(prompt)

    raise AICallError("AI call failed; maximum retries reached")
```

#### Tool Execution Errors

```python
def execute_tool_safely(tool: AnalysisTool, data: pd.DataFrame, **kwargs) -> Dict[str, Any]:
    """
    Safe tool execution.

    Strategy:
    1. Validate parameters
    2. Catch exceptions
    3. Return error information instead of crashing
    """
    try:
        # Validate parameters
        validate_tool_params(tool, kwargs)

        # Execute the tool
        result = tool.execute(data, **kwargs)

        # Validate the result
        validate_tool_result(result)

        return {"success": True, "data": result}
    except Exception as e:
        logger.error(f"Tool {tool.name} failed: {e}")
        return {
            "success": False,
            "error": str(e),
            "tool": tool.name
        }
```

#### Task Execution Errors

```python
def execute_task_with_recovery(task: AnalysisTask, plan: AnalysisPlan,
                               tools: List[AnalysisTool],
                               data_access: DataAccessLayer) -> AnalysisResult:
    """
    Task execution with recovery.

    Strategy:
    1. Check the status of dependency tasks
    2. Skip the task if a dependency failed
    3. If the task fails, mark it but keep executing other tasks
    """
    # Check dependencies
    for dep_id in task.dependencies:
        dep_task = find_task(plan.tasks, dep_id)
        if dep_task.status == 'failed':
            logger.warning(f"Dependency {dep_id} of task {task.id} failed; skipping the task")
            task.status = 'skipped'
            return AnalysisResult(
                task_id=task.id,
                task_name=task.name,
                success=False,
                error="dependency task failed"
            )

    # Execute the task
    try:
        result = execute_task(task, tools, data_access)
        task.status = 'completed'
        return result
    except Exception as e:
        logger.error(f"Task {task.id} failed: {e}")
        task.status = 'failed'
        return AnalysisResult(
            task_id=task.id,
            task_name=task.name,
            success=False,
            error=str(e)
        )
```

### Error Recovery Mechanisms

1. **Partial-failure tolerance**: a single failed task does not break the overall flow
2. **Fallback strategy**: use rule-based methods when AI calls fail
3. **User notification**: the report states which analyses failed and why
4. **Logging**: record all errors for debugging

## Testing Strategy

### Dual Testing Approach

The system combines unit tests with property-based tests:

- **Unit tests**: verify specific examples, edge cases, and error conditions
- **Property tests**: verify universal properties across all inputs
- The two are complementary and together give comprehensive coverage

### Unit Testing Strategy

Unit tests focus on:
- Specific examples (e.g. recognizing ticket data)
- Integration points between components
- Edge cases and error conditions (e.g. empty or malformed files)

Avoid writing too many unit tests — property-based tests handle broad input coverage.

### Property-Based Testing Configuration

**Test library**: Python's `hypothesis` library

**Configuration requirements**:
- Each property test runs at least 100 iterations (because of randomization)
- Each property test must reference its design-document property
- Tag format: `# Feature: true-ai-agent, Property {number}: {property_text}`

**Test organization**:
- Each correctness property is implemented by a single property-based test
- Tests should generate random inputs and verify the property
- Tests should use hypothesis strategies to generate valid data

### Property Test Examples

```python
from hypothesis import given, strategies as st
import hypothesis

# Feature: true-ai-agent, Property 1: data type identification
@given(csv_data=st.data())
@hypothesis.settings(max_examples=100)
def test_data_type_inference(csv_data):
    """
    Property 1: for any valid CSV file, the data understanding engine
    should be able to infer the data's business type.
    """
    # Generate random CSV data
    df = generate_random_dataframe(csv_data)

    # Run data understanding
    profile = understand_data(df)

    # Verify: an inferred type must exist
    assert profile.inferred_type is not None
    assert profile.inferred_type in ['ticket', 'sales', 'user', 'unknown']

    # Verify: the inference should be based on the data's characteristics
    assert len(profile.key_fields) > 0

# Feature: true-ai-agent, Property 7: task dependency consistency
@given(data_profile=st.builds(DataProfile),
       requirement=st.builds(RequirementSpec))
@hypothesis.settings(max_examples=100)
def test_task_dependency_dag(data_profile, requirement):
    """
    Property 7: for any generated analysis plan, the task dependencies
    should form a directed acyclic graph (DAG).
    """
    # Generate an analysis plan
    plan = plan_analysis(data_profile, requirement)

    # Verify: no circular dependencies
    assert not has_circular_dependency(plan.tasks)

    # Verify: every dependency refers to an existing task
    task_ids = {task.id for task in plan.tasks}
    for task in plan.tasks:
        for dep_id in task.dependencies:
            assert dep_id in task_ids

# Feature: true-ai-agent, Property 18: data access restriction
@given(data=st.data())
@hypothesis.settings(max_examples=100)
def test_data_privacy_protection(data):
    """
    Property 18: for any AI call, the context passed to the LLM should
    contain only the data profile and tool results, never raw row-level data.
    """
    # Generate random data
    df = generate_random_dataframe(data)

    # Mock the AI call
    with mock.patch('llm_client.call') as mock_call:
        understand_data(df)

        # Grab the prompt passed to the LLM
        call_args = mock_call.call_args[0][0]

        # Verify: the prompt must not contain raw data
        for _, row in df.iterrows():
            for value in row.values:
                assert str(value) not in call_args

        # Verify: the prompt should contain the metadata
        assert 'row_count' in call_args.lower()
        assert 'column' in call_args.lower()
```

### Unit Test Examples

```python
def test_load_ticket_data():
    """Specific example: loading ticket data"""
    data = load_csv('test_data/ticket_sample.csv')
    profile = understand_data(data)

    assert profile.inferred_type == 'ticket'
    assert 'status' in profile.key_fields
    assert 'created_at' in profile.key_fields

def test_empty_file_handling():
    """Edge case: empty file"""
    with pytest.raises(DataLoadError):
        load_csv('test_data/empty.csv')

def test_invalid_encoding():
    """Encoding-error handling"""
    # Should automatically try several encodings
    data = load_csv('test_data/gbk_encoded.csv')
    assert len(data) > 0

def test_ai_call_timeout():
    """Error handling for an AI call timeout"""
    with mock.patch('llm_client.call', side_effect=TimeoutError):
        # Should use the fallback strategy
        result = call_llm_with_fallback("test prompt")
        assert result is not None  # fallback result

def test_tool_execution_error():
    """Tool execution error"""
    tool = StatisticalAnalysisTool()
    result = execute_tool_safely(tool, invalid_data)

    assert result['success'] == False
    assert 'error' in result
```

### Integration Tests

```python
def test_end_to_end_analysis():
    """End-to-end integration test"""
    # Prepare test data
    data_file = 'test_data/sample_tickets.csv'
    user_requirement = "analyze ticket health"

    # Run the full pipeline
    profile = understand_data(data_file)
    requirement = understand_requirement(user_requirement, profile)
    plan = plan_analysis(profile, requirement)
    results = execute_plan(plan, data_file)
    report = generate_report(results, requirement, profile)

    # Verify the results
    assert profile.inferred_type == 'ticket'
    assert len(requirement.objectives) > 0
    assert len(plan.tasks) > 0
    assert len(results) > 0
    assert len(report) > 0
    assert 'health' in report

def test_template_based_analysis():
    """Template-based analysis integration test"""
    data_file = 'test_data/sample_tickets.csv'
    template_file = 'templates/ticket_analysis.md'

    # Run the pipeline
    profile = understand_data(data_file)
    requirement = understand_requirement(
        f"analyze following the template {template_file}",
        profile
    )
    plan = plan_analysis(profile, requirement)
    results = execute_plan(plan, data_file)
    report = generate_report(results, requirement, profile)

    # Verify: the report should follow the template structure
    template_sections = parse_template_sections(template_file)
    for section in template_sections:
        assert section in report or f"skipped: {section}" in report
```

### Test Data Generation

Use hypothesis strategies to generate test data:

```python
from hypothesis import strategies as st

# Random column info
column_info_strategy = st.builds(
    ColumnInfo,
    name=st.text(min_size=1, max_size=20),
    dtype=st.sampled_from(['numeric', 'categorical', 'datetime', 'text']),
    missing_rate=st.floats(min_value=0.0, max_value=1.0),
    unique_count=st.integers(min_value=1, max_value=1000),
    sample_values=st.lists(st.text(), min_size=1, max_size=5),
    statistics=st.dictionaries(st.text(), st.floats())
)

# Random data profile
data_profile_strategy = st.builds(
    DataProfile,
    file_path=st.text(),
    row_count=st.integers(min_value=1, max_value=1000000),
    column_count=st.integers(min_value=1, max_value=100),
    columns=st.lists(column_info_strategy, min_size=1, max_size=20),
    inferred_type=st.sampled_from(['ticket', 'sales', 'user', 'unknown']),
    key_fields=st.dictionaries(st.text(), st.text()),
    quality_score=st.floats(min_value=0.0, max_value=100.0),
    summary=st.text()
)

# Random DataFrame
@st.composite
def dataframe_strategy(draw):
    """Generate a random DataFrame"""
    n_rows = draw(st.integers(min_value=10, max_value=1000))
    n_cols = draw(st.integers(min_value=2, max_value=20))

    data = {}
    for i in range(n_cols):
        col_type = draw(st.sampled_from(['int', 'float', 'str', 'datetime']))
        if col_type == 'int':
            data[f'col_{i}'] = draw(st.lists(st.integers(), min_size=n_rows, max_size=n_rows))
        elif col_type == 'float':
            data[f'col_{i}'] = draw(st.lists(st.floats(allow_nan=False), min_size=n_rows, max_size=n_rows))
        elif col_type == 'str':
            data[f'col_{i}'] = draw(st.lists(st.text(), min_size=n_rows, max_size=n_rows))
        else:  # datetime
            data[f'col_{i}'] = pd.date_range('2020-01-01', periods=n_rows)

    return pd.DataFrame(data)
```

### Performance Tests

```python
import time

def test_data_understanding_performance():
    """Performance of data understanding"""
    # Generate a large dataset
    large_data = generate_large_dataframe(rows=100000, cols=50)

    start_time = time.time()
    profile = understand_data(large_data)
    duration = time.time() - start_time

    # Verify: should finish within 30 seconds
    assert duration < 30, f"Data understanding took {duration}s, over the 30-second limit"

def test_full_analysis_performance():
    """Performance of the full analysis pipeline"""
    data_file = 'test_data/large_dataset.csv'

    start_time = time.time()
    # Run the full pipeline
    profile = understand_data(data_file)
    requirement = understand_requirement("full analysis", profile)
    plan = plan_analysis(profile, requirement)
    results = execute_plan(plan, data_file)
    report = generate_report(results, requirement, profile)
    duration = time.time() - start_time

    # Verify: should finish within 30 minutes
    assert duration < 1800, f"Full analysis took {duration}s, over the 30-minute limit"
```

### Test Coverage Targets

- Code coverage: > 80%
- Property tests: all 19 correctness properties
- Unit tests: all core components and error-handling paths
- Integration tests: the end-to-end flow and major usage scenarios

---

**Version**: v1.0.0
**Date**: 2026-03-06
**Status**: design complete
|