Add lyxy-reader-office skill
@@ -1,279 +0,0 @@
---
name: lyxy-reader-docx
description: Skill for parsing DOCX documents, preferred for this format; converts DOCX files to plain-text content. Image and formatting extraction are not supported.
compatibility: Requires Python 3.6+ and at least one of: markitdown or python-docx
---

# DOCX Parsing Skill

Parses Microsoft Word (.docx) documents into plain-text content, with several parsing modes and search features.

## Purpose

**Dependency options**: this skill can run through the lyxy-runner-python skill (recommended) or directly with Python.

### Prefer lyxy-runner-python

If the lyxy-runner-python skill exists in the environment, prefer it for running the docx_parser.py script:
- lyxy-runner-python manages dependencies with uv and installs markitdown or python-docx automatically
- Isolated environment; does not pollute the system Python
- Cross-platform (Windows/macOS/Linux)

### Fallback to direct execution

If the lyxy-runner-python skill does not exist, run docx_parser.py with Python directly:
- markitdown or python-docx must be installed manually
- The script implements a multi-strategy parsing fallback internally: MarkItDown → python-docx → native XML

## When to Use

Use this skill for any task that needs to read or parse the content of a .docx file.

### Typical scenarios
- **Content extraction**: convert a Word document into readable text
- **Document metadata**: get the document's character count, line count, and similar information
- **Heading analysis**: extract the document's heading structure
- **Section extraction**: extract the content of a specific section
- **Content search**: search the document for keywords or patterns

### Out of scope
- ✗ Extracting image content (plain text only)
- ✗ Preserving rich formatting (fonts, colors, layout)
- ✗ Editing or modifying .docx files
- ✗ Handling .doc or other document formats

## Capabilities

### 1. Full conversion to Markdown
Parses the entire DOCX document into Markdown-formatted text, removing all images while keeping text formatting.

**Supported conversions**:
- Headings (levels 1-6)
- Lists (ordered and unordered)
- Tables
- Bold, italic, underline
- Normalization of consecutive blank lines

### 2. Document metadata
Basic document statistics:
- Character count (`-c` flag)
- Line count (`-l` flag)

### 3. Heading list extraction
Extracts all headings (levels 1-6), returned in their original hierarchy.

### 4. Section extraction by heading
Extracts the full content of a section by heading name, including:
- The complete chain of parent headings
- All content nested under the heading

Duplicate heading names are supported; all matching sections are returned.

### 5. Regular-expression search
Searches the document for keywords or patterns:
- Standard regular expressions
- Configurable context size (default: 2 lines before and after)
- Multiple matches separated by `---`
- Blank lines do not count toward the context size
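A minimal sketch of the context behavior described above (N non-empty lines on each side, matches joined by `---`); `search_with_context` is an illustrative name, not part of the actual script, and unlike the script it does not merge overlapping matches:

```python
import re

def search_with_context(text, pattern, context=2):
    # Context is counted over non-empty lines only, matching the skill's rule.
    lines = [line for line in text.split("\n") if line.strip()]
    regex = re.compile(pattern)
    snippets = []
    for i, line in enumerate(lines):
        if regex.search(line):
            start = max(0, i - context)
            snippets.append("\n".join(lines[start:i + context + 1]))
    return "\n---\n".join(snippets)
```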
## Triggers

### Chinese trigger phrases
- "读取 docx"
- "解析 docx"
- "打开 word 文档"
- "提取 word 文档内容"
- "解析 word 文件"

### English trigger phrases
- "read docx"
- "parse docx"
- "extract from word document"
- "parse word file"
- "read word document"

### File extensions
- `.docx` files

## Execution

### Prefer lyxy-runner-python

**Recommended invocation** (when the lyxy-runner-python skill is available):

```bash
# Run via lyxy-runner-python (recommended)
uv run --with markitdown skills/lyxy-reader-docx/scripts/docx_parser.py /path/to/document.docx

# Or with python-docx
uv run --with python-docx skills/lyxy-reader-docx/scripts/docx_parser.py /path/to/document.docx
```

**The model should first check whether the lyxy-runner-python skill exists, and use it when it does.**

### Fallback to direct execution

If the lyxy-runner-python skill is unavailable, use Python directly:

```bash
# Run with Python directly
python3 skills/lyxy-reader-docx/scripts/docx_parser.py /path/to/document.docx
```

**Prompt the user**: suggest installing markitdown or python-docx for better parsing results:

```
Suggestion: install markitdown or python-docx for better parsing results

Install commands:
pip install markitdown   # recommended
pip install python-docx  # alternative
```

**No automatic installation**: never run pip install automatically to install dependencies; only show the user the installation suggestion.

**Fallback strategy**: the script implements a multi-strategy parsing fallback internally (MarkItDown → python-docx → native XML), so it still works without any third-party library installed, though fidelity may be reduced.

## Command Usage

### Basic syntax
```bash
python3 docx_parser.py [options] <file_path>
```

### Positional argument

| Argument | Description |
| ----------- | ------------------- |
| `file_path` | Absolute path to the DOCX file |

### Options

| Flag | Long flag | Type | Default | Description |
| ---- | ----------- | ---- | ------ | -------------------------------------------------------------- |
| `-n` | `--context` | int | 2 | Used with `-s`; number of lines of context before and after each match (blank lines excluded) |

### Mutually exclusive flags (use at most one)

| Flag | Long flag | Description |
| ----- | ----------------- | ----------------------------------------------------- |
| `-c` | `--count` | Print the total character count of the parsed Markdown |
| `-l` | `--lines` | Print the total line count of the parsed Markdown |
| `-t` | `--titles` | Print the heading lines (levels 1-6) of the parsed Markdown |
| `-tc` | `--title-content` | Given a heading name (without `#` marks), print that heading and everything nested under it |
| `-s` | `--search` | Search the document with a regular expression; print all matches separated by `---` |

## Examples

### Example 1: extract the full document
```bash
# Extract the whole document
python3 docx_parser.py /path/to/document.docx
```

Output: the full document as Markdown.

### Example 2: get the character count
```bash
# Character count
python3 docx_parser.py -c /path/to/document.docx
```

Output: total character count (a number).

### Example 3: extract all headings
```bash
# Extract headings
python3 docx_parser.py -t /path/to/document.docx
```

Output: a list of all level 1-6 headings.

### Example 4: extract a specific section
```bash
# Extract the contents of "第一章" ("Chapter 1")
python3 docx_parser.py -tc "第一章" /path/to/document.docx
```

Output: the full section (parent-heading chain plus all nested content).

### Example 5: search for a keyword
```bash
# Search for a keyword (default: 2 lines of context)
python3 docx_parser.py -s "关键词" /path/to/document.docx

# Custom 5-line context
python3 docx_parser.py -s "关键词" -n 5 /path/to/document.docx
```

Output: all matches with their context, separated by `---`.

## Installing Dependencies

### Recommended: lyxy-runner-python

With the lyxy-runner-python skill, dependencies are managed automatically; no manual installation is needed.

### Manual installation (fallback mode)

When running with Python directly, install at least one parsing library:

```bash
# Install MarkItDown (recommended)
pip install markitdown

# Install python-docx (alternative)
pip install python-docx
```

**Important constraints**:
- **No automatic installation**: never run pip install automatically to install dependencies
- **Suggest only**: show the installation suggestion; the user decides whether to install
- **Non-blocking**: even without these libraries, the script still runs via native XML parsing

### Multi-strategy parsing

The script automatically tries the following methods in order, for maximum compatibility:
1. **MarkItDown** (Microsoft's official library; best results)
2. **python-docx** (mature Python library)
3. **Native XML parsing** (fallback; requires no dependencies)

Even with no parsing library installed, the script falls back to native XML parsing, though fidelity may be reduced.

## Error Handling

### Common errors

| Error | Cause | Resolution |
| --------- | ---- | -------- |
| File not found | The given file path is invalid | Check that the path is correct |
| Invalid DOCX | The file is not valid DOCX or is corrupted | Verify the file format |
| Heading not found | The given heading name does not exist | List all headings with `-t` |
| Invalid regular expression | The given pattern is malformed | Check the regex syntax |
| Parser library missing | Neither markitdown nor python-docx is installed | Suggest installation for better results (never install automatically); the script falls back to native XML parsing |

## Notes

### Why lyxy-runner-python?

| Feature | Benefit |
| ------ | ------ |
| Environment isolation | Does not pollute the system Python |
| Automatic dependencies | Installs markitdown or python-docx automatically |
| Fast startup | 10-100x faster than venv |
| Cross-platform | Adapts to Windows/macOS/Linux automatically |
| Zero configuration | Works out of the box; no pre-installed dependencies |

### Best practices

1. **Prefer lyxy-runner-python**: use it to run the script whenever it is available
2. **Large files**: for big documents, limit the scope with section extraction or keyword search
3. **Dependency management**: lyxy-runner-python manages dependencies automatically, avoiding environment-setup issues
4. **Error handling**: the script tries multiple parsing methods automatically for maximum compatibility
5. **No automatic installation**: in direct-execution mode, only suggest installing dependencies; never run pip install

### Limitations

- ✗ No image extraction (plain text only)
- ✗ No rich-formatting preservation (fonts, colors, layout, etc.)
- ✗ No document editing or modification
- ✗ .docx only (no .doc or other formats)
@@ -1,319 +0,0 @@
# DOCX Parser Usage

## Overview

`docx_parser.py` is a DOCX parsing tool that converts Microsoft Word (.docx) documents to Markdown. It uses a multi-strategy mechanism, trying the following parsing methods in priority order:

1. **MarkItDown** (Microsoft's official library)
2. **python-docx** (mature Python library)
3. **Native XML parsing** (fallback)

## Requirements

- Python 3.6+
- pip

## Installing dependencies

Install the parsing library that fits your needs:

```bash
# Install MarkItDown (recommended)
pip install markitdown

# Install python-docx (alternative)
pip install python-docx
```

> Note: install at least one parsing library if possible. Without any, the script falls back to native XML parsing, though fidelity may be reduced.

## Command-line arguments

### Basic syntax

```bash
python3 docx_parser.py [options] <file_path>
```

### Positional argument

| Argument | Description |
| ----------- | ------------------- |
| `file_path` | Absolute path to the DOCX file |

### Options

| Flag | Long flag | Type | Default | Description |
| ---- | ----------- | ---- | ------ | -------------------------------------------------------------- |
| `-n` | `--context` | int | 2 | Used with `-s`; lines of context before and after each match (blank lines excluded) |

### Mutually exclusive flags

At most one of the following may be used:

| Flag | Long flag | Description |
| ----- | ----------------- | ----------------------------------------------------- |
| `-c` | `--count` | Print the total character count of the parsed Markdown |
| `-l` | `--lines` | Print the total line count of the parsed Markdown |
| `-t` | `--titles` | Print the heading lines (levels 1-6) |
| `-tc` | `--title-content` | Given a heading name (without `#`), print that heading and its nested content |
| `-s` | `--search` | Regex search; print all matches separated by `---` |

## Examples

### 1. Print the full Markdown

```bash
python3 docx_parser.py /path/to/document.docx
```

Output: the full document as Markdown.

### 2. Get the character count

```bash
python3 docx_parser.py -c /path/to/document.docx
```

Output: total character count (a number).

### 3. Get the line count

```bash
python3 docx_parser.py -l /path/to/document.docx
```

Output: total line count (a number).

### 4. Extract all headings

```bash
python3 docx_parser.py -t /path/to/document.docx
```

Sample output:

```
# 主标题
## 第一章
### 1.1 简介
### 1.2 内容
## 第二章
```

### 5. Extract a heading's content

```bash
python3 docx_parser.py -tc "第一章" /path/to/document.docx
```

Output: the section, including all of its parent headings.

**Features:**

- Duplicate heading names are supported
- The full parent-heading chain is included automatically
- All nested content is included

Sample output:

```
# 主标题
## 第一章
这是第一章的内容
包含所有子章节...

### 1.1 简介
简介内容

### 1.2 内容
详细内容
```

### 6. Search for keywords

#### 6.1 Basic search

```bash
python3 docx_parser.py -s "关键词" /path/to/document.docx
```

Output: every snippet matching the keyword, with 2 lines of context before and after by default, separated by `---`.

#### 6.2 Custom context size

```bash
# 5 lines before and after
python3 docx_parser.py -s "关键词" -n 5 /path/to/document.docx

# No context
python3 docx_parser.py -s "关键词" -n 0 /path/to/document.docx
```

#### 6.3 Regex search

```bash
# Lines containing digits
python3 docx_parser.py -s "数字\d+" /path/to/document.docx

# Email addresses
python3 docx_parser.py -s "\b[\w.-]+@[\w.-]+\.\w+\b" /path/to/document.docx

# Date formats
python3 docx_parser.py -s "\d{4}-\d{2}-\d{2}" /path/to/document.docx
```

(The pattern is passed as an ordinary shell string; a Python-style `r"..."` raw-string prefix has no meaning in bash.)

Sample output:

```
这是前一行
包含匹配关键词
这是后一行
---
另一个匹配
---
第三个匹配
```

### 7. Save the output to a file

```bash
# Save the full Markdown
python3 docx_parser.py /path/to/document.docx > output.md

# Save a section
python3 docx_parser.py -tc "第一章" /path/to/document.docx > chapter1.md

# Save search results
python3 docx_parser.py -s "关键词" /path/to/document.docx > search_results.md
```

## Features

### Multi-strategy parsing

The script automatically tries three parsing methods, for maximum compatibility:

1. **MarkItDown**: Microsoft's official library; best results
2. **python-docx**: full-featured third-party library
3. **Native XML parsing**: dependency-free fallback
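The cascade above can be sketched as a simple fallback loop; this mirrors the shape of the script's parser functions (each returns a `(content, error)` pair), though `parse_with_fallback` itself is an illustrative name, not part of the script:

```python
def parse_with_fallback(file_path, parsers):
    """Try each (name, parse_func) in order; return the first successful content.

    Returns (content, failures) where failures lists the error from every
    parser that was tried and did not produce content.
    """
    failures = []
    for name, parse_func in parsers:
        content, error = parse_func(file_path)
        if content is not None:
            return content, failures
        failures.append(f"{name}: {error}")
    return None, failures
```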
### Smart matching

#### Heading extraction

- Recognizes heading levels 1-6
- Handles different heading styles (Title, Heading 1-6)
- Preserves the original heading hierarchy

#### Heading-content extraction

- Duplicate heading names supported
- Builds the full parent-heading chain automatically
- Includes all nested content
- Keeps the document structure intact

#### Search

- Regular-expression support
- Nearby matches merged intelligently
- Context size control (blank lines excluded)
- Results clearly separated by `---`

### Document processing

- Markdown images removed automatically
- Blank lines normalized (consecutive blank lines collapsed to one)
- Tables, lists, bold, italic, underline, and other formats supported

### Error handling

- File-existence check
- DOCX format validation
- Automatic fallback to the next method when a parser fails
- Detailed error messages

## FAQ

### Q: How do I handle large documents?

A: For very large documents:

1. Extract only the sections you need with `-tc`
2. Search for specific content with `-s`
3. Redirect the output to a file for further processing

### Q: Which regular expressions does search support?

A: All standard Python regex syntax. Escape special characters when you want to match them literally:

```bash
# Parentheses form a capture group here, so this matches the keyword itself
python3 docx_parser.py -s "(关键词)" /path/to/document.docx

# Escaped: matches the parentheses literally
python3 docx_parser.py -s "\(关键词\)" /path/to/document.docx
```

### Q: How do I get more context?

A: Adjust the context size with `-n`:

```bash
# Default: 2 lines (recommended)
python3 docx_parser.py -s "关键词" /path/to/document.docx

# More context (5 lines)
python3 docx_parser.py -s "关键词" -n 5 /path/to/document.docx

# No context
python3 docx_parser.py -s "关键词" -n 0 /path/to/document.docx
```

### Q: How are duplicate heading names handled?

A: `-tc` returns every heading with that name, each with its full parent-heading chain:

```markdown
# 主标题

## 同名标题 1

内容1

# 主标题

## 同名标题 2

内容2
```

## Technical details

### Heading recognition rules

| Style name | Markdown heading level |
| --------- | ----------------- |
| Title | # |
| Heading 1 | # |
| Heading 2 | ## |
| Heading 3 | ### |
| Heading 4 | #### |
| Heading 5 | ##### |
| Heading 6 | ###### |

### List recognition rules

| Style name | Markdown list format |
| -------------------- | ----------------- |
| List Bullet / Bullet | `-` (unordered list) |
| List Number / Number | `1.` (ordered list) |

### Text format support

| Format | Conversion |
| ------ | ----------------- |
| Bold | `**text**` |
| Italic | `*text*` |
| Underline | `<u>text</u>` |
| Table | Markdown table |
@@ -1,551 +0,0 @@
#!/usr/bin/env python3
"""Combined DOCX parser that tries multiple parsing methods in priority order:
1. MarkItDown (Microsoft's official library)
2. python-docx (mature Python library)
3. Native XML parsing (fallback)

Code style requirements:
- Python 3.6+ compatible
- Follows PEP 8
- Type hints on all public API functions
- Strings are inlined rather than extracted to constants unless used more than 3 times
- Other repeatedly used objects may be extracted to constants case by case (e.g. regexes)
- Module and public API functions keep their docstrings
- Internal helpers carry no docstrings (the names are descriptive enough)
- Clear variable names; no single-letter names
"""

import argparse
import os
import re
import sys
import zipfile
import xml.etree.ElementTree as ET
from typing import List, Optional, Tuple

IMAGE_PATTERN = re.compile(r"!\[[^\]]*\]\([^)]+\)")


def normalize_markdown_whitespace(content: str) -> str:
    lines = content.split("\n")
    result = []
    empty_count = 0

    for line in lines:
        stripped = line.strip()
        if not stripped:
            empty_count += 1
            if empty_count == 1:
                result.append(line)
        else:
            empty_count = 0
            result.append(line)

    return "\n".join(result)


def is_valid_docx(file_path: str) -> bool:
    try:
        with zipfile.ZipFile(file_path, "r") as zip_file:
            required_files = ["[Content_Types].xml", "_rels/.rels", "word/document.xml"]
            for required in required_files:
                if required not in zip_file.namelist():
                    return False
            return True
    except (zipfile.BadZipFile, zipfile.LargeZipFile):
        return False


def remove_markdown_images(markdown_text: str) -> str:
    return IMAGE_PATTERN.sub("", markdown_text)


def extract_titles(markdown_text: str) -> List[str]:
    """Extract all heading lines (levels 1-6) from markdown text."""
    title_lines = []
    for line in markdown_text.split("\n"):
        stripped = line.lstrip()
        if stripped.startswith("#"):
            level = 0
            for char in stripped:
                if char == "#":
                    level += 1
                else:
                    break
            if 1 <= level <= 6:
                title_lines.append(stripped)
    return title_lines


def get_heading_level(line: str) -> int:
    stripped = line.lstrip()
    if not stripped.startswith("#"):
        return 0
    level = 0
    for char in stripped:
        if char == "#":
            level += 1
        else:
            break
    return level if 1 <= level <= 6 else 0


def extract_title_content(markdown_text: str, title_name: str) -> Optional[str]:
    """Extract every heading matching the given name plus its nested content
    (each occurrence prefixed with its chain of parent headings)."""
    lines = markdown_text.split("\n")
    match_indices = []

    for i, line in enumerate(lines):
        level = get_heading_level(line)
        if level > 0:
            stripped = line.lstrip()
            title_text = stripped[level:].strip()
            if title_text == title_name:
                match_indices.append(i)

    if not match_indices:
        return None

    result_lines = []
    for idx in match_indices:
        target_level = get_heading_level(lines[idx])

        # Walk backwards to collect the chain of parent headings.
        parent_titles = []
        current_level = target_level
        for i in range(idx - 1, -1, -1):
            line_level = get_heading_level(lines[i])
            if line_level > 0 and line_level < current_level:
                parent_titles.append(lines[i])
                current_level = line_level
                if current_level == 1:
                    break

        parent_titles.reverse()
        result_lines.extend(parent_titles)

        # Collect the matched heading and everything nested beneath it.
        result_lines.append(lines[idx])
        for i in range(idx + 1, len(lines)):
            line = lines[i]
            line_level = get_heading_level(line)
            if line_level == 0 or line_level > target_level:
                result_lines.append(line)
            else:
                break

    return "\n".join(result_lines)


def search_markdown(
    content: str, pattern: str, context_lines: int = 0
) -> Optional[str]:
    """Search the markdown document with a regex; return matches with context."""
    try:
        regex = re.compile(pattern)
    except re.error:
        return None

    lines = content.split("\n")

    # Map original line indices to positions in the non-empty sequence,
    # so that context is counted over non-empty lines only.
    non_empty_indices = []
    original_to_non_empty = {}
    for i, line in enumerate(lines):
        if line.strip():
            non_empty_indices.append(i)
            original_to_non_empty[i] = len(non_empty_indices) - 1

    matched_non_empty_indices = []
    for orig_idx in non_empty_indices:
        if regex.search(lines[orig_idx]):
            matched_non_empty_indices.append(original_to_non_empty[orig_idx])

    if not matched_non_empty_indices:
        return None

    # Merge matches whose context windows would overlap.
    merged_ranges = []
    current_start = matched_non_empty_indices[0]
    current_end = matched_non_empty_indices[0]

    for idx in matched_non_empty_indices[1:]:
        if idx - current_end <= context_lines * 2:
            current_end = idx
        else:
            merged_ranges.append((current_start, current_end))
            current_start = idx
            current_end = idx
    merged_ranges.append((current_start, current_end))

    results = []
    for start, end in merged_ranges:
        actual_start = max(0, start - context_lines)
        actual_end = min(len(non_empty_indices) - 1, end + context_lines)

        start_line_idx = non_empty_indices[actual_start]
        end_line_idx = non_empty_indices[actual_end]

        selected_indices = set(non_empty_indices[actual_start : actual_end + 1])
        result_lines = [
            line
            for i, line in enumerate(lines)
            if start_line_idx <= i <= end_line_idx
            and (line.strip() or i in selected_indices)
        ]
        results.append("\n".join(result_lines))

    return "\n---\n".join(results)


def parse_with_markitdown(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    try:
        from markitdown import MarkItDown

        md = MarkItDown()
        result = md.convert(file_path)
        if not result.text_content.strip():
            return None, "document is empty"
        return result.text_content, None
    except ImportError:
        return None, "MarkItDown is not installed"
    except Exception as e:
        return None, f"MarkItDown parsing failed: {str(e)}"


def parse_with_python_docx(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    try:
        from docx import Document
    except ImportError:
        return None, "python-docx is not installed"

    try:
        doc = Document(file_path)

        def get_heading_level(para) -> int:
            if para.style and para.style.name:
                style_name = para.style.name
                if "Heading 1" in style_name or "Title" in style_name:
                    return 1
                for level in range(2, 7):
                    if f"Heading {level}" in style_name:
                        return level
            return 0

        def get_list_style(para) -> Optional[str]:
            if not para.style or not para.style.name:
                return None
            style_name = para.style.name
            if "List Bullet" in style_name or "Bullet" in style_name:
                return "bullet"
            elif "List Number" in style_name or "Number" in style_name:
                return "number"
            return None

        def convert_runs_to_markdown(runs) -> str:
            result = []
            for run in runs:
                text = run.text
                if not text:
                    continue
                if run.bold:
                    text = f"**{text}**"
                if run.italic:
                    text = f"*{text}*"
                if run.underline:
                    text = f"<u>{text}</u>"
                result.append(text)
            return "".join(result)

        def convert_table_to_markdown(table) -> str:
            md_lines = []
            for i, row in enumerate(table.rows):
                cells = []
                for cell in row.cells:
                    cell_text = cell.text.strip().replace("\n", " ")
                    cells.append(cell_text)
                if cells:
                    md_line = "| " + " | ".join(cells) + " |"
                    md_lines.append(md_line)
                    if i == 0:
                        sep_line = "| " + " | ".join(["---"] * len(cells)) + " |"
                        md_lines.append(sep_line)
            return "\n".join(md_lines)

        markdown_lines = []

        for para in doc.paragraphs:
            text = convert_runs_to_markdown(para.runs)
            if not text.strip():
                continue
            heading_level = get_heading_level(para)
            if heading_level > 0:
                markdown_lines.append(f"{'#' * heading_level} {text}")
            else:
                list_style = get_list_style(para)
                if list_style == "bullet":
                    markdown_lines.append(f"- {text}")
                elif list_style == "number":
                    markdown_lines.append(f"1. {text}")
                else:
                    markdown_lines.append(text)
            markdown_lines.append("")

        for table in doc.tables:
            table_md = convert_table_to_markdown(table)
            markdown_lines.append(table_md)
            markdown_lines.append("")

        content = "\n".join(markdown_lines)
        if not content.strip():
            return None, "document is empty"
        return content, None
    except Exception as e:
        return None, f"python-docx parsing failed: {str(e)}"


def parse_with_xml(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    word_namespace = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"
    namespaces = {"w": word_namespace}

    def safe_open_zip(zip_file: zipfile.ZipFile, name: str):
        # Basic sanity check against path traversal and bare member names.
        if name.startswith("..") or "/" not in name:
            return None
        return zip_file.open(name)

    def get_heading_level(style_id: Optional[str], style_to_level: dict) -> int:
        return style_to_level.get(style_id, 0)

    def get_list_style(style_id: Optional[str], style_to_list: dict) -> Optional[str]:
        return style_to_list.get(style_id, None)

    def extract_text_with_formatting(para, namespaces: dict) -> str:
        texts = []
        for run in para.findall(".//w:r", namespaces=namespaces):
            text_elem = run.find(".//w:t", namespaces=namespaces)
            if text_elem is not None and text_elem.text:
                text = text_elem.text
                bold = run.find(".//w:b", namespaces=namespaces) is not None
                italic = run.find(".//w:i", namespaces=namespaces) is not None
                if bold:
                    text = f"**{text}**"
                if italic:
                    text = f"*{text}*"
                texts.append(text)
        return "".join(texts).strip()

    def convert_table_to_markdown(table_elem, namespaces: dict) -> str:
        rows = table_elem.findall(".//w:tr", namespaces=namespaces)
        if not rows:
            return ""
        md_lines = []
        for i, row in enumerate(rows):
            cells = row.findall(".//w:tc", namespaces=namespaces)
            cell_texts = []
            for cell in cells:
                cell_text = extract_text_with_formatting(cell, namespaces)
                cell_text = cell_text.replace("\n", " ").strip()
                cell_texts.append(cell_text if cell_text else "")
            if cell_texts:
                md_line = "| " + " | ".join(cell_texts) + " |"
                md_lines.append(md_line)
                if i == 0:
                    sep_line = "| " + " | ".join(["---"] * len(cell_texts)) + " |"
                    md_lines.append(sep_line)
        return "\n".join(md_lines)

    try:
        style_to_level = {}
        style_to_list = {}
        markdown_lines = []

        with zipfile.ZipFile(file_path) as zip_file:
            try:
                # styles.xml maps style IDs to human-readable style names.
                styles_file = safe_open_zip(zip_file, "word/styles.xml")
                if styles_file:
                    heading_name_to_level = {
                        "Title": 1,
                        "heading 1": 1,
                        "heading 2": 2,
                        "heading 3": 3,
                        "heading 4": 4,
                        "heading 5": 5,
                        "heading 6": 6,
                    }
                    styles_root = ET.parse(styles_file)
                    for style in styles_root.findall(
                        ".//w:style", namespaces=namespaces
                    ):
                        style_id = style.get(f"{{{word_namespace}}}styleId")
                        style_name_elem = style.find("w:name", namespaces=namespaces)
                        if style_id and style_name_elem is not None:
                            style_name = style_name_elem.get(f"{{{word_namespace}}}val")
                            if style_name:
                                if style_name in heading_name_to_level:
                                    style_to_level[style_id] = heading_name_to_level[
                                        style_name
                                    ]
                                elif (
                                    "List Bullet" in style_name
                                    or "Bullet" in style_name
                                ):
                                    style_to_list[style_id] = "bullet"
                                elif (
                                    "List Number" in style_name
                                    or "Number" in style_name
                                ):
                                    style_to_list[style_id] = "number"
            except Exception:
                # styles.xml is optional; fall back to no style mapping.
                pass

            document_file = safe_open_zip(zip_file, "word/document.xml")
            if not document_file:
                return None, "document.xml is missing or inaccessible"

            root = ET.parse(document_file)
            body = root.find(".//w:body", namespaces=namespaces)
            if body is None:
                return None, "no w:body element found in document.xml"

            for child in body.findall("./*", namespaces=namespaces):
                if child.tag.endswith("}p"):
                    style_elem = child.find(".//w:pStyle", namespaces=namespaces)
                    style_id = (
                        style_elem.get(f"{{{word_namespace}}}val")
                        if style_elem is not None
                        else None
                    )

                    heading_level = get_heading_level(style_id, style_to_level)
                    list_style = get_list_style(style_id, style_to_list)
                    para_text = extract_text_with_formatting(child, namespaces)

                    if para_text:
                        if heading_level > 0:
                            markdown_lines.append(f"{'#' * heading_level} {para_text}")
                        elif list_style == "bullet":
                            markdown_lines.append(f"- {para_text}")
                        elif list_style == "number":
                            markdown_lines.append(f"1. {para_text}")
                        else:
                            markdown_lines.append(para_text)
                        markdown_lines.append("")

                elif child.tag.endswith("}tbl"):
                    table_md = convert_table_to_markdown(child, namespaces)
                    if table_md:
                        markdown_lines.append(table_md)
                        markdown_lines.append("")

        content = "\n".join(markdown_lines)
        if not content.strip():
            return None, "document is empty"
        return content, None
    except Exception as e:
        return None, f"XML parsing failed: {str(e)}"


def main() -> None:
    parser = argparse.ArgumentParser(description="Parse a DOCX file into Markdown")

    parser.add_argument("file_path", help="absolute path to the DOCX file")

    parser.add_argument(
        "-n",
        "--context",
        type=int,
        default=2,
        help="used with -s; lines of context before and after each match (blank lines excluded)",
    )

    group = parser.add_mutually_exclusive_group()
    group.add_argument(
        "-c",
        "--count",
        action="store_true",
        help="print the total character count of the parsed markdown",
    )
    group.add_argument(
        "-l",
        "--lines",
        action="store_true",
        help="print the total line count of the parsed markdown",
    )
    group.add_argument(
        "-t",
        "--titles",
        action="store_true",
        help="print the heading lines (levels 1-6) of the parsed markdown",
    )
    group.add_argument(
        "-tc",
        "--title-content",
        help="heading name (without # marks); print that heading and its nested content",
    )
    group.add_argument(
        "-s",
        "--search",
        help="search the document with a regex; print all matches separated by ---",
    )

    args = parser.parse_args()

    if not os.path.exists(args.file_path):
        print(f"Error: file not found: {args.file_path}")
        sys.exit(1)

    if not args.file_path.lower().endswith(".docx"):
        print(f"Warning: file extension is not .docx: {args.file_path}")

    if not is_valid_docx(args.file_path):
        print(f"Error: file is not valid DOCX or is corrupted: {args.file_path}")
        sys.exit(1)

    parsers = [
        ("MarkItDown", parse_with_markitdown),
        ("python-docx", parse_with_python_docx),
        ("native XML parsing", parse_with_xml),
    ]

    failures = []
    content = None

    for parser_name, parser_func in parsers:
        content, error = parser_func(args.file_path)
        if content is not None:
            content = remove_markdown_images(content)
            content = normalize_markdown_whitespace(content)
            break
        else:
            failures.append(f"- {parser_name}: {error}")

    if content is None:
        print("All parsing methods failed:")
        for failure in failures:
            print(failure)
        sys.exit(1)

    if args.count:
        print(len(content.replace("\n", "")))
    elif args.lines:
        print(len(content.split("\n")))
    elif args.titles:
        titles = extract_titles(content)
        for title in titles:
            print(title)
    elif args.title_content:
        title_content = extract_title_content(content, args.title_content)
        if title_content is None:
            print(f"Error: heading '{args.title_content}' not found")
            sys.exit(1)
        print(title_content, end="")
    elif args.search:
        search_result = search_markdown(content, args.search, args.context)
        if search_result is None:
            print(f"Error: invalid regex or no match: '{args.search}'")
            sys.exit(1)
        print(search_result, end="")
    else:
        print(content, end="")


if __name__ == "__main__":
    main()
skills/lyxy-reader-office/SKILL.md (new file, 212 lines)
@@ -0,0 +1,212 @@
|
||||
---
name: lyxy-reader-office
description: A skill that parses four office document formats (docx, xlsx, pptx, pdf), converting them to Markdown. Supports full-text extraction, heading extraction, section extraction, regex search, character counts, and line counts; PDF additionally supports a high-accuracy OCR mode. Read scripts/README.md for detailed usage.
compatibility: Requires Python 3.6+. DOCX/PPTX/XLSX need no extra dependencies (native XML parsing); PDF requires at least pypdf. Recommended: run via the lyxy-runner-python skill with uv-managed dependencies.
---

# Office Document Parsing Skill

Parses Microsoft Office documents (.docx, .pptx, .xlsx) and PDF files into Markdown, with several query modes.

## Purpose

**Unified entry point**: `scripts/parser.py` is the single command-line entry point; it detects the file type automatically and dispatches to the matching format parser.

**Dependency options**: this skill must prefer the lyxy-runner-python skill for execution, falling back to direct Python execution only when it is unavailable.

### lyxy-runner-python is required when available

If the lyxy-runner-python skill exists in the environment, it **must** be used to run parser.py:
- lyxy-runner-python manages dependencies with uv and installs the required third-party libraries automatically
- Isolated environments that do not pollute the system Python
- Cross-platform (Windows/macOS/Linux)

### Fallback to direct execution

**Only when** the lyxy-runner-python skill is absent, fall back to running Python directly:
- The user must install dependencies manually
- DOCX/PPTX/XLSX still work without any dependencies via native XML parsing
- PDF requires at least pypdf
- **Never run pip install automatically**; only suggest installation commands to the user

## When to Use

Use this skill for any task that needs to read or parse .docx, .xlsx, .pptx, or .pdf files.

### Typical scenarios
- **Content extraction**: convert Word/PPT/Excel/PDF documents into readable Markdown text
- **Document metadata**: character counts, line counts, and similar statistics
- **Heading analysis**: extract the document's heading structure
- **Section extraction**: extract the content of a specific section
- **Content search**: search the document for keywords or patterns
- **PDF OCR**: high-accuracy OCR parsing for scanned PDFs

### Not suitable for
- Extracting images (plain text only)
- Preserving complex formatting (fonts, colors, layout)
- Editing or modifying documents
- Legacy formats such as .doc, .xls, .ppt

### Trigger phrases

**Chinese triggers**
- "读取/解析/打开 docx/word 文档"
- "读取/解析/打开 xlsx/excel 文件"
- "读取/解析/打开 pptx/ppt 文件"
- "读取/解析/打开 pdf 文件"

**English triggers**
- "read/parse/extract docx/word/xlsx/excel/pptx/powerpoint/pdf"

**File extensions**
- `.docx`, `.xlsx`, `.pptx`, `.pdf`

## Capabilities

### 1. Full-document conversion to Markdown
Parses the whole document to Markdown, removing images but preserving text formatting (headings, lists, tables, bold, italic, and so on).

Per-format output characteristics:
- **DOCX**: standard Markdown document structure
- **PPTX**: each slide is headed `## Slide N`; slides are separated by `---`
- **XLSX**: worksheets are separated by `## SheetName` headings; data is rendered as Markdown tables
- **PDF**: a plain text stream; `--high-res` enables OCR layout analysis to recover headings

### 2. Document metadata
- Character count (`-c`)
- Line count (`-l`)

### 3. Heading list extraction
Extracts all level 1-6 headings (`-t`), preserving their original hierarchy.

### 4. Section extraction by heading
Extracts the full content of a section by heading name (`-tc`), including the chain of parent headings and all nested content.

### 5. Regular-expression search
Searches the document for keywords or patterns (`-s`), with a configurable number of context lines (`-n`, default 2).

### 6. PDF high-accuracy OCR mode
Enables OCR layout analysis for PDF files (`--high-res`); suited to scanned PDFs or cases where heading levels must be recovered.

## Execution

### Detailed usage reference

**Read `scripts/README.md` before executing.** It covers:
- The complete command-line options
- Dependency installation for each format (pip and uv)
- Parser priorities and comparisons
- Output format details
- Error handling and FAQ

### Basic syntax

```bash
python parser.py <file_path> [options]
```

### Running via lyxy-runner-python (required when available)

```bash
# DOCX - recommended dependencies
uv run --with "markitdown[docx]" skills/lyxy-reader-office/scripts/parser.py /path/to/file.docx

# PPTX - recommended dependencies
uv run --with "markitdown[pptx]" skills/lyxy-reader-office/scripts/parser.py /path/to/file.pptx

# XLSX - recommended dependencies
uv run --with "markitdown[xlsx]" skills/lyxy-reader-office/scripts/parser.py /path/to/file.xlsx

# PDF - recommended dependencies
uv run --with "markitdown[pdf]" --with pypdf skills/lyxy-reader-office/scripts/parser.py /path/to/file.pdf

# PDF high-accuracy OCR mode
uv run --with docling --with pypdf skills/lyxy-reader-office/scripts/parser.py /path/to/file.pdf --high-res
```

> **Note**: these are the minimal recommended dependencies. See the installation section of `scripts/README.md` for the full parser dependencies and install commands.

### Fallback: direct Python execution

Only when the lyxy-runner-python skill is absent:

```bash
python3 skills/lyxy-reader-office/scripts/parser.py /path/to/file.docx
```

### Options

The first six rows below are mutually exclusive; `-n` and `--high-res` are modifiers used alongside them.

| Option | Description |
|------|------|
| (none) | Print the full Markdown content |
| `-c` | Character count |
| `-l` | Line count |
| `-t` | Extract all headings |
| `-tc <name>` | Extract the section under the given heading (`name` without the `#`) |
| `-s <pattern>` | Regular-expression search |
| `-n <num>` | With `-s`, number of context lines (default 2) |
| `--high-res` | PDF only; enable OCR layout analysis |

## Examples

### Extract the full document
```bash
# DOCX
uv run --with "markitdown[docx]" skills/lyxy-reader-office/scripts/parser.py /path/to/report.docx

# PPTX
uv run --with "markitdown[pptx]" skills/lyxy-reader-office/scripts/parser.py /path/to/slides.pptx

# XLSX
uv run --with "markitdown[xlsx]" skills/lyxy-reader-office/scripts/parser.py /path/to/data.xlsx

# PDF
uv run --with "markitdown[pdf]" --with pypdf skills/lyxy-reader-office/scripts/parser.py /path/to/doc.pdf
```

### Count characters
```bash
uv run --with "markitdown[docx]" skills/lyxy-reader-office/scripts/parser.py -c /path/to/report.docx
```

### Extract all headings
```bash
uv run --with "markitdown[docx]" skills/lyxy-reader-office/scripts/parser.py -t /path/to/report.docx
```

### Extract a specific section
```bash
uv run --with "markitdown[docx]" skills/lyxy-reader-office/scripts/parser.py -tc "第一章" /path/to/report.docx
```

### Search for a keyword
```bash
uv run --with "markitdown[docx]" skills/lyxy-reader-office/scripts/parser.py -s "关键词" -n 3 /path/to/report.docx
```

### High-accuracy PDF OCR
```bash
uv run --with docling --with pypdf skills/lyxy-reader-office/scripts/parser.py /path/to/scanned.pdf --high-res
```

## Notes

### Multi-strategy parser fallback

Each file format has several parsers that are tried in priority order; when one fails, the next is attempted automatically. See `scripts/README.md` for the detailed priorities and comparison.

### Limitations

- No image extraction (plain text only)
- No preservation of complex formatting (fonts, colors, layout, and so on)
- No document editing or modification
- Only .docx, .xlsx, .pptx, and .pdf are supported (not legacy .doc, .xls, .ppt)
- PDF has no built-in native XML parser; at least pypdf must be installed

### Best practices

1. **Prefer lyxy-runner-python**: if it exists in the environment, it must be used to run the script
2. **Read the README**: detailed options, dependency installation, and parser comparisons are in `scripts/README.md`
3. **Large files**: for big documents, narrow the scope with section extraction (`-tc`) or search (`-s`)
4. **PDF headings**: PDF is a layout format without semantic headings by default; use `--high-res` when heading levels are needed
5. **No automatic installs**: in the direct-Python fallback, only suggest dependency installation to the user; never run pip install automatically
449  skills/lyxy-reader-office/scripts/README.md  Normal file
@@ -0,0 +1,449 @@
# Document Parser Usage Guide

A modular document parser that converts DOCX, PPTX, XLSX, and PDF files to Markdown.

Each document type has several parsers that are tried in priority order, falling back to the next when one fails. With no third-party libraries installed, DOCX/PPTX/XLSX still work via the built-in native XML parser (PDF requires at least pypdf).

## Quick Start

```bash
# Minimal run (native XML parsing, no dependencies required)
python parser.py report.docx

# Run after installing the recommended dependency
pip install "markitdown[docx]"
python parser.py report.docx

# One-shot run with uv (dependencies installed automatically, no manual pip install)
uv run --with "markitdown[docx]" parser.py report.docx
```

## Command-Line Usage

### Basic syntax

```bash
python parser.py <file_path> [options]
```

`file_path` is the path (relative or absolute) to a DOCX, PPTX, XLSX, or PDF file. With no options, the full Markdown content is printed.

### Options

The following options are mutually exclusive; use at most one at a time:

| Short | Long | Description |
|--------|--------|------|
| `-c` | `--count` | Print the total character count of the parsed document (excluding newlines) |
| `-l` | `--lines` | Print the total line count of the parsed document |
| `-t` | `--titles` | Print all heading lines (levels 1-6, with their `#` prefix) |
| `-tc <name>` | `--title-content <name>` | Extract the given heading and its nested content (`name` without the `#`) |
| `-s <pattern>` | `--search <pattern>` | Search the document with a regular expression and print the matches |

Search helper option (used together with `-s`):

| Short | Long | Description |
|--------|--------|------|
| `-n <num>` | `--context <num>` | Number of non-empty lines of context around each match (default: 2) |

PDF-only option:

| Long | Description |
|--------|------|
| `--high-res` | Enable OCR layout analysis (extra dependencies required; slower) |

### Exit codes

| Exit code | Meaning |
|--------|------|
| `0` | Parsed successfully |
| `1` | Error (file missing, invalid format, all parsers failed, heading not found, invalid regex, or no match) |

### Usage examples

**Print the full Markdown:**

```bash
python parser.py report.docx              # print to the terminal
python parser.py report.docx > output.md  # redirect to a file
```

**Statistics (`-c` / `-l`):**

Prints a single number, convenient for pipelines.

```bash
$ python parser.py report.docx -c
8500

$ python parser.py report.docx -l
215
```

**Extract headings (`-t`):**

One heading per line, keeping the `#` prefix and level. PDFs usually carry no semantic heading levels.

```bash
$ python parser.py report.docx -t
# 第一章 概述
## 1.1 背景
## 1.2 目标
# 第二章 实现
```
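
The heading detection behind `-t` is the `get_heading_level` helper in `common.py`, copied here verbatim for a quick standalone check of its rules (a space is required after the hashes, and only levels 1-6 count):

```python
def get_heading_level(line: str) -> int:
    """Return the Markdown heading level (1-6) of a line, or 0 for non-headings."""
    stripped = line.lstrip()
    if not stripped.startswith("#"):
        return 0
    without_hash = stripped.lstrip("#")
    level = len(stripped) - len(without_hash)
    if not (1 <= level <= 6):
        return 0
    if len(stripped) == level:
        return level
    if stripped[level] != " ":
        return 0
    return level

print(get_heading_level("## 1.1 背景"))    # → 2
print(get_heading_level("#no space"))      # → 0 (no space after the hashes)
print(get_heading_level("####### 7"))      # → 0 (more than six levels)
```

Leading whitespace is stripped first, so indented headings still count; `-t` collects every line for which this function returns a non-zero level.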

**Extract a section (`-tc`):**

Prints the given heading and everything nested under it. If several headings share the name, matches are separated by `---`. Each block includes the chain of parent headings.

```bash
$ python parser.py report.docx -tc "1.1 背景"
# 第一章 概述
## 1.1 背景
这是背景的详细内容...
```
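
The parent-chain behavior above is implemented by `extract_title_content` in `common.py`; both helpers below are condensed slightly from that module so the example runs standalone:

```python
from typing import Optional

def get_heading_level(line: str) -> int:
    """Heading level (1-6) of a Markdown line, 0 for non-headings."""
    stripped = line.lstrip()
    if not stripped.startswith("#"):
        return 0
    without_hash = stripped.lstrip("#")
    level = len(stripped) - len(without_hash)
    if not (1 <= level <= 6):
        return 0
    if len(stripped) == level:
        return level
    if stripped[level] != " ":
        return 0
    return level

def extract_title_content(markdown_text: str, title_name: str) -> Optional[str]:
    """Every matching heading plus its parent-heading chain and nested content."""
    lines = markdown_text.split("\n")
    match_indices = [
        i for i, line in enumerate(lines)
        if get_heading_level(line) > 0
        and line.lstrip()[get_heading_level(line):].strip() == title_name
    ]
    if not match_indices:
        return None
    result_lines = []
    for match_num, idx in enumerate(match_indices):
        if match_num > 0:
            result_lines.append("\n---\n")   # separate same-named sections
        target_level = get_heading_level(lines[idx])
        # Walk upward collecting the chain of parent headings
        parent_titles, current_level = [], target_level
        for i in range(idx - 1, -1, -1):
            line_level = get_heading_level(lines[i])
            if 0 < line_level < current_level:
                parent_titles.append(lines[i])
                current_level = line_level
                if current_level == 1:
                    break
        result_lines.extend(reversed(parent_titles))
        result_lines.append(lines[idx])
        # Collect everything nested under the matched heading
        for i in range(idx + 1, len(lines)):
            line_level = get_heading_level(lines[i])
            if line_level == 0 or line_level > target_level:
                result_lines.append(lines[i])
            else:
                break
    return "\n".join(result_lines)

doc = "# 第一章 概述\nintro\n## 1.1 背景\ndetail\n## 1.2 目标\ngoal"
print(extract_title_content(doc, "1.1 背景"))
# → # 第一章 概述
#   ## 1.1 背景
#   detail
```

Extraction stops at the next heading of the same or higher level, which is why `## 1.2 目标` is excluded.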

**Search (`-s`):**

Python regular-expression syntax is supported. Multiple matches are separated by `---`; `-n` controls the number of context lines.

```bash
$ python parser.py report.docx -s "测试" -n 1
上一行内容
包含**测试**关键词的行
下一行内容
---
另一处上一行
另一处**测试**内容
另一处下一行
```
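
Context is counted in non-empty lines, so blank lines never eat into the `-n` budget, and matches whose gap is within twice the context are merged into one block. The `search_markdown` helper from `common.py`, copied verbatim, shows this:

```python
import re
from typing import Optional

def search_markdown(
    content: str, pattern: str, context_lines: int = 0
) -> Optional[str]:
    """Search a Markdown document with a regex; return matches with context."""
    try:
        regex = re.compile(pattern)
    except re.error:
        return None

    lines = content.split("\n")

    # Map non-empty lines to their original positions
    non_empty_indices = []
    non_empty_to_original = {}
    for i, line in enumerate(lines):
        if line.strip():
            non_empty_indices.append(i)
            non_empty_to_original[i] = len(non_empty_indices) - 1

    matched = [
        non_empty_to_original[i] for i in non_empty_indices if regex.search(lines[i])
    ]
    if not matched:
        return None

    # Merge matches whose gap fits within twice the context window
    merged_ranges = []
    current_start = current_end = matched[0]
    for idx in matched[1:]:
        if idx - current_end <= context_lines * 2:
            current_end = idx
        else:
            merged_ranges.append((current_start, current_end))
            current_start = current_end = idx
    merged_ranges.append((current_start, current_end))

    results = []
    for start, end in merged_ranges:
        context_start_idx = max(0, start - context_lines)
        context_end_idx = min(len(non_empty_indices) - 1, end + context_lines)
        start_line_idx = non_empty_indices[context_start_idx]
        end_line_idx = non_empty_indices[context_end_idx]
        results.append(
            "\n".join(lines[start_line_idx:end_line_idx + 1])
        )
    return "\n---\n".join(results)

print(search_markdown("a\nfoo\nMATCH here\nbar\n\nbaz", "MATCH", context_lines=1))
# → foo
#   MATCH here
#   bar
```

An invalid pattern and a pattern with no matches both return `None`, which is what makes the CLI print its combined "invalid regex or no match" error.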

### Batch processing

```bash
# Linux/Mac
for file in *.docx; do
    python parser.py "$file" > "${file%.docx}.md"
done

# Windows PowerShell
Get-ChildItem *.docx | ForEach-Object {
    python parser.py $_.FullName > ($_.BaseName + ".md")
}
```

### Pipelines

```bash
# Keep only lines containing a keyword
python parser.py report.docx | grep "重要" > important.md

# Count table rows
python parser.py data.xlsx | grep -c "^|"
```

## Installation

The script runs on Python 3.6+. Each document type has several parsers tried in priority order; installing **all** dependencies for a type gives the best compatibility. You can also install only some of them, and the script picks the first available parser.

### DOCX

Priority: Docling → unstructured → pypandoc-binary → MarkItDown → python-docx → native XML

```bash
# pip
pip install docling "unstructured[docx]" markdownify pypandoc-binary "markitdown[docx]" python-docx

# uv (one-shot run, no pre-installation)
uv run --with docling --with "unstructured[docx]" --with markdownify --with pypandoc-binary --with "markitdown[docx]" --with python-docx parser.py report.docx
```

### PPTX

Priority: Docling → unstructured → MarkItDown → python-pptx → native XML

```bash
# pip
pip install docling "unstructured[pptx]" markdownify "markitdown[pptx]" python-pptx

# uv
uv run --with docling --with "unstructured[pptx]" --with markdownify --with "markitdown[pptx]" --with python-pptx parser.py presentation.pptx
```

### XLSX

Priority: Docling → unstructured → MarkItDown → pandas → native XML

```bash
# pip
pip install docling "unstructured[xlsx]" markdownify "markitdown[xlsx]" pandas tabulate

# uv
uv run --with docling --with "unstructured[xlsx]" --with markdownify --with "markitdown[xlsx]" --with pandas --with tabulate parser.py data.xlsx
```

### PDF

Default priority: Docling → unstructured (fast) → MarkItDown → pypdf

`--high-res` priority: Docling OCR → unstructured OCR (hi_res) → Docling → unstructured (fast) → MarkItDown → pypdf

```bash
# pip - basic text extraction (fast strategy, no OCR)
pip install docling "unstructured[pdf]" markdownify "markitdown[pdf]" pypdf

# pip - OCR layout analysis (dependencies for --high-res)
pip install docling "unstructured[pdf]" unstructured-paddleocr "paddlepaddle==2.6.2" ml-dtypes markdownify "markitdown[pdf]" pypdf

# uv - basic text extraction
uv run --with docling --with "unstructured[pdf]" --with markdownify --with "markitdown[pdf]" --with pypdf parser.py report.pdf

# uv - OCR layout analysis
uv run --with docling --with "unstructured[pdf]" --with unstructured-paddleocr --with "paddlepaddle==2.6.2" --with ml-dtypes --with markdownify --with "markitdown[pdf]" --with pypdf parser.py report.pdf --high-res
```

> PDF has no built-in native XML parser; at least pypdf must be installed. In default mode Docling runs without OCR and unstructured uses the fast strategy. With `--high-res`, Docling enables OCR and unstructured switches to the hi_res strategy with PaddleOCR for layout analysis. The hi_res strategy additionally requires `unstructured-paddleocr`, `paddlepaddle==2.6.2`, and `ml-dtypes`. PaddlePaddle must stay pinned to 2.x; 3.x has OneDNN compatibility problems on Windows.

### Install everything

```bash
# pip - basic text extraction (no PDF OCR)
pip install docling "unstructured[docx,pptx,xlsx,pdf]" markdownify pypandoc-binary "markitdown[docx,pptx,xlsx]" python-docx python-pptx pandas tabulate pypdf

# pip - full (with PDF OCR)
pip install docling "unstructured[docx,pptx,xlsx,pdf]" markdownify unstructured-paddleocr "paddlepaddle==2.6.2" ml-dtypes pypandoc-binary "markitdown[docx,pptx,xlsx,pdf]" python-docx python-pptx pandas tabulate pypdf

# uv - basic text extraction
uv run --with docling --with "unstructured[docx,pptx,xlsx,pdf]" --with markdownify --with pypandoc-binary --with "markitdown[docx,pptx,xlsx]" --with python-docx --with python-pptx --with pandas --with tabulate --with pypdf parser.py file.docx

# uv - full (with PDF OCR)
uv run --with docling --with "unstructured[docx,pptx,xlsx,pdf]" --with markdownify --with unstructured-paddleocr --with "paddlepaddle==2.6.2" --with ml-dtypes --with pypandoc-binary --with "markitdown[docx,pptx,xlsx,pdf]" --with python-docx --with python-pptx --with pandas --with tabulate --with pypdf parser.py file.docx
```

### Dependency notes

**MarkItDown**: install the per-format extras; a bare `pip install markitdown` includes no format support.

```bash
pip install "markitdown[docx]"                # DOCX
pip install "markitdown[pptx]"                # PPTX
pip install "markitdown[xlsx]"                # XLSX
pip install "markitdown[pdf]"                 # PDF
pip install "markitdown[docx,pptx,xlsx,pdf]"  # all formats
```

**Docling**: DOCX/PPTX/XLSX are parsed directly from their XML structure via SimplePipeline, with no OCR involved. For PDF, OCR is off by default (`do_ocr=False`) and enabled by `--high-res` (`do_ocr=True`). The first OCR run downloads models to a cache directory, so network access is required.

**unstructured**: also requires `markdownify`. Per-format extras keep the dependency footprint small:

- `unstructured[docx]` - DOCX (needs only `python-docx`)
- `unstructured[pptx]` - PPTX (needs only `python-pptx`)
- `unstructured[xlsx]` - XLSX (needs `openpyxl`, `xlrd`, `pandas`, and more)
- `unstructured` - base package (for the PDF fast strategy)
- `unstructured[all-docs]` - every document type (pulls in many unnecessary OCR/vision dependencies)

**PaddleOCR**: `paddleocr` is not a substitute for `unstructured-paddleocr`; unstructured looks up the module name `unstructured_paddleocr`.

## Output Format

### Markdown document structure

With no options, the full Markdown output contains:

```markdown
# 一级标题

正文段落

## 二级标题

- 无序列表项
- 无序列表项

1. 有序列表项
2. 有序列表项

| 列1 | 列2 | 列3 |
|------|------|------|
| 数据1 | 数据2 | 数据3 |

**粗体** *斜体* <u>下划线</u>
```

### Format-specific structure

**PPTX**: each slide is headed `## Slide N`; slides are separated by `---`:

```markdown
## Slide 1

幻灯片 1 的内容

---

## Slide 2

幻灯片 2 的内容

---
```

**XLSX**: worksheets are separated by `## SheetName` headings; data is rendered as Markdown tables:

```markdown
# Excel数据转换结果 (原生XML解析)

## Sheet1

| 列1 | 列2 | 列3 |
|------|------|------|
| 数据1 | 数据2 | 数据3 |

## Sheet2

| 列A | 列B |
|------|------|
| 值1 | 值2 |
```

**PDF**: a plain text stream, usually without semantic heading levels (PDF is a layout format; headings are only visual styling). Docling or the unstructured hi_res strategy can recover some headings through layout analysis, with accuracy depending on typesetting quality.

### Automatic content processing

The following transformations are applied before output:

| Processing | Description |
|------|------|
| Image removal | Strips `![alt](path)` image syntax |
| Blank-line normalization | Collapses runs of consecutive blank lines into one |
| RGB noise filtering | Drops `R:255 G:128 B:0`-style color-value lines (unstructured parser only) |
| Page-number noise filtering | Drops `— 3 —`-style page-number lines (unstructured parser only) |
| Header/footer filtering | Skips Header/Footer elements automatically (unstructured parser only) |

## Error Handling

### Error messages

```bash
# File does not exist
$ python parser.py missing.docx
错误: 文件不存在: missing.docx

# Invalid format
$ python parser.py readme.txt
错误: 不是有效的 DOCX、PPTX、XLSX 或 PDF 格式: readme.txt

# All parsers failed (DOCX example)
$ python parser.py report.docx
所有解析方法均失败:
- Docling: docling 库未安装
- unstructured: unstructured 库未安装
- pypandoc-binary: pypandoc-binary 库未安装
- MarkItDown: MarkItDown 库未安装
- python-docx: python-docx 库未安装
- XML 原生解析: document.xml 不存在或无法访问

# Heading not found
$ python parser.py report.docx -tc "不存在的标题"
错误: 未找到标题 '不存在的标题'

# Invalid regex or no match
$ python parser.py report.docx -s "[invalid"
错误: 正则表达式无效或未找到匹配: '[invalid'
```

### Parser fallback mechanism

The script tries each parser in priority order. When a parser fails, the reason is recorded (library not installed / parse failure / empty document) and the next one is tried automatically. If all fail, a summary is printed and the script exits with code 1.
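
A minimal, self-contained sketch of that loop. The `(content, error)` return convention matches the real parser helpers in `common.py`; the two stub parsers here are hypothetical stand-ins, not part of the scripts:

```python
from typing import Callable, List, Optional, Tuple

ParseResult = Tuple[Optional[str], Optional[str]]  # (content, error)

def parser_a(path: str) -> ParseResult:
    return None, "docling 库未安装"        # stub: simulate a missing library

def parser_b(path: str) -> ParseResult:
    return "# Parsed content", None        # stub: simulate a successful parse

def parse_with_fallback(
    path: str, parsers: List[Tuple[str, Callable[[str], ParseResult]]]
) -> str:
    failures = []
    for name, func in parsers:             # try parsers in priority order
        content, error = func(path)
        if content is not None:
            return content                 # first success wins
        failures.append(f"- {name}: {error}")
    # All parsers failed: surface the accumulated reasons
    raise RuntimeError("所有解析方法均失败:\n" + "\n".join(failures))

result = parse_with_fallback(
    "report.docx", [("Docling", parser_a), ("MarkItDown", parser_b)]
)
print(result)  # → # Parsed content
```

In `parser.py` the failure case prints the summary and calls `sys.exit(1)` instead of raising, but the control flow is the same.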

## Parser Comparison

### DOCX

| Parser | Pros | Cons | Best for |
|---------|------|--------|---------|
| **Docling** | One dependency covers every format; automatic OCR; stable output structure | First run downloads models; higher memory use | One-stop parsing; OCR needed |
| **unstructured** | Element-type aware; automatic noise filtering; HTML tables to Markdown | Needs `unstructured[docx]` / `[pptx]` / `[xlsx]` + `markdownify` | Structured output; table conversion |
| **pypandoc-binary** | Bundles Pandoc; clean output; clear error messages | DOCX only; large package | Standardized Markdown |
| **MarkItDown** | Official Microsoft project; well-formed output | Terse output | Standard format; automation |
| **python-docx** | Most detailed output; preserves full structure; complex styles | May leave extra blank lines | Precise output control |
| **Native XML** | No dependencies; fast | Limited style handling | Last resort when dependencies are unavailable |

### PPTX

| Parser | Pros | Cons | Best for |
|---------|------|--------|---------|
| **Docling** | Text/table/image OCR; unified Markdown | Model download required | One-shot conversion; PPTX with images |
| **unstructured** | Element aware; filters RGB noise; table conversion | Needs `unstructured[pptx]` + `markdownify` | Structured output |
| **MarkItDown** | Automatic slide separators; concise | Low detail | Quick preview |
| **python-pptx** | Most detailed output; nested lists | Relies on private APIs | Full content extraction |
| **Native XML** | No dependencies; fast | Simple grouping | Last resort |

### XLSX

| Parser | Pros | Cons | Best for |
|---------|------|--------|---------|
| **Docling** | Full-sheet export; merged cells and image OCR | Can be slow on large sheets | Quick full-sheet conversion |
| **unstructured** | Element aware; noise filtering; table conversion | Needs `unstructured[xlsx]` + `markdownify` | Structured output |
| **MarkItDown** | Multi-sheet support; concise | Low detail | Quick preview |
| **pandas** | Powerful; handles complex tables | Needs `pandas` + `tabulate` | Data analysis |
| **Native XML** | No dependencies; all cell types | No data processing | Last resort |

### PDF

| Parser | Mode | Pros | Cons | Best for |
|---------|------|------|--------|---------|
| **Docling** | default | Structured Markdown; table/image placeholders | First run downloads models | PDFs with a text layer |
| **Docling OCR** | `--high-res` | Built-in OCR; structured Markdown | Large models; slow OCR | Scanned PDFs; multilingual |
| **unstructured** | default | Fast strategy; quick | No layout analysis; unreliable headings | Quick text extraction |
| **unstructured OCR** | `--high-res` | hi_res layout analysis + PaddleOCR; heading detection | Extra PaddleOCR dependencies | Layout analysis; OCR |
| **MarkItDown** | either | Official Microsoft project; well-formed output | Terse output | Standard format |
| **pypdf** | either | Lightweight; fast; easy to install | Basic features | Quick text extraction |

## FAQ

### Why is some content missing?

Parsers differ in output detail, and a higher-priority parser is not necessarily the most detailed: Docling and unstructured favor structure, while python-docx/python-pptx produce the most detail but do no noise filtering. Install all dependencies for the format and the script picks the highest-priority parser that is available.

### No heading levels in a PDF?

PDF is a layout format with no semantic heading structure. `--high-res` enables Docling OCR or the unstructured hi_res strategy to recover some headings via layout analysis; accuracy depends on the typesetting. In default mode, locate content with `-s` or gauge document size with `-c` / `-l`.

### Tables look wrong?

The native XML parsers handle complex tables (merged cells, nested tables) poorly. Install Docling, unstructured, or the format's dedicated library for better table handling.

### Garbled Chinese output?

The script emits UTF-8; make sure the terminal supports it:

```bash
# Linux/Mac
export LANG=en_US.UTF-8

# Windows PowerShell
[Console]::OutputEncoding = [System.Text.Encoding]::UTF8
```

### Can I force a specific parser?

Not in the current version; parsers are always chosen by priority. You can steer the choice indirectly by installing only the dependencies of the parser you want, since uninstalled parsers are skipped.

### Large files are slow?

Docling and unstructured are slow on large files (especially with OCR). For quick text extraction, install only the lightweight dependencies (such as pypdf or python-docx) so the script falls back to them. With no dependencies at all, DOCX/PPTX/XLSX use native XML parsing, which is fastest.

## File Layout

```
scripts/
├── common.py        # shared helpers and constants
├── docx_parser.py   # DOCX parsing
├── pptx_parser.py   # PPTX parsing
├── xlsx_parser.py   # XLSX parsing
├── pdf_parser.py    # PDF parsing
├── parser.py        # command-line entry point
└── README.md        # this document
```
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
337  skills/lyxy-reader-office/scripts/common.py  Normal file
@@ -0,0 +1,337 @@
#!/usr/bin/env python3
"""Shared utilities and validation helpers for the document parsers."""

import os
import re
import zipfile
from typing import List, Optional, Tuple

IMAGE_PATTERN = re.compile(r"!\[[^\]]*\]\([^)]+\)")

# unstructured noise: RGB color values from pptx (e.g. "R:255 G:128 B:0")
_RGB_PATTERN = re.compile(r"^R:\d+\s+G:\d+\s+B:\d+$")
# unstructured noise: dash-style page numbers (e.g. "— 3 —")
_PAGE_NUMBER_PATTERN = re.compile(r"^—\s*\d+\s*—$")


def parse_with_markitdown(
    file_path: str,
) -> Tuple[Optional[str], Optional[str]]:
    """Parse a file with the MarkItDown library."""
    try:
        from markitdown import MarkItDown

        md = MarkItDown()
        result = md.convert(file_path)
        if not result.text_content.strip():
            return None, "文档为空"
        return result.text_content, None
    except ImportError:
        return None, "MarkItDown 库未安装"
    except Exception as e:
        return None, f"MarkItDown 解析失败: {str(e)}"


def parse_with_docling(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """Parse a file with the docling library."""
    try:
        from docling.document_converter import DocumentConverter
    except ImportError:
        return None, "docling 库未安装"

    try:
        converter = DocumentConverter()
        result = converter.convert(file_path)
        markdown_content = result.document.export_to_markdown()
        if not markdown_content.strip():
            return None, "文档为空"
        return markdown_content, None
    except Exception as e:
        return None, f"docling 解析失败: {str(e)}"


def build_markdown_table(rows_data: List[List[str]]) -> str:
    """Convert a 2-D list of cells into a Markdown table."""
    if not rows_data or not rows_data[0]:
        return ""

    md_lines = []
    for i, row_data in enumerate(rows_data):
        row_text = [cell if cell else "" for cell in row_data]
        md_lines.append("| " + " | ".join(row_text) + " |")
        if i == 0:
            md_lines.append("| " + " | ".join(["---"] * len(row_text)) + " |")
    return "\n".join(md_lines) + "\n\n"


def flush_list_stack(list_stack: List[str], target: List[str]) -> None:
    """Append the non-empty items of the list stack to target, then clear it."""
    for item in list_stack:
        if item:
            target.append(item + "\n")
    list_stack.clear()


def safe_open_zip(zip_file: zipfile.ZipFile, name: str) -> Optional[zipfile.ZipExtFile]:
    """Open a member of a ZipFile safely, guarding against path traversal."""
    if not name:
        return None
    if name.startswith("/") or name.startswith(".."):
        return None
    if "/../" in name or name.endswith("/.."):
        return None
    if "\\" in name:
        return None
    return zip_file.open(name)


_CONSECUTIVE_BLANK_LINES = re.compile(r"\n{3,}")


def normalize_markdown_whitespace(content: str) -> str:
    """Normalize Markdown whitespace, keeping single blank lines."""
    return _CONSECUTIVE_BLANK_LINES.sub("\n\n", content)


def _is_valid_ooxml(file_path: str, required_files: List[str]) -> bool:
    try:
        with zipfile.ZipFile(file_path, "r") as zip_file:
            names = set(zip_file.namelist())
            return all(r in names for r in required_files)
    except (zipfile.BadZipFile, zipfile.LargeZipFile):
        return False


_DOCX_REQUIRED = ["[Content_Types].xml", "_rels/.rels", "word/document.xml"]
_PPTX_REQUIRED = ["[Content_Types].xml", "_rels/.rels", "ppt/presentation.xml"]
_XLSX_REQUIRED = ["[Content_Types].xml", "_rels/.rels", "xl/workbook.xml"]


def is_valid_docx(file_path: str) -> bool:
    """Check whether the file is a valid DOCX."""
    return _is_valid_ooxml(file_path, _DOCX_REQUIRED)


def is_valid_pptx(file_path: str) -> bool:
    """Check whether the file is a valid PPTX."""
    return _is_valid_ooxml(file_path, _PPTX_REQUIRED)


def is_valid_xlsx(file_path: str) -> bool:
    """Check whether the file is a valid XLSX."""
    return _is_valid_ooxml(file_path, _XLSX_REQUIRED)


def is_valid_pdf(file_path: str) -> bool:
    """Check whether the file is a valid PDF."""
    try:
        with open(file_path, "rb") as f:
            header = f.read(4)
            return header == b"%PDF"
    except (IOError, OSError):
        return False


def remove_markdown_images(markdown_text: str) -> str:
    """Strip image markup from Markdown text."""
    return IMAGE_PATTERN.sub("", markdown_text)


def get_heading_level(line: str) -> int:
    """Return the heading level (1-6) of a Markdown line, or 0 for non-headings."""
    stripped = line.lstrip()
    if not stripped.startswith("#"):
        return 0
    without_hash = stripped.lstrip("#")
    level = len(stripped) - len(without_hash)
    if not (1 <= level <= 6):
        return 0
    if len(stripped) == level:
        return level
    if stripped[level] != " ":
        return 0
    return level


def extract_titles(markdown_text: str) -> List[str]:
    """Extract every heading line (levels 1-6) from Markdown text."""
    title_lines = []
    for line in markdown_text.split("\n"):
        if get_heading_level(line) > 0:
            title_lines.append(line.lstrip())
    return title_lines


def extract_title_content(markdown_text: str, title_name: str) -> Optional[str]:
    """Extract every matching heading with its nested content (parent headings included)."""
    lines = markdown_text.split("\n")
    match_indices = []

    for i, line in enumerate(lines):
        level = get_heading_level(line)
        if level > 0:
            stripped = line.lstrip()
            title_text = stripped[level:].strip()
            if title_text == title_name:
                match_indices.append(i)

    if not match_indices:
        return None

    result_lines = []
    for match_num, idx in enumerate(match_indices):
        if match_num > 0:
            result_lines.append("\n---\n")

        target_level = get_heading_level(lines[idx])

        # Walk upward collecting the chain of parent headings
        parent_titles = []
        current_level = target_level
        for i in range(idx - 1, -1, -1):
            line_level = get_heading_level(lines[i])
            if line_level > 0 and line_level < current_level:
                parent_titles.append(lines[i])
                current_level = line_level
                if current_level == 1:
                    break

        parent_titles.reverse()
        result_lines.extend(parent_titles)

        # Collect everything nested under the matched heading
        result_lines.append(lines[idx])
        for i in range(idx + 1, len(lines)):
            line = lines[i]
            line_level = get_heading_level(line)
            if line_level == 0 or line_level > target_level:
                result_lines.append(line)
            else:
                break

    return "\n".join(result_lines)


def search_markdown(
    content: str, pattern: str, context_lines: int = 0
) -> Optional[str]:
    """Search a Markdown document with a regex; return matches with context."""
    try:
        regex = re.compile(pattern)
    except re.error:
        return None

    lines = content.split("\n")

    # Map non-empty lines to their original positions
    non_empty_indices = []
    non_empty_to_original = {}
    for i, line in enumerate(lines):
        if line.strip():
            non_empty_indices.append(i)
            non_empty_to_original[i] = len(non_empty_indices) - 1

    matched_non_empty_indices = []
    for orig_idx in non_empty_indices:
        if regex.search(lines[orig_idx]):
            matched_non_empty_indices.append(non_empty_to_original[orig_idx])

    if not matched_non_empty_indices:
        return None

    # Merge matches whose gap fits within twice the context window
    merged_ranges = []
    current_start = matched_non_empty_indices[0]
    current_end = matched_non_empty_indices[0]

    for idx in matched_non_empty_indices[1:]:
        if idx - current_end <= context_lines * 2:
            current_end = idx
        else:
            merged_ranges.append((current_start, current_end))
            current_start = idx
            current_end = idx
    merged_ranges.append((current_start, current_end))

    results = []
    for start, end in merged_ranges:
        context_start_idx = max(0, start - context_lines)
        context_end_idx = min(len(non_empty_indices) - 1, end + context_lines)

        start_line_idx = non_empty_indices[context_start_idx]
        end_line_idx = non_empty_indices[context_end_idx]

        result_lines = [
            line
            for i, line in enumerate(lines)
            if start_line_idx <= i <= end_line_idx
        ]
        results.append("\n".join(result_lines))

    return "\n---\n".join(results)


_FILE_TYPE_VALIDATORS = {
    ".docx": is_valid_docx,
    ".pptx": is_valid_pptx,
    ".xlsx": is_valid_xlsx,
    ".pdf": is_valid_pdf,
}


def detect_file_type(file_path: str) -> Optional[str]:
    """Detect the file type; returns 'docx', 'pptx', 'xlsx', or 'pdf'."""
    ext = os.path.splitext(file_path)[1].lower()
    validator = _FILE_TYPE_VALIDATORS.get(ext)
    if validator and validator(file_path):
        return ext.lstrip(".")
    return None


def _unstructured_elements_to_markdown(
    elements: list, trust_titles: bool = True
) -> str:
    """Convert a list of unstructured elements to Markdown text."""
    try:
        import markdownify as md_lib
        from unstructured.documents.elements import (
            Footer,
            Header,
            Image,
            ListItem,
            PageBreak,
            PageNumber,
            Table,
            Title,
        )
    except ImportError:
        return "\n\n".join(
            el.text for el in elements if hasattr(el, "text") and el.text and el.text.strip()
        )

    skip_types = (Header, Footer, PageBreak, PageNumber)
    parts = []

    for el in elements:
        if isinstance(el, skip_types):
            continue
        text = el.text.strip() if hasattr(el, "text") else str(el).strip()
        if not text or _RGB_PATTERN.match(text) or _PAGE_NUMBER_PATTERN.match(text):
            continue

        if isinstance(el, Table):
            html = getattr(el.metadata, "text_as_html", None)
            if html:
                parts.append(md_lib.markdownify(html, strip=["img"]).strip())
            else:
                parts.append(str(el))
        elif isinstance(el, Title) and trust_titles:
            depth = getattr(el.metadata, "category_depth", None) or 1
            depth = min(max(depth, 1), 4)
            parts.append(f"{'#' * depth} {text}")
        elif isinstance(el, ListItem):
            parts.append(f"- {text}")
        elif isinstance(el, Image):
            path = getattr(el.metadata, "image_path", None) or ""
            if path:
                # Assumed image reference; remove_markdown_images strips it downstream
                parts.append(f"![]({path})")
        else:
            parts.append(text)

    return "\n\n".join(parts)
308  skills/lyxy-reader-office/scripts/docx_parser.py  Normal file
@@ -0,0 +1,308 @@
#!/usr/bin/env python3
"""DOCX parsing module with multiple parsing strategies."""

import xml.etree.ElementTree as ET
import zipfile
from typing import Any, List, Optional, Tuple

from common import (
    _unstructured_elements_to_markdown,
    build_markdown_table,
    parse_with_docling,
    parse_with_markitdown,
    safe_open_zip,
)


def parse_docx_with_docling(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """Parse a DOCX file with the docling library."""
    return parse_with_docling(file_path)


def parse_docx_with_unstructured(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """Parse a DOCX file with the unstructured library."""
    try:
        from unstructured.partition.docx import partition_docx
    except ImportError:
        return None, "unstructured 库未安装"

    try:
        elements = partition_docx(filename=file_path, infer_table_structure=True)
        content = _unstructured_elements_to_markdown(elements)
        if not content.strip():
            return None, "文档为空"
        return content, None
    except Exception as e:
        return None, f"unstructured 解析失败: {str(e)}"


def parse_docx_with_pypandoc(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """Parse a DOCX file with the pypandoc-binary library."""
    try:
        import pypandoc
    except ImportError:
        return None, "pypandoc-binary 库未安装"

    try:
        content = pypandoc.convert_file(
            source_file=file_path,
            to="md",
            format="docx",
            outputfile=None,
            extra_args=["--wrap=none"],
        )
    except OSError as exc:
        return None, f"pypandoc-binary 缺少 Pandoc 可执行文件: {exc}"
    except RuntimeError as exc:
        return None, f"pypandoc-binary 解析失败: {exc}"

    content = content.strip()
    if not content:
        return None, "文档为空"
    return content, None


def parse_docx_with_markitdown(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """Parse a DOCX file with the MarkItDown library."""
    return parse_with_markitdown(file_path)


def parse_docx_with_python_docx(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """Parse a DOCX file with the python-docx library."""
    try:
        from docx import Document
    except ImportError:
        return None, "python-docx 库未安装"

    try:
        doc = Document(file_path)

        _HEADING_LEVELS = {
            "Title": 1, "Heading 1": 1, "Heading 2": 2, "Heading 3": 3,
            "Heading 4": 4, "Heading 5": 5, "Heading 6": 6,
        }

        def get_heading_level(para: Any) -> int:
            if para.style and para.style.name:
                return _HEADING_LEVELS.get(para.style.name, 0)
            return 0

        _LIST_STYLES = {
            "Bullet": "bullet", "Number": "number",
        }

        def get_list_style(para: Any) -> Optional[str]:
            if not para.style or not para.style.name:
|
||||
return None
|
||||
style_name = para.style.name
|
||||
if style_name in _LIST_STYLES:
|
||||
return _LIST_STYLES[style_name]
|
||||
if style_name.startswith("List Bullet"):
|
||||
return "bullet"
|
||||
if style_name.startswith("List Number"):
|
||||
return "number"
|
||||
return None
|
||||
|
||||
def convert_runs_to_markdown(runs: List[Any]) -> str:
|
||||
result = []
|
||||
for run in runs:
|
||||
text = run.text
|
||||
if not text:
|
||||
continue
|
||||
if run.bold:
|
||||
text = f"**{text}**"
|
||||
if run.italic:
|
||||
text = f"*{text}*"
|
||||
if run.underline:
|
||||
text = f"<u>{text}</u>"
|
||||
result.append(text)
|
||||
return "".join(result)
|
||||
|
||||
def convert_table_to_markdown(table: Any) -> str:
|
||||
rows_data = []
|
||||
for row in table.rows:
|
||||
row_data = []
|
||||
for cell in row.cells:
|
||||
cell_text = cell.text.strip().replace("\n", " ")
|
||||
row_data.append(cell_text)
|
||||
rows_data.append(row_data)
|
||||
return build_markdown_table(rows_data)
|
||||
|
||||
markdown_lines = []
|
||||
prev_was_list = False
|
||||
|
||||
from docx.table import Table as DocxTable
|
||||
from docx.text.paragraph import Paragraph
|
||||
|
||||
for element in doc.element.body:
|
||||
if element.tag.endswith('}p'):
|
||||
para = Paragraph(element, doc)
|
||||
text = convert_runs_to_markdown(para.runs)
|
||||
if not text.strip():
|
||||
continue
|
||||
|
||||
heading_level = get_heading_level(para)
|
||||
if heading_level > 0:
|
||||
markdown_lines.append(f"{'#' * heading_level} {text}")
|
||||
prev_was_list = False
|
||||
else:
|
||||
list_style = get_list_style(para)
|
||||
if list_style == "bullet":
|
||||
if not prev_was_list and markdown_lines:
|
||||
markdown_lines.append("")
|
||||
markdown_lines.append(f"- {text}")
|
||||
prev_was_list = True
|
||||
elif list_style == "number":
|
||||
if not prev_was_list and markdown_lines:
|
||||
markdown_lines.append("")
|
||||
markdown_lines.append(f"1. {text}")
|
||||
prev_was_list = True
|
||||
else:
|
||||
if prev_was_list and markdown_lines:
|
||||
markdown_lines.append("")
|
||||
markdown_lines.append(text)
|
||||
markdown_lines.append("")
|
||||
prev_was_list = False
|
||||
|
||||
elif element.tag.endswith('}tbl'):
|
||||
table = DocxTable(element, doc)
|
||||
table_md = convert_table_to_markdown(table)
|
||||
if table_md:
|
||||
markdown_lines.append(table_md)
|
||||
markdown_lines.append("")
|
||||
prev_was_list = False
|
||||
|
||||
content = "\n".join(markdown_lines)
|
||||
if not content.strip():
|
||||
return None, "文档为空"
|
||||
return content, None
|
||||
except Exception as e:
|
||||
return None, f"python-docx 解析失败: {str(e)}"
|
||||
|
||||
|
||||
def parse_docx_with_xml(file_path: str) -> Tuple[Optional[str], Optional[str]]:
|
||||
"""使用 XML 原生解析 DOCX 文件"""
|
||||
word_namespace = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"
|
||||
namespaces = {"w": word_namespace}
|
||||
|
||||
_STYLE_NAME_TO_HEADING = {
|
||||
"title": 1, "heading 1": 1, "heading 2": 2, "heading 3": 3,
|
||||
"heading 4": 4, "heading 5": 5, "heading 6": 6,
|
||||
}
|
||||
|
||||
def get_heading_level(style_id: Optional[str], style_to_level: dict) -> int:
|
||||
return style_to_level.get(style_id, 0)
|
||||
|
||||
def get_list_style(style_id: Optional[str], style_to_list: dict) -> Optional[str]:
|
||||
return style_to_list.get(style_id, None)
|
||||
|
||||
def extract_text_with_formatting(para: Any, namespaces: dict) -> str:
|
||||
texts = []
|
||||
for run in para.findall(".//w:r", namespaces=namespaces):
|
||||
text_elem = run.find(".//w:t", namespaces=namespaces)
|
||||
if text_elem is not None and text_elem.text:
|
||||
text = text_elem.text
|
||||
bold = run.find(".//w:b", namespaces=namespaces) is not None
|
||||
italic = run.find(".//w:i", namespaces=namespaces) is not None
|
||||
if bold:
|
||||
text = f"**{text}**"
|
||||
if italic:
|
||||
text = f"*{text}*"
|
||||
texts.append(text)
|
||||
return "".join(texts).strip()
|
||||
|
||||
def convert_table_to_markdown(table_elem: Any, namespaces: dict) -> str:
|
||||
rows = table_elem.findall(".//w:tr", namespaces=namespaces)
|
||||
if not rows:
|
||||
return ""
|
||||
rows_data = []
|
||||
for row in rows:
|
||||
cells = row.findall(".//w:tc", namespaces=namespaces)
|
||||
cell_texts = []
|
||||
for cell in cells:
|
||||
cell_text = extract_text_with_formatting(cell, namespaces)
|
||||
cell_text = cell_text.replace("\n", " ").strip()
|
||||
cell_texts.append(cell_text if cell_text else "")
|
||||
if cell_texts:
|
||||
rows_data.append(cell_texts)
|
||||
return build_markdown_table(rows_data)
|
||||
|
||||
try:
|
||||
style_to_level = {}
|
||||
style_to_list = {}
|
||||
markdown_lines = []
|
||||
|
||||
with zipfile.ZipFile(file_path) as zip_file:
|
||||
try:
|
||||
styles_file = safe_open_zip(zip_file, "word/styles.xml")
|
||||
if styles_file:
|
||||
styles_root = ET.parse(styles_file).getroot()
|
||||
for style in styles_root.findall(
|
||||
".//w:style", namespaces=namespaces
|
||||
):
|
||||
style_id = style.get(f"{{{word_namespace}}}styleId")
|
||||
style_name_elem = style.find("w:name", namespaces=namespaces)
|
||||
if style_id and style_name_elem is not None:
|
||||
style_name = style_name_elem.get(f"{{{word_namespace}}}val")
|
||||
if style_name:
|
||||
style_name_lower = style_name.lower()
|
||||
if style_name_lower in _STYLE_NAME_TO_HEADING:
|
||||
style_to_level[style_id] = _STYLE_NAME_TO_HEADING[style_name_lower]
|
||||
elif (
|
||||
style_name_lower.startswith("list bullet")
|
||||
or style_name_lower == "bullet"
|
||||
):
|
||||
style_to_list[style_id] = "bullet"
|
||||
elif (
|
||||
style_name_lower.startswith("list number")
|
||||
or style_name_lower == "number"
|
||||
):
|
||||
style_to_list[style_id] = "number"
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
document_file = safe_open_zip(zip_file, "word/document.xml")
|
||||
if not document_file:
|
||||
return None, "document.xml 不存在或无法访问"
|
||||
|
||||
root = ET.parse(document_file).getroot()
|
||||
body = root.find(".//w:body", namespaces=namespaces)
|
||||
if body is None:
|
||||
return None, "document.xml 中未找到 w:body 元素"
|
||||
|
||||
for child in body.findall("./*", namespaces=namespaces):
|
||||
if child.tag.endswith("}p"):
|
||||
style_elem = child.find(".//w:pStyle", namespaces=namespaces)
|
||||
style_id = (
|
||||
style_elem.get(f"{{{word_namespace}}}val")
|
||||
if style_elem is not None
|
||||
else None
|
||||
)
|
||||
|
||||
heading_level = get_heading_level(style_id, style_to_level)
|
||||
list_style = get_list_style(style_id, style_to_list)
|
||||
para_text = extract_text_with_formatting(child, namespaces)
|
||||
|
||||
if para_text:
|
||||
if heading_level > 0:
|
||||
markdown_lines.append(f"{'#' * heading_level} {para_text}")
|
||||
elif list_style == "bullet":
|
||||
markdown_lines.append(f"- {para_text}")
|
||||
elif list_style == "number":
|
||||
markdown_lines.append(f"1. {para_text}")
|
||||
else:
|
||||
markdown_lines.append(para_text)
|
||||
markdown_lines.append("")
|
||||
|
||||
elif child.tag.endswith("}tbl"):
|
||||
table_md = convert_table_to_markdown(child, namespaces)
|
||||
if table_md:
|
||||
markdown_lines.append(table_md)
|
||||
markdown_lines.append("")
|
||||
|
||||
content = "\n".join(markdown_lines)
|
||||
if not content.strip():
|
||||
return None, "文档为空"
|
||||
return content, None
|
||||
except Exception as e:
|
||||
return None, f"XML 解析失败: {str(e)}"
|
||||
166
skills/lyxy-reader-office/scripts/parser.py
Normal file
@@ -0,0 +1,166 @@
#!/usr/bin/env python3
"""文档解析器命令行交互模块,提供命令行接口。支持 DOCX、PPTX、XLSX 和 PDF 文件。"""

import argparse
import logging
import os
import sys
import warnings

# 抑制第三方库的进度条和日志,仅保留解析结果输出
os.environ["HF_HUB_DISABLE_PROGRESS_BARS"] = "1"
os.environ["HF_HUB_DISABLE_TELEMETRY"] = "1"
os.environ["TQDM_DISABLE"] = "1"
warnings.filterwarnings("ignore")
logging.disable(logging.WARNING)

import common
import docx_parser
import pdf_parser
import pptx_parser
import xlsx_parser


def main() -> None:
    parser = argparse.ArgumentParser(
        description="将 DOCX、PPTX、XLSX 或 PDF 文件解析为 Markdown"
    )

    parser.add_argument("file_path", help="DOCX、PPTX、XLSX 或 PDF 文件的绝对路径")

    parser.add_argument(
        "-n",
        "--context",
        type=int,
        default=2,
        help="与 -s 配合使用,指定每个检索结果包含的前后行数(不包含空行)",
    )

    parser.add_argument(
        "--high-res",
        action="store_true",
        help="PDF 解析时启用 OCR 版面分析(需要额外依赖,处理较慢)",
    )

    group = parser.add_mutually_exclusive_group()
    group.add_argument(
        "-c", "--count", action="store_true", help="返回解析后的 markdown 文档的总字数"
    )
    group.add_argument(
        "-l", "--lines", action="store_true", help="返回解析后的 markdown 文档的总行数"
    )
    group.add_argument(
        "-t",
        "--titles",
        action="store_true",
        help="返回解析后的 markdown 文档的标题行(1-6级)",
    )
    group.add_argument(
        "-tc",
        "--title-content",
        help="指定标题名称,输出该标题及其下级内容(不包含#号)",
    )
    group.add_argument(
        "-s",
        "--search",
        help="使用正则表达式搜索文档,返回所有匹配结果(用---分隔)",
    )

    args = parser.parse_args()

    if not os.path.exists(args.file_path):
        print(f"错误: 文件不存在: {args.file_path}")
        sys.exit(1)

    file_type = common.detect_file_type(args.file_path)
    if not file_type:
        print(f"错误: 不是有效的 DOCX、PPTX、XLSX 或 PDF 格式: {args.file_path}")
        sys.exit(1)

    if file_type == "docx":
        parsers = [
            ("docling", docx_parser.parse_docx_with_docling),
            ("unstructured", docx_parser.parse_docx_with_unstructured),
            ("pypandoc-binary", docx_parser.parse_docx_with_pypandoc),
            ("MarkItDown", docx_parser.parse_docx_with_markitdown),
            ("python-docx", docx_parser.parse_docx_with_python_docx),
            ("XML 原生解析", docx_parser.parse_docx_with_xml),
        ]
    elif file_type == "pptx":
        parsers = [
            ("docling", pptx_parser.parse_pptx_with_docling),
            ("unstructured", pptx_parser.parse_pptx_with_unstructured),
            ("MarkItDown", pptx_parser.parse_pptx_with_markitdown),
            ("python-pptx", pptx_parser.parse_pptx_with_python_pptx),
            ("XML 原生解析", pptx_parser.parse_pptx_with_xml),
        ]
    elif file_type == "xlsx":
        parsers = [
            ("docling", xlsx_parser.parse_xlsx_with_docling),
            ("unstructured", xlsx_parser.parse_xlsx_with_unstructured),
            ("MarkItDown", xlsx_parser.parse_xlsx_with_markitdown),
            ("pandas", xlsx_parser.parse_xlsx_with_pandas),
            ("XML 原生解析", xlsx_parser.parse_xlsx_with_xml),
        ]
    else:
        if args.high_res:
            parsers = [
                ("docling OCR", pdf_parser.parse_pdf_with_docling_ocr),
                ("unstructured OCR", pdf_parser.parse_pdf_with_unstructured_ocr),
                ("docling", pdf_parser.parse_pdf_with_docling),
                ("unstructured", pdf_parser.parse_pdf_with_unstructured),
                ("MarkItDown", pdf_parser.parse_pdf_with_markitdown),
                ("pypdf", pdf_parser.parse_pdf_with_pypdf),
            ]
        else:
            parsers = [
                ("docling", pdf_parser.parse_pdf_with_docling),
                ("unstructured", pdf_parser.parse_pdf_with_unstructured),
                ("MarkItDown", pdf_parser.parse_pdf_with_markitdown),
                ("pypdf", pdf_parser.parse_pdf_with_pypdf),
            ]

    failures = []
    content = None

    for parser_name, parser_func in parsers:
        content, error = parser_func(args.file_path)
        if content is not None:
            content = common.remove_markdown_images(content)
            content = common.normalize_markdown_whitespace(content)
            break
        else:
            failures.append(f"- {parser_name}: {error}")

    if content is None:
        print("所有解析方法均失败:")
        for failure in failures:
            print(failure)
        sys.exit(1)

    if args.count:
        print(len(content.replace("\n", "")))
    elif args.lines:
        print(len(content.split("\n")))
    elif args.titles:
        titles = common.extract_titles(content)
        for title in titles:
            print(title)
    elif args.title_content:
        title_content = common.extract_title_content(content, args.title_content)
        if title_content is None:
            print(f"错误: 未找到标题 '{args.title_content}'")
            sys.exit(1)
        print(title_content, end="")
    elif args.search:
        search_result = common.search_markdown(content, args.search, args.context)
        if search_result is None:
            print(f"错误: 正则表达式无效或未找到匹配: '{args.search}'")
            sys.exit(1)
        print(search_result, end="")
    else:
        print(content, end="")


if __name__ == "__main__":
    main()
134
skills/lyxy-reader-office/scripts/pdf_parser.py
Normal file
@@ -0,0 +1,134 @@
#!/usr/bin/env python3
"""PDF 文件解析模块,提供多种解析方法。"""

from typing import Optional, Tuple

from common import _unstructured_elements_to_markdown, parse_with_markitdown


def parse_pdf_with_docling(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """使用 docling 库解析 PDF 文件(不启用 OCR)"""
    try:
        from docling.datamodel.base_models import InputFormat
        from docling.datamodel.pipeline_options import PdfPipelineOptions
        from docling.document_converter import DocumentConverter, PdfFormatOption
    except ImportError:
        return None, "docling 库未安装"

    try:
        converter = DocumentConverter(
            format_options={
                InputFormat.PDF: PdfFormatOption(
                    pipeline_options=PdfPipelineOptions(do_ocr=False)
                )
            }
        )
        result = converter.convert(file_path)
        markdown_content = result.document.export_to_markdown()
        if not markdown_content.strip():
            return None, "文档为空"
        return markdown_content, None
    except Exception as e:
        return None, f"docling 解析失败: {str(e)}"


def parse_pdf_with_docling_ocr(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """使用 docling 库解析 PDF 文件(启用 OCR)"""
    try:
        from docling.document_converter import DocumentConverter
    except ImportError:
        return None, "docling 库未安装"

    try:
        converter = DocumentConverter()
        result = converter.convert(file_path)
        markdown_content = result.document.export_to_markdown()
        if not markdown_content.strip():
            return None, "文档为空"
        return markdown_content, None
    except Exception as e:
        return None, f"docling OCR 解析失败: {str(e)}"


def parse_pdf_with_unstructured(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """使用 unstructured 库解析 PDF 文件(fast 策略)"""
    try:
        from unstructured.partition.pdf import partition_pdf
    except ImportError:
        return None, "unstructured 库未安装"

    try:
        elements = partition_pdf(
            filename=file_path,
            infer_table_structure=True,
            strategy="fast",
            languages=["chi_sim"],
        )
        # fast 策略不做版面分析,Title 类型标注不可靠
        content = _unstructured_elements_to_markdown(elements, trust_titles=False)
        if not content.strip():
            return None, "文档为空"
        return content, None
    except Exception as e:
        return None, f"unstructured 解析失败: {str(e)}"


def parse_pdf_with_unstructured_ocr(
    file_path: str,
) -> Tuple[Optional[str], Optional[str]]:
    """使用 unstructured 库解析 PDF 文件(hi_res 策略 + PaddleOCR)"""
    try:
        from unstructured.partition.pdf import partition_pdf
    except ImportError:
        return None, "unstructured 库未安装"

    try:
        from unstructured.partition.utils.constants import OCR_AGENT_PADDLE
    except ImportError:
        return None, "unstructured-paddleocr 库未安装"

    try:
        elements = partition_pdf(
            filename=file_path,
            infer_table_structure=True,
            strategy="hi_res",
            languages=["chi_sim"],
            ocr_agent=OCR_AGENT_PADDLE,
            table_ocr_agent=OCR_AGENT_PADDLE,
        )
        content = _unstructured_elements_to_markdown(elements, trust_titles=True)
        if not content.strip():
            return None, "文档为空"
        return content, None
    except Exception as e:
        return None, f"unstructured OCR 解析失败: {str(e)}"


def parse_pdf_with_markitdown(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """使用 MarkItDown 库解析 PDF 文件"""
    return parse_with_markitdown(file_path)


def parse_pdf_with_pypdf(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """使用 pypdf 库解析 PDF 文件"""
    try:
        from pypdf import PdfReader
    except ImportError:
        return None, "pypdf 库未安装"

    try:
        reader = PdfReader(file_path)
        md_content = []

        for page in reader.pages:
            text = page.extract_text(extraction_mode="plain")
            if text and text.strip():
                md_content.append(text.strip())
                md_content.append("")

        content = "\n".join(md_content).strip()
        if not content:
            return None, "文档为空"
        return content, None
    except Exception as e:
        return None, f"pypdf 解析失败: {str(e)}"
330
skills/lyxy-reader-office/scripts/pptx_parser.py
Normal file
@@ -0,0 +1,330 @@
#!/usr/bin/env python3
"""PPTX 文件解析模块,提供多种解析方法。"""

import re
import xml.etree.ElementTree as ET
import zipfile
from typing import Any, List, Optional, Tuple

from common import (
    _unstructured_elements_to_markdown,
    build_markdown_table,
    flush_list_stack,
    parse_with_docling,
    parse_with_markitdown,
)


def parse_pptx_with_docling(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """使用 docling 库解析 PPTX 文件"""
    return parse_with_docling(file_path)


def parse_pptx_with_unstructured(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """使用 unstructured 库解析 PPTX 文件"""
    try:
        from unstructured.partition.pptx import partition_pptx
    except ImportError:
        return None, "unstructured 库未安装"

    try:
        elements = partition_pptx(
            filename=file_path, infer_table_structure=True, include_metadata=True
        )
        content = _unstructured_elements_to_markdown(elements)
        if not content.strip():
            return None, "文档为空"
        return content, None
    except Exception as e:
        return None, f"unstructured 解析失败: {str(e)}"


def parse_pptx_with_markitdown(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """使用 MarkItDown 库解析 PPTX 文件"""
    return parse_with_markitdown(file_path)


def extract_formatted_text_pptx(runs: List[Any]) -> str:
    """从 PPTX 文本运行中提取带有格式的文本"""
    result = []
    for run in runs:
        if not run.text:
            continue

        text = run.text

        font = run.font
        is_bold = getattr(font, "bold", False) or False
        is_italic = getattr(font, "italic", False) or False

        if is_bold and is_italic:
            text = f"***{text}***"
        elif is_bold:
            text = f"**{text}**"
        elif is_italic:
            text = f"*{text}*"

        result.append(text)

    return "".join(result).strip()


def convert_table_to_md_pptx(table: Any) -> str:
    """将 PPTX 表格转换为 Markdown 格式"""
    rows_data = []
    for row in table.rows:
        row_data = []
        for cell in row.cells:
            cell_content = []
            for para in cell.text_frame.paragraphs:
                text = extract_formatted_text_pptx(para.runs)
                if text:
                    cell_content.append(text)
            cell_text = " ".join(cell_content).strip()
            row_data.append(cell_text if cell_text else "")
        rows_data.append(row_data)
    return build_markdown_table(rows_data)


def parse_pptx_with_python_pptx(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """使用 python-pptx 库解析 PPTX 文件"""
    try:
        from pptx import Presentation
        from pptx.enum.shapes import MSO_SHAPE_TYPE
    except ImportError:
        return None, "python-pptx 库未安装"

    _A_NS = {"a": "http://schemas.openxmlformats.org/drawingml/2006/main"}

    try:
        prs = Presentation(file_path)
        md_content = []

        for slide_num, slide in enumerate(prs.slides, 1):
            md_content.append(f"\n## Slide {slide_num}\n")

            list_stack = []

            for shape in slide.shapes:
                if shape.shape_type == MSO_SHAPE_TYPE.PICTURE:
                    continue

                if hasattr(shape, "has_table") and shape.has_table:
                    if list_stack:
                        flush_list_stack(list_stack, md_content)

                    table_md = convert_table_to_md_pptx(shape.table)
                    md_content.append(table_md)

                if hasattr(shape, "text_frame"):
                    for para in shape.text_frame.paragraphs:
                        pPr = para._element.pPr
                        is_list = False
                        if pPr is not None:
                            is_list = (
                                para.level > 0
                                or pPr.find(".//a:buChar", namespaces=_A_NS) is not None
                                or pPr.find(".//a:buAutoNum", namespaces=_A_NS) is not None
                            )

                        if is_list:
                            level = para.level

                            while len(list_stack) <= level:
                                list_stack.append("")

                            text = extract_formatted_text_pptx(para.runs)
                            if text:
                                is_ordered = (
                                    pPr is not None
                                    and pPr.find(".//a:buAutoNum", namespaces=_A_NS) is not None
                                )
                                marker = "1. " if is_ordered else "- "
                                indent = "  " * level
                                list_stack[level] = f"{indent}{marker}{text}"

                                for i in range(len(list_stack)):
                                    if list_stack[i]:
                                        md_content.append(list_stack[i] + "\n")
                                        list_stack[i] = ""
                        else:
                            if list_stack:
                                flush_list_stack(list_stack, md_content)

                            text = extract_formatted_text_pptx(para.runs)
                            if text:
                                md_content.append(f"{text}\n")

            if list_stack:
                flush_list_stack(list_stack, md_content)

            md_content.append("---\n")

        content = "\n".join(md_content)
        if not content.strip():
            return None, "文档为空"
        return content, None
    except Exception as e:
        return None, f"python-pptx 解析失败: {str(e)}"


def parse_pptx_with_xml(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """使用 XML 原生解析 PPTX 文件"""
    pptx_namespace = {
        "a": "http://schemas.openxmlformats.org/drawingml/2006/main",
        "p": "http://schemas.openxmlformats.org/presentationml/2006/main",
        "r": "http://schemas.openxmlformats.org/officeDocument/2006/relationships",
    }

    def extract_text_with_formatting_xml(text_elem: Any, namespaces: dict) -> str:
        result = []
        runs = text_elem.findall(".//a:r", namespaces=namespaces)
        for run in runs:
            t_elem = run.find(".//a:t", namespaces=namespaces)
            if t_elem is None or not t_elem.text:
                continue

            text = t_elem.text

            rPr = run.find(".//a:rPr", namespaces=namespaces)
            is_bold = False
            is_italic = False

            if rPr is not None:
                is_bold = rPr.find(".//a:b", namespaces=namespaces) is not None
                is_italic = rPr.find(".//a:i", namespaces=namespaces) is not None

            if is_bold and is_italic:
                text = f"***{text}***"
            elif is_bold:
                text = f"**{text}**"
            elif is_italic:
                text = f"*{text}*"

            result.append(text)

        return "".join(result).strip() if result else ""

    def convert_table_to_md_xml(table_elem: Any, namespaces: dict) -> str:
        rows = table_elem.findall(".//a:tr", namespaces=namespaces)
        if not rows:
            return ""

        rows_data = []
        for row in rows:
            cells = row.findall(".//a:tc", namespaces=namespaces)
            row_data = []
            for cell in cells:
                cell_text = extract_text_with_formatting_xml(cell, namespaces)
                if cell_text:
                    cell_text = cell_text.replace("\n", " ").replace("\r", "")
                row_data.append(cell_text if cell_text else "")
            rows_data.append(row_data)
        return build_markdown_table(rows_data)

    def is_list_item_xml(p_elem: Any, namespaces: dict) -> Tuple[bool, bool]:
        if p_elem is None:
            return False, False

        pPr = p_elem.find(".//a:pPr", namespaces=namespaces)
        if pPr is None:
            return False, False

        buChar = pPr.find(".//a:buChar", namespaces=namespaces)
        if buChar is not None:
            return True, False

        buAutoNum = pPr.find(".//a:buAutoNum", namespaces=namespaces)
        if buAutoNum is not None:
            return True, True

        return False, False

    def get_indent_level_xml(p_elem: Any, namespaces: dict) -> int:
        if p_elem is None:
            return 0

        pPr = p_elem.find(".//a:pPr", namespaces=namespaces)
        if pPr is None:
            return 0

        lvl = pPr.get("lvl")
        return int(lvl) if lvl else 0

    try:
        md_content = []

        with zipfile.ZipFile(file_path) as zip_file:
            slide_files = [
                f
                for f in zip_file.namelist()
                if re.match(r"ppt/slides/slide\d+\.xml$", f)
            ]
            slide_files.sort(
                key=lambda f: int(re.search(r"slide(\d+)\.xml$", f).group(1))
            )

            for slide_idx, slide_file in enumerate(slide_files, 1):
                md_content.append("\n## Slide {}\n".format(slide_idx))

                with zip_file.open(slide_file) as slide_xml:
                    slide_root = ET.parse(slide_xml).getroot()

                tx_bodies = slide_root.findall(
                    ".//p:sp/p:txBody", namespaces=pptx_namespace
                )

                tables = slide_root.findall(".//a:tbl", namespaces=pptx_namespace)
                for table in tables:
                    table_md = convert_table_to_md_xml(table, pptx_namespace)
                    if table_md:
                        md_content.append(table_md)

                for tx_body in tx_bodies:
                    paragraphs = tx_body.findall(
                        ".//a:p", namespaces=pptx_namespace
                    )
                    list_stack = []

                    for para in paragraphs:
                        is_list, is_ordered = is_list_item_xml(para, pptx_namespace)

                        if is_list:
                            level = get_indent_level_xml(para, pptx_namespace)

                            while len(list_stack) <= level:
                                list_stack.append("")

                            text = extract_text_with_formatting_xml(
                                para, pptx_namespace
                            )
                            if text:
                                marker = "1. " if is_ordered else "- "
                                indent = "  " * level
                                list_stack[level] = f"{indent}{marker}{text}"

                                for i in range(len(list_stack)):
                                    if list_stack[i]:
                                        md_content.append(list_stack[i] + "\n")
                                        list_stack[i] = ""
                        else:
                            if list_stack:
                                flush_list_stack(list_stack, md_content)

                            text = extract_text_with_formatting_xml(
                                para, pptx_namespace
                            )
                            if text:
                                md_content.append(f"{text}\n")

                    if list_stack:
                        flush_list_stack(list_stack, md_content)

                md_content.append("---\n")

        content = "\n".join(md_content)
        if not content.strip():
            return None, "文档为空"
        return content, None
    except Exception as e:
        return None, f"XML 解析失败: {str(e)}"
286
skills/lyxy-reader-office/scripts/xlsx_parser.py
Normal file
@@ -0,0 +1,286 @@
#!/usr/bin/env python3
"""XLSX 文件解析模块,提供多种解析方法。"""

import xml.etree.ElementTree as ET
import zipfile
from typing import List, Optional, Tuple

from common import _unstructured_elements_to_markdown, parse_with_docling, parse_with_markitdown


def parse_xlsx_with_docling(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """使用 docling 库解析 XLSX 文件"""
    return parse_with_docling(file_path)


def parse_xlsx_with_unstructured(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """使用 unstructured 库解析 XLSX 文件"""
    try:
        from unstructured.partition.xlsx import partition_xlsx
    except ImportError:
        return None, "unstructured 库未安装"

    try:
        elements = partition_xlsx(filename=file_path, infer_table_structure=True)
        content = _unstructured_elements_to_markdown(elements)
        if not content.strip():
            return None, "文档为空"
        return content, None
    except Exception as e:
        return None, f"unstructured 解析失败: {str(e)}"


def parse_xlsx_with_markitdown(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """使用 MarkItDown 库解析 XLSX 文件"""
    return parse_with_markitdown(file_path)


def parse_xlsx_with_pandas(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """使用 pandas 库解析 XLSX 文件"""
    try:
        import pandas as pd
        from tabulate import tabulate
    except ImportError as e:
        missing_lib = "pandas" if "pandas" in str(e) else "tabulate"
        return None, f"{missing_lib} 库未安装"

    try:
        sheets = pd.read_excel(file_path, sheet_name=None)

        markdown_parts = []
        for sheet_name, df in sheets.items():
            if len(df) == 0:
                markdown_parts.append(f"## {sheet_name}\n\n*工作表为空*")
                continue

            table_md = tabulate(
                df, headers="keys", tablefmt="pipe", showindex=True, missingval=""
            )
            markdown_parts.append(f"## {sheet_name}\n\n{table_md}")

        if not markdown_parts:
            return None, "Excel 文件为空"

        markdown_content = "# Excel数据转换结果\n\n" + "\n\n".join(markdown_parts)

        return markdown_content, None
    except Exception as e:
        return None, f"pandas 解析失败: {str(e)}"


def parse_xlsx_with_xml(file_path: str) -> Tuple[Optional[str], Optional[str]]:
    """使用 XML 原生解析 XLSX 文件"""
    xlsx_namespace = {
        "main": "http://schemas.openxmlformats.org/spreadsheetml/2006/main"
    }

    def parse_col_index(cell_ref: str) -> int:
        col_index = 0
        for char in cell_ref:
            if char.isalpha():
                col_index = col_index * 26 + (ord(char) - ord("A") + 1)
            else:
                break
        return col_index - 1

    def parse_cell_value(cell: ET.Element, shared_strings: List[str]) -> str:
        cell_type = cell.attrib.get("t")

        if cell_type == "inlineStr":
            is_elem = cell.find("main:is", xlsx_namespace)
            if is_elem is not None:
                t_elem = is_elem.find("main:t", xlsx_namespace)
                if t_elem is not None and t_elem.text:
                    return t_elem.text.replace("\n", " ").replace("\r", "")
            return ""

        cell_value_elem = cell.find("main:v", xlsx_namespace)
        if cell_value_elem is None or not cell_value_elem.text:
            return ""

        cell_value = cell_value_elem.text

        if cell_type == "s":
            try:
                idx = int(cell_value)
                if 0 <= idx < len(shared_strings):
                    text = shared_strings[idx]
                    return text.replace("\n", " ").replace("\r", "")
            except (ValueError, IndexError):
                pass
            return ""
        elif cell_type == "b":
            return "TRUE" if cell_value == "1" else "FALSE"
        elif cell_type == "str":
            return cell_value.replace("\n", " ").replace("\r", "")
        elif cell_type == "e":
            _ERROR_CODES = {
                "#NULL!": "空引用错误",
|
||||
"#DIV/0!": "除零错误",
|
||||
"#VALUE!": "值类型错误",
|
||||
"#REF!": "无效引用",
|
||||
"#NAME?": "名称错误",
|
||||
"#NUM!": "数值错误",
|
||||
"#N/A": "值不可用",
|
||||
}
|
||||
return _ERROR_CODES.get(cell_value, f"错误: {cell_value}")
|
||||
elif cell_type == "d":
|
||||
return f"[日期] {cell_value}"
|
||||
elif cell_type == "n":
|
||||
return cell_value
|
||||
elif cell_type is None:
|
||||
try:
|
||||
float_val = float(cell_value)
|
||||
if float_val.is_integer():
|
||||
return str(int(float_val))
|
||||
return cell_value
|
||||
except ValueError:
|
||||
return cell_value
|
||||
else:
|
||||
return cell_value
|
||||
|
||||
def get_non_empty_columns(data: List[List[str]]) -> set:
|
||||
non_empty_cols = set()
|
||||
for row in data:
|
||||
for col_idx, cell in enumerate(row):
|
||||
if cell and cell.strip():
|
||||
non_empty_cols.add(col_idx)
|
||||
return non_empty_cols
|
||||
|
||||
def filter_columns(row: List[str], non_empty_cols: set) -> List[str]:
|
||||
return [row[i] if i < len(row) else "" for i in sorted(non_empty_cols)]
|
||||
|
||||
def data_to_markdown(data: List[List[str]], sheet_name: str) -> str:
|
||||
if not data or not data[0]:
|
||||
return f"## {sheet_name}\n\n*工作表为空*"
|
||||
|
||||
md_lines = []
|
||||
md_lines.append(f"## {sheet_name}")
|
||||
md_lines.append("")
|
||||
|
||||
headers = data[0]
|
||||
|
||||
non_empty_cols = get_non_empty_columns(data)
|
||||
|
||||
if not non_empty_cols:
|
||||
return f"## {sheet_name}\n\n*工作表为空*"
|
||||
|
||||
filtered_headers = filter_columns(headers, non_empty_cols)
|
||||
header_line = "| " + " | ".join(filtered_headers) + " |"
|
||||
md_lines.append(header_line)
|
||||
|
||||
separator_line = "| " + " | ".join(["---"] * len(filtered_headers)) + " |"
|
||||
md_lines.append(separator_line)
|
||||
|
||||
for row in data[1:]:
|
||||
filtered_row = filter_columns(row, non_empty_cols)
|
||||
row_line = "| " + " | ".join(filtered_row) + " |"
|
||||
md_lines.append(row_line)
|
||||
|
||||
md_lines.append("")
|
||||
|
||||
return "\n".join(md_lines)
|
||||
|
||||
try:
|
||||
with zipfile.ZipFile(file_path, "r") as zip_file:
|
||||
sheet_names = []
|
||||
sheet_rids = []
|
||||
try:
|
||||
with zip_file.open("xl/workbook.xml") as f:
|
||||
root = ET.parse(f).getroot()
|
||||
rel_ns = "http://schemas.openxmlformats.org/officeDocument/2006/relationships"
|
||||
sheet_elements = root.findall(".//main:sheet", xlsx_namespace)
|
||||
for sheet in sheet_elements:
|
||||
sheet_name = sheet.attrib.get("name", "")
|
||||
rid = sheet.attrib.get(f"{{{rel_ns}}}id", "")
|
||||
if sheet_name:
|
||||
sheet_names.append(sheet_name)
|
||||
sheet_rids.append(rid)
|
||||
except KeyError:
|
||||
return None, "无法解析工作表名称"
|
||||
|
||||
if not sheet_names:
|
||||
return None, "未找到工作表"
|
||||
|
||||
rid_to_target = {}
|
||||
try:
|
||||
rels_ns = "http://schemas.openxmlformats.org/package/2006/relationships"
|
||||
with zip_file.open("xl/_rels/workbook.xml.rels") as f:
|
||||
rels_root = ET.parse(f).getroot()
|
||||
for rel in rels_root.findall(f"{{{rels_ns}}}Relationship"):
|
||||
rid = rel.attrib.get("Id", "")
|
||||
target = rel.attrib.get("Target", "")
|
||||
if rid and target:
|
||||
rid_to_target[rid] = target
|
||||
except KeyError:
|
||||
pass
|
||||
|
||||
shared_strings = []
|
||||
try:
|
||||
with zip_file.open("xl/sharedStrings.xml") as f:
|
||||
root = ET.parse(f).getroot()
|
||||
for si in root.findall(".//main:si", xlsx_namespace):
|
||||
t_elem = si.find(".//main:t", xlsx_namespace)
|
||||
if t_elem is not None and t_elem.text:
|
||||
shared_strings.append(t_elem.text)
|
||||
else:
|
||||
shared_strings.append("")
|
||||
except KeyError:
|
||||
pass
|
||||
|
||||
markdown_content = "# Excel数据转换结果 (原生XML解析)\n\n"
|
||||
|
||||
for sheet_index, sheet_name in enumerate(sheet_names):
|
||||
rid = sheet_rids[sheet_index] if sheet_index < len(sheet_rids) else ""
|
||||
target = rid_to_target.get(rid, "")
|
||||
if target:
|
||||
if target.startswith("/"):
|
||||
worksheet_path = target.lstrip("/")
|
||||
else:
|
||||
worksheet_path = f"xl/{target}"
|
||||
else:
|
||||
worksheet_path = f"xl/worksheets/sheet{sheet_index + 1}.xml"
|
||||
|
||||
try:
|
||||
with zip_file.open(worksheet_path) as f:
|
||||
root = ET.parse(f).getroot()
|
||||
sheet_data = root.find("main:sheetData", xlsx_namespace)
|
||||
|
||||
rows = []
|
||||
if sheet_data is not None:
|
||||
row_elements = sheet_data.findall(
|
||||
"main:row", xlsx_namespace
|
||||
)
|
||||
|
||||
for row_elem in row_elements:
|
||||
cells = row_elem.findall("main:c", xlsx_namespace)
|
||||
|
||||
col_dict = {}
|
||||
for cell in cells:
|
||||
cell_ref = cell.attrib.get("r", "")
|
||||
if not cell_ref:
|
||||
continue
|
||||
|
||||
col_index = parse_col_index(cell_ref)
|
||||
cell_value = parse_cell_value(cell, shared_strings)
|
||||
col_dict[col_index] = cell_value
|
||||
|
||||
if col_dict:
|
||||
max_col = max(col_dict.keys())
|
||||
row_data = [
|
||||
col_dict.get(i, "") for i in range(max_col + 1)
|
||||
]
|
||||
rows.append(row_data)
|
||||
|
||||
table_md = data_to_markdown(rows, sheet_name)
|
||||
markdown_content += table_md + "\n\n"
|
||||
|
||||
except KeyError:
|
||||
markdown_content += f"## {sheet_name}\n\n*工作表解析失败*\n\n"
|
||||
|
||||
if not markdown_content.strip():
|
||||
return None, "解析结果为空"
|
||||
|
||||
return markdown_content, None
|
||||
except Exception as e:
|
||||
return None, f"XML 解析失败: {str(e)}"
|
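Every parser above shares the `(content, error)` return convention, which is what makes the skill's multi-strategy fallback possible. A minimal, self-contained sketch of that chaining (the function name `parse_with_fallback` and the stub parsers are illustrative, not part of this module):

```python
from typing import Callable, List, Optional, Tuple

# Each parser takes a file path and returns (content, error), exactly one of
# which is None -- the same convention used by the parse_xlsx_with_* functions.
ParseResult = Tuple[Optional[str], Optional[str]]


def parse_with_fallback(
    file_path: str,
    parsers: List[Tuple[str, Callable[[str], ParseResult]]],
) -> ParseResult:
    """Try each named parser in order; return the first success.

    If every parser fails, return the accumulated errors so the caller
    can see why each strategy was rejected.
    """
    errors = []
    for name, parser in parsers:
        content, error = parser(file_path)
        if content is not None:
            return content, None
        errors.append(f"{name}: {error}")
    return None, "; ".join(errors)


if __name__ == "__main__":
    # Stub parsers standing in for parse_xlsx_with_docling etc.
    fail = lambda path: (None, "library not installed")
    ok = lambda path: ("# table", None)
    content, error = parse_with_fallback("book.xlsx", [("docling", fail), ("xml", ok)])
    print(content)  # -> # table
```

The ordering of the list encodes the preference (richest parser first, dependency-free XML parser last), so adding or removing a strategy is a one-line change.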