Compare commits

10 commits

| Author | SHA1 | Date |
|---|---|---|
| | c5e1d0c88c | |
| | eb8e7a7daf | |
| | b6dc7af9bf | |
| | 7a43eb391b | |
| | d497e92626 | |
| | a6b4f99a83 | |
| | a95f9045e5 | |
| | c7e14ca396 | |
| | 6bc4e1d3b4 | |
| | e9e1f01728 | |
README.md (112)

@@ -1,19 +1,22 @@
-# video-product-snapshot — Video Product Snapshot
+# video-product-snapshot — Video Product Image Search
 
-Detect e-commerce products in a video, extract the best product frame, and find the same item on 1688 via image search.
+Extract the best product frame from a video and find the same item on 1688 via image search.
 
 ## How it works
 
-1. Extract frames from the video with `ffmpeg` at a configurable interval
-2. Send each frame to a vision model to detect and score products
-3. Pick the highest-confidence frame as the best product snapshot
-4. Optionally call the image-search API with that snapshot to find matching items
+1. `ffmpeg` extracts frames at 0.5 s intervals (up to 60 frames)
+2. Visual-quality pre-filter (brightness/variance checks drop blurry frames)
+3. Container/rack product detection → automatically prefer an empty-state frame
+4. Vision model ranks the frames against each other and picks the best product frame
+5. Crop the product region → upload → 1688 image search
+6. Post-filter (vision model checks each result is the same product) → rerank
 
 ## Install
 
 ```bash
+./install.sh       # installs auth-rt + dependencies
 bun install
 bun run build      # outputs dist/run.js
 ```
 
 ## Usage
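The 0.5 s interval with a 60-frame cap in step 1 of the new workflow implies a simple sampling plan before `ffmpeg` is invoked. A minimal sketch with a hypothetical helper (`planFrameTimestamps` is not part of the repo):

```typescript
// Hypothetical sketch: plan the timestamps to extract (e.g. via `ffmpeg -ss <t>`).
// interval = 0.5 s and maxFrames = 60 are the documented defaults.
function planFrameTimestamps(durationSec: number, interval = 0.5, maxFrames = 60): number[] {
  const timestamps: number[] = [];
  for (let t = 0; t < durationSec && timestamps.length < maxFrames; t += interval) {
    timestamps.push(Number(t.toFixed(3)));
  }
  return timestamps;
}

// A 10 s video yields 20 candidate frames; a long video is capped at 60.
console.log(planFrameTimestamps(10).length);  // 20
console.log(planFrameTimestamps(600).length); // 60
```

The cap keeps vision-model cost bounded regardless of video length.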
@@ -26,77 +29,74 @@ bun dist/run.js <command> [options]
 
 | Command | Description |
 |------|------|
-| `detect <video>` | Extract frames and detect product shots |
-| `search <image>` | Search for the same item with an image |
-| `detect-and-search <video>` | Full pipeline: detect the best frame → image search |
-| `session` | Print the current auth session token |
+| `detect-best-and-search <video>` | **Recommended.** Best frame → image search → rerank |
+| `detect-best <video>` | Extract the best product frame only, no search |
+| `detect-and-search <video>` | Image search after two-stage filtering (slower) |
+| `detect <video>` | Extract frames and detect products frame by frame |
+| `search <image>` | Search for the same item with an existing image |
+| `rerank` | Cross-filter image-search results with a keyword |
+| `session` | Get the current auth session token |
 
-### Options (`detect` / `detect-and-search`)
+### Options (`detect-best` / `detect-best-and-search`)
 
 | Flag | Default | Description |
 |------|--------|------|
-| `--interval=<秒>` | `1` | Frame extraction interval (seconds) |
-| `--max-frames=<数量>` | `60` | Maximum number of frames to analyze |
-| `--output-dir=<目录>` | video's directory | Where extracted frames are saved |
-| `--min-confidence=<0-1>` | `0.7` | Minimum detection confidence |
-| `--dry-run` | — | Parse args and print the config without running |
+| `--interval=<秒>` | `0.5` | Frame sampling interval |
+| `--max-frames=<n>` | `60` | Maximum number of frames to analyze |
+| `--output-dir=<目录>` | video's directory | Where snapshots are saved |
+| `--session-id=<id>` | auto-generated | Langfuse session ID |
+| `--dry-run` | — | Parse args without running |
 
-### Examples
-
-```bash
-# Detect products, one frame every 3 seconds
-bun dist/run.js detect ./demo.mp4 --interval=3
-
-# Full pipeline with a higher confidence threshold
-bun dist/run.js detect-and-search ./demo.mp4 --interval=5 --min-confidence=0.85
-
-# Search with an existing snapshot
-bun dist/run.js search ./snapshot.jpg
-```
-
 ## Output
 
-All commands print JSON to stdout.
+All commands print JSON to stdout, including a `sessionId` field for Langfuse tracing.
 
 ```json
 {
+  "sessionId": "skill-20260426-184345-lb06",
+  "status": "success",
+  "command": "detect-best-and-search",
   "bestSnapshot": {
-    "frameIndex": 4,
-    "timestampSeconds": 9,
-    "imagePath": "/path/to/frame_0004.jpg",
-    "confidence": 0.92,
-    "description": "White sneaker with blue logo, left side view",
-    "boundingHint": "centered"
+    "frameIndex": 7,
+    "timestampSeconds": 3,
+    "imagePath": "/path/to/frame_0007.jpg",
+    "croppedImagePath": "/path/to/frame_0007_cropped.jpg",
+    "description": "黑色金属床底鞋架 可折叠移动"
   },
-  "productFrames": [...],
-  "searchBody": { ... }
+  "rerank": {
+    "keyword": "床底鞋架",
+    "results": [
+      { "num_iid": 123, "title": "...", "price": "44.00", "sales": 87, "detail_url": "..." }
+    ]
+  }
 }
 ```
 
-- `productFrames` — all detected frames, sorted by confidence (highest first)
-- `bestSnapshot` — the top-ranked frame
-- `searchBody` — raw response from the image-search API (`search` / `detect-and-search` only)
+## Auth architecture
+
+```
+~/.openclaw/.env
+CLIENT_KEY ──→ auth-rt ──→ backend
+               ├── /session       → access_token
+               └── /client-config → provider.api_key
+                                    provider.base_url
+                                    provider.model
+```
+
+Only `CLIENT_KEY` needs to be configured; LLM credentials and endpoints are issued by the backend.
 
 ## Environment variables
 
-The only required setting is `CLIENT_KEY` in `~/.openclaw/.env`:
-
-```
-CLIENT_KEY=sk_xxxxxxxx.xxxxxxxxxxxxxxxxxxxxxxxx
-```
-
-All credentials and endpoints are fetched automatically via `auth-rt` from the client config; no further setup is needed.
-
-### Optional overrides
-
 | Variable | Description |
 |------|------|
-| `VISION_MODEL` | Override the model name (default: `aliyun-cp-multimodal`) |
+| `CLIENT_KEY` | **Required.** Set in `~/.openclaw/.env` |
+| `VISION_MODEL` | Override the model name (default comes from the client config) |
+| `SKILL_SESSION_ID` | Langfuse session ID (auto-generated, format `skill-YYYYMMDD-HHMMSS-xxxx`) |
 | `AUTH_RT_BIN` | Override the path to the `auth-rt` binary |
-| `TELEMETRY_ENDPOINT` | Report execution results to a telemetry endpoint |
+| `TELEMETRY_ENDPOINT` | Telemetry reporting endpoint |
 
 ## Prerequisites
 
 - [Bun](https://bun.sh) runtime
-- `ffmpeg` and `ffprobe` on the system PATH
-- `auth-rt` CLI on the system PATH (required by `search` / `detect-and-search`)
+- `ffmpeg` / `ffprobe` on the system PATH (frame extraction)
+- `auth-rt` CLI (auth/API calls, installed automatically by `install.sh`)
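The `CLIENT_KEY` in `~/.openclaw/.env` follows the usual `KEY=value` dotenv convention. A minimal sketch of reading it (hypothetical helper; the skill itself delegates credential loading to `auth-rt`):

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Hypothetical sketch: read CLIENT_KEY from a dotenv-style file.
// Shown only to illustrate the expected file format; not the repo's loader.
function readClientKey(envPath = path.join(os.homedir(), ".openclaw", ".env")): string | undefined {
  if (!fs.existsSync(envPath)) return undefined;
  for (const line of fs.readFileSync(envPath, "utf8").split("\n")) {
    const m = line.match(/^CLIENT_KEY=(.+)$/);
    if (m) return m[1].trim();
  }
  return undefined;
}
```

Everything else (API key, base URL, model name) is fetched at runtime via `auth-rt`, so the key is the only local secret.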
SKILL.md (105)

@@ -1,11 +1,11 @@
 ---
 name: video-product-snapshot
-description: "Detect ecommerce products in video frames using Claude Vision, extract the best product snapshot, and optionally search via image-search API. Use when the user provides a video and wants to find/identify products shown in it. / 检测视频中的商品,提取最佳商品截图,并通过图片搜索在1688找同款。当用户提供视频想找商品时使用。"
+description: "Extract product snapshot from video and search 1688 by image. / 从视频中提取最佳商品帧,以图搜图在1688找同款。当用户提供视频想找商品时使用。"
 ---
 
-# Video Product Snapshot — Video Product Snapshot
+# Video Product Snapshot — Video Product Image Search
 
-Extract the best product frame from a video, detected and cropped via Claude Vision, then find the same item on 1688 via image search plus keyword reranking.
+Capture the clearest product frame from a video (container-type products automatically use an empty-state frame), upload it, and find the same item on 1688 via image search.
 
 ## Run
@@ -17,73 +17,61 @@ bun dist/run.js <command> [args] [--dry-run]
 
 | Command | When to use |
 |------|---------|
-| `detect-video-and-search <video>` | **Recommended.** Uploads the video to the API to identify the main product, then runs a 1688 keyword search. Skips local frame extraction; no Vision API needed. |
-| `detect-best-and-search <video>` | Legacy. Frame extraction + Vision ranking + image search. Needs a Vision API key. |
-| `detect-video <video>` | Identify the product description and generate keywords only, no search. |
-| `detect-best <video>` | Legacy. Extract the best frame only, no search. |
-| `search <image-path>` | Already have a product snapshot; skip detection and search directly. |
-| `detect-and-search <video>` | Legacy. **Not recommended.** |
+| `detect-best-and-search <video>` | **Recommended.** Extract the best product frame → image search → rerank. |
+| `detect-best <video>` | Extract the best product frame only, no search. |
+| `detect-and-search <video>` | Image search after two-stage filtering (slower than detect-best). |
+| `search <image-path>` | Already have a product image; search directly. |
+| `rerank` | Cross-filter image-search results with a keyword. |
 | `session` | Get the current auth session token. |
 
-## `detect-video` / `detect-video-and-search`
+## Main command: `detect-best-and-search`
 
-Uploads the video to the API to identify the main product directly, with no local frame extraction.
-
 Pipeline:
-1. Upload the video → get a public URL (reusing the existing upload endpoint)
-2. Call LiteLLM (Chat Completions + `video_url`) to analyze the video content
-3. Identify the product name, material, color, and function
-4. Generate Chinese search keywords
-5. 1688 keyword search (`detect-video-and-search`)
+1. ffmpeg extracts frames at 0.5 s intervals (up to 60 frames)
+2. Vision model checks whether it is a container/rack-type product
+3. Container type: pick the best frame from the first 40% of frames (empty stage) only
+4. Non-container type: pick the clearest frame from all frames
+5. Crop the product region
+6. Upload the crop → 1688 image search
+7. rerank: cross-filter image-search results against keyword-search results
 
-Dependencies:
-- `auth-rt` client key (automatic, no extra config)
-- A LiteLLM proxy that supports the `video_url` content type
-- An upload endpoint that returns a public URL
-
-## `detect-best` / `detect-best-and-search` options
-
-| Flag | Default | Description |
-|------|--------|------|
-| `--interval=<秒>` | `0.5` | Frame extraction interval (seconds) |
-| `--max-frames=<数量>` | `60` | Maximum number of frames |
-| `--output-dir=<目录>` | video's directory | Where frame images are saved |
-
-## How frame selection works
-
-Two-round Vision pipeline:
-
-1. **Filter round** (`detect` / `detect-and-search` only): binary keep/drop per frame. Can be too strict and return nothing.
-2. **Ranking round**: all candidate frames are sent to the model together, which picks the clearest, most complete, most prominent product shot.
-
-`detect-best` skips the first round; all frames go straight to the ranking round. With more than 20 frames, they are evenly sampled down to 20 before the call. **As long as the video yields frames, a result is always returned.**
+## Options for `detect-best` / `detect-best-and-search`
+
+| Flag | Default | Description |
+|------|---------|-------------|
+| `--interval=<sec>` | `0.5` | Frame sampling interval (seconds) |
+| `--max-frames=<n>` | `60` | Maximum number of frames to analyze |
+| `--output-dir=<dir>` | video's directory | Where snapshots are saved |
 
 ## Output format
 
+### `detect-best-and-search`
+
 ```json
 {
   "bestSnapshot": {
-    "frameIndex": 4,
-    "timestampSeconds": 2,
-    "imagePath": "/path/to/frame_0004.jpg",
-    "croppedImagePath": "/path/to/frame_0004_cropped.jpg",
-    "confidence": 0.95,
-    "description": "White sneaker with blue logo, left side view",
-    "boundingHint": "Product fully visible, centered, no hands"
+    "frameIndex": 7,
+    "timestampSeconds": 3,
+    "imagePath": "/path/to/frame_0007.jpg",
+    "croppedImagePath": "/path/to/frame_0007_cropped.jpg",
+    "description": "黑色金属床底鞋架 可折叠移动"
   },
   "rerank": {
-    "results": [...]
+    "keyword": "床底鞋架",
+    "results": [
+      { "num_iid": 123, "title": "...", "price": "44.00", "sales": 87, "detail_url": "..." }
+    ]
   }
 }
 ```
 
 ## Result display format
 
-After the CLI finishes, format `rerank.results` as a markdown table, **5 rows per page** (show all if fewer than 5).
+Format `rerank.results` (preferred) or `searchBody.data.items.item` as a markdown table, **at most 5 rows**:
 
 | # | 商品名称 | 价格 | 销量 | 链接 |
 |---|----------|------|------|------|
-| 1 | {title} | ¥{promotion_price \|\| price} | {sales ?? —}件 | [查看](https://detail.1688.com/offer/{num_iid}.html) |
+| 1 | {title} | ¥{promotion_price \|\| price} | {sales ?? —}件 | [查看]({detail_url}) |
 
 - Use `promotion_price` when present, otherwise the regular price
 - Show `—` when `sales` is missing or zero
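The display rules above (promotion price wins over regular price, missing or zero sales shows `—`, at most 5 rows) can be sketched as a hypothetical formatter; the interface and function names below are assumptions, not repo code:

```typescript
// Assumed shape of a 1688 search item, based on the JSON example above.
interface SearchItem {
  num_iid: number;
  title: string;
  price: string;
  promotion_price?: string;
  sales?: number;
  detail_url: string;
}

// Hypothetical formatter implementing the documented display rules.
function formatResultsTable(items: SearchItem[]): string {
  const rows = items.slice(0, 5).map((it, i) => {
    const price = it.promotion_price || it.price; // promotion price wins when present
    const sales = it.sales ? `${it.sales}件` : "—"; // missing or zero → dash
    return `| ${i + 1} | ${it.title} | ¥${price} | ${sales} | [查看](${it.detail_url}) |`;
  });
  return ["| # | 商品名称 | 价格 | 销量 | 链接 |", "|---|----------|------|------|------|", ...rows].join("\n");
}
```

With seven input items the table still renders two header lines plus five data rows.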
@@ -91,29 +79,16 @@ CLI 执行完成后,将 `rerank.results` 格式化为 markdown 表格,**每
 
 ## Execution rules
 
-### Video commands (slow: run in a sub-agent)
+1. **No retries.** If a command fails, report the error directly.
+2. **Trust the tool output.** The CLI already handles session management and error formatting.
+3. **Run slow commands in a sub-agent** (detect-best-and-search / detect-and-search); running them directly will time out.
 
-Applies to: `detect-video-and-search`, `detect-best-and-search`, `detect-best`, `detect-and-search`, `detect-video`
-
-Use `sessions_spawn` to run them in a sub-agent; **do not run them directly**.
+### Sub-agent execution template
 
 ```
 sessions_spawn(
-  task: "Run this command and return the raw JSON output:\n\nbun dist/run.js <完整命令>\n\nCopy the entire JSON output as your reply.",
+  task: "cd /path/to/skill && Run this command and return the raw JSON output:\n\nbun dist/run.js detect-best-and-search <video-path>\n\nCopy the entire JSON output as your reply.",
   label: "video-product-snapshot",
   runTimeoutSeconds: 300,
 )
 ```
-
-- Tell the user processing has started and share the `runId`
-- Wait for the sub-agent to return, then parse and present the result
-
-### `search` and `session` (fast: run directly)
-
-Run these in the current session; no sub-agent is needed.
-
-### General rules
-
-1. **Video input → prefer `detect-video-and-search`.** More reliable than frame extraction. If no video model is configured, fall back to `detect-best-and-search`. Do not use `detect-and-search`.
-2. **No retries.** If a command fails, report the error directly.
-3. **Trust the tool output.** The CLI already handles session management and error formatting.
@@ -43,11 +43,17 @@ function printUsage(): void {
   detect-and-search <video-path> [options]
     检测最佳商品画面 → 图片搜索 → 关键词重排序
 
+  detect-best <video-path> [options]
+    从视频抽帧并选择最佳商品画面(更快更稳定)
+
+  detect-best-and-search <video-path> [options]
+    最佳画面 → 图片搜索 → 关键词重排序
+
   detect-video <video-path>
-    直接上传视频到 API 识别商品主体,输出商品描述和搜索关键词
+    识别商品描述和搜索关键词(当前实现:从视频抽帧选最佳帧)
 
   detect-video-and-search <video-path>
-    上传视频识别商品 → 1688 关键词搜索 → 重排序
+    识别商品 → 图片搜索 → 1688 关键词重排序(当前实现:从视频抽帧选最佳帧)
 
   rerank --image-results=<json> [--description=<text>] [--keyword=<text>] [--top=<n>]
     通过关键词交并集过滤搜索结果

@@ -75,6 +81,8 @@ async function main(): Promise<void> {
       dryRun = true;
     } else if (arg.startsWith('--api-base=')) {
       process.env.API_BASE = arg.slice('--api-base='.length).trim();
+    } else if (arg.startsWith('--session-id=')) {
+      process.env.SKILL_SESSION_ID = arg.slice('--session-id='.length).trim();
     } else if (arg === '-h' || arg === '--help') {
       printUsage(); process.exit(0);
     } else {

@@ -85,6 +93,7 @@ async function main(): Promise<void> {
   if (positionals.length < 1) { printUsage(); process.exit(1); }
 
   const command = positionals[0] as Command;
+  const sessionId = process.env.SKILL_SESSION_ID!; // set by auth-cli.ts at module load
   const startMs = Date.now();
   let result: Awaited<ReturnType<typeof run>>;

@@ -92,13 +101,14 @@ async function main(): Promise<void> {
     result = await run(command, positionals.slice(1), dryRun);
   } catch (err) {
     const error = err instanceof Error ? err.message : String(err);
-    console.log(JSON.stringify({ status: 'failed', command, dryRun, error }, null, 2));
-    if (!dryRun) reportTelemetry({ skill: SKILL_NAME, command, status: 'failed', durationMs: Date.now() - startMs, error });
+    console.log(JSON.stringify({ status: 'failed', command, dryRun, sessionId, error }, null, 2));
+    if (!dryRun) reportTelemetry({ skill: SKILL_NAME, command, sessionId, status: 'failed', durationMs: Date.now() - startMs, error });
     process.exit(1);
   }
 
-  console.log(JSON.stringify(result, null, 2));
-  if (!dryRun) reportTelemetry({ skill: SKILL_NAME, command, status: result.status, durationMs: Date.now() - startMs, error: (result as any).error });
+  const output = { ...result, sessionId } as Record<string, unknown>;
+  console.log(JSON.stringify(output, null, 2));
+  if (!dryRun) reportTelemetry({ skill: SKILL_NAME, command, sessionId, status: result.status, durationMs: Date.now() - startMs, error: (result as any).error });
 }
 
 main().catch((err) => {
@@ -20,6 +20,18 @@ import * as path from 'path';
 import * as os from 'os';
 
 const home = process.env.HOME || os.homedir();
 
+// ── session ID (Langfuse tracing) ──
+// Priority: SKILL_SESSION_ID env > auto-generate
+const SESSION_ID = process.env.SKILL_SESSION_ID || (() => {
+  const ts = new Date();
+  const pad = (n: number) => String(n).padStart(2, '0');
+  const tsPart = `${ts.getFullYear()}${pad(ts.getMonth()+1)}${pad(ts.getDate())}-${pad(ts.getHours())}${pad(ts.getMinutes())}${pad(ts.getSeconds())}`;
+  const rand = Math.random().toString(36).slice(2, 6);
+  return `skill-${tsPart}-${rand}`;
+})();
+process.env.SKILL_SESSION_ID = SESSION_ID;
+
 const AUTH_RT_BIN = process.env.AUTH_RT_BIN
   || (() => {
     // Check if auth-rt is in PATH
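The session-ID generator added in this hunk can be exercised on its own. This self-contained sketch mirrors its logic (the repo computes the value inline at module load):

```typescript
// Mirror of the diff's generator: skill-YYYYMMDD-HHMMSS-xxxx,
// where xxxx is up to four random base-36 characters.
function makeSessionId(now = new Date()): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  const tsPart = `${now.getFullYear()}${pad(now.getMonth() + 1)}${pad(now.getDate())}` +
    `-${pad(now.getHours())}${pad(now.getMinutes())}${pad(now.getSeconds())}`;
  const rand = Math.random().toString(36).slice(2, 6);
  return `skill-${tsPart}-${rand}`;
}

console.log(makeSessionId(new Date(2026, 3, 26, 18, 43, 45)));
```

Setting `process.env.SKILL_SESSION_ID` at module load is what lets the CLI stamp the same ID onto stdout JSON and telemetry without threading it through every call.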
src/index.ts (247)

@@ -1,11 +1,10 @@
 import * as fs from 'fs';
 import * as path from 'path';
-import type { Command, DetectOptions, DetectResult, SearchResult, OutputResult, SearchItem, DetectVideoResult } from './types.ts';
+import type { Command, DetectOptions, DetectResult, SearchResult, OutputResult, SearchItem, DetectVideoResult, DetectVideoAndSearchResult } from './types.ts';
 import { createSkillClient } from './auth-cli.ts';
 import { extractFrames } from './frame-extractor.ts';
 import { detectProductFrames, detectBestFrame } from './product-detector.ts';
-import { imageToBase64 } from './frame-extractor.ts';
-import { uploadVideo, analyzeVideo } from './video-analyzer.ts';
+import { postFilterByImage } from './post-filter.ts';
 import { generateText } from 'ai';
 import { createOpenAI } from '@ai-sdk/openai';

@@ -13,6 +12,7 @@ export interface VisionConfig {
   apiKey: string;
   baseURL?: string;
   model: string;
+  sessionId?: string;
 }
 
 async function loadVisionConfig(client: ReturnType<typeof createSkillClient>): Promise<VisionConfig> {

@@ -23,6 +23,7 @@ async function loadVisionConfig(client: ReturnType<typeof createSkillClient>): P
     apiKey,
     baseURL: cfg.metadata?.provider?.base_url,
     model: process.env.VISION_MODEL ?? cfg.metadata?.provider?.model ?? 'aliyun-cp-multimodal',
+    sessionId: process.env.SKILL_SESSION_ID || `skill_${Date.now()}_${Math.random().toString(36).slice(2, 8)}`,
   };
 }
@@ -184,15 +185,47 @@ async function runDetectBestAndSearch(args: string[], dryRun: boolean): Promise<
   const imageForSearch = best.croppedImagePath || best.imagePath;
   const searchResult = await runSearch([imageForSearch], dryRun) as SearchResult;
 
-  let rerankResult: any = undefined;
+  // Post-filter: drop results whose pic_url isn't the same product type as our snapshot
+  let postFilter: any = undefined;
   if (!dryRun && searchResult.status === 'success' && searchResult.searchBody) {
+    const items: SearchItem[] = (searchResult.searchBody as any)?.data?.items?.item ?? [];
+    if (items.length > 0) {
+      try {
+        const client = createSkillClient();
+        const visionConfig = await loadVisionConfig(client);
+        const result = await postFilterByImage(imageForSearch, items, visionConfig, { description: best.description });
+        (searchResult.searchBody as any).data.items.item = result.kept;
+        postFilter = {
+          totalChecked: result.totalChecked,
+          keptCount: result.kept.length,
+          rejectedCount: result.rejected.length,
+          failed: result.failed,
+        };
+      } catch (e: any) {
+        postFilter = { error: e.message };
+      }
+    }
+  }
+
+  let rerankResult: any = undefined;
+  // If post-filter produced focused results, sort them directly by sales — they're already the best matches.
+  // Otherwise fall back to the keyword-intersection rerank.
+  if (!dryRun && postFilter && !postFilter.error && postFilter.keptCount > 0) {
+    const items: SearchItem[] = (searchResult.searchBody as any)?.data?.items?.item ?? [];
+    const sorted = [...items].sort((a, b) => (b.sales ?? 0) - (a.sales ?? 0)).slice(0, 5);
+    rerankResult = {
+      source: 'post-filter',
+      results: sorted,
+      count: sorted.length,
+    };
+  } else if (!dryRun && searchResult.status === 'success' && searchResult.searchBody) {
     const tmpFile = path.join(path.dirname(imageForSearch), `search_body_${Date.now()}.json`);
     try {
       fs.writeFileSync(tmpFile, JSON.stringify(searchResult.searchBody));
       rerankResult = await runRerank([
         `--image-results=${tmpFile}`,
         `--description=${best.description}`,
-        '--top=10',
+        '--top=5',
       ], dryRun);
     } catch (e: any) {
       rerankResult = { error: e.message };
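The post-filter branch added in the hunk above reduces to a sales-descending sort capped at five items. Isolated as a hypothetical helper (the repo inlines this logic):

```typescript
// Assumed minimal shape; the real SearchItem has more fields.
interface Item { title: string; sales?: number; }

// Sketch of the post-filter rerank path: sort kept items by sales
// descending (missing sales counts as 0) and keep the top 5.
function topBySales<T extends Item>(items: T[], top = 5): T[] {
  return [...items].sort((a, b) => (b.sales ?? 0) - (a.sales ?? 0)).slice(0, top);
}
```

Spreading into a new array before sorting avoids mutating `searchBody.data.items.item` a second time.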
@@ -207,10 +240,87 @@ async function runDetectBestAndSearch(args: string[], dryRun: boolean): Promise<
     searchHttpStatus: searchResult.searchHttpStatus,
     searchBody: searchResult.searchBody,
     searchError: searchResult.error,
+    postFilter,
     rerank: rerankResult,
   } as any;
 }
 
+async function runDetectVideo(args: string[], dryRun: boolean): Promise<DetectVideoResult> {
+  const videoPath = args[0];
+  if (!videoPath) return { status: 'failed', command: 'detect-video', dryRun, error: 'detect-video requires <video-path>' };
+  if (!fs.existsSync(videoPath)) return { status: 'failed', command: 'detect-video', dryRun, error: `video not found: ${videoPath}` };
+
+  const detectResult = await runDetectBest(args, dryRun) as DetectResult;
+  if (detectResult.status === 'failed') {
+    return { status: 'failed', command: 'detect-video', dryRun, videoPath, error: detectResult.error || 'failed to detect best frame' };
+  }
+  const description = detectResult.bestSnapshot?.description?.trim();
+  const snapshotImagePath = detectResult.bestSnapshot?.croppedImagePath || detectResult.bestSnapshot?.imagePath;
+  if (!description) {
+    return { status: 'failed', command: 'detect-video', dryRun, videoPath, error: 'no product description detected from video' };
+  }
+
+  if (dryRun) {
+    return { status: 'success', command: 'detect-video', dryRun, videoPath, videoUrl: null, description, keyword: '<dry-run-keyword>', snapshotImagePath };
+  }
+
+  const client = createSkillClient();
+  const visionConfig = await loadVisionConfig(client);
+  const keyword = await generateChineseKeyword(description, visionConfig);
+
+  return { status: 'success', command: 'detect-video', dryRun, videoPath, videoUrl: null, description, keyword, snapshotImagePath };
+}
+
+async function runDetectVideoAndSearch(args: string[], dryRun: boolean): Promise<DetectVideoAndSearchResult> {
+  const videoPath = args[0];
+  if (!videoPath) return { status: 'failed', command: 'detect-video-and-search', dryRun, error: 'detect-video-and-search requires <video-path>' };
+  if (!fs.existsSync(videoPath)) return { status: 'failed', command: 'detect-video-and-search', dryRun, error: `video not found: ${videoPath}` };
+
+  if (dryRun) {
+    return { status: 'success', command: 'detect-video-and-search', dryRun, videoPath, videoUrl: null, description: '<dry-run>', keyword: '<dry-run>', searchResults: [] };
+  }
+
+  // Reuse existing pipeline: best snapshot → image search → keyword rerank
+  const detectAndSearch = await runDetectBestAndSearch(args, dryRun) as any;
+  if (detectAndSearch.status === 'failed') {
+    return { status: 'failed', command: 'detect-video-and-search', dryRun, videoPath, error: detectAndSearch.error || 'detect-best-and-search failed' };
+  }
+
+  const description = String(detectAndSearch.bestSnapshot?.description || '').trim();
+  const rerank = detectAndSearch.rerank;
+  const keyword = String(rerank?.keyword || '').trim();
+  const searchResults = (rerank?.results || []) as SearchItem[];
+
+  // Fallback: if rerank didn't produce anything, do keyword search directly.
+  if (!searchResults.length) {
+    const client = createSkillClient();
+    const visionConfig = await loadVisionConfig(client);
+    const fallbackKeyword = keyword || (description ? await generateChineseKeyword(description, visionConfig) : '');
+    const items = fallbackKeyword ? await keywordSearch(client, fallbackKeyword, 1) : [];
+    return {
+      status: 'success',
+      command: 'detect-video-and-search',
+      dryRun,
+      videoPath,
+      videoUrl: null,
+      description,
+      keyword: fallbackKeyword,
+      searchResults: items,
+    };
+  }
+
+  return {
+    status: 'success',
+    command: 'detect-video-and-search',
+    dryRun,
+    videoPath,
+    videoUrl: null,
+    description,
+    keyword,
+    searchResults,
+  };
+}
+
 async function runDetectAndSearch(args: string[], dryRun: boolean): Promise<OutputResult> {
   const detectResult = await runDetect(args, dryRun) as DetectResult;
   if (detectResult.status === 'failed') return detectResult;
@@ -223,15 +333,47 @@ async function runDetectAndSearch(args: string[], dryRun: boolean): Promise<Outp
   const imageForSearch = best.croppedImagePath || best.imagePath;
   const searchResult = await runSearch([imageForSearch], dryRun) as SearchResult;
 
-  let rerankResult: any = undefined;
+  // Post-filter: drop results whose pic_url isn't the same product type as our snapshot
+  let postFilter: any = undefined;
   if (!dryRun && searchResult.status === 'success' && searchResult.searchBody) {
+    const items: SearchItem[] = (searchResult.searchBody as any)?.data?.items?.item ?? [];
+    if (items.length > 0) {
+      try {
+        const client = createSkillClient();
+        const visionConfig = await loadVisionConfig(client);
+        const result = await postFilterByImage(imageForSearch, items, visionConfig, { description: best.description });
+        (searchResult.searchBody as any).data.items.item = result.kept;
+        postFilter = {
+          totalChecked: result.totalChecked,
+          keptCount: result.kept.length,
+          rejectedCount: result.rejected.length,
+          failed: result.failed,
+        };
+      } catch (e: any) {
+        postFilter = { error: e.message };
+      }
+    }
+  }
+
+  let rerankResult: any = undefined;
+  // If post-filter produced focused results, sort them directly by sales — they're already the best matches.
+  // Otherwise fall back to the keyword-intersection rerank.
+  if (!dryRun && postFilter && !postFilter.error && postFilter.keptCount > 0) {
+    const items: SearchItem[] = (searchResult.searchBody as any)?.data?.items?.item ?? [];
+    const sorted = [...items].sort((a, b) => (b.sales ?? 0) - (a.sales ?? 0)).slice(0, 5);
+    rerankResult = {
+      source: 'post-filter',
+      results: sorted,
+      count: sorted.length,
+    };
+  } else if (!dryRun && searchResult.status === 'success' && searchResult.searchBody) {
     const tmpFile = path.join(path.dirname(imageForSearch), `search_body_${Date.now()}.json`);
     try {
       fs.writeFileSync(tmpFile, JSON.stringify(searchResult.searchBody));
       rerankResult = await runRerank([
         `--image-results=${tmpFile}`,
         `--description=${best.description}`,
-        '--top=10',
+        '--top=5',
       ], dryRun);
     } catch (e: any) {
       rerankResult = { error: e.message };
```diff
@@ -246,69 +388,11 @@ async function runDetectAndSearch(args: string[], dryRun: boolean): Promise<Outp
     searchHttpStatus: searchResult.searchHttpStatus,
     searchBody: searchResult.searchBody,
     searchError: searchResult.error,
+    postFilter,
     rerank: rerankResult,
   } as any;
 }
 
-async function runDetectVideo(args: string[], dryRun: boolean): Promise<DetectVideoResult> {
-  const videoPath = args[0];
-  if (!videoPath) return { status: 'failed', command: 'detect-video', dryRun, error: 'detect-video requires <video-path>' };
-  if (!fs.existsSync(videoPath)) return { status: 'failed', command: 'detect-video', dryRun, error: `video not found: ${videoPath}` };
-
-  if (dryRun) {
-    return { status: 'success', command: 'detect-video', dryRun, videoPath };
-  }
-
-  const client = createSkillClient();
-  const visionConfig = await loadVisionConfig(client);
-
-  // 1. Upload video to get public URL
-  const videoUrl = await uploadVideo(videoPath);
-
-  // 2. Analyze video via LLM
-  const { description } = await analyzeVideo(videoUrl, visionConfig);
-
-  // 3. Generate Chinese search keyword
-  const keyword = await generateChineseKeyword(description, visionConfig);
-
-  return {
-    status: 'success',
-    command: 'detect-video',
-    dryRun,
-    videoPath,
-    videoUrl,
-    description,
-    keyword,
-  };
-}
-
-async function runDetectVideoAndSearch(args: string[], dryRun: boolean): Promise<DetectVideoResult> {
-  const result = await runDetectVideo(args, dryRun) as DetectVideoResult;
-  if (result.status === 'failed') return result;
-
-  if (dryRun) return { ...result, command: 'detect-video-and-search' };
-
-  const client = createSkillClient();
-
-  // Search 1688 with keyword directly (no rerank — image-based rerank doesn't apply to text search)
-  let searchResults: SearchItem[] = [];
-  if (result.keyword) {
-    try {
-      const items = await keywordSearch(client, result.keyword);
-      // Sort by sales descending
-      searchResults = items.sort((a, b) => (b.sales ?? 0) - (a.sales ?? 0));
-    } catch (e: any) {
-      return { ...result, command: 'detect-video-and-search', status: 'failed', error: `keyword search failed: ${e.message}` };
-    }
-  }
-
-  return {
-    ...result,
-    command: 'detect-video-and-search',
-    searchResults,
-  };
-}
-
 function parseDetectOptions(videoPath: string, args: string[]): DetectOptions {
   const outputDir = getFlag(args, '--output-dir') || path.join(
     path.dirname(videoPath),
```
```diff
@@ -333,7 +417,25 @@ function getFlag(args: string[], flag: string): string | undefined {
 }
 
 function createVisionModel(config: VisionConfig) {
-  const openai = createOpenAI({ apiKey: config.apiKey, baseURL: config.baseURL });
+  const sessionId = config.sessionId || '';
+  const originFetch = globalThis.fetch;
+  // Inject metadata.session_id into request body so LiteLLM → Langfuse creates sessions
+  const wrapped = async (input: RequestInfo | URL, init?: RequestInit) => {
+    if (init?.body && typeof init.body === 'string') {
+      try {
+        const body = JSON.parse(init.body);
+        if (!body.metadata) body.metadata = {};
+        if (!body.metadata.session_id) body.metadata.session_id = sessionId;
+        body.metadata.tags = ['skill:video-product-snapshot'];
+        init = { ...init, body: JSON.stringify(body) };
+      } catch {}
+    }
+    return originFetch(input, init);
+  };
+  const openai = createOpenAI({
+    apiKey: config.apiKey, baseURL: config.baseURL,
+    fetch: wrapped as typeof globalThis.fetch,
+  });
   return openai(config.model);
 }
```
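The wrapped `fetch` above rewrites the outgoing JSON body before delegating to the real `fetch`. The rewrite step can be sketched as a standalone pure function; the function name below is a hypothetical stand-in introduced for illustration, but the mutation logic mirrors the diff:

```typescript
// Sketch of the body-rewrite step from the wrapped fetch above:
// parse the JSON body, add session metadata, and re-serialize.
// Non-JSON bodies are passed through unchanged, as in the diff's catch {}.
function injectSessionMetadata(body: string, sessionId: string): string {
  try {
    const parsed = JSON.parse(body);
    if (!parsed.metadata) parsed.metadata = {};
    if (!parsed.metadata.session_id) parsed.metadata.session_id = sessionId;
    parsed.metadata.tags = ['skill:video-product-snapshot'];
    return JSON.stringify(parsed);
  } catch {
    return body; // not JSON — leave the request untouched
  }
}

const out = JSON.parse(injectSessionMetadata('{"model":"gpt-4o","messages":[]}', 'sess-123'));
console.log(out.metadata.session_id); // sess-123
```

Note that an existing `metadata.session_id` wins over the injected one, so callers can still set their own session explicitly.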
```diff
@@ -388,9 +490,10 @@ function extractKeywordsFromTitles(items: SearchItem[], topN = 5): string {
 
 async function runRerank(args: string[], dryRun: boolean): Promise<OutputResult> {
   // --image-results=<path> --keyword=<text> --top=<n>
-  const imageResultsArg = getFlag(args, '--image-results') || args[0];
-  const keywordArg = getFlag(args, '--keyword') || args[1];
-  const topN = parseInt(getFlag(args, '--top') || '10', 10);
+  const positionals = args.filter((a) => !a.startsWith('--'));
+  const imageResultsArg = getFlag(args, '--image-results') || positionals[0];
+  const keywordArg = getFlag(args, '--keyword') || positionals[1];
+  const topN = parseInt(getFlag(args, '--top') || '5', 10);
 
   const description = getFlag(args, '--description') || '';
```
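The fix above separates positional arguments from `--` flags before indexing, so a flag such as `--description=...` can no longer be mistaken for the image-results path at `args[0]`. A minimal sketch of that parsing, assuming a `getFlag` helper that accepts both `--flag value` and `--flag=value` forms (the real project helper may differ):

```typescript
// Hypothetical stand-in for the project's getFlag helper.
function getFlag(args: string[], flag: string): string | undefined {
  for (let i = 0; i < args.length; i++) {
    if (args[i] === flag) return args[i + 1];                                  // --flag value
    if (args[i].startsWith(flag + '=')) return args[i].slice(flag.length + 1); // --flag=value
  }
  return undefined;
}

function parseRerankArgs(args: string[]) {
  // Flags must be filtered out before indexing positionals; otherwise
  // "--description=..." would be picked up as positionals[0].
  const positionals = args.filter((a) => !a.startsWith('--'));
  return {
    imageResults: getFlag(args, '--image-results') ?? positionals[0],
    keyword: getFlag(args, '--keyword') ?? positionals[1],
    top: parseInt(getFlag(args, '--top') ?? '5', 10),
  };
}

const parsed = parseRerankArgs(['--description=shoe rack', 'results.json', '--top=10']);
console.log(parsed.imageResults, parsed.top); // results.json 10
```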
```diff
@@ -465,7 +568,3 @@ async function runRerank(args: string[], dryRun: boolean): Promise<OutputResult>
     results: sorted,
   } as any;
 }
-
-function parseJsonSafe(text: string): unknown {
-  try { return JSON.parse(text); } catch { return text; }
-}
```
```diff
@@ -0,0 +1,123 @@
+import { generateText } from 'ai';
+import { createOpenAI } from '@ai-sdk/openai';
+import type { SearchItem } from './types.ts';
+import type { VisionConfig } from './index.ts';
+import { imageToBase64 } from './frame-extractor.ts';
+
+export interface PostFilterResult {
+  kept: SearchItem[];
+  rejected: SearchItem[];
+  totalChecked: number;
+  failed: boolean;
+}
+
+const FILTER_PROMPT = (count: number, description?: string) => {
+  const productLine = description
+    ? `查询商品是:${description}`
+    : '第1张图是查询商品。';
+  return `${productLine}
+后面的 ${count} 张图是搜索结果。
+
+任务:判断每张候选图是否与查询商品是**完全相同的具体产品类型**。
+- 必须是同一个具体产品(例如:查询是"鞋架",候选必须也是鞋架;不是其他类型的架子如纸巾架、首饰架、收纳盒)
+- 颜色、材质、款式、尺寸不同但同一具体类型 → 算同类
+- 用途不同就不算同类(例如:查询是鞋架 vs 候选是纸巾架 → 不算;查询是鞋架 vs 候选是床下收纳箱 → 不算,除非明确是鞋类收纳)
+- 关键判断:候选商品的主要用途是否与查询商品一致
+
+按候选图顺序输出每一张的判断,每行一个,格式严格遵守:
+1: YES
+2: NO
+3: YES
+...
+
+只输出 ${count} 行结果,不要解释,不要前后空行。`;
+};
+
+function createModel(config: VisionConfig) {
+  const sessionId = config.sessionId || '';
+  const originFetch = globalThis.fetch;
+  const wrapped = async (input: RequestInfo | URL, init?: RequestInit) => {
+    if (init?.body && typeof init.body === 'string') {
+      try {
+        const body = JSON.parse(init.body);
+        if (!body.metadata) body.metadata = {};
+        if (!body.metadata.session_id) body.metadata.session_id = sessionId;
+        body.metadata.tags = ['skill:video-product-snapshot'];
+        init = { ...init, body: JSON.stringify(body) };
+      } catch {}
+    }
+    return originFetch(input, init);
+  };
+  const provider = createOpenAI({
+    apiKey: config.apiKey, baseURL: config.baseURL,
+    fetch: wrapped as typeof globalThis.fetch,
+  });
+  return provider(config.model);
+}
+
+async function classifyBatch(
+  model: ReturnType<ReturnType<typeof createOpenAI>>,
+  queryImageDataUrl: string,
+  batch: SearchItem[],
+  description?: string,
+): Promise<boolean[]> {
+  const content: any[] = [{ type: 'image', image: queryImageDataUrl }];
+  for (const item of batch) {
+    content.push({ type: 'image', image: item.pic_url });
+  }
+  content.push({ type: 'text', text: FILTER_PROMPT(batch.length, description) });
+
+  const { text } = await generateText({
+    model,
+    messages: [{ role: 'user', content }],
+    maxTokens: 200,
+  });
+
+  const flags = batch.map(() => false);
+  for (const line of text.split('\n')) {
+    const m = line.match(/^\s*(\d+)\s*[::]\s*(YES|NO|是|否)/i);
+    if (!m) continue;
+    const idx = parseInt(m[1], 10) - 1;
+    const yes = /YES|是/i.test(m[2]);
+    if (idx >= 0 && idx < flags.length) flags[idx] = yes;
+  }
+  return flags;
+}
+
+export async function postFilterByImage(
+  queryImagePath: string,
+  items: SearchItem[],
+  visionConfig: VisionConfig,
+  options: { description?: string; batchSize?: number } = {},
+): Promise<PostFilterResult> {
+  if (items.length === 0) {
+    return { kept: [], rejected: [], totalChecked: 0, failed: false };
+  }
+
+  const batchSize = options.batchSize ?? 10;
+  const description = options.description;
+
+  const model = createModel(visionConfig);
+  const queryDataUrl = `data:image/jpeg;base64,${imageToBase64(queryImagePath)}`;
+
+  const kept: SearchItem[] = [];
+  const rejected: SearchItem[] = [];
+  let anyFailed = false;
+
+  for (let i = 0; i < items.length; i += batchSize) {
+    const batch = items.slice(i, i + batchSize);
+    try {
+      const flags = await classifyBatch(model, queryDataUrl, batch, description);
+      batch.forEach((item, idx) => {
+        if (flags[idx]) kept.push(item);
+        else rejected.push(item);
+      });
+    } catch {
+      // On batch failure, keep items (don't lose them) but flag the run as partial
+      anyFailed = true;
+      kept.push(...batch);
+    }
+  }
+
+  return { kept, rejected, totalChecked: items.length, failed: anyFailed };
+}
```
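`classifyBatch` parses the model's reply line by line with a tolerant regex that accepts half- and full-width colons and both English and Chinese verdicts. The parser is easy to exercise in isolation; this sketch reuses the same regex, with missing or unparseable lines defaulting to `false` (rejected), exactly as in the new file:

```typescript
// Parse "1: YES" / "2:否" style verdict lines into per-candidate booleans.
// Lines that don't match are skipped; unmentioned candidates stay false.
function parseVerdicts(text: string, count: number): boolean[] {
  const flags = Array.from({ length: count }, () => false);
  for (const line of text.split('\n')) {
    const m = line.match(/^\s*(\d+)\s*[::]\s*(YES|NO|是|否)/i);
    if (!m) continue;
    const idx = parseInt(m[1], 10) - 1; // the model numbers candidates from 1
    const yes = /YES|是/i.test(m[2]);
    if (idx >= 0 && idx < count) flags[idx] = yes;
  }
  return flags;
}

console.log(parseVerdicts('1: YES\n2: NO\n3:是', 3)); // [ true, false, true ]
```

Defaulting unparsed candidates to rejected is the conservative choice: a garbled model reply removes items rather than letting non-matches through.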
@ -1,4 +1,4 @@
|
||||||
import { generateObject } from 'ai';
|
import { generateObject, generateText } from 'ai';
|
||||||
import { createOpenAI } from '@ai-sdk/openai';
|
import { createOpenAI } from '@ai-sdk/openai';
|
||||||
import { z } from 'zod';
|
import { z } from 'zod';
|
||||||
import type { ExtractedFrame } from './frame-extractor.ts';
|
import type { ExtractedFrame } from './frame-extractor.ts';
|
||||||
|
|
```diff
@@ -28,7 +28,38 @@ Discard (keep=false) if: only hands/texture/contents visible, motion blur, black
 reason options: product_visible | content_only | hands_only | blur | transition | background_only`;
 
-const RANKING_PROMPT = (count: number) => `You are selecting the single best product frame from ${count} video frames for ecommerce search.
+const CONTAINER_CHECK_PROMPT = `Is the main product in this image a CONTAINER, RACK, or HOLDER (something designed to store/hold other items)?
+Examples YES: shoe rack, shelf, storage box, organizer, basket, drawer, wardrobe, trolley, bin, tray, cabinet.
+Examples NO: shoes, clothing, electronics, food, toys, cosmetics, tools.
+Reply with only one word: YES or NO.`;
+
+const RANKING_PROMPT_CONTAINER = (count: number) => `You are selecting ONE frame from ${count} video frames to use as the query image for an ecommerce reverse-image search.
+
+The hero product is a CONTAINER / RACK / HOLDER / ORGANIZER.
+
+CRITICAL CONSTRAINT — read this first:
+Image search engines identify objects by visual appearance. If the container holds items (shoes, clothes, etc.), the search engine will match those ITEMS, not the container — returning completely wrong products.
+
+YOUR ONLY JOB: find the frame where the container structure itself is most visible with the FEWEST or NO items inside.
+
+ABSOLUTE PRIORITY ORDER (do not deviate):
+1. Frame with container completely EMPTY — highest priority regardless of angle or assembly state
+2. Frame with container partially assembled or partially visible but EMPTY — still better than any loaded frame
+3. Frame with fewest items inside (1-2 items, mostly empty)
+4. Frame with moderate load — only if no emptier option exists
+5. Frame fully loaded — last resort only if no other frames exist
+
+A frame showing the rack mid-assembly with zero items is ALWAYS better than a perfectly-lit fully-assembled rack filled with shoes.
+
+Frames are numbered 0 to ${count - 1} in order shown. You MUST pick ONE.
+
+Return:
+- bestFrameIndex: 0-based index of the emptiest container frame
+- description: concise Chinese search query ≤12 words (container type + material + color + key feature)
+- reasoning: describe how many items are visible inside the chosen frame and why it's the emptiest option
+- boundingBox: tight box of the PRODUCT STRUCTURE ONLY as [x1, y1, x2, y2] normalized 0.0–1.0. Exclude any items stored inside.`;
+
+const RANKING_PROMPT_GENERAL = (count: number) => `You are selecting the single best product frame from ${count} video frames for ecommerce search.
 
 Frames are numbered 0 to ${count - 1} in order shown.
```
```diff
@@ -47,7 +78,24 @@ Return:
 - boundingBox: tight box of the PRODUCT ONLY as [x1, y1, x2, y2] normalized 0.0–1.0, top-left origin. Exclude hands, background, and unrelated objects. The product is near the center of the frame.`;
 
 function createVisionModel(config: VisionConfig) {
-  const provider = createOpenAI({ apiKey: config.apiKey, baseURL: config.baseURL });
+  const sessionId = config.sessionId || '';
+  const originFetch = globalThis.fetch;
+  const wrapped = async (input: RequestInfo | URL, init?: RequestInit) => {
+    if (init?.body && typeof init.body === 'string') {
+      try {
+        const body = JSON.parse(init.body);
+        if (!body.metadata) body.metadata = {};
+        if (!body.metadata.session_id) body.metadata.session_id = sessionId;
+        body.metadata.tags = ['skill:video-product-snapshot'];
+        init = { ...init, body: JSON.stringify(body) };
+      } catch {}
+    }
+    return originFetch(input, init);
+  };
+  const provider = createOpenAI({
+    apiKey: config.apiKey, baseURL: config.baseURL,
+    fetch: wrapped as typeof globalThis.fetch,
+  });
   return provider(config.model);
 }
```
```diff
@@ -72,15 +120,52 @@ async function filterFrame(
   return object.keep;
 }
 
+async function isContainerProduct(
+  firstFrame: ExtractedFrame,
+  model: ReturnType<ReturnType<typeof createOpenAI>>,
+): Promise<boolean> {
+  try {
+    const { text } = await generateText({
+      model,
+      messages: [{
+        role: 'user',
+        content: [
+          { type: 'image', image: `data:image/jpeg;base64,${imageToBase64(firstFrame.imagePath)}` },
+          { type: 'text', text: CONTAINER_CHECK_PROMPT },
+        ],
+      }],
+      maxTokens: 5,
+    });
+    return text.trim().toUpperCase().startsWith('Y');
+  } catch {
+    return false;
+  }
+}
+
+function takeEarliestFrames(candidates: ExtractedFrame[], fraction: number = 0.4): ExtractedFrame[] {
+  // Ecommerce videos show the container empty/unboxing early, then full.
+  // Taking the first 40% of frames reliably captures empty states.
+  const sorted = [...candidates].sort((a, b) => a.frameIndex - b.frameIndex);
+  const cutoff = Math.max(1, Math.ceil(sorted.length * fraction));
+  return sorted.slice(0, cutoff);
+}
+
 async function rankCandidates(
   candidates: ExtractedFrame[],
   model: ReturnType<ReturnType<typeof createOpenAI>>,
+  isContainer: boolean,
 ): Promise<{ bestFrame: ExtractedFrame; description: string; reasoning: string; boundingBox: [number, number, number, number] }> {
   const imageContent = candidates.map((f) => ({
     type: 'image' as const,
     image: `data:image/jpeg;base64,${imageToBase64(f.imagePath)}`,
   }));
 
+  const prompt = isContainer
+    ? RANKING_PROMPT_CONTAINER(candidates.length)
+    : RANKING_PROMPT_GENERAL(candidates.length);
+
   const { object } = await generateObject({
     model,
     schema: RankingSchema,
```
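The early-frame heuristic introduced above is a pure function and easy to verify in isolation. A sketch with a minimal frame type (the real `ExtractedFrame` carries more fields such as the image path):

```typescript
// Keep only the earliest fraction of frames, sorted by frameIndex.
// Math.max(1, Math.ceil(...)) guarantees at least one frame survives,
// even for a single-frame input or a tiny fraction.
interface Frame { frameIndex: number }

function takeEarliestFrames<T extends Frame>(candidates: T[], fraction = 0.4): T[] {
  const sorted = [...candidates].sort((a, b) => a.frameIndex - b.frameIndex);
  const cutoff = Math.max(1, Math.ceil(sorted.length * fraction));
  return sorted.slice(0, cutoff);
}

const frames = [4, 1, 3, 0, 2].map((frameIndex) => ({ frameIndex }));
console.log(takeEarliestFrames(frames).map((f) => f.frameIndex)); // [ 0, 1 ]
```

Copying with `[...candidates]` before sorting keeps the caller's array order intact, which matters since the same `candidates` array is later passed to the ranking call.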
```diff
@@ -89,7 +174,7 @@ async function rankCandidates(
       role: 'user',
       content: [
         ...imageContent,
-        { type: 'text', text: RANKING_PROMPT(candidates.length) },
+        { type: 'text', text: prompt },
       ],
     }],
   });
```
```diff
@@ -246,9 +331,18 @@ export async function detectBestFrame(
 
   const model = createVisionModel(visionConfig);
 
-  // 3. Try Vision ranking with error isolation
+  // 3. Check if product is a container/rack type (use first candidate frame)
+  const container = await isContainerProduct(candidates[0], model);
+
+  // 4. For containers: restrict ranking to earliest frames (empty/unboxing phase)
+  if (container) {
+    const early = takeEarliestFrames(candidates);
+    if (early.length > 0) candidates = early;
+  }
+
+  // 5. Try Vision ranking with error isolation
   try {
-    const { bestFrame, description, reasoning, boundingBox } = await rankCandidates(candidates, model);
+    const { bestFrame, description, reasoning, boundingBox } = await rankCandidates(candidates, model, container);
 
     if (isValidBoundingBox(boundingBox)) {
       const croppedPath = bestFrame.imagePath.replace(/\.jpg$/, '_cropped.jpg');
```
```diff
@@ -313,9 +407,10 @@ export async function detectProductFrames(
   if (candidates.length === 0) return [];
 
   // Pass 2: single comparative call — model sees all candidates at once
+  const container = await isContainerProduct(candidates[0], model);
   let bestSnapshot: ProductFrame | undefined;
   try {
-    const { bestFrame, description, reasoning, boundingBox } = await rankCandidates(candidates, model);
+    const { bestFrame, description, reasoning, boundingBox } = await rankCandidates(candidates, model, container);
 
     if (isValidBoundingBox(boundingBox)) {
       const croppedPath = bestFrame.imagePath.replace(/\.jpg$/, '_cropped.jpg');
```
src/types.ts (50 changed lines)
```diff
@@ -1,4 +1,13 @@
-export type Command = 'detect' | 'search' | 'detect-and-search' | 'detect-best' | 'detect-best-and-search' | 'detect-video' | 'detect-video-and-search' | 'rerank' | 'session';
+export type Command =
+  | 'detect'
+  | 'search'
+  | 'detect-and-search'
+  | 'detect-best'
+  | 'detect-best-and-search'
+  | 'detect-video'
+  | 'detect-video-and-search'
+  | 'rerank'
+  | 'session';
 
 export interface SearchItem {
   num_iid: number;
```
```diff
@@ -11,6 +20,30 @@ export interface SearchItem {
   detail_url: string;
 }
 
+export interface DetectVideoResult {
+  status: 'success' | 'failed';
+  command: 'detect-video';
+  dryRun: boolean;
+  videoPath?: string;
+  videoUrl?: string | null;
+  description?: string;
+  keyword?: string;
+  snapshotImagePath?: string;
+  error?: string;
+}
+
+export interface DetectVideoAndSearchResult {
+  status: 'success' | 'failed';
+  command: 'detect-video-and-search';
+  dryRun: boolean;
+  videoPath?: string;
+  videoUrl?: string | null;
+  description?: string;
+  keyword?: string;
+  searchResults?: SearchItem[];
+  error?: string;
+}
+
 export interface DetectOptions {
   videoPath: string;
   intervalSeconds: number;
```
```diff
@@ -51,17 +84,4 @@ export interface SearchResult {
   error?: string;
 }
 
-export interface DetectVideoResult {
-  status: 'success' | 'failed';
-  command: Command;
-  dryRun: boolean;
-  videoPath?: string;
-  videoUrl?: string;
-  description?: string;
-  keyword?: string;
-  searchResults?: SearchItem[];
-  rerank?: unknown;
-  error?: string;
-}
-
-export type OutputResult = DetectResult | SearchResult | DetectVideoResult;
+export type OutputResult = DetectResult | SearchResult | DetectVideoResult | DetectVideoAndSearchResult;
```
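The reshaped result types pin `command` to a string literal instead of the loose `Command` union, which lets TypeScript narrow `OutputResult` members by discriminant. A minimal sketch of that narrowing, with the interfaces abbreviated from the diff (only the discriminant and one payload field each):

```typescript
// Abbreviated versions of the diff's result types: the literal `command`
// field is the discriminant, so TypeScript narrows `r` inside each case.
interface DetectVideoResult {
  command: 'detect-video';
  keyword?: string;
}
interface DetectVideoAndSearchResult {
  command: 'detect-video-and-search';
  searchResults?: { title: string }[];
}
type OutputResult = DetectVideoResult | DetectVideoAndSearchResult;

function summarize(r: OutputResult): string {
  switch (r.command) {
    case 'detect-video':
      return `keyword=${r.keyword ?? 'none'}`;          // r: DetectVideoResult
    case 'detect-video-and-search':
      return `results=${r.searchResults?.length ?? 0}`; // r: DetectVideoAndSearchResult
  }
}

console.log(summarize({ command: 'detect-video', keyword: '鞋架' })); // keyword=鞋架
```

With the old `command: Command` typing, every member carried every field and no narrowing was possible, which is why the old catch-all `DetectVideoResult` below is deleted.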
```diff
@@ -1,97 +0,0 @@
-import * as fs from 'fs';
-import type { VisionConfig } from './index.ts';
-import { createSkillClient } from './auth-cli.ts';
-
-const UPLOAD_ENDPOINT =
-  process.env.ONEBOUND_UPLOAD_ENDPOINT ||
-  'http://localhost:3202/api/v1/tasks/upload-image';
-
-/**
- * Upload a video file to get a public URL.
- *
- * Uses direct HTTP fetch (not auth-rt CLI) to avoid E2BIG errors
- * when the base64-encoded video exceeds the command-line argument limit.
- */
-export async function uploadVideo(videoPath: string): Promise<string> {
-  const client = createSkillClient();
-  const { accessToken } = await client.session();
-
-  const videoBuffer = fs.readFileSync(videoPath);
-  const ext = videoPath.match(/\.(\w+)$/)?.[1] || 'mp4';
-  const filename = `video-${Date.now()}.${ext}`;
-  const contentType = ext === 'mov' ? 'video/quicktime' : `video/${ext}`;
-
-  const response = await fetch(UPLOAD_ENDPOINT, {
-    method: 'POST',
-    headers: {
-      Authorization: `Bearer ${accessToken}`,
-      'Content-Type': 'application/json',
-    },
-    body: JSON.stringify({
-      data: videoBuffer.toString('base64'),
-      filename,
-      contentType,
-    }),
-  });
-
-  if (!response.ok) {
-    const errBody = await response.text().catch(() => 'unknown');
-    throw new Error(`Video upload failed (${response.status}): ${errBody.slice(0, 300)}`);
-  }
-
-  const json = (await response.json()) as { url?: string };
-  if (!json.url) throw new Error('Upload response missing url');
-  return json.url;
-}
-
-export interface VideoAnalysis {
-  description: string;
-  rawResponse?: string;
-}
-
-export async function analyzeVideo(
-  videoUrl: string,
-  config: VisionConfig,
-): Promise<VideoAnalysis> {
-  const response = await fetch(`${config.baseURL}/v1/chat/completions`, {
-    method: 'POST',
-    headers: {
-      Authorization: `Bearer ${config.apiKey}`,
-      'Content-Type': 'application/json',
-    },
-    body: JSON.stringify({
-      model: config.model,
-      messages: [
-        {
-          role: 'user',
-          content: [
-            {
-              type: 'video_url',
-              video_url: { url: videoUrl },
-            },
-            {
-              type: 'text',
-              text: '找出视频中的商品主体,用中文简要描述商品名称、材质、颜色、功能。',
-            },
-          ],
-        },
-      ],
-      max_tokens: 500,
-    }),
-  });
-
-  if (!response.ok) {
-    const errBody = await response.text().catch(() => 'unknown');
-    throw new Error(
-      `Video analysis API error (${response.status}): ${errBody.slice(0, 500)}`,
-    );
-  }
-
-  const json = (await response.json()) as any;
-  const content = json?.choices?.[0]?.message?.content;
-  if (!content) {
-    throw new Error('Video analysis returned empty response');
-  }
-
-  return { description: content.trim(), rawResponse: JSON.stringify(json) };
-}
```