diff --git a/API-GUIDE.md b/API-GUIDE.md
new file mode 100644
index 0000000..e1a9c9b
--- /dev/null
+++ b/API-GUIDE.md
@@ -0,0 +1,500 @@
+# DaSiWa API Guide
+
+Полное руководство по использованию DaSiWa I2V/FLF2V API для генерации видео через ComfyUI.
+
+---
+
+## 📋 Содержание
+
+1. [Обзор](#обзор)
+2. [Аутентификация](#аутентификация)
+3. [Endpoints](#endpoints)
+4. [Параметры генерации](#параметры-генерации)
+5. [Примеры использования](#примеры-использования)
+6. [Коды ошибок](#коды-ошибок)
+7. [Best Practices](#best-practices)
+
+---
+
+## Обзор
+
+DaSiWa API — асинхронный REST API для генерации видео из изображений с использованием моделей DaSiWa WAN 2.2 Lightspeed через ComfyUI.
+
+**Архитектура:** Submit → Poll → Retrieve (как RunPod)
+
+**Base URL:** `http://<SERVER_IP>:8080`
+
+**Аутентификация:** HMAC-SHA256 с timestamp и nonce
+
+---
+
+## Аутентификация
+
+Все endpoints (кроме `/health`) требуют HMAC-подписи.
+
+### Заголовки запроса
+
+```
+X-Client-Id: <ваш_client_id>
+X-Timestamp: <unix_timestamp>
+X-Nonce: <случайная_строка_32_символа>
+X-Signature: <hmac_sha256_hex>
+```
+
+### Алгоритм подписи
+
+```python
+import hmac
+import hashlib
+import json
+import time
+import secrets
+
+timestamp = str(int(time.time()))
+nonce = secrets.token_hex(16)
+body = json.dumps(payload).encode('utf-8')
+
+message = f"{timestamp}.{nonce}.".encode() + body
+signature = hmac.new(
+    secret_key.encode(),
+    message,
+    hashlib.sha256
+).hexdigest()
+```
+
+### Защита от replay-атак
+
+- **Timestamp:** запросы старше 5 минут отклоняются
+- **Nonce:** каждый nonce можно использовать только один раз
+- **Signature:** уникальна для каждого запроса
+
+---
+
+## Endpoints
+
+### `GET /health`
+
+Health check сервера. 
**Не требует аутентификации.**

**Response:**
```json
{
  "status": "ok",
  "comfyui": "ok",
  "queue": 0,
  "timestamp": 1234567890
}
```

**Поля:**
- `status` — статус API сервера (`ok` / `error`)
- `comfyui` — статус ComfyUI (`ok` / `unavailable`)
- `queue` — количество задач в очереди
- `timestamp` — текущее время сервера (unix)

---

### `POST /run`

Поставить задачу на генерацию видео в очередь.

**Request Body:**
```json
{
  "image_base64": "base64_encoded_image_data",
  "prompt": "woman dancing gracefully",
  "negative_prompt": "blurry, low quality",
  "last_image_base64": "base64_encoded_last_frame",
  "width": 528,
  "height": 768,
  "length": 81,
  "steps": 4,
  "cfg": 1.0,
  "seed": -1,
  "fps": 16,
  "sampler_name": "euler",
  "scheduler": "linear_quadratic"
}
```

**Обязательные поля:**
- `image_base64` — первый кадр (base64)
- `prompt` — текстовое описание

**Опциональные поля:**
- `last_image_base64` — последний кадр для FLF2V режима
- `negative_prompt` — негативный промпт (default: встроенный)
- `width` — ширина (default: 528, кратно 16)
- `height` — высота (default: 768, кратно 16)
- `length` — количество кадров (default: 81)
- `steps` — шаги сэмплинга (default: 4)
- `cfg` — CFG scale (default: 1.0)
- `seed` — сид (-1 = random, default: -1)
- `fps` — кадров в секунду (default: 16)
- `sampler_name` — сэмплер (default: "euler")
- `scheduler` — планировщик (default: "linear_quadratic")

**Response:**
```json
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "IN_QUEUE"
}
```

**Коды ответа:**
- `200` — задача принята
- `400` — ошибка валидации (нет изображения)
- `401` — ошибка аутентификации

---

### `GET /status/<job_id>`

Получить статус задачи. 
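Статус удобно опрашивать вспомогательной функцией. Ниже набросок на чистом stdlib по схеме подписи из раздела «Аутентификация»; имена `make_auth_headers` и `poll_status` условные (в комплекте клиента эту роль выполняет `hmac_auth.sign_request`). GET-запросы подписываются с пустым телом.

```python
import hashlib
import hmac
import json
import secrets
import time
import urllib.request


def make_auth_headers(secret_key: str, client_id: str, body: bytes = b"") -> dict:
    """Собирает HMAC-заголовки: подписывается строка "timestamp.nonce." + body."""
    timestamp = str(int(time.time()))
    nonce = secrets.token_hex(16)
    message = f"{timestamp}.{nonce}.".encode() + body
    signature = hmac.new(secret_key.encode(), message, hashlib.sha256).hexdigest()
    return {
        "X-Client-Id": client_id,
        "X-Timestamp": timestamp,
        "X-Nonce": nonce,
        "X-Signature": signature,
    }


def poll_status(server: str, job_id: str, client_id: str, secret_key: str,
                interval: float = 5.0, max_wait: float = 1800.0) -> dict:
    """Опрашивает /status/ до терминального статуса (COMPLETED / FAILED)."""
    deadline = time.time() + max_wait
    while time.time() < deadline:
        req = urllib.request.Request(
            f"{server}/status/{job_id}",
            headers=make_auth_headers(secret_key, client_id, b""),  # GET: пустое тело
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            data = json.loads(resp.read())
        if data["status"] in ("COMPLETED", "FAILED"):
            return data
        time.sleep(interval)
    raise TimeoutError(f"Job {job_id}: нет результата за {max_wait}s")
```

Терминальные статусы только `COMPLETED` и `FAILED`; интервал опроса 5–10 секунд (см. Best Practices).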
**Response (IN_QUEUE):**
```json
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "IN_QUEUE"
}
```

**Response (IN_PROGRESS):**
```json
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "IN_PROGRESS"
}
```

**Response (COMPLETED):**
```json
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "COMPLETED",
  "output": {
    "video": "base64_encoded_video_data",
    "seed": 42,
    "mode": "I2V",
    "elapsed": 45.2
  }
}
```

**Response (FAILED):**
```json
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "FAILED",
  "error": "Video generation failed — no output from ComfyUI"
}
```

**Коды ответа:**
- `200` — статус получен
- `404` — задача не найдена
- `401` — ошибка аутентификации

---

### `POST /purge/<job_id>`

Удалить завершённую задачу из памяти сервера (освободить RAM от base64 видео).

**Response:**
```json
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "purged": true
}
```

**Коды ответа:**
- `200` — задача удалена
- `400` — нельзя удалить активную задачу (IN_QUEUE / IN_PROGRESS)
- `404` — задача не найдена
- `401` — ошибка аутентификации

---

## Параметры генерации

### Режимы работы

**I2V (Image to Video):**
- Генерация видео из одного изображения
- Передаётся только `image_base64`

**FLF2V (First-Last Frame to Video):**
- Генерация видео между двумя кадрами
- Передаются `image_base64` + `last_image_base64`

### Рекомендуемые значения

| Параметр | I2V | FLF2V | Описание |
|----------|-----|-------|----------|
| `width` | 528 | 528 | Ширина (кратно 16) |
| `height` | 768 | 768 | Высота (кратно 16) |
| `length` | 81 | 81 | Кол-во кадров (~5 сек при 16fps) |
| `steps` | 4 | 4 | DaSiWa оптимизирован под 4 шага |
| `cfg` | 1.0 | 1.0 | CFG scale (DaSiWa работает с 1.0) |
| `fps` | 16 | 16 | Кадров в секунду |
| `sampler_name` | euler | euler | Сэмплер |
| `scheduler` | linear_quadratic | linear_quadratic | Планировщик |

### Ограничения

- **Размеры:** должны быть кратны 16
- **Length:** рекомендуется кратно 8 + 1 (например: 81, 89, 97)
- **Steps:** DaSiWa Lightspeed оптимизирован под 4 шага (можно больше, но медленнее)
- **CFG:** значения > 2.0 могут давать артефакты

---

## Примеры использования

### Python (с библиотекой requests)

```python
import requests
import base64
import json
import time
from hmac_auth import sign_request

# Загрузка ключей
with open('keys.json') as f:
    keys = json.load(f)

# Подготовка изображения
with open('photo.png', 'rb') as f:
    image_b64 = base64.b64encode(f.read()).decode()

# Payload
payload = {
    "image_base64": image_b64,
    "prompt": "woman dancing gracefully",
    "width": 528,
    "height": 768,
    "length": 81,
    "steps": 4,
    "cfg": 1.0,
    "seed": -1,
    "fps": 16
}

# 1. Submit job
body = json.dumps(payload).encode('utf-8')
auth_headers = sign_request(body, keys['secret_key'], keys['client_id'])
headers = {'Content-Type': 'application/json', **auth_headers}

response = requests.post(
    'http://server:8080/run',
    data=body,
    headers=headers
)
job_id = response.json()['id']
print(f"Job ID: {job_id}")

# 2. Poll status
while True:
    auth_headers = sign_request(b"", keys['secret_key'], keys['client_id'])
    response = requests.get(
        f'http://server:8080/status/{job_id}',
        headers=auth_headers
    )
    data = response.json()

    if data['status'] == 'COMPLETED':
        video_b64 = data['output']['video']
        video_bytes = base64.b64decode(video_b64)
        with open('output.mp4', 'wb') as f:
            f.write(video_bytes)
        print(f"Video saved! Seed: {data['output']['seed']}")
        break
    elif data['status'] == 'FAILED':
        print(f"Error: {data['error']}")
        break
    else:
        print(f"Status: {data['status']}")
        time.sleep(5)

# 3. 
Purge job +auth_headers = sign_request(b"{}", keys['secret_key'], keys['client_id']) +requests.post( + f'http://server:8080/purge/{job_id}', + json={}, + headers={'Content-Type': 'application/json', **auth_headers} +) +``` + +### cURL + +```bash +# 1. Submit job +curl -X POST http://server:8080/run \ + -H "Content-Type: application/json" \ + -H "X-Client-Id: your_client_id" \ + -H "X-Timestamp: $(date +%s)" \ + -H "X-Nonce: $(openssl rand -hex 16)" \ + -H "X-Signature: " \ + -d '{ + "image_base64": "...", + "prompt": "woman dancing" + }' + +# Response: {"id": "abc-123", "status": "IN_QUEUE"} + +# 2. Check status +curl http://server:8080/status/abc-123 \ + -H "X-Client-Id: your_client_id" \ + -H "X-Timestamp: $(date +%s)" \ + -H "X-Nonce: $(openssl rand -hex 16)" \ + -H "X-Signature: " + +# 3. Purge +curl -X POST http://server:8080/purge/abc-123 \ + -H "Content-Type: application/json" \ + -H "X-Client-Id: your_client_id" \ + -H "X-Timestamp: $(date +%s)" \ + -H "X-Nonce: $(openssl rand -hex 16)" \ + -H "X-Signature: " \ + -d '{}' +``` + +--- + +## Коды ошибок + +| Код | Описание | Решение | +|-----|----------|---------| +| `400` | Нет входного изображения | Передайте `image_base64` | +| `401` | Invalid client ID | Проверьте `client_id` в `keys.json` | +| `401` | Invalid timestamp | Синхронизируйте время на клиенте и сервере | +| `401` | Nonce already used | Replay-атака или дублирующий запрос | +| `401` | Invalid signature | Проверьте `secret_key` и алгоритм подписи | +| `404` | Job not found | Job ID не существует или уже удалён | +| `500` | Internal server error | Проверьте логи сервера (`journalctl -u dasiwa-api`) | + +--- + +## Best Practices + +### 1. Polling интервал + +- **Рекомендуется:** 5-10 секунд +- **Не рекомендуется:** < 2 секунд (нагрузка на сервер) +- Генерация обычно занимает 30-60 секунд + +### 2. Timeout + +- Установите timeout на polling: 30 минут (1800 секунд) +- Если задача не завершилась за это время — проверьте логи сервера + +### 3. 
Purge после использования + +- Всегда вызывайте `/purge/` после получения видео +- Base64 видео занимает ~10-50 MB RAM на сервере +- Без purge память будет расти + +### 4. Обработка ошибок + +```python +try: + result = wait_for_completion(server, job_id, ...) +except RuntimeError as e: + if "Timeout" in str(e): + # Задача зависла — проверьте сервер + pass + elif "Job failed" in str(e): + # Ошибка генерации — проверьте параметры + pass +``` + +### 5. Retry логика + +- При `401` ошибках — не retry (проблема с ключами) +- При `500` ошибках — retry с exponential backoff +- При `404` на `/status` — задача потеряна, не retry + +### 6. Размер изображений + +- Оптимально: 528x768 или 768x528 +- Большие размеры → больше VRAM → медленнее +- Маленькие размеры → хуже качество + +### 7. Seed для воспроизводимости + +- Если нужен тот же результат — используйте тот же seed +- Seed из ответа `output.seed` — сохраните для повтора + +### 8. Мониторинг очереди + +```python +response = requests.get('http://server:8080/health') +queue_size = response.json()['queue'] +if queue_size > 5: + print("Очередь большая, ожидайте дольше") +``` + +--- + +## Лимиты и производительность + +### Текущие лимиты + +- **Одновременные задачи:** 1 (1 GPU = 1 задача) +- **Размер очереди:** не ограничен (но рекомендуется < 10) +- **Размер изображения:** max 2048x2048 (теоретически) +- **Длина видео:** max ~300 кадров (ограничено VRAM) + +### Производительность + +| Параметры | Время генерации | VRAM | +|-----------|-----------------|------| +| 528x768, 81 frames, 4 steps | ~30-45s | ~18 GB | +| 768x528, 81 frames, 4 steps | ~30-45s | ~18 GB | +| 528x768, 161 frames, 4 steps | ~60-90s | ~24 GB | + +*Время указано для RTX 4090 / A100* + +--- + +## Troubleshooting + +### Задача зависла в IN_PROGRESS + +1. Проверьте логи сервера: `journalctl -u dasiwa-api -f` +2. Проверьте ComfyUI: `curl http://localhost:8188` +3. 
Перезапустите сервис: `systemctl restart dasiwa-api` + +### Ошибка "Video generation failed" + +- ComfyUI не запущен или недоступен +- Недостаточно VRAM +- Workflow файл повреждён + +### Медленная генерация + +- Проверьте загрузку GPU: `nvidia-smi` +- Убедитесь что модели загружены в VRAM (первый запрос медленнее) +- Уменьшите `length` или размеры + +--- + +## Changelog + +### v2.0 (2026-03-07) +- ✨ Асинхронный API (submit + poll) +- ✨ Endpoints: `/run`, `/status`, `/purge` +- ✨ Background worker thread +- ✨ Queue management +- 🔧 Обновлён на DaSiWa WAN 2.2 Lightspeed +- 🔧 Упрощён workflow (14 нод вместо 50+) + +### v1.0 (2026-03-06) +- 🎉 Первый релиз +- ✅ Синхронный `/generate` endpoint +- ✅ HMAC аутентификация +- ✅ I2V и FLF2V режимы diff --git a/README.md b/README.md index 53a872c..e7ce5f0 100644 --- a/README.md +++ b/README.md @@ -18,20 +18,47 @@ custom_comfyui/ ├── dasiwa-api.service # Systemd сервис (автозапуск) ├── requirements.txt # Python зависимости ├── keys.json # 🔒 Ключи (НЕ коммитить!) -└── workflow_api.json # 🎨 ComfyUI workflow (сделай сам) +└── workflow_api.json # 🎨 ComfyUI workflow (DaSiWa WAN 2.2 Lightspeed) ``` --- ## 🚀 Быстрый старт -### 1. Подготовка workflow +### 1. Загрузка моделей на сервер -В ComfyUI на сервере: -1. Загрузи `DaSiWa WAN 2.2 i2v FastFidelity C-AiO-59.json` через UI -2. Настрой, проверь что работает -3. **Экспортируй API версию:** Menu → `Save (API Format)` → назови `workflow_api.json` -4. 
Положи файл в эту папку (`custom_comfyui/`) +```powershell +$SERVER = "user@" + +# HIGH checkpoint (~13 GB) +scp "DasiwaWAN22I2V14BLightspeed_synthseductionHighV9.safetensors" ${SERVER}:/ComfyUI/models/checkpoints/ + +# LOW checkpoint (~13 GB) +scp "DasiwaWAN22I2V14BLightspeed_synthseductionLowV9.safetensors" ${SERVER}:/ComfyUI/models/checkpoints/ + +# VAE (~335 MB) +scp "wan_2.1_vae.safetensors" ${SERVER}:/ComfyUI/models/vae/ + +# Text Encoder (~4.7 GB) +scp "umt5_xxl_fp8_e4m3fn_scaled.safetensors" ${SERVER}:/ComfyUI/models/text_encoders/ +``` + +**Скачать модели:** +- **Checkpoints:** [DaSiWa WAN 2.2 i2v 14B (S) Lightspeed](https://civitai.com/models/1981116) +- **VAE:** [wan_2.1_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/vae/wan_2.1_vae.safetensors) +- **Text Encoder:** [umt5_xxl_fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors) + +**Структура моделей на сервере:** +``` +📂 ComfyUI/models/ +├── checkpoints/ +│ ├── DasiwaWAN22I2V14BLightspeed_synthseductionHighV9.safetensors +│ └── DasiwaWAN22I2V14BLightspeed_synthseductionLowV9.safetensors +├── vae/ +│ └── wan_2.1_vae.safetensors +└── text_encoders/ + └── umt5_xxl_fp8_e4m3fn_scaled.safetensors +``` ### 2. Генерация ключей (на любом ПК с Python) @@ -52,7 +79,7 @@ $SERVER = "root@" scp -r custom_comfyui/ ${SERVER}:/root/ # ИЛИ по файлам -scp server.py hmac_auth.py generate_keys.py setup.sh nginx.conf requirements.txt keys.json workflow_api.json ${SERVER}:/root/custom_comfyui/ +scp server.py hmac_auth.py generate_keys.py setup.sh requirements.txt keys.json workflow_api.json ${SERVER}:/root/custom_comfyui/ ``` ### 4. 
Установка на сервере (одна команда) @@ -65,10 +92,9 @@ sudo ./setup.sh ``` Скрипт автоматически: -- ✅ Установит Python, pip, nginx, wget +- ✅ Установит Python, pip, wget - ✅ Установит Python зависимости -- ✅ Настроит Nginx (порт 5000 → API) -- ✅ Настроит Firewall (открыт 22 + 5000, закрыт 8188) +- ✅ Настроит Firewall (открыт 22 + 8080, закрыт 8188) - ✅ Создаст systemd сервис с автозапуском - ✅ Сгенерирует ключи (если нет) @@ -87,7 +113,7 @@ scp root@:/root/custom_comfyui/keys.json . curl http://localhost:8080/health # Ответ: -# {"comfyui": "ok", "status": "ok", "timestamp": 1234567890} +# {"comfyui": "ok", "status": "ok", "queue": 0, "timestamp": 1234567890} ``` ### 7. Генерация видео (с клиента) @@ -111,6 +137,27 @@ python client.py \ --output transition.mp4 ``` +Клиент работает асинхронно (как RunPod): отправляет задачу → поллит статус → забирает видео. + +--- + +## 🔄 API Endpoints (асинхронный, как RunPod) + +| Endpoint | Method | Описание | +|----------|--------|----------| +| `/run` | POST | Поставить задачу в очередь → `{"id": "...", "status": "IN_QUEUE"}` | +| `/status/` | GET | Получить статус: `IN_QUEUE` / `IN_PROGRESS` / `COMPLETED` / `FAILED` | +| `/purge/` | POST | Удалить задачу из памяти (освободить RAM) | +| `/health` | GET | Health check (без авторизации) | + +**Пример потока:** +``` +1. POST /run {image_base64: "...", prompt: "..."} → {id: "abc-123", status: "IN_QUEUE"} +2. GET /status/abc-123 → {id: "abc-123", status: "IN_PROGRESS"} +3. GET /status/abc-123 → {id: "abc-123", status: "COMPLETED", output: {video: "base64...", seed: 42, ...}} +4. 
POST /purge/abc-123 → {id: "abc-123", purged: true} +``` + --- ## ⚙️ Параметры генерации @@ -200,9 +247,3 @@ systemctl restart dasiwa-api **Ошибка авторизации (401):** - Проверь что `keys.json` одинаковый на клиенте и сервере - Проверь время на обоих машинах (`date` на сервере, часы на ПК) - -**Nginx ошибка:** -```bash -nginx -t -systemctl restart nginx -``` diff --git a/client.py b/client.py index a378611..ccda979 100644 --- a/client.py +++ b/client.py @@ -1,11 +1,11 @@ #!/usr/bin/env python3 """ -Клиент для DaSiWa API Server. -Запускается на ТВОЁМ ПК. Отправляет подписанные запросы на сервер. +Клиент для DaSiWa API Server (асинхронный, как RunPod). +Запускается на ТВОЁМ ПК. Отправляет задачу, поллит статус, забирает результат. Использование: - python client.py --server http://:5000 --image photo.png --prompt "woman dancing" - python client.py --server http://:5000 --image start.png --last-image end.png --prompt "smooth transition" + python client.py --server http://:8080 --image photo.png --prompt "woman dancing" + python client.py --server http://:8080 --image start.png --last-image end.png --prompt "smooth transition" """ import argparse @@ -40,29 +40,58 @@ def image_to_base64(path: str) -> str: return base64.b64encode(f.read()).decode() -def send_request(server_url: str, payload: dict, client_id: str, secret_key: str) -> dict: - """Отправляет подписанный запрос на сервер.""" +def signed_post(server_url: str, path: str, payload: dict, client_id: str, secret_key: str): + """Отправляет подписанный POST запрос.""" body = json.dumps(payload).encode("utf-8") auth_headers = sign_request(body, secret_key, client_id) + headers = {"Content-Type": "application/json", **auth_headers} + response = requests.post(f"{server_url}{path}", data=body, headers=headers, timeout=30) + return response.status_code, response.json() - headers = { - "Content-Type": "application/json", - **auth_headers - } - - response = requests.post( - f"{server_url}/generate", - data=body, - 
headers=headers, - timeout=600 - ) +def signed_get(server_url: str, path: str, client_id: str, secret_key: str): + """Отправляет подписанный GET запрос.""" + body = b"" + auth_headers = sign_request(body, secret_key, client_id) + response = requests.get(f"{server_url}{path}", headers=auth_headers, timeout=30) return response.status_code, response.json() +def submit_job(server_url: str, payload: dict, client_id: str, secret_key: str): + """Отправляет задачу на генерацию. Возвращает job_id.""" + code, data = signed_post(server_url, "/run", payload, client_id, secret_key) + if code != 200: + raise RuntimeError(f"Submit failed ({code}): {data.get('error', data)}") + return data["id"] + + +def wait_for_completion(server_url: str, job_id: str, client_id: str, secret_key: str, + poll_interval: int = 5, max_wait: int = 1800): + """Поллит статус задачи до завершения.""" + start = time.time() + while time.time() - start < max_wait: + code, data = signed_get(server_url, f"/status/{job_id}", client_id, secret_key) + if code != 200: + raise RuntimeError(f"Status check failed ({code}): {data}") + + status = data.get("status") + elapsed = int(time.time() - start) + + if status == "COMPLETED": + print(f"\r✅ COMPLETED ({elapsed}s)") + return data + elif status == "FAILED": + raise RuntimeError(f"Job failed: {data.get('error', 'Unknown error')}") + else: + print(f"\r⏳ {status}... ({elapsed}s)", end="", flush=True) + time.sleep(poll_interval) + + raise RuntimeError(f"Timeout waiting for job ({max_wait}s)") + + def main(): - parser = argparse.ArgumentParser(description="DaSiWa API Client") - parser.add_argument("--server", required=True, help="Server URL, e.g. http://1.2.3.4:5000") + parser = argparse.ArgumentParser(description="DaSiWa API Client (async)") + parser.add_argument("--server", required=True, help="Server URL, e.g. 
http://1.2.3.4:8080") parser.add_argument("--image", required=True, help="Path to first frame image") parser.add_argument("--last-image", default=None, help="Path to last frame image (FLF2V mode)") parser.add_argument("--prompt", required=True, help="Text prompt") @@ -74,10 +103,12 @@ def main(): parser.add_argument("--cfg", type=float, default=1.0) parser.add_argument("--seed", type=int, default=-1) parser.add_argument("--fps", type=int, default=16) + parser.add_argument("--poll-interval", type=int, default=5, help="Status poll interval (seconds)") parser.add_argument("--output", "-o", default="output.mp4", help="Output video path") args = parser.parse_args() keys = load_keys() + cid, secret = keys["client_id"], keys["secret_key"] # Формируем payload payload = { @@ -102,31 +133,45 @@ def main(): print(f"🎬 Режим: I2V (image to video)") print(f"📐 {args.width}x{args.height}, {args.length} frames, {args.steps} steps") - print(f"📤 Отправляю запрос на {args.server}...") - start = time.time() - status_code, result = send_request( - args.server, payload, keys["client_id"], keys["secret_key"] - ) - elapsed = time.time() - start - - if status_code != 200: - print(f"❌ Ошибка {status_code}: {result.get('error', 'Unknown')}") - if "detail" in result: - print(f" Детали: {result['detail']}") + # 1. Submit job + print(f"📤 Отправляю задачу на {args.server}...") + try: + job_id = submit_job(args.server, payload, cid, secret) + except RuntimeError as e: + print(f"❌ {e}") + sys.exit(1) + print(f"📝 Job ID: {job_id}") + + # 2. 
Poll for completion + print(f"⏳ Жду результат (поллинг каждые {args.poll_interval}s)...") + try: + result = wait_for_completion(args.server, job_id, cid, secret, + poll_interval=args.poll_interval) + except RuntimeError as e: + print(f"\n❌ {e}") sys.exit(1) - if "video" in result: - video_bytes = base64.b64decode(result["video"]) - with open(args.output, "wb") as f: - f.write(video_bytes) - print(f"✅ Видео сохранено: {args.output} ({len(video_bytes) / 1024 / 1024:.1f} MB)") - print(f"⏱ Время: {elapsed:.1f}s (сервер: {result.get('elapsed', '?')}s)") - print(f"🌱 Seed: {result.get('seed', '?')}") - else: - print(f"❌ Ошибка: {result.get('error', 'No video in response')}") + # 3. Save video + output = result.get("output", {}) + video_b64 = output.get("video") + if not video_b64: + print(f"❌ Нет видео в ответе") sys.exit(1) + video_bytes = base64.b64decode(video_b64) + with open(args.output, "wb") as f: + f.write(video_bytes) + + print(f"✅ Видео сохранено: {args.output} ({len(video_bytes) / 1024 / 1024:.1f} MB)") + print(f"⏱ Сервер: {output.get('elapsed', '?')}s | Seed: {output.get('seed', '?')} | Mode: {output.get('mode', '?')}") + + # 4. Purge job from server memory + try: + signed_post(args.server, f"/purge/{job_id}", {}, cid, secret) + except Exception: + pass # не критично + if __name__ == "__main__": main() diff --git a/nginx.conf b/nginx.conf deleted file mode 100644 index 5475a42..0000000 --- a/nginx.conf +++ /dev/null @@ -1,2 +0,0 @@ -# ЭТОТ ФАЙЛ БОЛЬШЕ НЕ ИСПОЛЬЗУЕТСЯ — удали его -# API работает напрямую через Python на порту 8080 diff --git a/server.py b/server.py index b2e4d2b..0303a5d 100644 --- a/server.py +++ b/server.py @@ -3,8 +3,10 @@ DaSiWa I2V/FLF2V API Server для ComfyUI. Работает рядом с ComfyUI на той же машине. -Принимает HTTP запросы с HMAC авторизацией, -отправляет workflow в ComfyUI, возвращает видео. 
+Асинхронный API (как RunPod): + POST /run → {"id": "job_id", "status": "IN_QUEUE"} + GET /status/ID → {"id": ..., "status": "IN_QUEUE|IN_PROGRESS|COMPLETED|FAILED", "output": ...} + GET /health → {"status": "ok", "comfyui": "ok", "queue": 0} """ import os @@ -18,6 +20,8 @@ import random import logging import binascii import subprocess +import threading +import queue import urllib.request import urllib.parse import websocket as ws_client @@ -69,6 +73,14 @@ used_nonces = set() # WebSocket client ID ws_client_id = str(uuid.uuid4()) +# ============================================================================ +# Job Queue (асинхронная очередь как в RunPod) +# ============================================================================ + +job_queue = queue.Queue() +jobs = {} # job_id -> {status, input, output, error, created_at, started_at, completed_at} +jobs_lock = threading.Lock() + # ============================================================================ # Утилиты @@ -185,6 +197,162 @@ def generate_video(prompt): return None +# ============================================================================ +# Background Worker (обработка задач из очереди) +# ============================================================================ + +def build_prompt(job_input, image_path, last_image_path, use_flf2v): + """Загружает workflow и патчит параметрами задачи.""" + with open(WORKFLOW_FILE, "r") as f: + prompt = json.load(f) + + width = to_nearest_multiple_of_16(job_input.get("width", 528)) + height = to_nearest_multiple_of_16(job_input.get("height", 768)) + length = job_input.get("length", 81) + steps = job_input.get("steps", 4) + cfg = job_input.get("cfg", 1.0) + seed = job_input.get("seed", -1) + fps = job_input.get("fps", 16) + sampler_name = job_input.get("sampler_name", "euler") + scheduler = job_input.get("scheduler", "linear_quadratic") + + if seed == -1: + seed = random.randint(0, 2**63 - 1) + + # Node 5: Positive prompt + prompt["5"]["inputs"]["text"] = 
job_input.get("prompt", "") + + # Node 6: Negative prompt (use default or custom) + negative_prompt = job_input.get("negative_prompt", prompt["6"]["inputs"]["text"]) + prompt["6"]["inputs"]["text"] = negative_prompt + + # Node 7: Load first frame image + prompt["7"]["inputs"]["image"] = image_path + + # Node 15: Load last frame image (for FLF2V mode) + if use_flf2v and last_image_path: + prompt["15"]["inputs"]["image"] = last_image_path + logger.info(f"🎬 FLF2V: last frame = {last_image_path}") + else: + # I2V mode: switch to WanImageToVideo, remove end_image + prompt["8"]["class_type"] = "WanImageToVideo" + if "end_image" in prompt["8"]["inputs"]: + del prompt["8"]["inputs"]["end_image"] + if "15" in prompt: + del prompt["15"] + logger.info("🎬 I2V: single image mode") + + # Node 8: WanFirstLastFrameToVideo / WanImageToVideo + prompt["8"]["inputs"]["width"] = width + prompt["8"]["inputs"]["height"] = height + prompt["8"]["inputs"]["length"] = length + + # Node 11: KSampler High + prompt["11"]["inputs"]["noise_seed"] = seed + prompt["11"]["inputs"]["steps"] = steps + prompt["11"]["inputs"]["cfg"] = cfg + prompt["11"]["inputs"]["sampler_name"] = sampler_name + prompt["11"]["inputs"]["scheduler"] = scheduler + prompt["11"]["inputs"]["end_at_step"] = steps // 2 + + # Node 12: KSampler Low + prompt["12"]["inputs"]["noise_seed"] = seed + prompt["12"]["inputs"]["steps"] = steps + prompt["12"]["inputs"]["cfg"] = cfg + prompt["12"]["inputs"]["sampler_name"] = sampler_name + prompt["12"]["inputs"]["scheduler"] = scheduler + prompt["12"]["inputs"]["start_at_step"] = steps // 2 + + # Node 14: Video output + prompt["14"]["inputs"]["frame_rate"] = fps + + return prompt, seed, width, height + + +def cleanup_comfy_output(): + """Очистка output директории ComfyUI.""" + try: + if os.path.exists(COMFY_OUTPUT_DIR): + for fname in os.listdir(COMFY_OUTPUT_DIR): + fpath = os.path.join(COMFY_OUTPUT_DIR, fname) + if os.path.isfile(fpath): + os.unlink(fpath) + elif os.path.isdir(fpath): + 
shutil.rmtree(fpath) + except Exception: + pass + + +def worker_loop(): + """Фоновый воркер — берёт задачи из очереди и выполняет по одной.""" + logger.info("⚙️ Worker thread started") + while True: + job_id = job_queue.get() # блокируется пока нет задач + with jobs_lock: + job = jobs.get(job_id) + if not job: + continue + + logger.info("=" * 60) + logger.info(f"🎬 Job {job_id}: Начинаем генерацию") + logger.info("=" * 60) + + with jobs_lock: + job["status"] = "IN_PROGRESS" + job["started_at"] = time.time() + + job_input = job["input"] + temp_dir = job["temp_dir"] + + try: + # Обработка изображений + image_path, has_image = process_image_input(job_input, "image", temp_dir) + if not has_image: + raise ValueError("No input image provided") + + last_image_path, use_flf2v = process_image_input(job_input, "last_image", temp_dir) + mode = "FLF2V" if use_flf2v else "I2V" + logger.info(f"🎬 Job {job_id}: Режим {mode}") + + # Сборка промпта + prompt, seed, width, height = build_prompt(job_input, image_path, last_image_path, use_flf2v) + logger.info(f"📐 Job {job_id}: {width}x{height}, seed {seed}") + + # Генерация + video_b64 = generate_video(prompt) + + if not video_b64: + raise RuntimeError("Video generation failed — no output from ComfyUI") + + elapsed = time.time() - job["started_at"] + logger.info(f"✅ Job {job_id}: Видео готово за {elapsed:.1f}s") + + with jobs_lock: + job["status"] = "COMPLETED" + job["completed_at"] = time.time() + job["output"] = { + "video": video_b64, + "seed": seed, + "mode": mode, + "elapsed": round(elapsed, 1) + } + + except Exception as e: + logger.error(f"❌ Job {job_id}: {e}", exc_info=True) + with jobs_lock: + job["status"] = "FAILED" + job["completed_at"] = time.time() + job["error"] = str(e) + + finally: + # Очистка + if os.path.exists(temp_dir): + shutil.rmtree(temp_dir, ignore_errors=True) + cleanup_comfy_output() + + job_queue.task_done() + + # ============================================================================ # API Endpoints # 
============================================================================ @@ -222,146 +390,85 @@ def health(): return jsonify({ "status": "ok", "comfyui": comfy_status, + "queue": job_queue.qsize(), "timestamp": int(time.time()) }) -@app.route("/generate", methods=["POST"]) -def generate(): - """Основной endpoint для генерации видео.""" - start_time = time.time() - +@app.route("/run", methods=["POST"]) +def run_job(): + """Отправляет задачу в очередь. Возвращает job_id сразу.""" job_input = request.json or {} - logger.info("=" * 60) - logger.info("🎬 Новый запрос на генерацию") - logger.info("=" * 60) + # Валидация: должно быть хотя бы одно изображение + has_image = any(k in job_input and job_input[k] + for k in ("image_base64", "image_url", "image_path")) + if not has_image: + return jsonify({"error": "No input image. Use image_base64, image_url, or image_path"}), 400 + + job_id = str(uuid.uuid4()) + temp_dir = os.path.join("/tmp", f"job_{job_id[:8]}") + + # Логирование (без base64) + log_input = {k: (f"[{len(v)}chars]" if k.endswith("_base64") else v) + for k, v in job_input.items()} + logger.info(f"📥 Job {job_id}: поставлен в очередь") + logger.info(f" Параметры: {json.dumps(log_input, ensure_ascii=False)}") + + with jobs_lock: + jobs[job_id] = { + "status": "IN_QUEUE", + "input": job_input, + "temp_dir": temp_dir, + "output": None, + "error": None, + "created_at": time.time(), + "started_at": None, + "completed_at": None, + } + + job_queue.put(job_id) - # Логирование (без base64 данных) - log_input = {k: v for k, v in job_input.items() - if not k.endswith("_base64")} - logger.info(f"Параметры: {json.dumps(log_input, ensure_ascii=False)}") + return jsonify({ + "id": job_id, + "status": "IN_QUEUE" + }) - task_id = f"task_{uuid.uuid4().hex[:8]}" - temp_dir = os.path.join("/tmp", task_id) - try: - # === Обработка изображений === - image_path, has_image = process_image_input(job_input, "image", temp_dir) - if not has_image: - return jsonify({"error": "No input 
image provided. Use image_base64, image_url, or image_path"}), 400 - - last_image_path, use_flf2v = process_image_input(job_input, "last_image", temp_dir) - - mode = "FLF2V" if use_flf2v else "I2V" - logger.info(f"🎬 Режим: {mode}") - - # === Загрузка workflow === - if not os.path.exists(WORKFLOW_FILE): - return jsonify({"error": f"Workflow file not found: {WORKFLOW_FILE}"}), 500 - - with open(WORKFLOW_FILE, "r") as f: - prompt = json.load(f) - - # === Параметры генерации === - width = to_nearest_multiple_of_16(job_input.get("width", 528)) - height = to_nearest_multiple_of_16(job_input.get("height", 768)) - length = job_input.get("length", 81) - steps = job_input.get("steps", 4) - cfg = job_input.get("cfg", 1.0) - seed = job_input.get("seed", -1) - fps = job_input.get("fps", 16) - sampler_name = job_input.get("sampler_name", "euler") - scheduler = job_input.get("scheduler", "linear_quadratic") - - if seed == -1: - seed = random.randint(0, 2**63 - 1) - - logger.info(f"📐 {width}x{height}, {length} frames, {steps} steps, CFG {cfg}, seed {seed}") - - # === Заполнение workflow === - - # Positive prompt - prompt["5"]["inputs"]["text"] = job_input.get("prompt", "") - - # Negative prompt - negative_prompt = job_input.get("negative_prompt", prompt["6"]["inputs"]["text"]) - prompt["6"]["inputs"]["text"] = negative_prompt - - # First frame image - prompt["7"]["inputs"]["image"] = image_path - - # FLF2V / I2V mode - if use_flf2v and last_image_path: - prompt["15"]["inputs"]["image"] = last_image_path - logger.info(f"🎬 FLF2V: last frame = {last_image_path}") - else: - prompt["8"]["class_type"] = "WanImageToVideo" - if "end_image" in prompt["8"]["inputs"]: - del prompt["8"]["inputs"]["end_image"] - if "15" in prompt: - del prompt["15"] - logger.info("🎬 I2V: single image mode") - - # Video dimensions - prompt["8"]["inputs"]["width"] = width - prompt["8"]["inputs"]["height"] = height - prompt["8"]["inputs"]["length"] = length - - # KSampler High - 
prompt["11"]["inputs"]["noise_seed"] = seed
- prompt["11"]["inputs"]["steps"] = steps
- prompt["11"]["inputs"]["cfg"] = cfg
- prompt["11"]["inputs"]["sampler_name"] = sampler_name
- prompt["11"]["inputs"]["scheduler"] = scheduler
- prompt["11"]["inputs"]["end_at_step"] = steps // 2
-
- # KSampler Low
- prompt["12"]["inputs"]["noise_seed"] = seed
- prompt["12"]["inputs"]["steps"] = steps
- prompt["12"]["inputs"]["cfg"] = cfg
- prompt["12"]["inputs"]["sampler_name"] = sampler_name
- prompt["12"]["inputs"]["scheduler"] = scheduler
- prompt["12"]["inputs"]["start_at_step"] = steps // 2
-
- # Video output
- prompt["14"]["inputs"]["frame_rate"] = fps
-
- # === Генерация ===
- video_b64 = generate_video(prompt)
-
- if not video_b64:
- return jsonify({"error": "Video generation failed — no output"}), 500
-
- elapsed = time.time() - start_time
- logger.info(f"✅ Видео сгенерировано за {elapsed:.1f}s")
-
- return jsonify({
- "video": video_b64,
- "seed": seed,
- "mode": mode,
- "elapsed": round(elapsed, 1)
- })
-
- except Exception as e:
- logger.error(f"❌ Ошибка: {e}", exc_info=True)
- return jsonify({"error": str(e)}), 500
-
- finally:
- # Очистка temp файлов
- if os.path.exists(temp_dir):
- shutil.rmtree(temp_dir, ignore_errors=True)
- # Очистка output ComfyUI
- try:
- if os.path.exists(COMFY_OUTPUT_DIR):
- for fname in os.listdir(COMFY_OUTPUT_DIR):
- fpath = os.path.join(COMFY_OUTPUT_DIR, fname)
- if os.path.isfile(fpath):
- os.unlink(fpath)
- elif os.path.isdir(fpath):
- shutil.rmtree(fpath)
- except Exception:
- pass
+@app.route("/status/<job_id>", methods=["GET"])
+def job_status(job_id):
+ """Получить статус задачи. 
Когда COMPLETED — возвращает результат."""
+ with jobs_lock:
+ job = jobs.get(job_id)
+
+ if not job:
+ return jsonify({"error": "Job not found"}), 404
+
+ response = {
+ "id": job_id,
+ "status": job["status"],
+ }
+
+ if job["status"] == "COMPLETED":
+ response["output"] = job["output"]
+ elif job["status"] == "FAILED":
+ response["error"] = job["error"]
+
+ return jsonify(response)
+
+
+@app.route("/purge/<job_id>", methods=["POST"])
+def purge_job(job_id):
+ """Удалить завершённую задачу из памяти (освободить RAM от base64 видео)."""
+ with jobs_lock:
+ job = jobs.get(job_id)
+ if not job:
+ return jsonify({"error": "Job not found"}), 404
+ if job["status"] in ("IN_QUEUE", "IN_PROGRESS"):
+ return jsonify({"error": "Cannot purge active job"}), 400
+ del jobs[job_id]
+
+ return jsonify({"id": job_id, "purged": True}) # ============================================================================ @@ -370,10 +477,15 @@ def generate(): if __name__ == "__main__": logger.info("=" * 60) - logger.info("🚀 DaSiWa API Server") + logger.info("🚀 DaSiWa API Server (async worker mode)") logger.info(f" ComfyUI: http://{COMFY_HOST}:{COMFY_PORT}") logger.info(f" API Port: {API_PORT}") logger.info(f" Workflow: {WORKFLOW_FILE}") + logger.info(" Endpoints:") + logger.info(" POST /run → поставить задачу") + logger.info(" GET /status/<job_id> → статус / результат") + logger.info(" POST /purge/<job_id> → удалить задачу из памяти") + logger.info(" GET /health → здоровье") logger.info("=" * 60) # Проверяем подключение к ComfyUI @@ -381,6 +493,11 @@ if __name__ == "__main__": urllib.request.urlopen(f"http://{COMFY_HOST}:{COMFY_PORT}/", timeout=5) logger.info("✅ ComfyUI доступен") except Exception: - logger.warning("⚠️ ComfyUI недоступен — запросы будут ждать") + logger.warning("⚠️ ComfyUI недоступен — запросы будут ждать" + " пока ComfyUI запустится") + + # Запуск фонового воркера + worker_thread = threading.Thread(target=worker_loop, daemon=True) + worker_thread.start() app.run(host="0.0.0.0", port=API_PORT, 
debug=False) diff --git a/workflow_api.json b/workflow_api.json index 207f19b..ed13c7d 100644 --- a/workflow_api.json +++ b/workflow_api.json @@ -1,149 +1,32 @@ { - "9": { - "inputs": { - "options": "Intermediate and Utility", - "filenames": [ - "28", - 0 - ] - }, - "class_type": "VHS_PruneOutputs", - "_meta": { - "title": "Prune Outputs 🎥🅥🅗🅢" - } - }, - "18": { - "inputs": { - "PowerLoraLoaderHeaderWidget": { - "type": "PowerLoraLoaderHeaderWidget" - }, - "➕ Add Lora": "", - "model": [ - "1512:1534", - 0 - ], - "clip": [ - "1512:1504", - 0 - ] - }, - "class_type": "Power Lora Loader (rgthree)", - "_meta": { - "title": "Lora's LOW" - } - }, - "23": { - "inputs": { - "image": "example.png" - }, - "class_type": "LoadImage", - "_meta": { - "title": "First-Frame-Image" - } - }, - "24": { - "inputs": { - "image": "example.png" - }, - "class_type": "LoadImage", - "_meta": { - "title": "Last-Frame-Image" - } - }, - "26": { - "inputs": { - "PowerLoraLoaderHeaderWidget": { - "type": "PowerLoraLoaderHeaderWidget" - }, - "➕ Add Lora": "", - "model": [ - "1512:1539", - 0 - ], - "clip": [ - "1512:1504", - 0 - ] - }, - "class_type": "Power Lora Loader (rgthree)", - "_meta": { - "title": "Lora's HIGH" - } - }, - "28": { - "inputs": { - "frame_rate": [ - "1512:1814:1810", - 0 - ], - "loop_count": 0, - "filename_prefix": "video/2026-03-07/191808", - "format": "video/webm", - "pix_fmt": "yuv420p", - "crf": 20, - "save_metadata": true, - "trim_to_audio": false, - "pingpong": false, - "save_output": true, - "images": [ - "1512:1731:1099", - 0 - ], - "audio": [ - "1512:1757", - 0 - ] - }, - "class_type": "VHS_VideoCombine", - "_meta": { - "title": "Video Combine 🎥🅥🅗🅢" - } - }, - "1349": { + "1": { "inputs": { - "text": "censored, mosaic censoring, bar censor, pixelated, glowing, bloom, blurry, out of focus, low detail, bad anatomy, ugly, overexposed, underexposed, distorted face, extra limbs, cartoonish, 3d render artifacts, duplicate people, unnatural lighting, bad composition, 
missing shadows, low resolution, poorly textured, glitch, noise, grain, static, motionless, still frame, stylized, artwork, painting, illustration, many people in background, three legs, walking backward, unnatural skin tone, discolored eyelid, red eyelids, closed eyes, poorly drawn hands, extra fingers, fused fingers, poorly drawn face, deformed, disfigured, malformed limbs, fog, mist, voluminous eyelashes,", - "clip": [ - "26", - 1 - ] + "ckpt_name": "DasiwaWAN22I2V14BLightspeed_synthseductionHighV9.safetensors" }, - "class_type": "CLIPTextEncode", - "_meta": { - "title": "Video prompt negative" - } - }, - "1370": { - "inputs": { - "text": "", - "clip": [ - "26", - 1 - ] - }, - "class_type": "CLIPTextEncode", + "class_type": "CheckpointLoaderSimple", "_meta": { - "title": "Video prompt positive" + "title": "Load Checkpoint High" } }, - "1371": { + "2": { "inputs": { - "value": "She is doing cheerfully and cute cat meowing, while she purrs cat-like in between." + "ckpt_name": "DasiwaWAN22I2V14BLightspeed_synthseductionLowV9.safetensors" }, - "class_type": "PrimitiveStringMultiline", + "class_type": "CheckpointLoaderSimple", "_meta": { - "title": "Audio prompt positive" + "title": "Load Checkpoint Low" } }, - "1372": { + "3": { "inputs": { - "value": "noisy, harsh, crash, bang, music, background melody, synthesizer, male vocals, talking, speech, low quality, distorted, ambient noise, room tone, pounding, wet, pounding," + "vae_name": "wan_2.1_vae.safetensors" }, - "class_type": "PrimitiveStringMultiline", + "class_type": "VAELoader", "_meta": { - "title": "Audio prompt negative" + "title": "VAE Loader" } }, - "1512:1504": { + "4": { "inputs": { "clip_name": "umt5_xxl_fp8_e4m3fn_scaled.safetensors", "type": "wan", @@ -151,898 +34,152 @@ }, "class_type": "CLIPLoader", "_meta": { - "title": "Загрузить CLIP" - } - }, - "1512:1584": { - "inputs": {}, - "class_type": "Context (rgthree)", - "_meta": { - "title": "FLF2V context" - } - }, - "1512:1592": { - "inputs": { - 
"samples": [ - "1512:1577", - 0 - ], - "vae": [ - "1512:1503", - 0 - ] - }, - "class_type": "VAEDecode", - "_meta": { - "title": "Декодировать VAE" - } - }, - "1512:1591": { - "inputs": { - "width": [ - "1512:1588:317", - 0 - ], - "height": [ - "1512:1588:317", - 1 - ], - "length": [ - "1512:1594", - 0 - ], - "batch_size": 1, - "positive": [ - "1370", - 0 - ], - "negative": [ - "1349", - 0 - ], - "vae": [ - "1512:1503", - 0 - ], - "start_image": [ - "1512:1588:329", - 0 - ] - }, - "class_type": "WanImageToVideo", - "_meta": { - "title": "WanИзображениеВВидео" - } - }, - "1512:1577": { - "inputs": { - "add_noise": "disable", - "noise_seed": [ - "1512:1670", - 0 - ], - "steps": [ - "1512:1671", - 0 - ], - "cfg": [ - "1512:1671", - 2 - ], - "sampler_name": [ - "1512:1671", - 3 - ], - "scheduler": [ - "1512:1671", - 4 - ], - "start_at_step": [ - "1512:1671", - 1 - ], - "end_at_step": 10000, - "return_with_leftover_noise": "disable", - "model": [ - "1512:1572", - 0 - ], - "positive": [ - "1512:1591", - 0 - ], - "negative": [ - "1512:1591", - 1 - ], - "latent_image": [ - "1512:1578", - 0 - ] - }, - "class_type": "KSamplerAdvanced", - "_meta": { - "title": "KSampler (Low)" - } - }, - "1512:1516": { - "inputs": { - "ctx_01": [ - "1512:1513", - 0 - ], - "ctx_02": [ - "1512:1514", - 0 - ] - }, - "class_type": "Context Switch (rgthree)", - "_meta": { - "title": "Context Switch (rgthree)" - } - }, - "1512:1668": { - "inputs": { - "value": 5 - }, - "class_type": "PrimitiveInt", - "_meta": { - "title": "Seconds" - } - }, - "1512:1712": { - "inputs": {}, - "class_type": "Context (rgthree)", - "_meta": { - "title": "s2v context" - } - }, - "1512:1702": { - "inputs": { - "value": false - }, - "class_type": "PrimitiveBoolean", - "_meta": { - "title": "Булево значение" - } - }, - "1512:1530": { - "inputs": { - "sage_attention": "auto", - "allow_compile": false, - "model": [ - "1512:1516", - 1 - ] - }, - "class_type": "PathchSageAttentionKJ", - "_meta": { - "title": "Patch Sage 
Attention KJ" - } - }, - "1512:1523": { - "inputs": { - "value": true - }, - "class_type": "PrimitiveBoolean", - "_meta": { - "title": "Булево значение" - } - }, - "1512:1502": { - "inputs": { - "unet_name": "DasiwaWAN22I2V14B_synthseductionHighV9.safetensors", - "weight_dtype": "default" - }, - "class_type": "UNETLoader", - "_meta": { - "title": "Загрузить модель диффузии" - } - }, - "1512:1517": { - "inputs": { - "model": [ - "1512:1502", - 0 - ] - }, - "class_type": "Context (rgthree)", - "_meta": { - "title": "Context (safetensor high)" - } - }, - "1512:1664": { - "inputs": { - "any_01": [ - "1512:1528", - 0 - ], - "any_02": [ - "1512:1540", - 0 - ] - }, - "class_type": "Any Switch (rgthree)", - "_meta": { - "title": "Any Switch (rgthree)" - } - }, - "1512:1528": { - "inputs": { - "value": true - }, - "class_type": "PrimitiveBoolean", - "_meta": { - "title": "Булево значение" - } - }, - "1512:1525": { - "inputs": { - "any_01": [ - "1512:1523", - 0 - ], - "any_02": [ - "1512:1540", - 0 - ] - }, - "class_type": "Any Switch (rgthree)", - "_meta": { - "title": "Any Switch (rgthree)" - } - }, - "1512:1538": { - "inputs": { - "sage_attention": "auto", - "allow_compile": false, - "model": [ - "1512:1515", - 1 - ] - }, - "class_type": "PathchSageAttentionKJ", - "_meta": { - "title": "Patch Sage Attention KJ" - } - }, - "1512:1537": { - "inputs": { - "boolean": [ - "1512:1525", - 0 - ], - "on_true": [ - "1512:1538", - 0 - ], - "on_false": [ - "1512:1515", - 1 - ] - }, - "class_type": "easy ifElse", - "_meta": { - "title": "If else HIGH" + "title": "CLIP Loader" } }, - "1512:1531": { + "5": { "inputs": { - "boolean": [ - "1512:1525", - 0 - ], - "on_true": [ - "1512:1530", - 0 - ], - "on_false": [ - "1512:1516", - 1 - ] + "text": "", + "clip": ["4", 0] }, - "class_type": "easy ifElse", + "class_type": "CLIPTextEncode", "_meta": { - "title": "If else LOW" + "title": "Positive Prompt" } }, - "1512:1522": { + "6": { "inputs": { - "enable_fp16_accumulation": true, - "model": 
[ - "1512:1531", - 0 - ] + "text": "censored, mosaic censoring, bar censor, pixelated, glowing, bloom, blurry, out of focus, low detail, bad anatomy, ugly, overexposed, underexposed, distorted face, extra limbs, cartoonish, 3d render artifacts, duplicate people, unnatural lighting, bad composition, missing shadows, low resolution, poorly textured, glitch, noise, grain, static, motionless, still frame, stylized, artwork, painting, illustration, many people in background, three legs, walking backward", + "clip": ["4", 0] }, - "class_type": "ModelPatchTorchSettings", + "class_type": "CLIPTextEncode", "_meta": { - "title": "Model Patch Torch Settings" + "title": "Negative Prompt" } }, - "1512:1536": { + "7": { "inputs": { - "enable_fp16_accumulation": true, - "model": [ - "1512:1537", - 0 - ] + "image": "input.png" }, - "class_type": "ModelPatchTorchSettings", + "class_type": "LoadImage", "_meta": { - "title": "Model Patch Torch Settings" + "title": "First Frame Image" } }, - "1512:1535": { + "15": { "inputs": { - "boolean": [ - "1512:1664", - 0 - ], - "on_true": [ - "1512:1536", - 0 - ], - "on_false": [ - "1512:1537", - 0 - ] + "image": "input_last.png" }, - "class_type": "easy ifElse", + "class_type": "LoadImage", "_meta": { - "title": "If else HIGH" + "title": "Last Frame Image" } }, - "1512:1529": { + "8": { "inputs": { - "boolean": [ - "1512:1664", - 0 - ], - "on_true": [ - "1512:1522", - 0 - ], - "on_false": [ - "1512:1531", - 0 - ] + "width": 528, + "height": 768, + "length": 81, + "batch_size": 1, + "positive": ["5", 0], + "negative": ["6", 0], + "vae": ["3", 0], + "start_image": ["7", 0], + "end_image": ["15", 0] }, - "class_type": "easy ifElse", + "class_type": "WanFirstLastFrameToVideo", "_meta": { - "title": "If else LOW" + "title": "Wan First Last Frame To Video" } }, - "1512:1539": { + "9": { "inputs": { "shift": 5, - "model": [ - "1512:1535", - 0 - ] + "model": ["1", 0] }, "class_type": "ModelSamplingSD3", "_meta": { "title": "Sigma Shift High" } }, - 
"1512:1534": { + "10": { "inputs": { "shift": 5, - "model": [ - "1512:1529", - 0 - ] + "model": ["2", 0] }, "class_type": "ModelSamplingSD3", "_meta": { "title": "Sigma Shift Low" } }, - "1512:1588:329": { - "inputs": { - "upscale_method": "lanczos", - "width": [ - "1512:1588:316", - 0 - ], - "height": [ - "1512:1588:318", - 0 - ], - "crop": "disabled", - "image": [ - "23", - 0 - ] - }, - "class_type": "ImageScale", - "_meta": { - "title": "Масштабировать изображение" - } - }, - "1512:1588:326": { - "inputs": { - "value": 0.25 - }, - "class_type": "FloatConstant", - "_meta": { - "title": "Float Constant" - } - }, - "1512:1588:317": { - "inputs": { - "image": [ - "1512:1588:329", - 0 - ] - }, - "class_type": "GetImageSize", - "_meta": { - "title": "Get Image Size" - } - }, - "1512:1588:316": { - "inputs": { - "expression": "round(sqrt(a * 1000000 * (b / c)) / 16) * 16", - "a": [ - "1512:1588:327", - 0 - ], - "b": [ - "1512:1588:328", - 0 - ], - "c": [ - "1512:1588:328", - 1 - ] - }, - "class_type": "MathExpression|pysssss", - "_meta": { - "title": "Width 🐍" - } - }, - "1512:1588:327": { - "inputs": { - "any_01": [ - "1512:1588:326", - 0 - ] - }, - "class_type": "Any Switch (rgthree)", - "_meta": { - "title": "Any Switch (rgthree)" - } - }, - "1512:1588:318": { - "inputs": { - "expression": "round((a / (b / c)) / 16) * 16", - "a": [ - "1512:1588:316", - 0 - ], - "b": [ - "1512:1588:328", - 0 - ], - "c": [ - "1512:1588:328", - 1 - ] - }, - "class_type": "MathExpression|pysssss", - "_meta": { - "title": "Height 🐍" - } - }, - "1512:1588:328": { - "inputs": { - "image": [ - "23", - 0 - ] - }, - "class_type": "GetImageSize", - "_meta": { - "title": "Get Image Size" - } - }, - "1512:1503": { - "inputs": { - "vae_name": "WAN/wan_2.1_vae.safetensors" - }, - "class_type": "VAELoader", - "_meta": { - "title": "Загрузить VAE" - } - }, - "1512:1671": { - "inputs": { - "steps_total": 4, - "refiner_step": 2, - "cfg": 1, - "sampler_name": "euler", - "scheduler": "linear_quadratic" 
- }, - "class_type": "KSampler Config (rgthree)", - "_meta": { - "title": "Sampling" - } - }, - "1512:1594": { - "inputs": { - "expression": "(round((a * b) / 8) * 8) +1", - "a": [ - "1512:1668", - 0 - ], - "b": [ - "1512:1669", - 0 - ] - }, - "class_type": "MathExpression|pysssss", - "_meta": { - "title": "frames logic 🐍" - } - }, - "1512:1670": { - "inputs": { - "value": -486226021089793 - }, - "class_type": "PrimitiveInt", - "_meta": { - "title": "Seed" - } - }, - "1512:1501": { - "inputs": { - "unet_name": "DasiwaWAN22I2V14B_synthseductionLowV9.safetensors", - "weight_dtype": "default" - }, - "class_type": "UNETLoader", - "_meta": { - "title": "Загрузить модель диффузии" - } - }, - "1512:1578": { + "11": { "inputs": { "add_noise": "enable", - "noise_seed": [ - "1512:1670", - 0 - ], - "steps": [ - "1512:1671", - 0 - ], - "cfg": [ - "1512:1671", - 2 - ], - "sampler_name": [ - "1512:1671", - 3 - ], - "scheduler": [ - "1512:1671", - 4 - ], + "noise_seed": 12345, + "steps": 4, + "cfg": 1.0, + "sampler_name": "euler", + "scheduler": "linear_quadratic", "start_at_step": 0, - "end_at_step": [ - "1512:1671", - 1 - ], + "end_at_step": 2, "return_with_leftover_noise": "enable", - "model": [ - "1512:1571", - 0 - ], - "positive": [ - "1512:1591", - 0 - ], - "negative": [ - "1512:1591", - 1 - ], - "latent_image": [ - "1512:1591", - 2 - ] + "model": ["9", 0], + "positive": ["8", 0], + "negative": ["8", 1], + "latent_image": ["8", 2] }, "class_type": "KSamplerAdvanced", "_meta": { - "title": "KSampler (High)" - } - }, - "1512:1571": { - "inputs": { - "model": [ - "26", - 0 - ] - }, - "class_type": "CFGZeroStar", - "_meta": { - "title": "CFGZeroStar" - } - }, - "1512:1572": { - "inputs": { - "model": [ - "18", - 0 - ] - }, - "class_type": "CFGZeroStar", - "_meta": { - "title": "CFGZeroStar" - } - }, - "1512:1512": { - "inputs": {}, - "class_type": "Context (rgthree)", - "_meta": { - "title": "Context (gguf high)" - } - }, - "1512:1515": { - "inputs": { - "ctx_01": [ - 
"1512:1517", - 0 - ], - "ctx_02": [ - "1512:1512", - 0 - ] - }, - "class_type": "Context Switch (rgthree)", - "_meta": { - "title": "Context Switch (rgthree)" - } - }, - "1512:1513": { - "inputs": { - "model": [ - "1512:1501", - 0 - ] - }, - "class_type": "Context (rgthree)", - "_meta": { - "title": "Context (safetensor low)" - } - }, - "1512:1514": { - "inputs": {}, - "class_type": "Context (rgthree)", - "_meta": { - "title": "Context (gguf low)" - } - }, - "1512:1585": { - "inputs": { - "images": [ - "1512:1592", - 0 - ] - }, - "class_type": "Context (rgthree)", - "_meta": { - "title": "I2V context" - } - }, - "1512:1589": { - "inputs": { - "ctx_01": [ - "1512:1585", - 0 - ], - "ctx_02": [ - "1512:1584", - 0 - ], - "ctx_03": [ - "1512:1637", - 0 - ], - "ctx_04": [ - "1512:1712", - 0 - ] - }, - "class_type": "Context Switch (rgthree)", - "_meta": { - "title": "Context Switch (rgthree)" - } - }, - "1512:1730:1280": { - "inputs": { - "boolean": [ - "1512:1730:1282", - 0 - ], - "on_true": [ - "1512:1784", - 7 - ], - "on_false": [ - "1512:1589", - 7 - ] - }, - "class_type": "easy ifElse", - "_meta": { - "title": "If else" - } - }, - "1512:1730:1282": { - "inputs": { - "any_02": [ - "1512:1730:1281", - 0 - ] - }, - "class_type": "Any Switch (rgthree)", - "_meta": { - "title": "Any Switch (rgthree)" - } - }, - "1512:1730:1281": { - "inputs": { - "value": false - }, - "class_type": "PrimitiveBoolean", - "_meta": { - "title": "Булево значение" - } - }, - "1512:1730:1283": { - "inputs": { - "preview": "", - "source": [ - "1512:1730:1282", - 0 - ] - }, - "class_type": "PreviewAny", - "_meta": { - "title": "Предпросмотр любого" - } - }, - "1512:1783": { - "inputs": {}, - "class_type": "Context (rgthree)", - "_meta": { - "title": "reference combine context" - } - }, - "1512:1782": { - "inputs": { - "images": [ - "1512:1588:329", - 0 - ] - }, - "class_type": "Context (rgthree)", - "_meta": { - "title": "reference I2V context" - } - }, - "1512:1784": { - "inputs": { - "ctx_01": 
[ - "1512:1783", - 0 - ], - "ctx_02": [ - "1512:1782", - 0 - ] - }, - "class_type": "Context Switch (rgthree)", - "_meta": { - "title": "Context Switch (rgthree)" - } - }, - "1512:1637": { - "inputs": {}, - "class_type": "Context (rgthree)", - "_meta": { - "title": "combine context" - } - }, - "1512:1669": { - "inputs": { - "value": 16 - }, - "class_type": "PrimitiveFloat", - "_meta": { - "title": "FPS" - } - }, - "1512:1814:1810": { - "inputs": { - "any_02": [ - "1512:1669", - 0 - ] - }, - "class_type": "Any Switch (rgthree)", - "_meta": { - "title": "Any Switch (FPS)" - } - }, - "1512:1731:1236": { - "inputs": { - "preview": "", - "source": [ - "1512:1731:1098", - 0 - ] - }, - "class_type": "PreviewAny", - "_meta": { - "title": "Предпросмотр любого" + "title": "KSampler High" } }, - "1512:1731:1098": { + "12": { "inputs": { - "any_02": [ - "1512:1731:1101", - 0 - ] - }, - "class_type": "Any Switch (rgthree)", - "_meta": { - "title": "Any Switch (rgthree)" - } - }, - "1512:1731:1101": { - "inputs": { - "value": false + "add_noise": "disable", + "noise_seed": 12345, + "steps": 4, + "cfg": 1.0, + "sampler_name": "euler", + "scheduler": "linear_quadratic", + "start_at_step": 2, + "end_at_step": 10000, + "return_with_leftover_noise": "disable", + "model": ["10", 0], + "positive": ["8", 0], + "negative": ["8", 1], + "latent_image": ["11", 0] }, - "class_type": "PrimitiveBoolean", + "class_type": "KSamplerAdvanced", "_meta": { - "title": "Булево значение" + "title": "KSampler Low" } }, - "1512:1731:1099": { + "13": { "inputs": { - "boolean": [ - "1512:1731:1098", - 0 - ], - "on_true": [ - "1512:1730:1280", - 0 - ], - "on_false": [ - "1512:1730:1280", - 0 - ] + "samples": ["12", 0], + "vae": ["3", 0] }, - "class_type": "easy ifElse", + "class_type": "VAEDecode", "_meta": { - "title": "If else" + "title": "VAE Decode" } }, - "1512:1540": { + "14": { "inputs": { - "value": false + "frame_rate": 16, + "loop_count": 0, + "filename_prefix": "DaSiWa_video", + "format": 
"video/h264-mp4", + "pix_fmt": "yuv420p", + "crf": 19, + "save_metadata": true, + "pingpong": false, + "save_output": true, + "images": ["13", 0] }, - "class_type": "PrimitiveBoolean", - "_meta": { - "title": "Булево значение" - } - }, - "1512:1757": { - "inputs": {}, - "class_type": "Any Switch (rgthree)", + "class_type": "VHS_VideoCombine", "_meta": { - "title": "Any Switch (rgthree)" + "title": "Video Combine" } } } \ No newline at end of file
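
Схему Submit → Poll → Retrieve → Purge из руководства можно свести к небольшому клиенту. Набросок ниже использует условные значения `API_URL`, `CLIENT_ID` и `SECRET_KEY` (их выдаёт администратор API) и предполагает, что для GET-запросов подписывается пустое тело; функции `make_signature`, `signed_request` и `generate_video` введены здесь только для иллюстрации.

```python
import hashlib
import hmac
import json
import secrets
import time
import urllib.request

# Условные значения: реальные адрес и ключи выдаются администратором DaSiWa API
API_URL = "http://localhost:8080"
CLIENT_ID = "my-client-id"
SECRET_KEY = "my-secret-key"


def make_signature(secret_key: str, timestamp: str, nonce: str, body: bytes) -> str:
    """HMAC-SHA256 подпись по схеме из раздела «Аутентификация»: timestamp.nonce. + body."""
    message = f"{timestamp}.{nonce}.".encode() + body
    return hmac.new(secret_key.encode(), message, hashlib.sha256).hexdigest()


def signed_request(method: str, path: str, payload: dict = None) -> dict:
    """Выполняет подписанный запрос. Предположение: для GET подписывается пустое тело."""
    body = json.dumps(payload).encode("utf-8") if payload is not None else b""
    timestamp = str(int(time.time()))
    nonce = secrets.token_hex(16)  # 32 hex-символа, одноразовый
    req = urllib.request.Request(API_URL + path, data=body or None, method=method)
    req.add_header("Content-Type", "application/json")
    req.add_header("X-Client-Id", CLIENT_ID)
    req.add_header("X-Timestamp", timestamp)
    req.add_header("X-Nonce", nonce)
    req.add_header("X-Signature", make_signature(SECRET_KEY, timestamp, nonce, body))
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


def generate_video(image_b64: str, prompt: str, poll_interval: float = 5.0) -> dict:
    """Submit -> Poll -> Retrieve -> Purge."""
    job_id = signed_request("POST", "/run",
                            {"image_base64": image_b64, "prompt": prompt})["id"]
    while True:
        status = signed_request("GET", f"/status/{job_id}")
        if status["status"] == "COMPLETED":
            break
        if status["status"] == "FAILED":
            raise RuntimeError(status.get("error"))
        time.sleep(poll_interval)  # генерация занимает десятки секунд
    output = status["output"]
    signed_request("POST", f"/purge/{job_id}")  # освобождаем RAM сервера от base64-видео
    return output
```

В продакшен-клиент стоит добавить общий таймаут опроса и повтор запроса при сетевых ошибках.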
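
Заполнение упрощённого `workflow_api.json` повторяет логику прежнего `/generate`: те же подстановки, но в узлы с короткими id (5, 6, 7, 8, 11, 12, 14, 15). Набросок ниже — предположительная реконструкция этой логики; функция `build_prompt` условная и показана только для иллюстрации маппинга параметров API на узлы.

```python
import copy
import random


def to_nearest_multiple_of_16(value: int) -> int:
    # размеры кадра должны быть кратны 16
    return max(16, round(value / 16) * 16)


def build_prompt(workflow: dict, job_input: dict) -> dict:
    """Заполняет узлы workflow параметрами задачи (режим I2V или FLF2V)."""
    prompt = copy.deepcopy(workflow)

    width = to_nearest_multiple_of_16(job_input.get("width", 528))
    height = to_nearest_multiple_of_16(job_input.get("height", 768))
    steps = job_input.get("steps", 4)
    seed = job_input.get("seed", -1)
    if seed == -1:
        seed = random.randint(0, 2**63 - 1)

    # Узлы 5/6 — промпты, 8 — видео-узел
    prompt["5"]["inputs"]["text"] = job_input.get("prompt", "")
    prompt["6"]["inputs"]["text"] = job_input.get(
        "negative_prompt", prompt["6"]["inputs"]["text"])
    prompt["8"]["inputs"]["width"] = width
    prompt["8"]["inputs"]["height"] = height
    prompt["8"]["inputs"]["length"] = job_input.get("length", 81)

    if not job_input.get("last_image_base64"):
        # I2V: без последнего кадра переключаем узел 8 и убираем узел 15
        prompt["8"]["class_type"] = "WanImageToVideo"
        prompt["8"]["inputs"].pop("end_image", None)
        prompt.pop("15", None)

    # Узлы 11/12 — двухстадийный сэмплинг high/low с общим сидом
    for node in ("11", "12"):
        prompt[node]["inputs"]["noise_seed"] = seed
        prompt[node]["inputs"]["steps"] = steps
        prompt[node]["inputs"]["cfg"] = job_input.get("cfg", 1.0)
        prompt[node]["inputs"]["sampler_name"] = job_input.get("sampler_name", "euler")
        prompt[node]["inputs"]["scheduler"] = job_input.get("scheduler", "linear_quadratic")
    prompt["11"]["inputs"]["end_at_step"] = steps // 2
    prompt["12"]["inputs"]["start_at_step"] = steps // 2

    prompt["14"]["inputs"]["frame_rate"] = job_input.get("fps", 16)
    return prompt
```

Здесь важен `copy.deepcopy`: воркер обрабатывает очередь в цикле, и загруженный один раз workflow не должен накапливать подстановки прошлых задач.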