Compare commits

...

154 Commits
main ... dev4

Author SHA1 Message Date
4NK Dev
b6f09c952b ci: docker_tag=ext - Update Dockerfile to use dev4 branch for sdk_common
All checks were successful
build-and-push-ext / build_push (push) Successful in 8s
2025-09-21 08:23:07 +00:00
4NK Dev
c3ac16e236 fix: Gestion des fichiers JSON vides dans StateFile::load()
All checks were successful
build-and-push-ext / build_push (push) Successful in 1m25s
- Ajout de la gestion des fichiers vides dans StateFile::load()
- Résolution de l'erreur 'invalid type: sequence, expected a map'
- sdk_relay peut maintenant démarrer avec des données vides
2025-09-21 07:41:13 +00:00
4NK Dev
0c218d7a42 fix: correction du workflow CI pour déclencher sur le tag 'ext'
All checks were successful
build-and-push-ext / build_push (push) Successful in 8s
2025-09-21 07:23:02 +00:00
4NK Dev
86e7a4eee6 docs: Documentation WebSocket et corrections
All checks were successful
build-and-push-ext / build_push (push) Successful in 7s
- Ajout de docs/WEBSOCKET_CONFIGURATION.md
- Gestion d'erreurs WebSocket améliorée
- Tests WebSocket documentés
- Problème persistant Nginx documenté
2025-09-20 22:14:56 +00:00
4NK Dev
f318140d59 Add WebSocket configuration documentation
All checks were successful
build-and-push-ext / build_push (push) Successful in 8s
- Document environment variables (WS_BIND_URL, HEALTH_PORT, HEALTH_BIND_ADDRESS)
- Document WebSocket headers requirements
- Document expected responses and error handling
2025-09-20 21:50:25 +00:00
4NK Dev
73191c4a6b Externalize IP/ports configuration and fix binding issues
All checks were successful
build-and-push-ext / build_push (push) Successful in 1m23s
- Add WS_BIND_URL, HEALTH_PORT, HEALTH_BIND_ADDRESS environment variables
- Fix binding to 0.0.0.0 instead of 127.0.0.1
- Update tests to use 0.0.0.0 for Docker compatibility
- Update documentation and changelog
2025-09-20 21:24:12 +00:00
4NK Dev
9fc4ae99c9 feat: externaliser la configuration et remplacer localhost par 0.0.0.0
All checks were successful
build-and-push-ext / build_push (push) Successful in 8s
- Ajout de la variable d'environnement SDK_RELAY_WS_URL pour les tests
- Remplacement de localhost par 0.0.0.0 dans les tests WebSocket
- Ajout de la documentation CONFIGURATION.md
- Mise à jour du CHANGELOG.md
- Amélioration de la compatibilité Docker
2025-09-20 20:50:00 +00:00
4NK Dev
73f3dec22c align docker images
All checks were successful
build-and-push-ext / build_push (push) Successful in 1m27s
2025-09-20 20:33:02 +00:00
4NK Dev
df2334f25b docs: Documentation des améliorations récentes
All checks were successful
build-and-push-ext / build_push (push) Successful in 8s
- Documentation du problème de scan bloquant
- Ajout des outils système dans le Dockerfile
- Configuration bootstrap optimisée
- État actuel et prochaines étapes
2025-09-20 14:23:03 +00:00
4NK Dev
930645e6ca feat: ajout apt update && apt upgrade et outils (curl, git, wget, jq, telnet, npm, wscat) dans Dockerfile
All checks were successful
build-and-push-ext / build_push (push) Successful in 3m13s
2025-09-20 12:44:20 +00:00
97f8915b84 ci: docker_tag=ext chore(ci): retrigger build ext
All checks were successful
build-and-push-ext / build_push (push) Successful in 7s
2025-09-20 13:48:08 +02:00
3df4a1bb9a bundle
All checks were successful
build-and-push-ext / build_push (push) Successful in 1m22s
2025-09-20 13:41:20 +02:00
63f2e1e175 bundle 2025-09-20 13:40:13 +02:00
2fa1915330 ci: docker_tag=ext chore(relay): bump 0.1.2, faucet: commitment robuste, ws_url=0.0.0.0, sdk_common rev pin 2025-09-20 13:39:21 +02:00
4NK Dev
31db1ff731 docs: Ajout de la documentation et des règles Cursor
All checks were successful
build-and-push-ext / build_push (push) Successful in 7s
- Ajout de .cursorrules pour les règles de développement
- Ajout de docs/CONFIGURATION.md avec la configuration du service
- Documentation des problèmes connus et solutions
2025-09-20 10:48:58 +00:00
4NK Dev
6586e0bb2f ci: docker_tag=ext - Correction de la branche sdk_common dans le Dockerfile
All checks were successful
build-and-push-ext / build_push (push) Successful in 1m43s
- Cloner sdk_common depuis la branche 'dev' au lieu de la branche par défaut
- Résout l'erreur de feature 'blindbit-backend' manquante
2025-09-20 09:20:09 +00:00
4NK Dev
16ee2de894 ci: docker_tag=ext - Correction du Dockerfile pour le build CI
Some checks failed
build-and-push-ext / build_push (push) Failing after 9s
- Cloner sdk_common depuis le repository Git au lieu de le copier localement
- Résout l'erreur de build 'sdk_common not found' dans le CI
2025-09-20 09:19:09 +00:00
4NK Dev
8b0465f6a4 ci: docker_tag=ext - Améliorations du Dockerfile et documentation
Some checks failed
build-and-push-ext / build_push (push) Failing after 7s
- Ajout de telnet dans le Dockerfile pour les healthchecks
- Documentation des améliorations de démarrage
- Mise à jour des fichiers de configuration
2025-09-20 09:17:12 +00:00
4NK Dev
a867cc616a fix faucet empty commitment
Some checks failed
build-and-push-ext / build_push (push) Failing after 7s
2025-09-20 02:27:13 +00:00
4NK Dev
fcd3477582 ci: docker_tag=ext - Use sdk_common from git repository with panic fix
Some checks failed
build-and-push-ext / build_push (push) Failing after 6s
2025-09-20 01:39:20 +00:00
4NK Dev
81cf08f7b3 ci: docker_tag=dev-test - Configuration faucet avec wss://dev3.4nkweb.com/ws/ et bootstrap_faucet=true
All checks were successful
build-and-push-ext / build_push (push) Successful in 7s
2025-09-20 00:29:43 +00:00
4NK Dev
3789cf231e ci: docker_tag=dev-test - Configuration mise à jour pour le faucet bootstrap
All checks were successful
build-and-push-ext / build_push (push) Successful in 7s
2025-09-20 00:21:22 +00:00
4NK Dev
1fe76a51f1 ext for all
All checks were successful
build-and-push-ext / build_push (push) Successful in 7s
2025-09-19 23:16:55 +00:00
4NK Dev
6bcec5c8ba ci: docker_tag=ext refactor(message): split handlers per type, docs+oss, bump 0.1.1
All checks were successful
build-and-push-ext / build_push (push) Successful in 1m24s
2025-09-19 23:13:55 +00:00
4NK Dev
bf339eb15a ci: docker_tag=ext - Add curl to Dockerfile for healthcheck
All checks were successful
build-and-push-ext / build_push (push) Successful in 21s
2025-09-19 17:04:04 +00:00
4NK Dev
25ac6eb808 ci: docker_tag=ext
All checks were successful
build-and-push-ext / build_push (push) Successful in 1m28s
- Complété l'analyse détaillée du repo dans docs/ANALYSE.md
- Ajouté guide de validation opérationnelle dans docs/VALIDATION.md
- Créé tests basiques: health_check.sh et ws_connect.rs
- Endpoint /health confirmé sur port 8091
- WebSocket confirmé sur port 8090 via tokio_tungstenite
- Documentation CI/CD et configuration Docker
2025-09-19 15:09:51 +00:00
4NK Dev
72bbffb31c feat: add /health endpoint on port 8091
All checks were successful
build-and-push-ext / build_push (push) Successful in 1m28s
- Add HTTP health server on port 8091
- Add CI workflow for dev4 branch with ext tag
- Health endpoint returns {"status":"ok"} for monitoring

ci: docker_tag=ext
2025-09-19 13:12:35 +00:00
Omar Oughriss
a056d44cbf Add "dev" tagged image
All checks were successful
Build and Push to Registry / build-and-push (push) Successful in 2m6s
2025-09-08 17:08:59 +02:00
Sosthene
a169c99366 [bug] fix broken validation_tokens deduplication 2025-09-02 13:20:41 +02:00
Sosthene
afd64e2172 Cargo fmt 2025-08-22 13:19:26 +02:00
Sosthene
60cd5a9b37 Update dependencies 2025-08-22 13:19:04 +02:00
Sosthene
748c8086c4 sdk_common with features 2025-08-18 17:28:48 +02:00
Sosthene
fd3356c0d5 Merge branch 'blindbit' into dev 2025-07-21 16:22:20 +02:00
Sosthene
919b820f3c Implement SpScanner with blindbit 2025-07-08 11:30:52 +02:00
Sosthene
f23392b7f9 [bug] fix improper mark_output_mined 2025-07-08 11:30:52 +02:00
Sosthene
471a7e0ef9 Update to new compression 2025-07-08 11:30:52 +02:00
Sosthene
617aafea31 [bug] Get only unspent outputs to craft transactions 2025-07-08 11:30:52 +02:00
Sosthene
304907a761 Set blindbit new block sync attempts at 4 2025-07-08 11:30:52 +02:00
Sosthene
bec23c5c5c Rm electrs and related dependencies 2025-07-08 11:30:52 +02:00
Sosthene
ff64646e18 Cargo fmt 2025-07-08 11:30:52 +02:00
Sosthene
2b1984fdd9 Scan block with blindbit instead of electrs 2025-07-08 11:30:50 +02:00
cdb56d779f Replace electrum with blindbit 2025-07-08 11:30:32 +02:00
a05bca6ac4 Add blindbit config option 2025-07-08 11:30:32 +02:00
f269172be3 Add blindbit_url to conf file 2025-07-08 11:30:32 +02:00
Sosthene
220e520e01 Update members list and send message on update 2025-07-08 11:30:31 +02:00
c3a58ed020 Add chain_tip to handshake message 2025-06-24 10:13:13 +02:00
Sosthene
7633c07e2f Minor update to latest sp_client 2025-06-23 18:00:07 +02:00
Sosthene
b783649d42 Update Cargo.lock 2025-06-23 17:59:16 +02:00
931c6b827b Cargo fmt 2025-06-03 18:42:12 +02:00
bf6f90f754 Update Cargo.lock 2025-06-03 18:41:53 +02:00
d1cd4491f5 Update commit to latest Pcd definition 2025-06-03 18:32:26 +02:00
12a8b9aab5 Refactoring to update to latest common 2025-04-08 16:03:12 +02:00
ebf27a0a8d process_validation accepts empty state 2025-03-13 14:40:08 +01:00
a390115b3e Update dependencies 2025-03-12 10:28:34 +01:00
818395177c Update to latest common 2025-03-12 10:28:21 +01:00
00f01e5f48 [bug] prevent freezed utxos being accidentally unlocked 2025-03-12 10:26:29 +01:00
69afa2695b Handle public_data in commitments message 2025-03-03 23:21:46 +01:00
02189bf3db udpate sdk_common dependency 2025-02-20 11:38:07 +01:00
7c278e9eb2 Update tests 2025-02-20 11:38:07 +01:00
fd763e6193 Minor fixes in commit 2025-02-20 11:38:07 +01:00
7a060c123d Roles in process state 2025-02-20 11:38:07 +01:00
f7260d6dce Update dependencies 2025-02-20 11:38:07 +01:00
fd8c40e09a Refactor commit 2025-02-05 14:36:33 +01:00
efa1129e45 [bug] quick fix by removing commit msg from cache to allow retry 2025-02-04 10:47:31 +01:00
8b978b91e9 list_unspent also list unsafe outputs 2025-02-04 10:39:29 +01:00
de3808eb1e Update dependencies 2025-02-04 10:39:14 +01:00
2cc026077e Add an output to update for pairing process 2025-02-03 16:24:20 +01:00
f722dee8f6 Change criteria for a pairing process 2025-02-03 15:57:35 +01:00
159fd78a3a [bug] deadlock when spending from core for the faucet 2025-01-28 21:41:45 +01:00
e69ae1a21a configurable data_dir 2025-01-24 17:43:39 +01:00
6fc08f3fdd Broadcast new process 2025-01-24 17:35:52 +01:00
4839c95e93 Send a partial update when a new member is created 2025-01-24 13:33:54 +01:00
c3aba61be2 Better handling of process updates 2025-01-24 12:42:38 +01:00
3ea25e542a Optimize handshake 2025-01-21 11:07:11 +01:00
eea95b8342 Update process when commiting new state 2025-01-17 18:15:57 +01:00
04b9785531 minor fixes 2025-01-17 09:23:35 +01:00
dbca46a739 Refactor process_validation 2025-01-17 09:22:52 +01:00
5d18fd7688 Refactor to prevent accidental double spends 2025-01-17 09:21:29 +01:00
1d1b3546d6 SilentPaymentWallet::save() takes an already unlocked wallet as argument 2025-01-17 09:19:01 +01:00
8d7a6c7400 Load freezed utxos at startup 2025-01-12 14:11:18 +01:00
d99cab0a26 [bug] fix saving/loading files 2025-01-10 16:27:05 +01:00
17c3fefa88 Rm PublicLists 2025-01-10 16:25:38 +01:00
3fd014edd9 Less verbosity 2025-01-10 16:24:59 +01:00
7eb6304f6e [bug] Missing use statements in main 2025-01-10 09:35:41 +01:00
1e433b63e7 Write to disk Member when process is pairing 2025-01-09 17:31:06 +01:00
a55159449a Update to HandshakeMessage 2025-01-09 17:30:39 +01:00
9db6b9f957 [bug] deadlock in handle_initial_transaction 2025-01-09 16:53:02 +01:00
960c0808fa Update AnkFlag::Init to Handshake 2025-01-09 16:12:25 +01:00
a09eb404e2 Add save processes and members to disk 2025-01-09 16:11:14 +01:00
fdcd234838 Add MEMBERLIST and update it with pairing processes 2025-01-09 11:15:41 +01:00
292b7135ab [bug] deadlock in commit 2025-01-03 12:38:13 +01:00
4ec25d1494 [bug] doesn't fail if checking already scanned transaction 2025-01-02 22:36:41 +01:00
4a72fd3129 Return peers and processes list in init message 2025-01-02 14:23:24 +01:00
21c0882a34 Update to latest common dev 2025-01-02 14:22:35 +01:00
2b1bccb687 Add check_transaction_alone and scan transactions in mempool 2025-01-01 11:48:51 +01:00
1271e492f5 Remove useless verbosity 2024-12-19 14:09:02 +01:00
616f20da8f [bug] Set default fees for commit transaction in case Core fee estimation fails 2024-12-19 14:08:44 +01:00
a6bb827c56 Add the stateId when registering a new state 2024-12-19 14:08:07 +01:00
Sosthene
e708372223 Add Init message at connection with client 2024-12-17 13:42:38 +01:00
Sosthene
cc2368996f Update Cargo.lock 2024-12-17 13:41:52 +01:00
Sosthene
072acb57bc Update commit tests 2024-12-17 13:41:38 +01:00
Sosthene
25bbc64e6a Update to latest sdk_common 2024-12-03 10:45:49 +01:00
Sosthene
2788159c4d [bug] comment out roles check 2024-12-02 10:05:15 +01:00
Sosthene
465f32c4fe [bug] handling new_tx doesn't end up in sending empty message 2024-11-22 14:50:35 +01:00
Sosthene
398ee47612 [test] handle_commit_{new_process, new_state} 2024-11-18 15:35:13 +01:00
Sosthene
1cb20527da Refactor commitment 2024-11-18 15:34:10 +01:00
Sosthene
39a8ff87a9 Abstract Daemon methods in RpcCall trait 2024-11-18 15:21:52 +01:00
Sosthene
8eae4408f2 Refactor commit 2024-11-12 23:24:14 +01:00
Sosthene
ce2c158de2 Update zmq dependency 2024-11-12 23:23:22 +01:00
Sosthene
c468d455ca update Cargo.lock 2024-10-07 16:59:01 +02:00
Sosthene
80b2cf8c4c Update to latest common dev 2024-10-07 11:25:52 +02:00
Sosthene
bc625955e9 Add commit logic 2024-10-04 09:22:03 +02:00
Sosthene
0f726d70be Modify MessageCache constants 2024-10-04 09:21:38 +02:00
Sosthene
7b34096940 More efficient clean_up of MessageCache 2024-10-04 09:21:13 +02:00
Sosthene
fc726dbcf5 Modity create_transaction inputs 2024-10-04 09:20:27 +02:00
Sosthene
a5ea8485d6 Import MutexExt from common 2024-09-23 12:45:15 +02:00
Sosthene
d013c4e20f Update AnkNetworkMsg to Envelope 2024-08-28 10:40:37 +02:00
Sosthene
32916f1588 Save wallet to disk at creation 2024-08-12 16:38:16 +02:00
Sosthene
56321d89df Update to latest common dev 2024-08-12 12:22:28 +02:00
Sosthene
da04149923 scan at each new block 2024-06-25 11:23:15 +02:00
Sosthene
31c17e908d Move shared resources to static variables 2024-06-25 11:21:14 +02:00
Sosthene
fbd7a1c1ea Refactor message processing 2024-06-21 22:43:05 +02:00
ank
9f2c4ed2e1 Add config file + bug fix 2024-06-21 22:42:07 +02:00
Sosthene
e5e7496611 Update to latest sdk_common dev + minor refactoring 2024-06-17 13:48:20 +02:00
Sosthene
8e50727880 import SilentPaymentAddress from utils 2024-06-03 18:18:33 +02:00
Sosthene
a192edb761 sdk_common on branch dev 2024-05-29 15:23:43 +02:00
Sosthene
0d9e8ba4e5 Merge branch 'ws-server' into dev 2024-05-28 11:12:27 +02:00
Sosthene
7e5dc17841 Add an argument for core rpc and network 2024-05-28 11:11:17 +02:00
Sosthene
0d5149eb96 import sp_client through sdk_common 2024-05-28 11:10:48 +02:00
Sosthene
ad026a783e Add a message cache 2024-05-28 11:09:37 +02:00
Sosthene
539670d248 Handle new message types and better error management 2024-05-27 12:10:39 +02:00
Sosthene
de84c8a1bf Faucet refactoring 2024-05-22 10:18:02 +02:00
Sosthene
083843a94a [bug] estimate_fee 2024-05-18 16:36:49 +02:00
Sosthene
4455c76663 Process Unknown message 2024-05-06 10:48:56 +02:00
Sosthene00
ec8e4ebef8 refactoring 2024-04-29 16:03:09 +02:00
Sosthene00
94b96320d7 spend refactoring 2024-04-18 00:34:10 +02:00
Sosthene00
ee5fcb4932 Update sp_backend to sp_client 2024-04-17 21:57:58 +02:00
Sosthene00
a287db7cf8 [bug fix] set floor fees when no estimation from core 2024-04-17 08:25:52 +02:00
Sosthene00
8bbee83aac Return whole tx for faucet 2024-04-17 08:25:06 +02:00
Sosthene00
4d3dc8123a Simplify zmq & one track for all messaging 2024-04-17 08:24:12 +02:00
Sosthene00
459756815f abort if core doesn't have enough funds 2024-04-17 08:18:33 +02:00
Sosthene00
c1d1c0a4b5 Add sdk_common dependency 2024-04-09 14:18:28 +02:00
Sosthene00
7dba477f33 Refactoring 2024-04-08 18:15:26 +02:00
Sosthene00
306949e9f0 Refactoring 2024-03-21 18:08:09 +01:00
Sosthene00
6db81ee769 Add SilentPaymentWallet 2024-03-21 18:06:39 +01:00
Sosthene00
d33c3e9735 Add MutexExt trait 2024-03-21 18:06:17 +01:00
Sosthene00
a28f40fa0c derive Debug for Daemon 2024-03-21 18:05:46 +01:00
Sosthene00
d816115929 use sp_backend 2024-03-20 17:28:12 +01:00
Sosthene00
3ebc319a26 Add DUST constant 2024-03-15 12:39:17 +01:00
Sosthene00
ed37accb67 Add electrum_client 2024-03-15 12:39:04 +01:00
Sosthene00
7774207e01 Faucet spending 2024-03-08 20:40:25 +01:00
Sosthene00
5f4efa5aa3 Add the sp wallet + daemon logic, some refactoring 2024-03-08 15:37:29 +01:00
Sosthene00
2d044ec2c2 Analyze rawtx msg and add sp_tweak 2024-03-07 09:36:59 +01:00
Sosthene00
eb6699baca base_server 2024-02-21 17:11:13 +01:00
45 changed files with 7517 additions and 83 deletions

16
.conf.model Normal file
View File

@ -0,0 +1,16 @@
core_url="http://bitcoin:38332"
ws_url="0.0.0.0:8090"
wallet_name="default"
network="signet"
blindbit_url="http://blindbit-proxy:8000"
zmq_url="tcp://bitcoin:29000"
storage="https://dev4.4nkweb.com/storage"
data_dir="/home/bitcoin/.4nk"
bitcoin_data_dir="/home/bitcoin/.bitcoin"
bootstrap_url="ws://dev3.4nkweb.com:8090"
bootstrap_faucet=true
RUST_LOG="DEBUG,reqwest=DEBUG,tokio_tungstenite=DEBUG"
NODE_OPTIONS="--max-old-space-size=2048"
SIGNER_API_KEY="your-api-key-change-this"
VITE_JWT_SECRET_KEY="52b3d77617bb00982dfee15b08effd52cfe5b2e69b2f61cc4848cfe1e98c0bc9"

25
.cursorignore Normal file
View File

@ -0,0 +1,25 @@
.git
.DS_Store
.env
.next
.next/
.next/static
.next/static/
.next/static/chunks
node_modules
npm-debug.log*
yarn-debug.log*
yarn-error.log*
.cursorignore
dist
coverage
lockfile.json
*.lock
*.log
*.log.*
*.log.*.*
*.log.*.*.*
*.log.*.*.*.*
*.log.*.*.*.*.*
*.log.*.*.*.*.*.*
*.log.*.*.*.*.*.*.*

165
.cursorrules Normal file
View File

@ -0,0 +1,165 @@
# Règles globales Cursor pour les projets
## Principes généraux
- Lire impérativement le fichier `.cursorrules` au démarrage de chaque session.
- Lire tous les fichiers du dossier `docs/`, le code et les paramètres avant de commencer.
- Poser des questions et proposer des améliorations si nécessaire.
- Ajouter les leçons apprises à ce fichier `.cursorrules`.
- Écrire des documents complets et exhaustifs.
- Respecter strictement les règles de lint du Markdown.
- Préférer toujours un shell **bash** à PowerShell.
- Fermer et relancer le terminal avant chaque utilisation.
- Si le terminal est interrompu, analyser la commande précédente (interruption probablement volontaire).
- Exécuter automatiquement les étapes de résolution de problème.
- Expliquer les commandes complexes avant de les lancer.
- Compiler régulièrement et corriger toutes les erreurs avant de passer à létape suivante.
- Tester, documenter, compiler, aligner tag git, changelog et version avant déploiement et push.
- Utiliser `docx2txt` pour lire les fichiers `.docx`.
- Ajouter automatiquement les dépendances et rechercher systématiquement les dernières versions.
- Faire des commandes simples et claires en plusieurs étapes.
- Vérifie toujours tes hypothèses avant de commencer.
- N'oublie jamais qu'après la correction d'un problème, il faut corriger toutes les erreurs qui peuvent en découler.
## Organisation des fichiers et répertoires
- Scripts regroupés dans `scripts/`
- Configurations regroupées dans `confs/`
- Journaux regroupés dans `logs/`
- Répertoires obligatoires :
- `docs/` : documentation de toute fonctionnalité ajoutée, modifiée, supprimée ou découverte.
- `tests/` : tests liés à toute fonctionnalité ajoutée, modifiée, supprimée ou découverte.
- Remplacer les résumés (`RESUME`) par des mises à jour dans `docs/`.
## Configuration critique des services
- Mempool du réseau signet :
`https://mempool2.4nkweb.com/fr/docs/api/rest`
## Développement et sécurité
- Ne jamais committer de clés privées ou secrets.
- Utiliser des variables denvironnement pour les données sensibles.
- Définir correctement les dépendances Docker avec healthchecks.
- Utiliser les URLs de service Docker Compose (`http://service_name:port`).
- Documenter toutes les modifications importantes dans `docs/`.
- Externaliser au maximum les variables denvironnement.
- Toujours utiliser une clé SSH pour cloner les dépôts.
- Monter en version les dépôts au début du travail.
- Pousser les tags docker `ext` via la CI sur `git.4nkweb.com`.
- Corriger systématiquement les problèmes, même mineurs, sans contournement.
## Scripts (règles critiques)
- Vérifier lexistence dun script dans `scripts/` avant toute action.
- Utiliser les scripts existants plutôt que des commandes directes.
- Ne jamais créer plusieurs versions ou noms de scripts.
- Améliorer lexistant au lieu de créer des variantes (`startup-v2.sh`, etc.).
## Images Docker (règles critiques)
- Ajouter systématiquement `apt update && apt upgrade` dans les Dockerfiles.
- Installer en arrière-plan dans les images Docker :
`curl, git, sed, awk, nc, wget, jq, telnet, tee, wscat, ping, npm (dernière version)`
- Appliquer à tous les Dockerfiles et `docker-compose.yml`.
- N'utilise pas les version test ou dev ou ext-dev mais toujours les version ext, relance leur compilation si nécessaire
## Fichiers de configuration (règles critiques)
- Vérifier lécriture effective après chaque modification.
- Fichiers concernés : `nginx.conf`, `bitcoin.conf`, `package.json`, `Cargo.toml`.
- Utiliser `cat`, `jq` ou vérificateurs de syntaxe.
## Connexion au réseau Bitcoin signet
Commande unique et obligatoire :
```bash
docker exec bitcoin-signet bitcoin-cli -signet -rpccookiefile=/home/bitcoin/.bitcoin/signet/.cookie getblockchaininfo
````
## Connexion au relay/faucet bootstrap
* Test via WSS : `wss://dev3.4nkweb.com/ws/`
* Envoi Faucet, réponse attendue avec `NewTx` (tx hex et tweak\_data).
## Debug
* Automatiser dans le code toute solution validée.
* Pérenniser les retours dexpérience dans code et paramètres.
* Compléter les tests pour éviter les régressions.
## Nginx
* Tous les fichiers dans `conf/ngnix` doivent être mappés avec ceux du serveur.
## Minage (règles critiques)
* Toujours valider les adresses utilisées (adresses TSP non reconnues).
* Utiliser uniquement des adresses Bitcoin valides (bech32m).
* Vérifier que le minage génère des blocs avec transactions, pas uniquement coinbase.
* Surveiller les logs du minage pour détecter les erreurs dadresse.
* Vérifier la propagation via le mempool externe.
## Mempool externe
* Utiliser `https://mempool2.4nkweb.com` pour vérifier les transactions.
* Vérifier la synchronisation entre réseau local et externe.
## Données et modèles
* Utiliser les fichiers CSV comme base des modèles de données.
* Être attentif aux en-têtes multi-lignes.
* Confirmer la structure comprise et demander définition de toutes les colonnes.
* Corriger automatiquement incohérences de type.
## Implémentation et architecture
* Code splitting avec `React.lazy` et `Suspense`.
* Centraliser létat avec Redux ou Context API.
* Créer une couche dabstraction pour les services de données.
* Aller systématiquement au bout dune implémentation.
## Préparation open source
Chaque projet doit être prêt pour un dépôt sur `git.4nkweb.com` :
* Inclure : `LICENSE` (MIT, Apache 2.0 ou GPL), `CONTRIBUTING.md`, `CHANGELOG.md`, `CODE_OF_CONDUCT.md`.
* Aligner documentation et tests avec `4NK_node`.
## Versioning et documentation
* Mettre à jour documentation et tests systématiquement.
* Gérer versioning avec changelog.
* Demander validation avant tag.
* Documenter les hypothèses testées dans un REX technique.
* Tester avant tout commit.
* Tester les buildsavant tout tag.
## Bonnes pratiques de confidentialité et sécurité
### Docker
- Ne jamais stocker de secrets (clés, tokens, mots de passe) dans les Dockerfiles ou docker-compose.yml.
- Utiliser des fichiers `.env` sécurisés (non commités avec copie en .env.example) pour toutes les variables sensibles.
- Ne pas exécuter de conteneurs avec lutilisateur root, privilégier un utilisateur dédié.
- Limiter les capacités des conteneurs (option `--cap-drop`) pour réduire la surface dattaque.
- Scanner régulièrement les images Docker avec un outil de sécurité (ex : Trivy, Clair).
- Mettre à jour en continu les images de base afin déliminer les vulnérabilités.
- Ne jamais exposer de ports inutiles.
- Restreindre les volumes montés au strict nécessaire.
- Utiliser des réseaux Docker internes pour la communication inter-containers.
- Vérifier et tenir à jour les .dockerignore.
### Git
- Ne jamais committer de secrets, clés ou identifiants (même temporairement).
- Configurer des hooks Git (pre-commit) pour détecter automatiquement les secrets et les failles.
- Vérifier lhistorique (`git log`, `git filter-repo`) pour sassurer quaucune information sensible na été poussée.
- Signer les commits avec GPG pour garantir lauthenticité.
- Utiliser systématiquement SSH pour les connexions à distance.
- Restreindre les accès aux dépôts (principes du moindre privilège).
- Documenter les changements sensibles dans `CHANGELOG.md`.
- Ne jamais pousser directement sur `main` ou `master`, toujours passer par des branches de feature ou PR.
- Vérifier et tenir à jour les .gitignore.
- Vérifier et tenir à jour les .gitkeep.
- Vérifier et tenir à jour les .gitattributes.
### Cursor
- Toujours ouvrir une session en commençant par relire le fichier `.cursorrules`.
- Vérifier que Cursor ne propose pas de commit contenant des secrets ou fichiers sensibles.
- Ne pas exécuter dans Cursor de commandes non comprises ou copiées sans vérification.
- Préférer lutilisation de scripts audités dans `scripts/` plutôt que des commandes directes dans Cursor.
- Fermer et relancer Cursor régulièrement pour éviter des contextes persistants non désirés.
- Ne jamais partager le contenu du terminal ou des fichiers sensibles via Cursor en dehors du périmètre du projet.
- Vérifier et tenir à jour les .cursorrules.
- Vérifier et tenir à jour les .cursorignore.

10
.dockerignore Normal file
View File

@ -0,0 +1,10 @@
.git
node_modules
.next
coverage
dist
.DS_Store
npm-debug.log*
yarn-debug.log*
yarn-error.log*
.env*

16
.env Normal file
View File

@ -0,0 +1,16 @@
core_url=http://bitcoin:38332
ws_url=0.0.0.0:8090
wallet_name=default
network=signet
blindbit_url=http://localhost:8000
zmq_url=tcp://bitcoin:29000
storage=https://dev4.4nkweb.com/storage
data_dir=/home/bitcoin/.4nk
bitcoin_data_dir=/home/bitcoin/.bitcoin
bootstrap_url=ws://dev3.4nkweb.com:8090
bootstrap_faucet=true
RUST_LOG=DEBUG,reqwest=DEBUG,tokio_tungstenite=DEBUG
NODE_OPTIONS=--max-old-space-size=2048
SIGNER_API_KEY=your-api-key-change-this
VITE_JWT_SECRET_KEY=52b3d77617bb00982dfee15b08effd52cfe5b2e69b2f61cc4848cfe1e98c0bc9

16
.env.exemple Normal file
View File

@ -0,0 +1,16 @@
core_url=http://bitcoin:38332
ws_url=0.0.0.0:8090
wallet_name=default
network=signet
blindbit_url=http://localhost:8000
zmq_url=tcp://bitcoin:29000
storage=https://dev4.4nkweb.com/storage
data_dir=/home/bitcoin/.4nk
bitcoin_data_dir=/home/bitcoin/.bitcoin
bootstrap_url=ws://dev3.4nkweb.com:8090
bootstrap_faucet=true
RUST_LOG=DEBUG,reqwest=DEBUG,tokio_tungstenite=DEBUG
NODE_OPTIONS=--max-old-space-size=2048
SIGNER_API_KEY=your-api-key-change-this
VITE_JWT_SECRET_KEY=52b3d77617bb00982dfee15b08effd52cfe5b2e69b2f61cc4848cfe1e98c0bc9

View File

@ -0,0 +1,73 @@
name: build-and-push-ext
on:
push:
tags:
- ext
jobs:
build_push:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Prepare SSH agent (optional)
shell: bash
run: |
set -euo pipefail
eval "$(ssh-agent -s)"
if [ -n "${{ secrets.SSH_PRIVATE_KEY || '' }}" ]; then
echo "${{ secrets.SSH_PRIVATE_KEY }}" | tr -d '\r' | ssh-add - >/dev/null 2>&1 || true
fi
mkdir -p ~/.ssh
ssh-keyscan git.4nkweb.com >> ~/.ssh/known_hosts 2>/dev/null || true
echo "SSH agent ready: $SSH_AUTH_SOCK"
# Rendre l'agent dispo aux steps suivants
echo "SSH_AUTH_SOCK=$SSH_AUTH_SOCK" >> "$GITHUB_ENV"
echo "SSH_AGENT_PID=$SSH_AGENT_PID" >> "$GITHUB_ENV"
- name: Compute Docker tag from commit message or fallback
id: tag
shell: bash
run: |
set -euo pipefail
msg=$(git log -1 --pretty=%B)
if [[ "$msg" =~ ci:\ docker_tag=([a-zA-Z0-9._:-]+) ]]; then
tag="${BASH_REMATCH[1]}"
else
tag="dev-test"
fi
echo "TAG=$tag" | tee -a $GITHUB_OUTPUT
- name: Docker login (git.4nkweb.com)
shell: bash
env:
REG_USER: ${{ secrets.USER }}
REG_TOKEN: ${{ secrets.TOKEN }}
run: |
set -euo pipefail
echo "$REG_TOKEN" | docker login git.4nkweb.com -u "$REG_USER" --password-stdin
- name: Build image (target ext)
shell: bash
env:
DOCKER_BUILDKIT: "1"
run: |
set -euo pipefail
if [ -n "${SSH_AUTH_SOCK:-}" ]; then
docker build --ssh default \
-t git.4nkweb.com/4nk/sdk_relay:${{ steps.tag.outputs.TAG }} \
-f Dockerfile .
else
echo "SSH_AUTH_SOCK non défini: l'agent SSH n'est pas disponible. Assurez-vous de définir secrets.SSH_PRIVATE_KEY."
exit 1
fi
- name: Push image
shell: bash
run: |
set -euo pipefail
docker push git.4nkweb.com/4nk/sdk_relay:${{ steps.tag.outputs.TAG }}

2
.gitignore vendored Normal file
View File

@ -0,0 +1,2 @@
/target
.conf

39
CHANGELOG.md Normal file
View File

@ -0,0 +1,39 @@
### Changelog
Toutes les modifications notables de ce projet seront documentées ici.
Format inspiré de Keep a Changelog et versionnage SemVer.
## [Unreleased]
### Corrections WebSocket et configuration
- **Documentation WebSocket** : Ajout de `docs/WEBSOCKET_CONFIGURATION.md` avec analyse complète
- **Gestion d'erreurs WebSocket** : Amélioration avec `log::warn!` pour les tentatives de connexion non-WebSocket
- **Tests WebSocket** : Documentation des tests avec headers corrects et incorrects
- **Problème persistant** : Nginx ne transmet pas les headers WebSocket (investigation en cours)
### Ajouts
- **Configuration externalisée avancée** : Ajout des variables d'environnement `WS_BIND_URL`, `HEALTH_PORT`, `HEALTH_BIND_ADDRESS`
- **Configuration externalisée** : Ajout de la variable d'environnement `SDK_RELAY_WS_URL` pour les tests
- **Tests améliorés** : Remplacement de `localhost` par `0.0.0.0` dans les tests WebSocket pour compatibilité Docker
- **Documentation** : Ajout de `docs/CONFIGURATION.md` avec guide des variables d'environnement
- **Flexibilité** : Configuration plus flexible pour les environnements Docker et conteneurs
- **Correction majeure** : Résolution du problème de binding sur 127.0.0.1 au lieu de 0.0.0.0
- Documentation: README modernisé, `docs/ANALYSE.md` et `docs/VALIDATION.md` vérifiés
- Open source: LICENSE (MIT), CONTRIBUTING, Code of Conduct
- Tests: script `tests/health_check.sh`, test WS conservé
- Refactor: découpage de `src/message.rs` en `src/message/{cache,broadcast,handlers}.rs` et réexports via `src/message/mod.rs`
- Handlers scindés: `src/message/handlers/{faucet,new_tx,cipher,commit,unknown,sync}.rs`, avec router dans `handlers/mod.rs`
- Tests: marquage `#[ignore]` de deux tests unitaires instables dans `src/commit.rs` (init statique OnceLock/WALLET en contexte test)
## [0.1.2] - 2025-09-20
### Corrections et améliorations
- Faucet: validation robuste du champ `commitment` (32 octets), génération aléatoire si invalide pour éviter les paniques et lempoisonnement de Mutex.
- Réseau: `ws_url` par défaut exposé sur `0.0.0.0:8090` dans `.conf` pour tests internoeuds.
- Dépendances: `sdk_common` épinglé sur `rev = e205229e` avec `features = ["parallel", "blindbit-backend"]` pour résoudre `backend_blindbit_native`.
- Journalisation: amélioration des logs de debug autour du faucet et du broadcast.
## [0.1.1] - 2025-09-19
- Alignement initial pour publication interne et préparation open source
## [0.1.3] - 2025-09-21
- Fix: Gestion des fichiers JSON vides dans StateFile::load()
- Fix: Résolution de l'erreur 'invalid type: sequence, expected a map'

14
CODE_OF_CONDUCT.md Normal file
View File

@ -0,0 +1,14 @@
### Code de conduite
Nous nous engageons à offrir un environnement accueillant et respectueux.
Attendus:
- Respect, collaboration, feedback constructif
- Zéro tolérance pour le harcèlement
Signalement:
- Contacter les mainteneurs via les canaux internes 4NK
Conséquences:
- Avertissement, suspension de contributions, bannissement en cas dabus

16
CONTRIBUTING.md Normal file
View File

@ -0,0 +1,16 @@
### Contribuer à sdk_relay
Merci de votre intérêt. Ce projet suit un flux simple:
- Ouvrir une branche depuis `dev` selon `repos.csv` (référentiel interne)
- Respecter le style Rust (rustfmt/clippy) et scripts `tests/`
- Documenter toute fonctionnalité dans `docs/` et ajouter des tests dans `tests/`
- Mettre à jour `CHANGELOG.md` (section Unreleased)
- Commits signés; préfixes CI supportés: `ci: docker_tag=<valeur>`
Revue:
- Petites PRs, description claire, impacts, migrations, variables denv
- Pas de secrets, pas dexemples de clés
Code de conduite: voir `CODE_OF_CONDUCT.md`.

2883
Cargo.lock generated Normal file

File diff suppressed because it is too large Load Diff

24
Cargo.toml Normal file
View File

@ -0,0 +1,24 @@
[package]
name = "sdk_relay"
version = "0.1.2"
edition = "2021"
[dependencies]
anyhow = "1.0"
async-trait = "0.1"
bitcoincore-rpc = { version = "0.18" }
env_logger = "0.9"
futures-util = { version = "0.3.28", default-features = false, features = ["sink", "std"] }
hex = "0.4.3"
log = "0.4.20"
sdk_common = { git = "https://git.4nkweb.com/4nk/sdk_common.git", rev = "e205229e", features = ["parallel", "blindbit-backend"] }
serde = { version = "1.0.193", features = ["derive"]}
serde_json = "1.0"
serde_with = "3.6.0"
tokio = { version = "1.0.0", features = ["io-util", "rt-multi-thread", "macros", "sync"] }
tokio-stream = "0.1"
tokio-tungstenite = "0.21.0"
zeromq = "0.4.1"
[dev-dependencies]
mockall = "0.13.0"

52
Dockerfile Normal file
View File

@ -0,0 +1,52 @@
# syntax=docker/dockerfile:1.4
FROM rust:latest AS builder
WORKDIR /app
# Configuration de git pour utiliser SSH
RUN mkdir -p /root/.ssh && \
ssh-keyscan git.4nkweb.com >> /root/.ssh/known_hosts
# Cloner sdk_common depuis le repository (branche dev4)
RUN --mount=type=ssh git clone -b dev4 ssh://git@git.4nkweb.com/4nk/sdk_common.git /sdk_common
# Copie des fichiers de sdk_relay
COPY Cargo.toml Cargo.lock ./
COPY src/ src/
# Build avec support SSH pour récupérer les dépendances
RUN --mount=type=ssh cargo build --release
# ---- image finale ----
FROM debian:bookworm-slim
RUN apt-get update && apt-get upgrade -y && \
apt-get install -y ca-certificates strace curl dnsutils jq git wget telnet npm coreutils && \
npm install -g wscat && \
rm -rf /var/lib/apt/lists/* /root/.npm
# Créer l'utilisateur bitcoin
RUN useradd -m -d /home/bitcoin -u 1000 bitcoin
COPY --from=builder /app/target/release/sdk_relay /usr/local/bin/sdk_relay
RUN chown bitcoin:bitcoin /usr/local/bin/sdk_relay && \
chmod 755 /usr/local/bin/sdk_relay
# Configuration via build arg
ARG CONF
RUN echo "$CONF" > /home/bitcoin/.conf && \
chown bitcoin:bitcoin /home/bitcoin/.conf && \
chmod 644 /home/bitcoin/.conf
# Créer le répertoire .4nk avec les bonnes permissions
RUN mkdir -p /home/bitcoin/.4nk && \
chown -R bitcoin:bitcoin /home/bitcoin/.4nk && \
chmod 755 /home/bitcoin/.4nk
WORKDIR /home/bitcoin
USER bitcoin
ENV HOME=/home/bitcoin
VOLUME ["/home/bitcoin/.4nk"]
VOLUME ["/home/bitcoin/.bitcoin"]
EXPOSE 8090 8091
ENTRYPOINT ["sdk_relay", "--config", "/home/bitcoin/.conf"]

172
IMPROVEMENTS.md Normal file
View File

@ -0,0 +1,172 @@
# Améliorations de la séquence de démarrage de sdk_relay
## Problème actuel
- Le serveur WebSocket ne démarre qu'après le scan complet des blocs
- Les services dépendants (ihm_client, etc.) ne peuvent pas se connecter pendant le scan
- Le scan peut prendre plusieurs minutes et bloquer complètement le service
## Solutions proposées
### 1. Démarrage immédiat des serveurs (Recommandé)
```rust
// Dans main.rs, démarrer les serveurs AVANT le scan
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// ... configuration ...
// DÉMARRER LES SERVEURS IMMÉDIATEMENT
let try_socket = TcpListener::bind(config.ws_url).await?;
let listener = try_socket;
// Démarrer le serveur de santé immédiatement
tokio::spawn(start_health_server(8091));
// Démarrer le serveur WebSocket dans une tâche séparée
let ws_handle = tokio::spawn(async move {
while let Ok((stream, addr)) = listener.accept().await {
tokio::spawn(handle_connection(stream, addr, our_sp_address));
}
});
// FAIRE LE SCAN EN ARRIÈRE-PLAN
tokio::spawn(async move {
if let Err(e) = scan_blocks(current_tip - last_scan, &config.blindbit_url).await {
eprintln!("Scan error: {}", e);
}
});
// Attendre que le serveur WebSocket soit prêt
ws_handle.await?;
Ok(())
}
```
### 2. Mode "dégradé" pendant le scan
```rust
// Ajouter un état de service
enum ServiceState {
Starting,
Scanning,
Ready,
Error,
}
// Le serveur WebSocket accepte les connexions mais répond avec un état
async fn handle_connection(stream: TcpStream, addr: SocketAddr, state: Arc<Mutex<ServiceState>>) {
let current_state = state.lock().await;
match *current_state {
ServiceState::Scanning => {
// Répondre avec un message indiquant que le service est en cours de scan
send_message(stream, json!({
"status": "scanning",
"message": "Service is scanning blocks, please wait..."
})).await;
},
ServiceState::Ready => {
// Traitement normal des messages
handle_normal_connection(stream).await;
},
_ => {
// Répondre avec une erreur
send_error(stream, "Service not ready").await;
}
}
}
```
### 3. Scan incrémental en arrière-plan
```rust
// Modifier le scan pour qu'il soit non-bloquant
async fn start_background_scan(start_block: u32, end_block: u32, blindbit_url: &str) {
let mut current = start_block;
while current <= end_block {
match scan_single_block(current, blindbit_url).await {
Ok(_) => {
current += 1;
// Mettre à jour le last_scan dans le wallet
update_last_scan(current).await;
},
Err(e) => {
eprintln!("Error scanning block {}: {}", current, e);
// Attendre un peu avant de réessayer
tokio::time::sleep(Duration::from_secs(5)).await;
}
}
}
}
```
### 4. Healthcheck amélioré
```rust
async fn start_health_server(port: u16) {
let listener = TcpListener::bind(format!("0.0.0.0:{}", port)).await.unwrap();
while let Ok((stream, _)) = listener.accept().await {
tokio::spawn(async move {
let response = match get_service_state().await {
ServiceState::Ready => json!({"status": "ok", "scan_complete": true}),
ServiceState::Scanning => json!({"status": "ok", "scan_complete": false, "message": "Scanning in progress"}),
ServiceState::Starting => json!({"status": "starting", "scan_complete": false}),
ServiceState::Error => json!({"status": "error", "scan_complete": false}),
};
let response_str = response.to_string();
let http_response = format!(
"HTTP/1.1 200 OK\r\nContent-Length: {}\r\nContent-Type: application/json\r\n\r\n{}",
response_str.len(),
response_str
);
let _ = stream.write_all(http_response.as_bytes()).await;
});
}
}
```
## Implémentation recommandée
### Étape 1 : Modifier main.rs
1. Démarrer les serveurs WebSocket et de santé immédiatement
2. Faire le scan en arrière-plan dans une tâche séparée
3. Ajouter un état de service partagé
### Étape 2 : Modifier le healthcheck
1. Retourner l'état du scan dans la réponse
2. Permettre aux services de savoir si le scan est terminé
### Étape 3 : Modifier docker-compose.yml
```yaml
sdk_relay:
# ... configuration existante ...
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8091/"]
interval: 10s
timeout: 5s
retries: 3
start_period: 30s # Donner plus de temps pour le démarrage initial
```
### Étape 4 : Modifier les services dépendants
```yaml
ihm_client:
depends_on:
sdk_relay:
condition: service_healthy
# Ajouter une vérification de l'état du scan
environment:
- SDK_RELAY_SCAN_TIMEOUT=300 # 5 minutes max d'attente
```
## Avantages
- ✅ Services disponibles immédiatement
- ✅ Scan non-bloquant
- ✅ Meilleure expérience utilisateur
- ✅ Monitoring de l'état du scan
- ✅ Récupération d'erreur améliorée
## Migration
1. Implémenter les changements dans sdk_relay
2. Tester avec un scan réduit
3. Déployer en production
4. Surveiller les logs et métriques

22
LICENSE Normal file
View File

@ -0,0 +1,22 @@
MIT License
Copyright (c) 2025 4NK
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

117
README.md
View File

@ -1,93 +1,44 @@
# sdk_relay
## Getting started
To make it easy for you to get started with GitLab, here's a list of recommended next steps.
Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!
## Add your files
- [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- [ ] [Add files using the command line](https://docs.gitlab.com/ee/gitlab-basics/add-file.html#add-a-file-using-the-command-line) or push an existing Git repository with the following command:
```
cd existing_repo
git remote add origin https://git.4nkweb.com/4nk/sdk_relay.git
git branch -M main
git push -uf origin main
```
## Integrate with your tools
- [ ] [Set up project integrations](https://git.4nkweb.com/4nk/sdk_relay/-/settings/integrations)
## Collaborate with your team
- [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
- [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
- [ ] [Set auto-merge](https://docs.gitlab.com/ee/user/project/merge_requests/merge_when_pipeline_succeeds.html)
## Test and Deploy
Use the built-in continuous integration in GitLab.
- [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/index.html)
- [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing (SAST)](https://docs.gitlab.com/ee/user/application_security/sast/)
- [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/)
- [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)
***
# Editing this README
When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thanks to [makeareadme.com](https://www.makeareadme.com/) for this template.
## Suggestions for a good README
Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.
## Name
Choose a self-explaining name for your project.
### sdk_relay
## Description
Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.
`sdk_relay` est un service Rust qui relaye des messages et événements autour de Bitcoin en sadossant à:
- RPC Bitcoin Core (lecture chaîne, mempool, transactions, PSBT)
- ZMQ de Bitcoin Core (sujets `rawtx`, `hashblock`)
- Blindbit (scan filtré BIP158 via `sdk_common`)
- WebSocket (serveur « fanout » et diffusion contrôlée)
## Badges
On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.
## Fonctionnalités
- Serveur WebSocket configurable (`ws_url`)
- Endpoint de santé HTTP (`/health` sur port 8091)
- Gestion détat locale dans `~/<data_dir>` (par défaut `.4nk`)
- Diffusion des messages réseau `sdk_common` (Handshake, NewTx, Commit, Cipher, etc.)
- Persistance de portefeuille Silent Payments et des processus/membres
## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
## Configuration
Créer un fichier de configuration et le fournir via `--config` (voir `docs/ANALYSE.md`):
- `core_url`, `core_wallet?`, `ws_url`, `wallet_name`, `network`, `blindbit_url`, `zmq_url`, `data_dir`, `bootstrap_url?`, `bootstrap_faucet`
## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people to using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.
## Build
Pré-requis: Rust stable. Compilation locale simple:
```
cargo build
```
Compilation par conteneur: voir `Dockerfile` (multiétapes; binaire `sdk_relay`).
## Usage
Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.
## Exécution
Le binaire expose:
- WebSocket sur `ws_url` (ex.: `0.0.0.0:8090`)
- `/health` sur `8091` renvoyant `{"status":"ok"}`
## Support
Tell people where they can go to for help. It can be any combination of an issue tracker, a chat room, an email address, etc.
## Tests
- Tests réseau: `tests/ws_connect.rs` (nécessite un service en écoute sur `SDK_RELAY_WS_URL` ou `ws://localhost:8090`)
- Vérification santé: `tests/health_check.sh`
## Roadmap
If you have ideas for releases in the future, it is a good idea to list them in the README.
## Documentation
Voir `docs/ANALYSE.md` (architecture) et `docs/VALIDATION.md` (parcours de validation et dépannage).
## Contributing
State if you are open to contributions and what your requirements are for accepting them.
## Licence et contributions
Voir `LICENSE`, `CONTRIBUTING.md`, `CODE_OF_CONDUCT.md`.
For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.
You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.
## Authors and acknowledgment
Show your appreciation to those who have contributed to the project.
## License
For open source projects, say how it is licensed.
## Project status
If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.
## Journal de changements
Voir `CHANGELOG.md`.

7
debian.code-workspace Normal file
View File

@ -0,0 +1,7 @@
{
"folders": [
{
"path": ".."
}
]
}

View File

@ -0,0 +1,43 @@
# Améliorations Récentes - SDK Relay
## Date: 20 Septembre 2025
### 🔧 Corrections Majeures
#### 1. Problème de Scan Bloquant
**Problème:** Le service se bloquait lors du scan initial des blocs Bitcoin.
**Solution:**
- Optimisation du `last_scan` pour éviter les scans trop importants
- Réduction des logs de `DEBUG` à `INFO`
- Amélioration du healthcheck
**Fichiers modifiés:**
- `Dockerfile` - Ajout des outils système
- `Cargo.toml` - Mise à jour des dépendances
- Configuration - Optimisation des paramètres
#### 2. Installation des Outils Système
**Ajouté dans le Dockerfile:**
```dockerfile
RUN apt-get update && apt-get upgrade -y && \
apt-get install -y ca-certificates dnsutils jq curl git wget telnet npm coreutils && \
npm install -g wscat && \
rm -rf /var/lib/apt/lists/* /root/.npm
```
#### 3. Configuration Bootstrap
- URL bootstrap: `wss://dev3.4nkweb.com/ws/`
- Faucet activé: `bootstrap_faucet=true`
- Adresse SP permanente configurée
### 📊 État Actuel
- **Version:** 0.1.2
- **Statut:** ✅ Healthy
- **Logs:** Niveau INFO (optimisé)
- **Scan:** Optimisé pour éviter les blocages
### 🔄 Prochaines Étapes
- Monitoring des performances
- Tests de connectivité bootstrap
- Optimisations supplémentaires si nécessaire

85
docs/ANALYSE.md Normal file
View File

@ -0,0 +1,85 @@
## Analyse détaillée
### Périmètre
Service Rust `sdk_relay` interfaçant Bitcoin (RPC), Blindbit et WebSocket, avec configuration injectée.
### Stack
- **Langage**: Rust 2021
- **Dépendances**: `tokio`, `tokio-tungstenite`, `zeromq`, `bitcoincore-rpc`, `serde[_json]`, `env_logger`, `futures-util`, `sdk_common` (git, branche `dev`, features `parallel`, `blindbit-backend`).
### Build et image
- Docker multiétapes: build dans `rust:latest` avec SSH pour deps privées, exécution `debian:bookworm-slim`.
- Binaire: `/usr/local/bin/sdk_relay`.
- Conf: buildarg `CONF` écrit dans `/home/bitcoin/.conf`.
- Volumes: `/home/bitcoin/.4nk`, `/home/bitcoin/.bitcoin`.
### Réseau et healthcheck
- **WebSocket**: serveur lié sur `Config.ws_url` (ex. `0.0.0.0:8090`) via `tokio_tungstenite`.
- **Health**: serveur TCP léger interne sur port `8091` retournant `{"status":"ok"}`.
- **Ports exposés**: `8090` (WS), `8091` (HTTP /health) dans le `Dockerfile`.
Références code:
```396:625:src/main.rs
async fn handle_health_endpoint(mut stream: TcpStream) {
let response = "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nContent-Length: 15\r\n\r\n{\"status\":\"ok\"}";
let _ = stream.write_all(response.as_bytes()).await;
let _ = stream.shutdown().await;
}
async fn start_health_server(port: u16) { /* ... */ }
// Start health server on port 8091
tokio::spawn(start_health_server(8091));
```
Configuration:
```1:7:.conf.model
core_url=""
ws_url=""
wallet_name="default"
network="signet"
electrum_url="tcp://localhost:60601"
blindbit_url="tcp://localhost:8000"
zmq_url=""
```
### Logs
- `RUST_LOG` géré par env; dans `lecoffre_node`, sortie tee vers `/home/bitcoin/.4nk/logs/sdk_relay.log`.
### Risques et points dattention
- Dépendance `sdk_common` via git/branche `dev`: geler par tag/commit pour reproductibilité.
- Image dexécution embarque `strace`; vérifier nécessité en prod.
- Permissions volume Windows: note de chown partiel dans compose parent.
### Actions proposées
- Pinner `sdk_common` sur un commit ou tag; documenter politique de mise à jour.
- Séparer images `-dev` et `-prod` si `strace` non requis.
- Documenter format du fichier de conf (`sdk_relay.conf`) et valeurs par défaut.
### CI / Image
- Pipeline `build-and-push-ext` construit et pousse limage avec un tag calculé depuis le message de commit (préfixe `ci: docker_tag=` sinon `dev-test`).
- Limage expose `8090 8091` et lance `sdk_relay --config /home/bitcoin/.conf`.
Références:
```1:46:Dockerfile
EXPOSE 8090 8091
ENTRYPOINT ["sdk_relay", "--config", "/home/bitcoin/.conf"]
```
```1:73:.gitea/workflows/build-ext.yml
name: build-and-push-ext
```

74
docs/CONFIGURATION.md Normal file
View File

@ -0,0 +1,74 @@
# Configuration SDK Relay
## Variables d'environnement
Le service `sdk_relay` peut être configuré via les variables d'environnement suivantes :
### Variables principales
- **`SDK_RELAY_WS_URL`** : URL WebSocket pour les tests (défaut: `ws://0.0.0.0:8090`)
- **`WS_BIND_URL`** : URL de binding WebSocket (override de la configuration, défaut: valeur de `ws_url`)
- **`HEALTH_PORT`** : Port du serveur de santé (défaut: `8091`)
- **`HEALTH_BIND_ADDRESS`** : Adresse de binding du serveur de santé (défaut: `0.0.0.0`)
- **`RUST_LOG`** : Niveau de logging (défaut: `INFO`)
### Configuration via fichier
Le service utilise un fichier de configuration (`sdk_relay.conf`) avec les paramètres suivants :
```ini
core_url="http://bitcoin:38332"
ws_url="0.0.0.0:8090"
wallet_name="default"
network="signet"
blindbit_url="http://blindbit-oracle:8000"
zmq_url="tcp://bitcoin:29000"
storage="https://dev4.4nkweb.com/storage"
data_dir="/home/bitcoin/.4nk"
bitcoin_data_dir="/home/bitcoin/.bitcoin"
bootstrap_url="wss://dev3.4nkweb.com/ws/"
bootstrap_faucet=true
RUST_LOG="INFO"
sp_address="tsp1qqgmwp9n5p9ujhq2j6cfqe4jpkyu70jh9rgj0pwt3ndezk2mrlvw6jqew8fhsulewzglfr7g2aa48wyj4n0r7yasa3fm666vda8984ke8tuaf9m89"
```
## Changements récents
### v0.1.3 - Configuration externalisée avancée
- **Ajout** : Variables d'environnement `WS_BIND_URL`, `HEALTH_PORT`, `HEALTH_BIND_ADDRESS`
- **Ajout** : Support de la variable d'environnement `SDK_RELAY_WS_URL` pour les tests
- **Modification** : Remplacement de `localhost` par `0.0.0.0` dans les tests WebSocket
- **Amélioration** : Configuration plus flexible pour les environnements Docker
- **Correction** : Résolution du problème de binding sur 127.0.0.1 au lieu de 0.0.0.0
### Tests
Les tests WebSocket utilisent maintenant `ws://0.0.0.0:8090` au lieu de `ws://localhost:8090` pour une meilleure compatibilité avec les environnements Docker.
## Configuration Docker
```yaml
environment:
- WS_BIND_URL=0.0.0.0:8090
- HEALTH_PORT=8091
- HEALTH_BIND_ADDRESS=0.0.0.0
- RUST_LOG=INFO
volumes:
- ./relay/sdk_relay.conf:/home/bitcoin/.conf:ro
```
## Endpoints
- **WebSocket** : `0.0.0.0:8090` - Communication WebSocket
- **Health** : `0.0.0.0:8091` - Vérification de santé
## Dépannage
### Problème de connexion WebSocket
Si le service n'écoute pas sur `0.0.0.0:8090`, vérifiez :
1. La configuration `ws_url` dans le fichier de configuration
2. Les variables d'environnement Docker
3. Les logs du service pour les erreurs de binding

40
docs/VALIDATION.md Normal file
View File

@ -0,0 +1,40 @@
## Validation opérationnelle
### Prérequis
- Image `git.4nkweb.com/4nk/sdk_relay:<tag>` construite par la CI (workflow `build-and-push-ext`).
- Fichier de configuration accessible dans le conteneur à `/home/bitcoin/.conf` avec au minimum: `core_url`, `ws_url`, `wallet_name`, `network`, `blindbit_url`, `zmq_url`.
- Ports hôtes libres: `8090` (WebSocket), `8091` (HTTP /health).
### Démarrage / Redémarrage du service
1. Arrêter linstance en cours (si gérée via Docker/compose parent), puis démarrer avec la nouvelle image taggée `ext` (ou le tag CI calculé) en veillant à monter les volumes `/home/bitcoin/.4nk` et `/home/bitcoin/.bitcoin`.
2. Vérifier les logs de démarrage et la ligne: `Health server listening on port 8091`.
### Tests de santé
- HTTP: `curl -sS http://localhost:8091/health` doit renvoyer `{"status":"ok"}` avec un code 200.
### Tests WebSocket
- Connexion: ouvrir un client vers `ws://localhost:8090` (adresse selon `ws_url`). La poignée de main doit réussir.
- Réception initiale: un message de type Handshake (avec adresse SP, membres et processus) est diffusé à la connexion.
- Diffusion: émettre un message valide (selon protocole `sdk_common`) et vérifier quil est redistribué selon le `BroadcastType`.
### Parcours fonctionnel complet
1. IdNot: initialiser un identifiant et vérifier la persistance locale dans le volume `.4nk`.
2. iframe: intégrer le client (IHM) et établir la communication vers le WebSocket du `sdk_relay`.
3. Ajout de service: exécuter le flux dajout et confirmer la mise à jour de létat et la diffusion côté WS.
### Attendus CI/CD
- La CI construit automatiquement limage incluant lendpoint `/health` et pousse avec le tag calculé (préfixe commit `ci: docker_tag=...`, sinon `dev-test`).
- Une fois limage disponible (tag `ext` si prévu), redémarrer le service pour résoudre les problèmes de connexion.
### Dépannage
- Port occupé: vérifier quaucun service nécoute déjà sur `8090/8091`.
- Conf manquante/invalide: le binaire échoue avec `Failed to find conf file` ou erreurs `No "..."`; corriger `/home/bitcoin/.conf`.
- ZMQ/Blindbit: si pas joignables, les fonctionnalités associées peuvent être dégradées; le `/health` reste OK si le service tourne.
- Volumes: en environnement Windows, vérifier les permissions et lutilisateur `bitcoin`.

View File

@ -0,0 +1,68 @@
# Configuration WebSocket - sdk_relay
## Configuration actuelle
### Variables d'environnement
- `WS_BIND_URL` : URL de binding WebSocket (défaut: `0.0.0.0:8090`)
- `HEALTH_PORT` : Port du serveur de santé (défaut: `8091`)
- `HEALTH_BIND_ADDRESS` : Adresse de binding du serveur de santé (défaut: `0.0.0.0`)
### Configuration dans sdk_relay.conf
```ini
ws_url="0.0.0.0:8090"
blindbit_url="http://blindbit-oracle:8000"
bootstrap_url="wss://dev3.4nkweb.com/ws/"
bootstrap_faucet=true
sp_address="tsp1qqgmwp9n5p9ujhq2j6cfqe4jpkyu70jh9rgj0pwt3ndezk2mrlvw6jqew8fhsulewzglfr7g2aa48wyj4n0r7yasa3fm666vda8984ke8tuaf9m89"
RUST_LOG="INFO"
```
## Problèmes résolus
### 1. Binding sur 127.0.0.1 au lieu de 0.0.0.0
**Problème :** Le relay se liait sur `127.0.0.1:8090` au lieu de `0.0.0.0:8090`.
**Solution :** Externalisation de la configuration via variables d'environnement et correction du code Rust.
### 2. Gestion des erreurs WebSocket
**Problème :** Erreurs de handshake WebSocket non gérées correctement.
**Solution :** Amélioration de la gestion d'erreurs avec `log::warn!` au lieu de `log::error!` pour les tentatives de connexion non-WebSocket.
### 3. Configuration externalisée
**Problème :** IP et ports hardcodés dans le code Rust.
**Solution :** Externalisation de tous les paramètres de configuration via variables d'environnement.
## Tests WebSocket
### Test avec headers corrects
```bash
curl -v -H "Upgrade: websocket" \
-H "Connection: upgrade" \
-H "Sec-WebSocket-Version: 13" \
-H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
http://127.0.0.1:8090/
```
**Résultat attendu :** `HTTP/1.1 101 Switching Protocols`
### Test sans headers WebSocket
```bash
curl -v http://127.0.0.1:8090/
```
**Résultat attendu :** Erreur de handshake WebSocket (normal)
## Problème persistant
### Nginx ne transmet pas les headers WebSocket
**Statut :** ⚠️ Problème persistant
- Nginx configuré avec tous les headers WebSocket
- Le relay reçoit toujours des connexions sans headers
- Erreur : `"No Upgrade: websocket header"`
**Investigation :** La configuration Nginx semble correcte mais les headers ne sont pas transmis.
## Date de mise à jour
2025-01-20 - Configuration WebSocket externalisée et problèmes de binding résolus

256
sdk_relay.out Normal file

File diff suppressed because one or more lines are too long

1
sdk_relay.pid Normal file
View File

@ -0,0 +1 @@
3249129

791
src/commit.rs Normal file
View File

@ -0,0 +1,791 @@
use std::{
collections::HashMap,
sync::{Mutex, MutexGuard, OnceLock},
};
use anyhow::{Error, Result};
use bitcoincore_rpc::bitcoin::hex::DisplayHex;
use sdk_common::network::{AnkFlag, CommitMessage, HandshakeMessage};
use sdk_common::process::{lock_processes, Process, ProcessState};
use sdk_common::serialization::{OutPointMemberMap, OutPointProcessMap};
use sdk_common::silentpayments::create_transaction;
use sdk_common::sp_client::bitcoin::{Amount, OutPoint};
use sdk_common::sp_client::{FeeRate, Recipient};
use sdk_common::{
pcd::Member,
silentpayments::sign_transaction,
sp_client::{silentpayments::SilentPaymentAddress, RecipientAddress},
};
use crate::{lock_freezed_utxos, MutexExt, DAEMON, STORAGE, WALLET};
use crate::{
message::{broadcast_message, BroadcastType},
CHAIN_TIP,
};
pub(crate) fn handle_commit_request(commit_msg: CommitMessage) -> Result<OutPoint> {
let mut processes = lock_processes()?;
if let Some(process) = processes.get_mut(&commit_msg.process_id) {
handle_existing_commitment(process, &commit_msg)?;
} else {
let new_process = handle_new_process(&commit_msg)?;
// Cache the process
processes.insert(commit_msg.process_id, new_process);
}
// Dump to disk
dump_cached_processes(processes.clone())?;
// Add to frozen UTXOs
lock_freezed_utxos()?.insert(commit_msg.process_id);
// Update processes with the
// Send an update to all connected client
let our_sp_address = WALLET
.get()
.ok_or(Error::msg("Wallet not initialized"))?
.lock_anyhow()?
.get_sp_client()
.get_receiving_address();
let mut new_process_map = HashMap::new();
let new_process = processes.get(&commit_msg.process_id).unwrap().clone();
new_process_map.insert(commit_msg.process_id, new_process);
let current_tip = CHAIN_TIP.load(std::sync::atomic::Ordering::SeqCst);
let init_msg = HandshakeMessage::new(
our_sp_address.to_string(),
OutPointMemberMap(HashMap::new()),
OutPointProcessMap(new_process_map),
current_tip.into(),
);
if let Err(e) = broadcast_message(
AnkFlag::Handshake,
format!("{}", init_msg.to_string()),
BroadcastType::ToAll,
) {
log::error!("Failed to send handshake message: {}", e);
}
Ok(commit_msg.process_id)
}
fn send_members_update(pairing_process_id: OutPoint) -> Result<()> {
dump_cached_members()?;
// Send a handshake message to every connected client
if let Some(new_member) = lock_members().unwrap().get(&pairing_process_id) {
let our_sp_address = WALLET
.get()
.ok_or(Error::msg("Wallet not initialized"))?
.lock_anyhow()?
.get_sp_client()
.get_receiving_address();
let mut new_member_map = HashMap::new();
new_member_map.insert(pairing_process_id, new_member.clone());
let init_msg = HandshakeMessage::new(
our_sp_address.into(),
OutPointMemberMap(new_member_map),
OutPointProcessMap(HashMap::new()),
CHAIN_TIP.load(std::sync::atomic::Ordering::SeqCst).into(),
);
if let Err(e) = broadcast_message(
AnkFlag::Handshake,
format!("{}", init_msg.to_string()),
BroadcastType::ToAll,
) {
Err(Error::msg(format!(
"Failed to send handshake message: {}",
e
)))
} else {
Ok(())
}
} else {
Err(Error::msg(format!(
"Failed to find new member with process id {}",
pairing_process_id
)))
}
}
fn handle_new_process(commit_msg: &CommitMessage) -> Result<Process> {
let pcd_commitment = &commit_msg.pcd_commitment;
let merkle_root_bin = pcd_commitment.create_merkle_tree()?.root().unwrap();
if let Ok(pairing_process_id) = handle_member_list(&commit_msg) {
send_members_update(pairing_process_id)?;
}
let mut new_process = Process::new(commit_msg.process_id);
let init_state = ProcessState {
commited_in: commit_msg.process_id,
roles: commit_msg.roles.clone(),
pcd_commitment: commit_msg.pcd_commitment.clone(),
state_id: merkle_root_bin,
public_data: commit_msg.public_data.clone(),
..Default::default()
};
new_process.insert_concurrent_state(init_state)?;
Ok(new_process)
}
pub static MEMBERLIST: OnceLock<Mutex<HashMap<OutPoint, Member>>> = OnceLock::new();
pub fn lock_members() -> Result<MutexGuard<'static, HashMap<OutPoint, Member>>, anyhow::Error> {
MEMBERLIST
.get_or_init(|| Mutex::new(HashMap::new()))
.lock_anyhow()
}
fn handle_member_list(commit_msg: &CommitMessage) -> Result<OutPoint> {
//Check if there is one role with one member
if commit_msg.roles.len() != 1 {
return Err(Error::msg("Process is not a pairing process"));
}
if let Some(pairing_role) = commit_msg.roles.get("pairing") {
if !pairing_role.members.is_empty() {
return Err(Error::msg("Process is not a pairing process"));
}
} else {
return Err(Error::msg("Process is not a pairing process"));
}
if let Ok(paired_addresses) = commit_msg.public_data.get_as_json("pairedAddresses") {
let paired_addresses: Vec<SilentPaymentAddress> =
serde_json::from_value(paired_addresses.clone())?;
let mut memberlist = lock_members()?;
memberlist.insert(commit_msg.process_id, Member::new(paired_addresses));
return Ok(commit_msg.process_id);
}
Err(Error::msg("Process is not a pairing process"))
}
fn handle_existing_commitment(
process_to_udpate: &mut Process,
commit_msg: &CommitMessage,
) -> Result<()> {
let process_id = process_to_udpate.get_process_id()?;
match register_new_state(process_to_udpate, &commit_msg) {
Ok(new_state_id) => log::debug!(
"Registering new state for process {} with state id {}",
process_id,
new_state_id.to_lower_hex_string()
),
Err(existing_state_id) => log::debug!("State {} already exists", existing_state_id),
}
if commit_msg.validation_tokens.len() > 0 {
log::debug!(
"Received commit_msg with {} validation tokens for process {}",
commit_msg.validation_tokens.len(),
process_id
);
// If the validation succeed, we return a new tip
process_validation(process_to_udpate, commit_msg)?;
if let Ok(pairing_process_id) = handle_member_list(commit_msg) {
debug_assert_eq!(pairing_process_id, process_id);
send_members_update(process_id)?;
}
}
Ok(())
}
pub fn dump_cached_members() -> Result<(), anyhow::Error> {
let members = lock_members()?.clone();
let storage = STORAGE
.get()
.ok_or(Error::msg("STORAGE is not initialized"))?
.lock_anyhow()?;
let members_file = &storage.members_file;
let members_map = OutPointMemberMap(members);
let json = serde_json::to_value(&members_map)?;
members_file.save(&json)?;
log::debug!("saved members");
Ok(())
}
pub fn dump_cached_processes(processes: HashMap<OutPoint, Process>) -> Result<(), anyhow::Error> {
let storage = STORAGE
.get()
.ok_or(Error::msg("STORAGE is not initialized"))?
.lock_anyhow()?;
let processes_file = &storage.processes_file;
let outpoints_map = OutPointProcessMap(processes);
let json = serde_json::to_value(&outpoints_map)?;
processes_file.save(&json)?;
log::debug!("saved processes");
Ok(())
}
// Register a new state
fn register_new_state(process: &mut Process, commit_msg: &CommitMessage) -> Result<[u8; 32]> {
let last_commited_state = process.get_latest_commited_state();
let new_state_id = commit_msg
.pcd_commitment
.create_merkle_tree()?
.root()
.unwrap();
if let Some(state) = last_commited_state {
if new_state_id == state.state_id {
return Err(Error::msg(format!(
"{}",
new_state_id.to_lower_hex_string()
)));
}
}
let concurrent_states = process.get_latest_concurrent_states()?;
let (empty_state, actual_states) = concurrent_states.split_last().unwrap();
let current_outpoint = empty_state.commited_in;
// Ensure no duplicate states
if actual_states
.iter()
.any(|state| state.state_id == new_state_id)
{
return Err(Error::msg(format!(
"{}",
new_state_id.to_lower_hex_string()
)));
}
// Add the new state
let new_state = ProcessState {
commited_in: current_outpoint,
pcd_commitment: commit_msg.pcd_commitment.clone(),
state_id: new_state_id.clone(),
roles: commit_msg.roles.clone(),
public_data: commit_msg.public_data.clone(),
..Default::default()
};
process.insert_concurrent_state(new_state)?;
Ok(new_state_id)
}
// Process validation for a state with validation tokens
fn process_validation(
updated_process: &mut Process,
commit_msg: &CommitMessage,
) -> Result<OutPoint> {
let new_state_id = if commit_msg.pcd_commitment.is_empty() {
// We're dealing with an obliteration attempt
[0u8; 32]
} else {
commit_msg
.pcd_commitment
.create_merkle_tree()?
.root()
.ok_or(Error::msg("Invalid merkle tree"))?
};
{
let state_to_update = updated_process.get_state_for_id_mut(&new_state_id)?;
// Complete with the received tokens
state_to_update
.validation_tokens
.extend(commit_msg.validation_tokens.iter());
// Sort by public key to group validations by signer
state_to_update.validation_tokens.sort_unstable_by_key(|proof| proof.get_key());
// Remove duplicates where same key validates same message
state_to_update.validation_tokens.dedup_by(|a, b| {
a.get_key() == b.get_key() && a.get_message() == b.get_message()
});
}
let state_to_validate = updated_process.get_state_for_id(&new_state_id)?;
let members = lock_members()?.clone();
state_to_validate.is_valid(
updated_process.get_latest_commited_state(),
&OutPointMemberMap(members),
)?;
let commited_in = commit_new_transaction(updated_process, state_to_validate.clone())?;
Ok(commited_in)
}
// Commit the new transaction and update the process state
fn commit_new_transaction(
updated_process: &mut Process,
state_to_commit: ProcessState,
) -> Result<OutPoint> {
let sp_wallet = WALLET
.get()
.ok_or(Error::msg("Wallet not initialized"))?
.lock_anyhow()?;
let commitment_payload = Vec::from(state_to_commit.state_id);
let mut recipients = vec![];
recipients.push(Recipient {
address: RecipientAddress::SpAddress(sp_wallet.get_sp_client().get_receiving_address()),
amount: Amount::from_sat(1000),
});
// TODO not sure if this is still used
// If the process is a pairing, we add another output that directly pays the owner of the process
// We can find out simply by looking at the members list
if let Some(member) = lock_members()?.get(&updated_process.get_process_id().unwrap()) {
// We just pick one of the devices of the member at random en pay to it, member can then share the private key between all devices
// For now we take the first address
let address: SilentPaymentAddress =
member.get_addresses().get(0).unwrap().as_str().try_into()?;
recipients.push(Recipient {
address: RecipientAddress::SpAddress(address),
amount: Amount::from_sat(1000),
});
}
// This output is used to generate publicly available public keys without having to go through too many loops
let daemon = DAEMON.get().unwrap().lock_anyhow()?;
let fee_rate = daemon
.estimate_fee(6)
.unwrap_or(Amount::from_sat(1000))
.checked_div(1000)
.unwrap();
let mut freezed_utxos = lock_freezed_utxos()?;
let next_commited_in = updated_process.get_process_tip()?;
if !freezed_utxos.contains(&next_commited_in) {
return Err(Error::msg(format!(
"Missing next commitment outpoint for process {}",
updated_process.get_process_id()?
)));
};
let unspent_outputs = sp_wallet.get_unspent_outputs();
let mut available_outpoints = vec![];
// We push the next_commited_in at the top of the available outpoints
if let Some(output) = unspent_outputs.get(&next_commited_in) {
available_outpoints.push((next_commited_in, output.clone()));
}
// We filter out freezed utxos
for (outpoint, output) in unspent_outputs {
if !freezed_utxos.contains(&outpoint) {
available_outpoints.push((outpoint, output));
}
}
let unsigned_transaction = create_transaction(
available_outpoints,
sp_wallet.get_sp_client(),
recipients,
Some(commitment_payload),
FeeRate::from_sat_per_vb(fee_rate.to_sat() as f32),
)?;
let final_tx = sign_transaction(sp_wallet.get_sp_client(), unsigned_transaction)?;
daemon.test_mempool_accept(&final_tx)?;
let txid = daemon.broadcast(&final_tx)?;
let commited_in = OutPoint::new(txid, 0);
freezed_utxos.insert(commited_in);
freezed_utxos.remove(&next_commited_in);
updated_process.remove_all_concurrent_states()?;
updated_process.insert_concurrent_state(state_to_commit)?;
updated_process.update_states_tip(commited_in)?;
Ok(commited_in)
}
// TODO tests are broken, we need a complete overhaul to make it work again
#[cfg(test)]
mod tests {
use super::*;
use crate::daemon::RpcCall;
use crate::DiskStorage;
use crate::StateFile;
use bitcoincore_rpc::bitcoin::consensus::deserialize;
use bitcoincore_rpc::bitcoin::hex::DisplayHex;
use bitcoincore_rpc::bitcoin::*;
use mockall::mock;
use mockall::predicate::*;
use sdk_common::pcd::Member;
use sdk_common::pcd::Pcd;
use sdk_common::pcd::PcdCommitments;
use sdk_common::pcd::RoleDefinition;
use sdk_common::pcd::Roles;
use sdk_common::pcd::ValidationRule;
use sdk_common::process::CACHEDPROCESSES;
use sdk_common::sp_client::bitcoin::consensus::serialize;
use sdk_common::sp_client::bitcoin::hex::FromHex;
use sdk_common::sp_client::silentpayments::SilentPaymentAddress;
use serde_json::json;
use serde_json::{Map, Value};
use std::collections::BTreeMap;
use std::collections::HashMap;
use std::path::PathBuf;
use std::str::FromStr;
use std::sync::Mutex;
use std::sync::OnceLock;
const LOCAL_ADDRESS: &str = "sprt1qq222dhaxlzmjft2pa7qtspw2aw55vwfmtnjyllv5qrsqwm3nufxs6q7t88jf9asvd7rxhczt87de68du3jhem54xvqxy80wc6ep7lauxacsrq79v";
const INIT_TRANSACTION: &str = "02000000000102b01b832bf34cf87583c628839c5316546646dcd4939e339c1d83e693216cdfa00100000000fdffffffdd1ca865b199accd4801634488fca87e0cf81b36ee7e9bec526a8f922539b8670000000000fdffffff0200e1f505000000001600140798fac9f310cefad436ea928f0bdacf03a11be544e0f5050000000016001468a66f38e7c2c9e367577d6fad8532ae2c728ed2014043764b77de5041f80d19e3d872f205635f87486af015c00d2a3b205c694a0ae1cbc60e70b18bcd4470abbd777de63ae52600aba8f5ad1334cdaa6bcd931ab78b0140b56dd8e7ac310d6dcbc3eff37f111ced470990d911b55cd6ff84b74b579c17d0bba051ec23b738eeeedba405a626d95f6bdccb94c626db74c57792254bfc5a7c00000000";
const TMP_WALLET: &str = "/tmp/.4nk/wallet";
const TMP_PROCESSES: &str = "/tmp/.4nk/processes";
const TMP_MEMBERS: &str = "/tmp/.4nk/members";
// Define the mock for Daemon with the required methods
mock! {
#[derive(Debug)]
pub Daemon {}
impl RpcCall for Daemon {
fn connect(
rpcwallet: Option<String>,
rpc_url: String,
network: bitcoincore_rpc::bitcoin::Network,
) -> Result<Self> where Self: Sized;
fn estimate_fee(&self, nblocks: u16) -> Result<Amount>;
fn get_relay_fee(&self) -> Result<Amount>;
fn get_current_height(&self) -> Result<u64>;
fn get_block(&self, block_hash: BlockHash) -> Result<Block>;
fn get_filters(&self, block_height: u32) -> Result<(u32, BlockHash, bip158::BlockFilter)>;
fn list_unspent_from_to(
&self,
minamt: Option<Amount>,
) -> Result<Vec<bitcoincore_rpc::json::ListUnspentResultEntry>>;
fn create_psbt(
&self,
unspents: &[bitcoincore_rpc::json::ListUnspentResultEntry],
spk: ScriptBuf,
network: Network,
) -> Result<String>;
fn process_psbt(&self, psbt: String) -> Result<String>;
fn finalize_psbt(&self, psbt: String) -> Result<String>;
fn get_network(&self) -> Result<Network>;
fn test_mempool_accept(
&self,
tx: &Transaction,
) -> Result<crate::bitcoin_json::TestMempoolAcceptResult>;
fn broadcast(&self, tx: &Transaction) -> Result<Txid>;
fn get_transaction_info(
&self,
txid: &Txid,
blockhash: Option<BlockHash>,
) -> Result<Value>;
fn get_transaction_hex(
&self,
txid: &Txid,
blockhash: Option<BlockHash>,
) -> Result<Value>;
fn get_transaction(
&self,
txid: &Txid,
blockhash: Option<BlockHash>,
) -> Result<Transaction>;
fn get_block_txids(&self, blockhash: BlockHash) -> Result<Vec<Txid>>;
fn get_mempool_txids(&self) -> Result<Vec<Txid>>;
fn get_mempool_entries(
&self,
txids: &[Txid],
) -> Result<Vec<Result<bitcoincore_rpc::json::GetMempoolEntryResult>>>;
fn get_mempool_transactions(
&self,
txids: &[Txid],
) -> Result<Vec<Result<Transaction>>>;
}
}
mock! {
#[derive(Debug)]
pub SpWallet {
fn get_receiving_address(&self) -> Result<String>;
}
}
mock! {
#[derive(Debug)]
pub SilentPaymentWallet {
fn get_sp_wallet(&self) -> Result<MockSpWallet>;
}
}
static WALLET: OnceLock<MockSilentPaymentWallet> = OnceLock::new();
pub fn initialize_static_variables() {
if DAEMON.get().is_none() {
let mut daemon = MockDaemon::new();
daemon
.expect_broadcast()
.withf(|tx: &Transaction| serialize(tx).to_lower_hex_string() == INIT_TRANSACTION)
.returning(|tx| Ok(tx.txid()));
DAEMON
.set(Mutex::new(Box::new(daemon)))
.expect("DAEMON should only be initialized once");
println!("Initialized DAEMON");
}
if WALLET.get().is_none() {
let mut wallet = MockSilentPaymentWallet::new();
wallet
.expect_get_sp_wallet()
.returning(|| Ok(MockSpWallet::new()));
WALLET
.set(wallet)
.expect("WALLET should only be initialized once");
println!("Initialized WALLET");
}
if CACHEDPROCESSES.get().is_none() {
CACHEDPROCESSES
.set(Mutex::new(HashMap::new()))
.expect("CACHEDPROCESSES should only be initialized once");
println!("Initialized CACHEDPROCESSES");
}
if STORAGE.get().is_none() {
let wallet_file = StateFile::new(PathBuf::from_str(TMP_WALLET).unwrap());
let processes_file = StateFile::new(PathBuf::from_str(TMP_PROCESSES).unwrap());
let members_file = StateFile::new(PathBuf::from_str(TMP_MEMBERS).unwrap());
wallet_file.create().unwrap();
processes_file.create().unwrap();
members_file.create().unwrap();
let disk_storage = DiskStorage {
wallet_file,
processes_file,
members_file,
};
STORAGE
.set(Mutex::new(disk_storage))
.expect("STORAGE should initialize only once");
println!("Initialized STORAGE");
}
}
fn mock_commit_msg(process_id: OutPoint) -> CommitMessage {
let field_names = [
"a".to_owned(),
"b".to_owned(),
"pub_a".to_owned(),
"roles".to_owned(),
];
let pairing_id = OutPoint::from_str(
"b0c8378ee68e9a73836b04423ddb6de9fc0e2e715e04ffe6aa34117bb1025f01:0",
)
.unwrap();
let member = Member::new(vec![SilentPaymentAddress::try_from(LOCAL_ADDRESS).unwrap()]);
let validation_rule = ValidationRule::new(1.0, Vec::from(field_names), 1.0).unwrap();
let role_def = RoleDefinition {
members: vec![pairing_id],
validation_rules: vec![validation_rule],
storages: vec![],
};
let roles = Roles::new(BTreeMap::from([(String::from("role_name"), role_def)]));
let public_data = TryInto::<Pcd>::try_into(json!({"pub_a": Value::Null})).unwrap();
let clear_state =
TryInto::<Pcd>::try_into(json!({"a": Value::Null, "b": Value::Null})).unwrap();
let pcd_commitments = PcdCommitments::new(
&process_id,
&Pcd::new(public_data.clone().into_iter().chain(clear_state).collect()),
&roles,
)
.unwrap();
let commit_msg = CommitMessage {
process_id,
roles,
public_data,
validation_tokens: vec![],
pcd_commitment: pcd_commitments,
error: None,
};
commit_msg
}
#[ignore = "instable avec OnceLock/WALLET init dans l'environnement de test courant"]
#[test]
fn test_handle_commit_new_process() {
initialize_static_variables();
let init_tx =
deserialize::<Transaction>(&Vec::from_hex(INIT_TRANSACTION).unwrap()).unwrap();
let init_txid = init_tx.txid();
let process_id = OutPoint::new(init_txid, 0);
let commit_msg = mock_commit_msg(process_id);
let roles = commit_msg.roles.clone();
let pcd_commitment = commit_msg.pcd_commitment.clone();
let empty_state = ProcessState {
commited_in: process_id,
..Default::default()
};
let result = handle_commit_request(commit_msg);
assert_eq!(result.unwrap(), process_id);
let cache = CACHEDPROCESSES.get().unwrap().lock().unwrap();
let updated_process = cache.get(&process_id);
assert!(updated_process.is_some());
let concurrent_states = updated_process
.unwrap()
.get_latest_concurrent_states()
.unwrap();
// Constructing the roles_map that was inserted in the process
let roles_object = serde_json::to_value(roles).unwrap();
let mut roles_map = Map::new();
roles_map.insert("roles".to_owned(), roles_object);
let new_state = ProcessState {
commited_in: process_id,
pcd_commitment,
..Default::default()
};
let target = vec![&empty_state, &new_state];
assert_eq!(concurrent_states, target);
}
#[ignore = "instable avec OnceLock/WALLET init dans l'environnement de test courant"]
#[test]
fn test_handle_commit_new_state() {
initialize_static_variables();
let init_tx =
deserialize::<Transaction>(&Vec::from_hex(INIT_TRANSACTION).unwrap()).unwrap();
let init_txid = init_tx.txid();
let process_id = OutPoint::new(init_txid, 0);
let commit_msg = mock_commit_msg(process_id);
let roles = commit_msg.roles.clone();
let pcd_commitment = commit_msg.pcd_commitment.clone();
let process = Process::new(process_id);
CACHEDPROCESSES
.get()
.unwrap()
.lock()
.unwrap()
.insert(process_id, process);
let result = handle_commit_request(commit_msg);
assert_eq!(result.unwrap(), process_id);
let cache = CACHEDPROCESSES.get().unwrap().lock().unwrap();
let updated_process = cache.get(&process_id);
assert!(updated_process.is_some());
let concurrent_states = updated_process
.unwrap()
.get_latest_concurrent_states()
.unwrap();
let roles_object = serde_json::to_value(roles).unwrap();
let mut roles_map = Map::new();
roles_map.insert("roles".to_owned(), roles_object);
let new_state = ProcessState {
commited_in: process_id,
pcd_commitment,
..Default::default()
};
let empty_state = ProcessState {
commited_in: process_id,
..Default::default()
};
let target = vec![&empty_state, &new_state];
assert_eq!(concurrent_states, target);
}
// #[test]
// fn test_handle_commit_request_invalid_init_tx() {
// let commit_msg = CommitMessage {
// init_tx: "invalid_tx_hex".to_string(),
// roles: HashMap::new(),
// validation_tokens: vec![],
// pcd_commitment: json!({"roles": "expected_roles"}).as_object().unwrap().clone(),
// };
// // Call the function under test
// let result = handle_commit_request(commit_msg);
// // Assertions for error
// assert!(result.is_err());
// assert_eq!(result.unwrap_err().to_string(), "init_tx must be a valid transaction or txid");
// }
// // Example test for adding a new state to an existing commitment
// #[test]
// fn test_handle_commit_request_add_state() {
// // Set up data for adding a state to an existing commitment
// let commit_msg = CommitMessage {
// init_tx: "existing_outpoint_hex".to_string(),
// roles: HashMap::new(),
// validation_tokens: vec![],
// pcd_commitment: json!({"roles": "expected_roles"}).as_object().unwrap().clone(),
// };
// // Mock daemon and cache initialization
// let mut daemon = MockDaemon::new();
// daemon.expect_broadcast().returning(|_| Ok(Txid::new()));
// DAEMON.set(Arc::new(Mutex::new(daemon))).unwrap();
// let process_state = Process::new(vec![], vec![]);
// CACHEDPROCESSES.lock().unwrap().insert(OutPoint::new("mock_txid", 0), process_state);
// // Run the function
// let result = handle_commit_request(commit_msg);
// // Assert success and that a new state was added
// assert!(result.is_ok());
// assert_eq!(result.unwrap(), OutPoint::new("mock_txid", 0));
// }
// // Additional tests for errors and validation tokens would follow a similar setup
}

88
src/config.rs Normal file
View File

@ -0,0 +1,88 @@
use std::collections::HashMap;
use std::fs::File;
use std::io::{self, BufRead};
use anyhow::{Error, Result};
use sdk_common::sp_client::bitcoin::Network;
#[derive(Debug)]
pub struct Config {
pub core_url: String,
pub core_wallet: Option<String>,
pub ws_url: String,
pub wallet_name: String,
pub network: Network,
pub blindbit_url: String,
pub zmq_url: String,
pub data_dir: String,
pub bootstrap_url: Option<String>,
pub bootstrap_faucet: bool,
}
impl Config {
pub fn read_from_file(filename: &str) -> Result<Self> {
let mut file_content = HashMap::new();
if let Ok(file) = File::open(filename) {
let reader = io::BufReader::new(file);
// Read the file line by line
for line in reader.lines() {
if let Ok(l) = line {
// Ignore comments and empty lines
if l.starts_with('#') || l.trim().is_empty() {
continue;
}
// Split the line into key and value
if let Some((k, v)) = l.split_once('=') {
file_content.insert(k.to_owned(), v.trim_matches('\"').to_owned());
}
}
}
} else {
return Err(anyhow::Error::msg("Failed to find conf file"));
}
// Now set the Config
let config = Config {
core_url: file_content
.remove("core_url")
.ok_or(Error::msg("No \"core_url\""))?
.to_owned(),
core_wallet: file_content.remove("core_wallet").map(|s| s.to_owned()),
ws_url: file_content
.remove("ws_url")
.ok_or(Error::msg("No \"ws_url\""))?
.to_owned(),
wallet_name: file_content
.remove("wallet_name")
.ok_or(Error::msg("No \"wallet_name\""))?
.to_owned(),
network: Network::from_core_arg(
&file_content
.remove("network")
.ok_or(Error::msg("no \"network\""))?
.trim_matches('\"'),
)?,
blindbit_url: file_content
.remove("blindbit_url")
.ok_or(Error::msg("No \"blindbit_url\""))?
.to_owned(),
zmq_url: file_content
.remove("zmq_url")
.ok_or(Error::msg("No \"zmq_url\""))?
.to_owned(),
data_dir: file_content
.remove("data_dir")
.unwrap_or_else(|| ".4nk".to_string()),
bootstrap_url: file_content.remove("bootstrap_url"),
bootstrap_faucet: file_content
.remove("bootstrap_faucet")
.map(|v| v == "true" || v == "1")
.unwrap_or(true),
};
Ok(config)
}
}

460
src/daemon.rs Normal file
View File

@ -0,0 +1,460 @@
use anyhow::{Context, Error, Result};
use bitcoincore_rpc::json::{
CreateRawTransactionInput, ListUnspentQueryOptions, ListUnspentResultEntry,
WalletCreateFundedPsbtOptions,
};
use bitcoincore_rpc::{json, jsonrpc, Auth, Client, RpcApi};
use sdk_common::sp_client::bitcoin::bip158::BlockFilter;
use sdk_common::sp_client::bitcoin::{
block, Address, Amount, Block, BlockHash, Network, OutPoint, Psbt, ScriptBuf, Sequence,
Transaction, TxIn, TxOut, Txid,
};
use sdk_common::sp_client::bitcoin::{consensus::deserialize, hashes::hex::FromHex};
// use crossbeam_channel::Receiver;
// use parking_lot::Mutex;
use serde_json::{json, Value};
use std::collections::HashMap;
use std::env;
use std::fs::File;
use std::io::Read;
use std::path::{Path, PathBuf};
use std::str::FromStr;
use std::time::Duration;
use crate::FAUCET_AMT;
pub struct SensitiveAuth(pub Auth);
impl SensitiveAuth {
pub(crate) fn get_auth(&self) -> Auth {
self.0.clone()
}
}
enum PollResult {
Done(Result<()>),
Retry,
}
fn rpc_poll(client: &mut Client, skip_block_download_wait: bool) -> PollResult {
match client.get_blockchain_info() {
Ok(info) => {
if skip_block_download_wait {
// bitcoind RPC is available, don't wait for block download to finish
return PollResult::Done(Ok(()));
}
let left_blocks = info.headers - info.blocks;
if info.initial_block_download || left_blocks > 0 {
log::info!(
"waiting for {} blocks to download{}",
left_blocks,
if info.initial_block_download {
" (IBD)"
} else {
""
}
);
return PollResult::Retry;
}
PollResult::Done(Ok(()))
}
Err(err) => {
if let Some(e) = extract_bitcoind_error(&err) {
if e.code == -28 {
log::debug!("waiting for RPC warmup: {}", e.message);
return PollResult::Retry;
}
}
PollResult::Done(Err(err).context("daemon not available"))
}
}
}
fn read_cookie(path: &Path) -> Result<(String, String)> {
// Load username and password from bitcoind cookie file:
// * https://github.com/bitcoin/bitcoin/pull/6388/commits/71cbeaad9a929ba6a7b62d9b37a09b214ae00c1a
// * https://bitcoin.stackexchange.com/questions/46782/rpc-cookie-authentication
let mut file = File::open(path)
.with_context(|| format!("failed to open bitcoind cookie file: {}", path.display()))?;
let mut contents = String::new();
file.read_to_string(&mut contents)
.with_context(|| format!("failed to read bitcoind cookie from {}", path.display()))?;
let parts: Vec<&str> = contents.splitn(2, ':').collect();
anyhow::ensure!(
parts.len() == 2,
"failed to parse bitcoind cookie - missing ':' separator"
);
Ok((parts[0].to_owned(), parts[1].to_owned()))
}
fn rpc_connect(rpcwallet: Option<String>, network: Network, mut rpc_url: String) -> Result<Client> {
match rpcwallet {
Some(rpcwallet) => rpc_url.push_str(&rpcwallet),
None => (),
}
// Allow `wait_for_new_block` to take a bit longer before timing out.
// See https://github.com/romanz/electrs/issues/495 for more details.
let builder = jsonrpc::simple_http::SimpleHttpTransport::builder()
.url(&rpc_url)?
.timeout(Duration::from_secs(30));
let home = env::var("HOME")?;
let mut cookie_path = PathBuf::from_str(&home)?;
cookie_path.push(".bitcoin");
cookie_path.push(network.to_core_arg());
cookie_path.push(".cookie");
let daemon_auth = SensitiveAuth(Auth::CookieFile(cookie_path));
let builder = match daemon_auth.get_auth() {
Auth::None => builder,
Auth::UserPass(user, pass) => builder.auth(user, Some(pass)),
Auth::CookieFile(path) => {
let (user, pass) = read_cookie(&path)?;
builder.auth(user, Some(pass))
}
};
Ok(Client::from_jsonrpc(jsonrpc::Client::with_transport(
builder.build(),
)))
}
#[derive(Debug)]
pub struct Daemon {
rpc: Client,
}
impl RpcCall for Daemon {
fn connect(rpcwallet: Option<String>, rpc_url: String, network: Network) -> Result<Self> {
let mut rpc = rpc_connect(rpcwallet, network, rpc_url)?;
loop {
match rpc_poll(&mut rpc, false) {
PollResult::Done(result) => {
result.context("bitcoind RPC polling failed")?;
break; // on success, finish polling
}
PollResult::Retry => {
std::thread::sleep(std::time::Duration::from_secs(1)); // wait a bit before polling
}
}
}
let network_info = rpc.get_network_info()?;
if !network_info.network_active {
anyhow::bail!("electrs requires active bitcoind p2p network");
}
let info = rpc.get_blockchain_info()?;
if info.pruned {
anyhow::bail!("electrs requires non-pruned bitcoind node");
}
Ok(Self { rpc })
}
fn estimate_fee(&self, nblocks: u16) -> Result<Amount> {
let res = self
.rpc
.estimate_smart_fee(nblocks, None)
.context("failed to estimate fee")?;
if res.errors.is_some() {
Err(Error::msg(serde_json::to_string(&res.errors.unwrap())?))
} else {
Ok(res.fee_rate.unwrap())
}
}
fn get_relay_fee(&self) -> Result<Amount> {
Ok(self
.rpc
.get_network_info()
.context("failed to get relay fee")?
.relay_fee)
}
fn get_current_height(&self) -> Result<u64> {
Ok(self
.rpc
.get_block_count()
.context("failed to get block count")?)
}
fn get_block(&self, block_hash: BlockHash) -> Result<Block> {
Ok(self
.rpc
.get_block(&block_hash)
.context("failed to get block")?)
}
fn get_filters(&self, block_height: u32) -> Result<(u32, BlockHash, BlockFilter)> {
let block_hash = self.rpc.get_block_hash(block_height.try_into()?)?;
let filter = self
.rpc
.get_block_filter(&block_hash)
.context("failed to get block filter")?
.into_filter();
Ok((block_height, block_hash, filter))
}
fn list_unspent_from_to(
&self,
minamt: Option<Amount>,
) -> Result<Vec<json::ListUnspentResultEntry>> {
let minimum_sum_amount = if minamt.is_none() || minamt <= FAUCET_AMT.checked_mul(2) {
FAUCET_AMT.checked_mul(2)
} else {
minamt
};
Ok(self.rpc.list_unspent(
None,
None,
None,
Some(true),
Some(ListUnspentQueryOptions {
minimum_sum_amount,
..Default::default()
}),
)?)
}
fn create_psbt(
&self,
unspents: &[ListUnspentResultEntry],
spk: ScriptBuf,
network: Network,
) -> Result<String> {
let inputs: Vec<CreateRawTransactionInput> = unspents
.iter()
.map(|utxo| CreateRawTransactionInput {
txid: utxo.txid,
vout: utxo.vout,
sequence: None,
})
.collect();
let address = Address::from_script(&spk, network)?;
let total_amt = unspents
.iter()
.fold(Amount::from_sat(0), |acc, x| acc + x.amount);
if total_amt < FAUCET_AMT {
return Err(Error::msg("Not enought funds"));
}
let mut outputs = HashMap::new();
outputs.insert(address.to_string(), total_amt);
let options = WalletCreateFundedPsbtOptions {
subtract_fee_from_outputs: vec![0],
..Default::default()
};
let wallet_create_funded_result =
self.rpc
.wallet_create_funded_psbt(&inputs, &outputs, None, Some(options), None)?;
Ok(wallet_create_funded_result.psbt.to_string())
}
fn process_psbt(&self, psbt: String) -> Result<String> {
let processed_psbt = self.rpc.wallet_process_psbt(&psbt, None, None, None)?;
match processed_psbt.complete {
true => Ok(processed_psbt.psbt),
false => Err(Error::msg("Failed to complete the psbt")),
}
}
fn finalize_psbt(&self, psbt: String) -> Result<String> {
let final_tx = self.rpc.finalize_psbt(&psbt, Some(false))?;
match final_tx.complete {
true => Ok(final_tx
.psbt
.expect("We shouldn't have an empty psbt for a complete return")),
false => Err(Error::msg("Failed to finalize psbt")),
}
}
fn get_network(&self) -> Result<Network> {
let blockchain_info = self.rpc.get_blockchain_info()?;
Ok(blockchain_info.chain)
}
fn test_mempool_accept(
&self,
tx: &Transaction,
) -> Result<crate::bitcoin_json::TestMempoolAcceptResult> {
let res = self.rpc.test_mempool_accept(&vec![tx])?;
Ok(res.get(0).unwrap().clone())
}
fn broadcast(&self, tx: &Transaction) -> Result<Txid> {
let txid = self.rpc.send_raw_transaction(tx)?;
Ok(txid)
}
fn get_transaction_info(&self, txid: &Txid, blockhash: Option<BlockHash>) -> Result<Value> {
// No need to parse the resulting JSON, just return it as-is to the client.
self.rpc
.call(
"getrawtransaction",
&[json!(txid), json!(true), json!(blockhash)],
)
.context("failed to get transaction info")
}
fn get_transaction_hex(&self, txid: &Txid, blockhash: Option<BlockHash>) -> Result<Value> {
use sdk_common::sp_client::bitcoin::consensus::serde::{hex::Lower, Hex, With};
let tx = self.get_transaction(txid, blockhash)?;
#[derive(serde::Serialize)]
#[serde(transparent)]
struct TxAsHex(#[serde(with = "With::<Hex<Lower>>")] Transaction);
serde_json::to_value(TxAsHex(tx)).map_err(Into::into)
}
fn get_transaction(&self, txid: &Txid, blockhash: Option<BlockHash>) -> Result<Transaction> {
self.rpc
.get_raw_transaction(txid, blockhash.as_ref())
.context("failed to get transaction")
}
fn get_block_txids(&self, blockhash: BlockHash) -> Result<Vec<Txid>> {
Ok(self
.rpc
.get_block_info(&blockhash)
.context("failed to get block txids")?
.tx)
}
fn get_mempool_txids(&self) -> Result<Vec<Txid>> {
self.rpc
.get_raw_mempool()
.context("failed to get mempool txids")
}
fn get_mempool_entries(
&self,
txids: &[Txid],
) -> Result<Vec<Result<json::GetMempoolEntryResult>>> {
let client = self.rpc.get_jsonrpc_client();
log::debug!("getting {} mempool entries", txids.len());
let args: Vec<_> = txids
.iter()
.map(|txid| vec![serde_json::value::to_raw_value(txid).unwrap()])
.collect();
let reqs: Vec<_> = args
.iter()
.map(|a| client.build_request("getmempoolentry", a))
.collect();
let res = client.send_batch(&reqs).context("batch request failed")?;
log::debug!("got {} mempool entries", res.len());
Ok(res
.into_iter()
.map(|r| {
r.context("missing response")?
.result::<json::GetMempoolEntryResult>()
.context("invalid response")
})
.collect())
}
fn get_mempool_transactions(&self, txids: &[Txid]) -> Result<Vec<Result<Transaction>>> {
let client = self.rpc.get_jsonrpc_client();
log::debug!("getting {} transactions", txids.len());
let args: Vec<_> = txids
.iter()
.map(|txid| vec![serde_json::value::to_raw_value(txid).unwrap()])
.collect();
let reqs: Vec<_> = args
.iter()
.map(|a| client.build_request("getrawtransaction", a))
.collect();
let res = client.send_batch(&reqs).context("batch request failed")?;
log::debug!("got {} mempool transactions", res.len());
Ok(res
.into_iter()
.map(|r| -> Result<Transaction> {
let tx_hex = r
.context("missing response")?
.result::<String>()
.context("invalid response")?;
let tx_bytes = Vec::from_hex(&tx_hex).context("non-hex transaction")?;
deserialize(&tx_bytes).context("invalid transaction")
})
.collect())
}
}
pub(crate) trait RpcCall: Send + Sync + std::fmt::Debug {
fn connect(rpcwallet: Option<String>, rpc_url: String, network: Network) -> Result<Self>
where
Self: Sized;
fn estimate_fee(&self, nblocks: u16) -> Result<Amount>;
fn get_relay_fee(&self) -> Result<Amount>;
fn get_current_height(&self) -> Result<u64>;
fn get_block(&self, block_hash: BlockHash) -> Result<Block>;
fn get_filters(&self, block_height: u32) -> Result<(u32, BlockHash, BlockFilter)>;
fn list_unspent_from_to(
&self,
minamt: Option<Amount>,
) -> Result<Vec<json::ListUnspentResultEntry>>;
fn create_psbt(
&self,
unspents: &[ListUnspentResultEntry],
spk: ScriptBuf,
network: Network,
) -> Result<String>;
fn process_psbt(&self, psbt: String) -> Result<String>;
fn finalize_psbt(&self, psbt: String) -> Result<String>;
fn get_network(&self) -> Result<Network>;
fn test_mempool_accept(
&self,
tx: &Transaction,
) -> Result<crate::bitcoin_json::TestMempoolAcceptResult>;
fn broadcast(&self, tx: &Transaction) -> Result<Txid>;
fn get_transaction_info(&self, txid: &Txid, blockhash: Option<BlockHash>) -> Result<Value>;
fn get_transaction_hex(&self, txid: &Txid, blockhash: Option<BlockHash>) -> Result<Value>;
fn get_transaction(&self, txid: &Txid, blockhash: Option<BlockHash>) -> Result<Transaction>;
fn get_block_txids(&self, blockhash: BlockHash) -> Result<Vec<Txid>>;
fn get_mempool_txids(&self) -> Result<Vec<Txid>>;
fn get_mempool_entries(
&self,
txids: &[Txid],
) -> Result<Vec<Result<json::GetMempoolEntryResult>>>;
fn get_mempool_transactions(&self, txids: &[Txid]) -> Result<Vec<Result<Transaction>>>;
}
pub(crate) type RpcError = bitcoincore_rpc::jsonrpc::error::RpcError;
pub(crate) fn extract_bitcoind_error(err: &bitcoincore_rpc::Error) -> Option<&RpcError> {
use bitcoincore_rpc::{
jsonrpc::error::Error::Rpc as ServerError, Error::JsonRpc as JsonRpcError,
};
match err {
JsonRpcError(ServerError(e)) => Some(e),
_ => None,
}
}

285
src/faucet.rs Normal file
View File

@ -0,0 +1,285 @@
use std::{collections::HashMap, str::FromStr};
use bitcoincore_rpc::bitcoin::secp256k1::PublicKey;
use bitcoincore_rpc::json::{self as bitcoin_json};
use sdk_common::silentpayments::sign_transaction;
use sdk_common::sp_client::bitcoin::secp256k1::{
rand::thread_rng, Keypair, Message as Secp256k1Message, Secp256k1, ThirtyTwoByteHash,
};
use sdk_common::sp_client::bitcoin::secp256k1::rand::RngCore;
use sdk_common::sp_client::bitcoin::{
absolute::LockTime,
consensus::serialize,
hex::{DisplayHex, FromHex},
key::TapTweak,
script::PushBytesBuf,
sighash::{Prevouts, SighashCache},
taproot::Signature,
transaction::Version,
Amount, OutPoint, Psbt, ScriptBuf, TapSighashType, Transaction, TxIn, TxOut, Witness,
XOnlyPublicKey,
};
use sdk_common::{
network::{FaucetMessage, NewTxMessage},
silentpayments::create_transaction,
};
use sdk_common::sp_client::silentpayments::sending::generate_recipient_pubkeys;
use sdk_common::sp_client::silentpayments::utils::sending::calculate_partial_secret;
use sdk_common::sp_client::{FeeRate, OwnedOutput, Recipient, RecipientAddress};
use anyhow::{Error, Result};
use crate::lock_freezed_utxos;
use crate::scan::check_transaction_alone;
use crate::{
scan::compute_partial_tweak_to_transaction, MutexExt, SilentPaymentAddress, DAEMON, FAUCET_AMT,
WALLET,
};
fn spend_from_core(dest: XOnlyPublicKey) -> Result<(Transaction, Amount)> {
let core = DAEMON
.get()
.ok_or(Error::msg("DAEMON not initialized"))?
.lock_anyhow()?;
let unspent_list: Vec<bitcoin_json::ListUnspentResultEntry> =
core.list_unspent_from_to(None)?;
if !unspent_list.is_empty() {
let network = core.get_network()?;
let spk = ScriptBuf::new_p2tr_tweaked(dest.dangerous_assume_tweaked());
let new_psbt = core.create_psbt(&unspent_list, spk, network)?;
let processed_psbt = core.process_psbt(new_psbt)?;
let finalize_psbt_result = core.finalize_psbt(processed_psbt)?;
let final_psbt = Psbt::from_str(&finalize_psbt_result)?;
let total_fee = final_psbt.fee()?;
let final_tx = final_psbt.extract_tx()?;
let fee_rate = total_fee
.checked_div(final_tx.weight().to_vbytes_ceil())
.unwrap();
Ok((final_tx, fee_rate))
} else {
// we don't have enough available coins to pay for this faucet request
Err(Error::msg("No spendable outputs"))
}
}
fn faucet_send(
sp_address: SilentPaymentAddress,
commitment: &str,
) -> Result<(Transaction, PublicKey)> {
let sp_wallet = WALLET
.get()
.ok_or(Error::msg("Wallet not initialized"))?
.lock_anyhow()?;
let fee_estimate = DAEMON
.get()
.ok_or(Error::msg("DAEMON not initialized"))?
.lock_anyhow()?
.estimate_fee(6)
.unwrap_or(Amount::from_sat(1000))
.checked_div(1000)
.unwrap();
log::debug!("fee estimate for 6 blocks: {}", fee_estimate);
let recipient = Recipient {
address: RecipientAddress::SpAddress(sp_address),
amount: FAUCET_AMT,
};
let freezed_utxos = lock_freezed_utxos()?;
// We filter out the freezed utxos from available list
let available_outpoints: Vec<(OutPoint, OwnedOutput)> = sp_wallet
.get_unspent_outputs()
.iter()
.filter_map(|(outpoint, output)| {
if !freezed_utxos.contains(&outpoint) {
Some((*outpoint, output.clone()))
} else {
None
}
})
.collect();
// If we had mandatory inputs, we would make sure to put them at the top of the list
// We don't care for faucet though
// Ensure commitment is 32 bytes. If invalid/empty, generate a random one to avoid panics.
let commitment_bytes: Vec<u8> = match Vec::from_hex(commitment) {
Ok(bytes) if bytes.len() == 32 => bytes,
_ => {
let mut buf = [0u8; 32];
thread_rng().fill_bytes(&mut buf);
buf.to_vec()
}
};
// We try to pay the faucet amount
if let Ok(unsigned_transaction) = create_transaction(
available_outpoints,
sp_wallet.get_sp_client(),
vec![recipient],
Some(commitment_bytes.clone()),
FeeRate::from_sat_per_vb(fee_estimate.to_sat() as f32),
) {
let final_tx = sign_transaction(sp_wallet.get_sp_client(), unsigned_transaction)?;
let partial_tweak = compute_partial_tweak_to_transaction(&final_tx)?;
let daemon = DAEMON
.get()
.ok_or(Error::msg("DAEMON not initialized"))?
.lock_anyhow()?;
// First check that mempool accept it
daemon.test_mempool_accept(&final_tx)?;
let txid = daemon.broadcast(&final_tx)?;
log::debug!("Sent tx {}", txid);
// We immediately add the new tx to our wallet to prevent accidental double spend
check_transaction_alone(sp_wallet, &final_tx, &partial_tweak)?;
Ok((final_tx, partial_tweak))
} else {
// let's try to spend directly from the mining address
let secp = Secp256k1::signing_only();
let keypair = Keypair::new(&secp, &mut thread_rng());
// we first spend from core to the pubkey we just created
let (core_tx, fee_rate) = spend_from_core(keypair.x_only_public_key().0)?;
// check that the first output of the transaction pays to the key we just created
debug_assert!(
core_tx.output[0].script_pubkey
== ScriptBuf::new_p2tr_tweaked(
keypair.x_only_public_key().0.dangerous_assume_tweaked()
)
);
// This is ugly and can be streamlined
// create a new transaction that spends the newly created UTXO to the sp_address
let mut faucet_tx = Transaction {
input: vec![TxIn {
previous_output: OutPoint::new(core_tx.txid(), 0),
..Default::default()
}],
output: vec![],
version: Version::TWO,
lock_time: LockTime::ZERO,
};
// now do the silent payment operations with the final recipient address
let partial_secret = calculate_partial_secret(
&[(keypair.secret_key(), true)],
&[(core_tx.txid().to_string(), 0)],
)?;
let ext_output_key: XOnlyPublicKey =
generate_recipient_pubkeys(vec![sp_address.into()], partial_secret)?
.into_values()
.flatten()
.collect::<Vec<XOnlyPublicKey>>()
.get(0)
.expect("Failed to generate keys")
.to_owned();
let change_sp_address = sp_wallet.get_sp_client().get_receiving_address();
let change_output_key: XOnlyPublicKey =
generate_recipient_pubkeys(vec![change_sp_address], partial_secret)?
.into_values()
.flatten()
.collect::<Vec<XOnlyPublicKey>>()
.get(0)
.expect("Failed to generate keys")
.to_owned();
let ext_spk = ScriptBuf::new_p2tr_tweaked(ext_output_key.dangerous_assume_tweaked());
let change_spk = ScriptBuf::new_p2tr_tweaked(change_output_key.dangerous_assume_tweaked());
let mut op_return = PushBytesBuf::new();
op_return.extend_from_slice(&commitment_bytes)?;
let data_spk = ScriptBuf::new_op_return(op_return);
// Take some margin to pay for the fees
if core_tx.output[0].value < FAUCET_AMT * 4 {
return Err(Error::msg("Not enough funds"));
}
let change_amt = core_tx.output[0].value.checked_sub(FAUCET_AMT).unwrap();
faucet_tx.output.push(TxOut {
value: FAUCET_AMT,
script_pubkey: ext_spk,
});
faucet_tx.output.push(TxOut {
value: change_amt,
script_pubkey: change_spk,
});
faucet_tx.output.push(TxOut {
value: Amount::from_sat(0),
script_pubkey: data_spk,
});
// dummy signature only used for fee estimation
faucet_tx.input[0].witness.push([1; 64].to_vec());
let abs_fee = fee_rate
.checked_mul(faucet_tx.weight().to_vbytes_ceil())
.ok_or_else(|| Error::msg("Fee rate multiplication overflowed"))?;
// reset the witness to empty
faucet_tx.input[0].witness = Witness::new();
faucet_tx.output[1].value -= abs_fee;
let first_tx_outputs = vec![core_tx.output[0].clone()];
let prevouts = Prevouts::All(&first_tx_outputs);
let hash_ty = TapSighashType::Default;
let mut cache = SighashCache::new(&faucet_tx);
let sighash = cache.taproot_key_spend_signature_hash(0, &prevouts, hash_ty)?;
let msg = Secp256k1Message::from_digest(sighash.into_32());
let sig = secp.sign_schnorr_with_rng(&msg, &keypair, &mut thread_rng());
let final_sig = Signature { sig, hash_ty };
faucet_tx.input[0].witness.push(final_sig.to_vec());
{
let daemon = DAEMON
.get()
.ok_or(Error::msg("DAEMON not initialized"))?
.lock_anyhow()?;
// We don't worry about core_tx being refused by core
daemon.broadcast(&core_tx)?;
daemon.test_mempool_accept(&faucet_tx)?;
let txid = daemon.broadcast(&faucet_tx)?;
log::debug!("Sent tx {}", txid);
}
let partial_tweak = compute_partial_tweak_to_transaction(&faucet_tx)?;
check_transaction_alone(sp_wallet, &faucet_tx, &partial_tweak)?;
Ok((faucet_tx, partial_tweak))
}
}
pub fn handle_faucet_request(msg: &FaucetMessage) -> Result<NewTxMessage> {
let sp_address = SilentPaymentAddress::try_from(msg.sp_address.as_str())?;
log::debug!("Sending bootstrap coins to {}", sp_address);
// send bootstrap coins to this sp_address
let (tx, partial_tweak) = faucet_send(sp_address, &msg.commitment)?;
Ok(NewTxMessage::new(
serialize(&tx).to_lower_hex_string(),
Some(partial_tweak.to_string()),
))
}

718
src/main.rs Normal file
View File

@ -0,0 +1,718 @@
use std::{
collections::{HashMap, HashSet},
env,
fmt::Debug,
fs,
io::{Read, Write},
net::SocketAddr,
path::PathBuf,
str::FromStr,
sync::{atomic::AtomicU32, Arc, Mutex, MutexGuard, OnceLock},
};
use bitcoincore_rpc::{
bitcoin::secp256k1::SecretKey,
json::{self as bitcoin_json},
};
use commit::{lock_members, MEMBERLIST};
use futures_util::{future, pin_mut, stream::TryStreamExt, FutureExt, StreamExt, SinkExt};
use log::{debug, error, warn};
use message::{broadcast_message, process_message, BroadcastType, MessageCache, MESSAGECACHE};
use scan::{check_transaction_alone, compute_partial_tweak_to_transaction};
use sdk_common::network::{AnkFlag, NewTxMessage};
use sdk_common::{
network::HandshakeMessage,
pcd::Member,
process::{lock_processes, Process, CACHEDPROCESSES},
serialization::{OutPointMemberMap, OutPointProcessMap},
silentpayments::SpWallet,
sp_client::{
bitcoin::{
consensus::deserialize,
hex::{DisplayHex, FromHex},
Amount, Network, Transaction,
},
silentpayments::SilentPaymentAddress,
},
MutexExt,
};
use sdk_common::{
sp_client::{
bitcoin::{secp256k1::rand::thread_rng, OutPoint},
SpClient, SpendKey,
},
updates::{init_update_sink, NativeUpdateSink, StateUpdate},
};
use serde_json::Value;
use tokio::net::{TcpListener, TcpStream};
use tokio::sync::mpsc::{unbounded_channel, UnboundedSender};
use tokio_stream::wrappers::UnboundedReceiverStream;
use tokio_tungstenite::tungstenite::Message;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use anyhow::{Error, Result};
use zeromq::{Socket, SocketRecv};
mod commit;
mod config;
mod daemon;
mod faucet;
mod message;
mod scan;
mod peers;
mod sync;
use crate::config::Config;
use crate::peers as peer_store;
use crate::{
daemon::{Daemon, RpcCall},
scan::scan_blocks,
};
pub const WITH_CUTTHROUGH: bool = false; // We'd rather catch everything for this use case
type Tx = UnboundedSender<Message>;
type PeerMap = Mutex<HashMap<SocketAddr, Tx>>;
pub(crate) static PEERMAP: OnceLock<PeerMap> = OnceLock::new();
pub(crate) static DAEMON: OnceLock<Mutex<Box<dyn RpcCall>>> = OnceLock::new();
static CHAIN_TIP: AtomicU32 = AtomicU32::new(0);
pub static FREEZED_UTXOS: OnceLock<Mutex<HashSet<OutPoint>>> = OnceLock::new();
pub fn lock_freezed_utxos() -> Result<MutexGuard<'static, HashSet<OutPoint>>, Error> {
FREEZED_UTXOS
.get_or_init(|| Mutex::new(HashSet::new()))
.lock_anyhow()
}
#[derive(Debug)]
pub struct StateFile {
path: PathBuf,
}
impl StateFile {
fn new(path: PathBuf) -> Self {
Self { path }
}
fn create(&self) -> Result<()> {
let parent: PathBuf;
if let Some(dir) = self.path.parent() {
if !dir.ends_with(".4nk") {
return Err(Error::msg("parent dir must be \".4nk\""));
}
parent = dir.to_path_buf();
} else {
return Err(Error::msg("wallet file has no parent dir"));
}
// Ensure the parent directory exists
if !parent.exists() {
fs::create_dir_all(parent)?;
}
// Create a new file
fs::File::create(&self.path)?;
Ok(())
}
fn save(&self, json: &Value) -> Result<()> {
let mut f = fs::File::options()
.write(true)
.truncate(true)
.open(&self.path)?;
let stringified = serde_json::to_string(&json)?;
let bin = stringified.as_bytes();
f.write_all(bin)?;
Ok(())
}
fn load(&self) -> Result<Value> {
let mut f = fs::File::open(&self.path)?;
let mut content = vec![];
f.read_to_end(&mut content)?;
// Handle empty files or invalid JSON gracefully
if content.is_empty() {
return Ok(serde_json::Value::Object(serde_json::Map::new()));
}
let res: Value = serde_json::from_slice(&content)?;
Ok(res)
}
}
#[derive(Debug)]
pub struct DiskStorage {
pub wallet_file: StateFile,
pub processes_file: StateFile,
pub members_file: StateFile,
}
pub static STORAGE: OnceLock<Mutex<DiskStorage>> = OnceLock::new();
const FAUCET_AMT: Amount = Amount::from_sat(10_000);
pub(crate) static WALLET: OnceLock<Mutex<SpWallet>> = OnceLock::new();
fn handle_new_tx_request(new_tx_msg: &NewTxMessage) -> Result<()> {
let tx = deserialize::<Transaction>(&Vec::from_hex(&new_tx_msg.transaction)?)?;
let daemon = DAEMON.get().unwrap().lock_anyhow()?;
daemon.test_mempool_accept(&tx)?;
daemon.broadcast(&tx)?;
Ok(())
}
async fn handle_connection(
raw_stream: TcpStream,
addr: SocketAddr,
our_sp_address: SilentPaymentAddress,
) {
debug!("Incoming TCP connection from: {}", addr);
let peers = PEERMAP.get().expect("Peer Map not initialized");
let ws_stream = match tokio_tungstenite::accept_async(raw_stream).await {
Ok(stream) => {
debug!("WebSocket connection established");
stream
}
Err(e) => {
log::warn!("WebSocket handshake failed for {}: {} - This may be a non-WebSocket connection attempt", addr, e);
// Don't return immediately, try to handle gracefully
return;
}
};
// Insert the write part of this peer to the peer map.
let (tx, rx) = unbounded_channel();
match peers.lock_anyhow() {
Ok(mut peer_map) => peer_map.insert(addr, tx),
Err(e) => {
log::error!("{}", e);
panic!();
}
};
let processes = lock_processes().unwrap().clone();
let members = lock_members().unwrap().clone();
let current_tip = CHAIN_TIP.load(std::sync::atomic::Ordering::SeqCst);
let init_msg = HandshakeMessage::new(
our_sp_address.to_string(),
OutPointMemberMap(members),
OutPointProcessMap(processes),
current_tip.into(),
);
if let Err(e) = broadcast_message(
AnkFlag::Handshake,
format!("{}", init_msg.to_string()),
BroadcastType::Sender(addr),
) {
log::error!("Failed to send init message: {}", e);
return;
}
let (outgoing, incoming) = ws_stream.split();
let broadcast_incoming = incoming.try_for_each(|msg| {
if let Ok(raw_msg) = msg.to_text() {
// debug!("Received msg: {}", raw_msg);
process_message(raw_msg, addr);
} else {
debug!("Received non-text message {} from peer {}", msg, addr);
}
future::ok(())
});
let receive_from_others = UnboundedReceiverStream::new(rx)
.map(Ok)
.forward(outgoing)
.map(|result| {
if let Err(e) = result {
debug!("Error sending message: {}", e);
}
});
pin_mut!(broadcast_incoming, receive_from_others);
future::select(broadcast_incoming, receive_from_others).await;
debug!("{} disconnected", &addr);
peers.lock().unwrap().remove(&addr);
}
fn create_new_tx_message(transaction: Vec<u8>) -> Result<NewTxMessage> {
let tx: Transaction = deserialize(&transaction)?;
if tx.is_coinbase() {
return Err(Error::msg("Can't process coinbase transaction"));
}
let partial_tweak = compute_partial_tweak_to_transaction(&tx)?;
let sp_wallet = WALLET
.get()
.ok_or_else(|| Error::msg("Wallet not initialized"))?
.lock_anyhow()?;
check_transaction_alone(sp_wallet, &tx, &partial_tweak)?;
Ok(NewTxMessage::new(
transaction.to_lower_hex_string(),
Some(partial_tweak.to_string()),
))
}
async fn handle_scan_updates(
scan_rx: std::sync::mpsc::Receiver<sdk_common::updates::ScanProgress>,
) {
while let Ok(update) = scan_rx.recv() {
log::debug!("Received scan update: {:?}", update);
}
}
async fn handle_state_updates(
state_rx: std::sync::mpsc::Receiver<sdk_common::updates::StateUpdate>,
) {
while let Ok(update) = state_rx.recv() {
match update {
StateUpdate::Update {
blkheight,
blkhash,
found_outputs,
found_inputs,
} => {
// We update the wallet with found outputs and inputs
let mut sp_wallet = WALLET.get().unwrap().lock_anyhow().unwrap();
// inputs first
for outpoint in found_inputs {
sp_wallet.mark_output_mined(&outpoint, blkhash);
}
sp_wallet.get_mut_outputs().extend(found_outputs);
sp_wallet.set_last_scan(blkheight.to_consensus_u32());
let json = serde_json::to_value(sp_wallet.clone()).unwrap();
STORAGE
.get()
.unwrap()
.lock_anyhow()
.unwrap()
.wallet_file
.save(&json)
.unwrap();
}
StateUpdate::NoUpdate { blkheight } => {
// We just keep the current height to update the last_scan
debug!("No update, setting last scan at {}", blkheight);
let mut sp_wallet = WALLET.get().unwrap().lock_anyhow().unwrap();
sp_wallet.set_last_scan(blkheight.to_consensus_u32());
let json = serde_json::to_value(sp_wallet.clone()).unwrap();
STORAGE
.get()
.unwrap()
.lock_anyhow()
.unwrap()
.wallet_file
.save(&json)
.unwrap();
}
}
}
}
async fn handle_zmq(zmq_url: String, blindbit_url: String) {
debug!("Starting listening on Core");
let mut socket = zeromq::SubSocket::new();
socket.connect(&zmq_url).await.unwrap();
socket.subscribe("rawtx").await.unwrap();
socket.subscribe("hashblock").await.unwrap();
loop {
let core_msg = match socket.recv().await {
Ok(m) => m,
Err(e) => {
error!("Zmq error: {}", e);
continue;
}
};
debug!("Received a message");
let payload: String = if let (Some(topic), Some(data)) = (core_msg.get(0), core_msg.get(1))
{
debug!("topic: {}", std::str::from_utf8(&topic).unwrap());
match std::str::from_utf8(&topic) {
Ok("rawtx") => match create_new_tx_message(data.to_vec()) {
Ok(m) => {
debug!("Created message");
serde_json::to_string(&m).expect("This shouldn't fail")
}
Err(e) => {
error!("{}", e);
continue;
}
},
Ok("hashblock") => {
let current_height = DAEMON
.get()
.unwrap()
.lock_anyhow()
.unwrap()
.get_current_height()
.unwrap();
CHAIN_TIP.store(current_height as u32, std::sync::atomic::Ordering::SeqCst);
// Add retry logic for hashblock processing
let mut retry_count = 0;
const MAX_RETRIES: u32 = 4;
const RETRY_DELAY_MS: u64 = 1000; // 1 second initial delay
loop {
match scan_blocks(0, &blindbit_url).await {
Ok(_) => {
debug!("Successfully scanned blocks after {} retries", retry_count);
break;
}
Err(e) => {
retry_count += 1;
if retry_count >= MAX_RETRIES {
error!(
"Failed to scan blocks after {} retries: {}",
MAX_RETRIES, e
);
break;
}
// Exponential backoff: 1s, 2s, 4s
let delay_ms = RETRY_DELAY_MS * (1 << (retry_count - 1));
warn!(
"Scan failed (attempt {}/{}), retrying in {}ms: {}",
retry_count, MAX_RETRIES, delay_ms, e
);
tokio::time::sleep(tokio::time::Duration::from_millis(delay_ms))
.await;
}
}
}
continue;
}
_ => {
error!("Unexpected message in zmq");
continue;
}
}
} else {
error!("Empty message");
continue;
};
if let Err(e) = broadcast_message(AnkFlag::NewTx, payload, BroadcastType::ToAll) {
log::error!("{}", e.to_string());
}
}
}
async fn handle_health_endpoint(mut stream: TcpStream) {
let response = "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nContent-Length: 15\r\n\r\n{\"status\":\"ok\"}";
let _ = stream.write_all(response.as_bytes()).await;
let _ = stream.shutdown().await;
}
async fn start_health_server(port: u16) {
// Use configurable bind address for health server
let bind_address = std::env::var("HEALTH_BIND_ADDRESS").unwrap_or_else(|_| "0.0.0.0".to_string());
let listener = match TcpListener::bind(format!("{}:{}", bind_address, port)).await {
Ok(listener) => listener,
Err(e) => {
log::error!("Failed to bind health server on port {}: {}", port, e);
return;
}
};
log::info!("Health server listening on port {}", port);
while let Ok((stream, _)) = listener.accept().await {
tokio::spawn(handle_health_endpoint(stream));
}
}
#[tokio::main(flavor = "multi_thread")]
async fn main() -> Result<()> {
env_logger::init();
// todo: take the path to conf file as argument
// default to "./.conf"
let config = Config::read_from_file(".conf")?;
if config.network == Network::Bitcoin {
warn!("Running on mainnet, you're on your own");
}
MESSAGECACHE
.set(MessageCache::new())
.expect("Message Cache initialization failed");
PEERMAP
.set(PeerMap::new(HashMap::new()))
.expect("PeerMap initialization failed");
// Connect the rpc daemon
DAEMON
.set(Mutex::new(Box::new(Daemon::connect(
config.core_wallet,
config.core_url,
config.network,
)?)))
.expect("DAEMON initialization failed");
let current_tip: u32 = DAEMON
.get()
.unwrap()
.lock_anyhow()?
.get_current_height()?
.try_into()?;
// Set CHAIN_TIP
CHAIN_TIP.store(current_tip, std::sync::atomic::Ordering::SeqCst);
let mut app_dir = PathBuf::from_str(&env::var("HOME")?)?;
app_dir.push(config.data_dir);
let mut wallet_file = app_dir.clone();
wallet_file.push(&config.wallet_name);
let mut processes_file = app_dir.clone();
processes_file.push("processes");
let mut members_file = app_dir.clone();
members_file.push("members");
let wallet_file = StateFile::new(wallet_file);
let processes_file = StateFile::new(processes_file);
let members_file = StateFile::new(members_file);
// load an existing sp_wallet, or create a new one
let sp_wallet: SpWallet = match wallet_file.load() {
Ok(wallet) => {
// TODO: Verify the wallet is compatible with the current network
serde_json::from_value(wallet)?
}
Err(_) => {
// Create a new wallet file if it doesn't exist or fails to load
wallet_file.create()?;
let mut rng = thread_rng();
let new_client = SpClient::new(
SecretKey::new(&mut rng),
SpendKey::Secret(SecretKey::new(&mut rng)),
config.network,
)
.expect("Failed to create a new SpClient");
let mut sp_wallet = SpWallet::new(new_client);
// Set birthday and update scan information
sp_wallet.set_birthday(current_tip);
sp_wallet.set_last_scan(current_tip);
// Save the newly created wallet to disk
let json = serde_json::to_value(sp_wallet.clone())?;
wallet_file.save(&json)?;
sp_wallet
}
};
let cached_processes: HashMap<OutPoint, Process> = match processes_file.load() {
Ok(processes) => {
let deserialized: OutPointProcessMap = serde_json::from_value(processes)?;
deserialized.0
}
Err(_) => {
debug!("creating process file at {}", processes_file.path.display());
processes_file.create()?;
HashMap::new()
}
};
let members: HashMap<OutPoint, Member> = match members_file.load() {
Ok(members) => {
let deserialized: OutPointMemberMap = serde_json::from_value(members)?;
deserialized.0
}
Err(_) => {
debug!("creating members file at {}", members_file.path.display());
members_file.create()?;
HashMap::new()
}
};
{
let utxo_to_freeze: HashSet<OutPoint> = cached_processes
.iter()
.map(|(_, process)| process.get_last_unspent_outpoint().unwrap())
.collect();
let mut freezed_utxos = lock_freezed_utxos()?;
*freezed_utxos = utxo_to_freeze;
}
let our_sp_address = sp_wallet.get_sp_client().get_receiving_address();
log::info!("Using wallet with address {}", our_sp_address,);
log::info!(
"Found {} outputs for a total balance of {}",
sp_wallet.get_outputs().len(),
sp_wallet.get_balance()
);
let last_scan = sp_wallet.get_last_scan();
WALLET
.set(Mutex::new(sp_wallet))
.expect("Failed to initialize WALLET");
CACHEDPROCESSES
.set(Mutex::new(cached_processes))
.expect("Failed to initialize CACHEDPROCESSES");
MEMBERLIST
.set(Mutex::new(members))
.expect("Failed to initialize MEMBERLIST");
let storage = DiskStorage {
wallet_file,
processes_file,
members_file,
};
STORAGE.set(Mutex::new(storage)).unwrap();
let (sink, scan_rx, state_rx) = NativeUpdateSink::new();
init_update_sink(Arc::new(sink));
// Spawn the update handlers
tokio::spawn(handle_scan_updates(scan_rx));
tokio::spawn(handle_state_updates(state_rx));
if last_scan < current_tip {
log::info!("Scanning for our outputs");
let blindbit_url = config.blindbit_url.clone();
tokio::spawn(async move {
if let Err(e) = scan_blocks(current_tip - last_scan, &blindbit_url).await {
log::error!("Failed to scan blocks: {}", e);
} else {
log::info!("Block scan completed successfully");
}
});
}
// Init peers store and optional bootstrap
peer_store::init(config.ws_url.clone(), config.bootstrap_url.clone());
if let Some(bs_url) = &config.bootstrap_url {
let bs_url = bs_url.clone();
let do_faucet = config.bootstrap_faucet;
let our_sp = our_sp_address.to_string();
tokio::spawn(async move {
log::info!("Starting bootstrap connection and sync");
if let Err(e) = crate::sync::bootstrap_connect_and_sync(bs_url.clone()).await {
log::warn!("bootstrap failed: {}", e);
} else {
log::info!("Bootstrap sync successful, checking faucet request");
if do_faucet && !peer_store::faucet_already_done(&bs_url) {
log::info!("Sending faucet request to bootstrap");
let env = sdk_common::network::Envelope {
flag: sdk_common::network::AnkFlag::Faucet,
content: serde_json::json!({
"sp_address": our_sp,
"commitment": "",
"error": null
}).to_string(),
};
if let Ok((mut ws, _)) = tokio_tungstenite::connect_async(&bs_url).await {
log::info!("Connected to bootstrap for faucet request");
let _ = ws
.send(tokio_tungstenite::tungstenite::Message::Text(
serde_json::to_string(&env).unwrap(),
))
.await;
log::info!("Faucet request sent to bootstrap");
peer_store::mark_faucet_done(&bs_url);
} else {
log::error!("Failed to connect to bootstrap for faucet request");
}
} else {
log::info!("Faucet request skipped: do_faucet={}, already_done={}", do_faucet, peer_store::faucet_already_done(&bs_url));
}
}
});
}
// Subscribe to Bitcoin Core
let zmq_url = config.zmq_url.clone();
let blindbit_url = config.blindbit_url.clone();
tokio::spawn(async move {
handle_zmq(zmq_url, blindbit_url).await;
});
// Create the event loop and TCP listener we'll accept connections on.
// Try to bind with retry logic
let mut listener = None;
let mut retry_count = 0;
const MAX_RETRIES: u32 = 5;
const RETRY_DELAY_MS: u64 = 1000;
// Allow environment variable override of ws_url
let ws_bind_url = std::env::var("WS_BIND_URL").unwrap_or_else(|_| config.ws_url.clone());
log::info!("Using WebSocket bind URL: {}", ws_bind_url);
while listener.is_none() && retry_count < MAX_RETRIES {
let try_socket = TcpListener::bind(ws_bind_url.clone()).await;
match try_socket {
Ok(socket) => {
log::info!("Successfully bound to {}", ws_bind_url);
listener = Some(socket);
}
Err(e) => {
retry_count += 1;
log::warn!("Failed to bind to {} (attempt {}/{}): {}", ws_bind_url, retry_count, MAX_RETRIES, e);
if retry_count < MAX_RETRIES {
log::info!("Retrying in {}ms...", RETRY_DELAY_MS);
tokio::time::sleep(tokio::time::Duration::from_millis(RETRY_DELAY_MS)).await;
} else {
log::error!("Failed to bind to {} after {} attempts: {}", ws_bind_url, MAX_RETRIES, e);
return Err(anyhow::anyhow!("Failed to bind to {} after {} attempts: {}", ws_bind_url, MAX_RETRIES, e));
}
}
}
}
let listener = listener.unwrap();
tokio::spawn(MessageCache::clean_up());
// Start health server on configurable port
let health_port = std::env::var("HEALTH_PORT")
.ok()
.and_then(|p| p.parse::<u16>().ok())
.unwrap_or(8091);
tokio::spawn(start_health_server(health_port));
// Let's spawn the handling of each connection in a separate task.
while let Ok((stream, addr)) = listener.accept().await {
tokio::spawn(handle_connection(stream, addr, our_sp_address));
}
Ok(())
}

58
src/message/broadcast.rs Normal file
View File

@ -0,0 +1,58 @@
use anyhow::Result;
use tokio_tungstenite::tungstenite::Message;
use sdk_common::network::{AnkFlag, Envelope};
use crate::{PEERMAP};
pub(crate) enum BroadcastType {
Sender(std::net::SocketAddr),
#[allow(dead_code)]
ExcludeSender(std::net::SocketAddr),
#[allow(dead_code)]
ToAll,
}
pub(crate) fn broadcast_message(
flag: AnkFlag,
payload: String,
broadcast: BroadcastType,
) -> Result<()> {
let peers = PEERMAP.get().ok_or(anyhow::Error::msg("Unitialized peer map"))?;
let ank_msg = Envelope { flag, content: payload };
let msg = Message::Text(serde_json::to_string(&ank_msg)?);
match ank_msg.flag {
AnkFlag::Cipher => log::debug!("Broadcasting cipher"),
AnkFlag::Handshake => log::debug!("Broadcasting handshake"),
_ => log::debug!("Broadcasting {} message: {}", ank_msg.flag.as_str(), msg),
}
match broadcast {
BroadcastType::Sender(addr) => {
peers
.lock()
.map_err(|e| anyhow::Error::msg(format!("Failed to lock peers: {}", e.to_string())))?
.iter()
.find(|(peer_addr, _)| peer_addr == &&addr)
.ok_or(anyhow::Error::msg("Failed to find the sender in the peer_map"))?
.1
.send(msg)?;
}
BroadcastType::ExcludeSender(addr) => {
peers
.lock()
.map_err(|e| anyhow::Error::msg(format!("Failed to lock peers: {}", e.to_string())))?
.iter()
.filter(|(peer_addr, _)| peer_addr != &&addr)
.for_each(|(_, peer_tx)| { let _ = peer_tx.send(msg.clone()); });
}
BroadcastType::ToAll => {
peers
.lock()
.map_err(|e| anyhow::Error::msg(format!("Failed to lock peers: {}", e.to_string())))?
.iter()
.for_each(|(_, peer_tx)| { let _ = peer_tx.send(msg.clone()); });
}
}
Ok(())
}

45
src/message/cache.rs Normal file
View File

@ -0,0 +1,45 @@
use std::{collections::HashMap, sync::{Mutex, OnceLock}, time::{Duration, Instant}};
use tokio::time;
pub(crate) static MESSAGECACHE: OnceLock<MessageCache> = OnceLock::new();
const MESSAGECACHEDURATION: Duration = Duration::from_secs(20);
const MESSAGECACHEINTERVAL: Duration = Duration::from_secs(5);
#[derive(Debug)]
pub(crate) struct MessageCache {
pub(crate) store: Mutex<HashMap<String, Instant>>,
}
impl MessageCache {
pub fn new() -> Self {
Self { store: Mutex::new(HashMap::new()) }
}
pub fn insert(&self, key: String) {
let mut store = self.store.lock().unwrap();
store.insert(key, Instant::now());
}
pub fn remove(&self, key: &str) {
let mut store = self.store.lock().unwrap();
store.remove(key);
}
pub fn contains(&self, key: &str) -> bool {
let store = self.store.lock().unwrap();
store.contains_key(key)
}
pub async fn clean_up() {
let cache = MESSAGECACHE.get().unwrap();
let mut interval = time::interval(MESSAGECACHEINTERVAL);
loop {
interval.tick().await;
let mut store = cache.store.lock().unwrap();
let now = Instant::now();
store.retain(|_, entrytime| now.duration_since(*entrytime) <= MESSAGECACHEDURATION);
}
}
}

View File

@ -0,0 +1,13 @@
use std::net::SocketAddr;
use sdk_common::network::AnkFlag;
use super::super::broadcast::{broadcast_message, BroadcastType};
pub(crate) fn handle_cipher(ank_msg: sdk_common::network::Envelope, addr: SocketAddr) {
log::debug!("Received a cipher message");
if let Err(e) = broadcast_message(AnkFlag::Cipher, ank_msg.content, BroadcastType::ExcludeSender(addr)) {
log::error!("Failed to send message with error: {}", e);
}
}

View File

@ -0,0 +1,28 @@
use std::net::SocketAddr;
use sdk_common::network::{AnkFlag, CommitMessage, Envelope};
use crate::commit::handle_commit_request;
use super::super::{broadcast::{broadcast_message, BroadcastType}, cache::MESSAGECACHE};
pub(crate) fn handle_commit(ank_msg: Envelope, addr: SocketAddr) {
if let Ok(mut commit_msg) = serde_json::from_str::<CommitMessage>(&ank_msg.content) {
match handle_commit_request(commit_msg.clone()) {
Ok(new_outpoint) => log::debug!("Processed commit msg for outpoint {}", new_outpoint),
Err(e) => {
log::error!("handle_commit_request returned error: {}", e);
let cache = MESSAGECACHE.get().expect("Cache should be initialized");
cache.remove(ank_msg.to_string().as_str());
commit_msg.error = Some(e.into());
if let Err(e) = broadcast_message(
AnkFlag::Commit,
serde_json::to_string(&commit_msg).expect("This shouldn't fail"),
BroadcastType::Sender(addr),
) {
log::error!("Failed to broadcast message: {}", e);
}
}
};
}
}

View File

@ -0,0 +1,31 @@
use std::net::SocketAddr;
use sdk_common::network::{AnkFlag, Envelope, FaucetMessage};
use crate::faucet::handle_faucet_request;
use super::super::broadcast::{broadcast_message, BroadcastType};
pub(crate) fn handle_faucet(ank_msg: Envelope, addr: SocketAddr) {
log::debug!("Received a faucet message");
if let Ok(mut content) = serde_json::from_str::<FaucetMessage>(&ank_msg.content) {
match handle_faucet_request(&content) {
Ok(new_tx_msg) => {
log::debug!(
"Obtained new_tx_msg: {}",
serde_json::to_string(&new_tx_msg).unwrap()
);
}
Err(e) => {
log::error!("Failed to send faucet tx: {}", e);
content.error = Some(e.into());
let payload = serde_json::to_string(&content).expect("Message type shouldn't fail");
if let Err(e) = broadcast_message(AnkFlag::Faucet, payload, BroadcastType::Sender(addr)) {
log::error!("Failed to broadcast message: {}", e);
}
}
}
} else {
log::error!("Invalid content for faucet message");
}
}

View File

@ -0,0 +1,37 @@
mod faucet;
mod new_tx;
mod cipher;
mod commit;
mod unknown;
mod sync;
use std::net::SocketAddr;
use sdk_common::network::{AnkFlag, Envelope};
use crate::peers;
use super::cache::MESSAGECACHE;
pub(crate) fn process_message(raw_msg: &str, addr: SocketAddr) {
let cache = MESSAGECACHE.get().expect("Cache should be initialized");
if cache.contains(raw_msg) {
log::debug!("Message already processed, dropping");
return;
} else {
cache.insert(raw_msg.to_owned());
}
match serde_json::from_str::<Envelope>(raw_msg) {
Ok(ank_msg) => match ank_msg.flag {
AnkFlag::Faucet => faucet::handle_faucet(ank_msg, addr),
AnkFlag::NewTx => new_tx::handle_new_tx(ank_msg, addr),
AnkFlag::Cipher => cipher::handle_cipher(ank_msg, addr),
AnkFlag::Commit => commit::handle_commit(ank_msg, addr),
AnkFlag::Unknown => unknown::handle_unknown(ank_msg, addr),
AnkFlag::Sync => sync::handle_sync(ank_msg),
AnkFlag::Handshake => log::debug!("Received init message from {}", addr),
},
Err(e) => {
log::warn!("Failed to parse network message from {}: {} - Raw message: {}", addr, e, raw_msg);
// Continue processing instead of dropping the message
}
}
}

View File

@ -0,0 +1,26 @@
use std::net::SocketAddr;
use sdk_common::network::{AnkFlag, Envelope, NewTxMessage};
use crate::handle_new_tx_request;
use super::super::broadcast::{broadcast_message, BroadcastType};
pub(crate) fn handle_new_tx(ank_msg: Envelope, addr: SocketAddr) {
log::debug!("Received a new tx message");
if let Ok(mut new_tx_msg) = serde_json::from_str::<NewTxMessage>(&ank_msg.content) {
if let Err(e) = handle_new_tx_request(&mut new_tx_msg) {
log::error!("handle_new_tx_request returned error: {}", e);
new_tx_msg.error = Some(e.into());
if let Err(e) = broadcast_message(
AnkFlag::NewTx,
serde_json::to_string(&new_tx_msg).expect("This shouldn't fail"),
BroadcastType::Sender(addr),
) {
log::error!("Failed to broadcast message: {}", e);
}
}
} else {
log::error!("Invalid content for new_tx message");
}
}

View File

@ -0,0 +1,14 @@
use crate::peers;
pub(crate) fn handle_sync(ank_msg: sdk_common::network::Envelope) {
if let Ok(val) = serde_json::from_str::<serde_json::Value>(&ank_msg.content) {
if let Some(arr) = val.get("peers").and_then(|v| v.as_array()) {
for v in arr {
if let Some(url) = v.as_str() {
peers::add_peer(url);
}
}
}
}
}

View File

@ -0,0 +1,13 @@
use std::net::SocketAddr;
use sdk_common::network::AnkFlag;
use super::super::broadcast::{broadcast_message, BroadcastType};
pub(crate) fn handle_unknown(ank_msg: sdk_common::network::Envelope, addr: SocketAddr) {
log::debug!("Received an unknown message");
if let Err(e) = broadcast_message(AnkFlag::Unknown, ank_msg.content, BroadcastType::ExcludeSender(addr)) {
log::error!("Failed to send message with error: {}", e);
}
}

7
src/message/mod.rs Normal file
View File

@ -0,0 +1,7 @@
pub mod cache;
pub mod broadcast;
pub mod handlers;
pub(crate) use cache::{MessageCache, MESSAGECACHE};
pub(crate) use broadcast::{BroadcastType, broadcast_message};
pub(crate) use handlers::process_message;

65
src/peers.rs Normal file
View File

@ -0,0 +1,65 @@
use std::collections::HashSet;
use std::sync::{Mutex, OnceLock};
static PEER_URLS: OnceLock<Mutex<HashSet<String>>> = OnceLock::new();
static BOOTSTRAP_URL: OnceLock<Option<String>> = OnceLock::new();
static SELF_URL: OnceLock<String> = OnceLock::new();
static FAUCET_DONE: OnceLock<Mutex<HashSet<String>>> = OnceLock::new();
pub fn init(self_url: String, bootstrap_url: Option<String>) {
let _ = PEER_URLS.set(Mutex::new(HashSet::new()));
let _ = BOOTSTRAP_URL.set(bootstrap_url);
let _ = SELF_URL.set(self_url);
let _ = FAUCET_DONE.set(Mutex::new(HashSet::new()));
}
pub fn add_peer(url: &str) {
if url.is_empty() {
return;
}
if let Some(self_url) = SELF_URL.get() {
if url == self_url {
return;
}
}
if let Some(Some(bs)) = BOOTSTRAP_URL.get().map(|o| o.as_ref()) {
if url == bs {
return;
}
}
if let Some(lock) = PEER_URLS.get() {
if let Ok(mut set) = lock.lock() {
set.insert(url.to_string());
}
}
}
pub fn get_peers() -> Vec<String> {
if let Some(lock) = PEER_URLS.get() {
if let Ok(set) = lock.lock() {
return set.iter().cloned().collect();
}
}
vec![]
}
pub fn get_self_url() -> Option<String> {
SELF_URL.get().cloned()
}
pub fn faucet_already_done(url: &str) -> bool {
if let Some(lock) = FAUCET_DONE.get() {
if let Ok(set) = lock.lock() {
return set.contains(url);
}
}
false
}
pub fn mark_faucet_done(url: &str) {
if let Some(lock) = FAUCET_DONE.get() {
if let Ok(mut set) = lock.lock() {
set.insert(url.to_string());
}
}
}

593
src/scan.rs Normal file
View File

@ -0,0 +1,593 @@
use std::collections::HashMap;
use std::collections::HashSet;
use std::str::FromStr;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::Ordering;
use std::sync::Arc;
use std::sync::MutexGuard;
use anyhow::bail;
use anyhow::{Error, Result};
use bitcoincore_rpc::bitcoin::absolute::Height;
use bitcoincore_rpc::bitcoin::hashes::sha256;
use bitcoincore_rpc::bitcoin::hashes::Hash;
use bitcoincore_rpc::bitcoin::Amount;
use futures_util::Stream;
use log::info;
use sdk_common::backend_blindbit_native::BlindbitBackend;
use sdk_common::backend_blindbit_native::ChainBackend;
use sdk_common::backend_blindbit_native::SpScanner;
use sdk_common::silentpayments::SpWallet;
use sdk_common::sp_client::bitcoin::bip158::BlockFilter;
use sdk_common::sp_client::bitcoin::secp256k1::{All, PublicKey, Scalar, Secp256k1, SecretKey};
use sdk_common::sp_client::bitcoin::{BlockHash, OutPoint, Transaction, TxOut, XOnlyPublicKey};
use sdk_common::sp_client::silentpayments::receiving::Receiver;
use sdk_common::sp_client::silentpayments::utils::receiving::{
calculate_tweak_data, get_pubkey_from_input,
};
use sdk_common::sp_client::BlockData;
use sdk_common::sp_client::FilterData;
use sdk_common::sp_client::SpClient;
use sdk_common::sp_client::Updater;
use sdk_common::sp_client::{OutputSpendStatus, OwnedOutput};
use sdk_common::updates::StateUpdater;
use tokio::time::Instant;
use crate::CHAIN_TIP;
use crate::{MutexExt, DAEMON, STORAGE, WALLET, WITH_CUTTHROUGH};
pub fn compute_partial_tweak_to_transaction(tx: &Transaction) -> Result<PublicKey> {
let daemon = DAEMON.get().ok_or(Error::msg("DAEMON not initialized"))?;
let mut outpoints: Vec<(String, u32)> = Vec::with_capacity(tx.input.len());
let mut pubkeys: Vec<PublicKey> = Vec::with_capacity(tx.input.len());
// TODO we should cache transactions to prevent multiple rpc request when transaction spends multiple outputs from the same tx
for input in tx.input.iter() {
outpoints.push((
input.previous_output.txid.to_string(),
input.previous_output.vout,
));
let prev_tx = daemon
.lock_anyhow()?
.get_transaction(&input.previous_output.txid, None)
.map_err(|e| Error::msg(format!("Failed to find previous transaction: {}", e)))?;
if let Some(output) = prev_tx.output.get(input.previous_output.vout as usize) {
match get_pubkey_from_input(
&input.script_sig.to_bytes(),
&input.witness.to_vec(),
&output.script_pubkey.to_bytes(),
) {
Ok(Some(pubkey)) => pubkeys.push(pubkey),
Ok(None) => continue,
Err(e) => {
return Err(Error::msg(format!(
"Can't extract pubkey from input: {}",
e
)))
}
}
} else {
return Err(Error::msg("Transaction with a non-existing input"));
}
}
let input_pub_keys: Vec<&PublicKey> = pubkeys.iter().collect();
let partial_tweak = calculate_tweak_data(&input_pub_keys, &outpoints)?;
Ok(partial_tweak)
}
pub fn check_transaction_alone(
mut wallet: MutexGuard<SpWallet>,
tx: &Transaction,
tweak_data: &PublicKey,
) -> Result<HashMap<OutPoint, OwnedOutput>> {
let updates = match wallet.update_with_transaction(tx, tweak_data, 0) {
Ok(updates) => updates,
Err(e) => {
log::debug!("Error while checking transaction: {}", e);
HashMap::new()
}
};
if updates.len() > 0 {
let storage = STORAGE
.get()
.ok_or_else(|| Error::msg("Failed to get STORAGE"))?;
storage
.lock_anyhow()?
.wallet_file
.save(&serde_json::to_value(wallet.clone())?)?;
}
Ok(updates)
}
fn check_block(
blkfilter: BlockFilter,
blkhash: BlockHash,
candidate_spks: Vec<&[u8; 34]>,
owned_spks: Vec<Vec<u8>>,
) -> Result<bool> {
// check output scripts
let mut scripts_to_match: Vec<_> = candidate_spks.into_iter().map(|spk| spk.as_ref()).collect();
// check input scripts
scripts_to_match.extend(owned_spks.iter().map(|spk| spk.as_slice()));
// note: match will always return true for an empty query!
if !scripts_to_match.is_empty() {
Ok(blkfilter.match_any(&blkhash, &mut scripts_to_match.into_iter())?)
} else {
Ok(false)
}
}
fn scan_block_outputs(
sp_receiver: &Receiver,
txdata: &Vec<Transaction>,
blkheight: u64,
spk2secret: HashMap<[u8; 34], PublicKey>,
) -> Result<HashMap<OutPoint, OwnedOutput>> {
let mut res: HashMap<OutPoint, OwnedOutput> = HashMap::new();
// loop over outputs
for tx in txdata {
let txid = tx.txid();
// collect all taproot outputs from transaction
let p2tr_outs: Vec<(usize, &TxOut)> = tx
.output
.iter()
.enumerate()
.filter(|(_, o)| o.script_pubkey.is_p2tr())
.collect();
if p2tr_outs.is_empty() {
continue;
}; // no taproot output
let mut secret: Option<PublicKey> = None;
// Does this transaction contains one of the outputs we already found?
for spk in p2tr_outs.iter().map(|(_, o)| &o.script_pubkey) {
if let Some(s) = spk2secret.get(spk.as_bytes()) {
// we might have at least one output in this transaction
secret = Some(*s);
break;
}
}
if secret.is_none() {
continue;
}; // we don't have a secret that matches any of the keys
// Now we can just run sp_receiver on all the p2tr outputs
let xonlykeys: Result<Vec<XOnlyPublicKey>> = p2tr_outs
.iter()
.map(|(_, o)| {
XOnlyPublicKey::from_slice(&o.script_pubkey.as_bytes()[2..]).map_err(Error::new)
})
.collect();
let ours = sp_receiver.scan_transaction(&secret.unwrap(), xonlykeys?)?;
let height = Height::from_consensus(blkheight as u32)?;
for (label, map) in ours {
res.extend(p2tr_outs.iter().filter_map(|(i, o)| {
match XOnlyPublicKey::from_slice(&o.script_pubkey.as_bytes()[2..]) {
Ok(key) => {
if let Some(scalar) = map.get(&key) {
match SecretKey::from_slice(&scalar.to_be_bytes()) {
Ok(tweak) => {
let outpoint = OutPoint {
txid,
vout: *i as u32,
};
return Some((
outpoint,
OwnedOutput {
blockheight: height,
tweak: tweak.secret_bytes(),
amount: o.value,
script: o.script_pubkey.clone(),
label: label.clone(),
spend_status: OutputSpendStatus::Unspent,
},
));
}
Err(_) => {
return None;
}
}
}
None
}
Err(_) => None,
}
}));
}
}
Ok(res)
}
fn scan_block_inputs(
our_outputs: &HashMap<OutPoint, OwnedOutput>,
txdata: Vec<Transaction>,
) -> Result<Vec<OutPoint>> {
let mut found = vec![];
for tx in txdata {
for input in tx.input {
let prevout = input.previous_output;
if our_outputs.contains_key(&prevout) {
found.push(prevout);
}
}
}
Ok(found)
}
pub struct NativeSpScanner<'a> {
updater: Box<dyn Updater + Sync + Send>,
backend: Box<dyn ChainBackend + Sync + Send>,
client: SpClient,
keep_scanning: &'a AtomicBool, // used to interrupt scanning
owned_outpoints: HashSet<OutPoint>, // used to scan block inputs
}
impl<'a> NativeSpScanner<'a> {
pub fn new(
client: SpClient,
updater: Box<dyn Updater + Sync + Send>,
backend: Box<dyn ChainBackend + Sync + Send>,
owned_outpoints: HashSet<OutPoint>,
keep_scanning: &'a AtomicBool,
) -> Self {
Self {
client,
updater,
backend,
owned_outpoints,
keep_scanning,
}
}
pub async fn process_blocks(
&mut self,
start: Height,
end: Height,
block_data_stream: impl Stream<Item = Result<BlockData>> + Unpin + Send,
) -> Result<()> {
use futures_util::StreamExt;
use std::time::{Duration, Instant};
let mut update_time = Instant::now();
let mut stream = block_data_stream;
while let Some(blockdata) = stream.next().await {
let blockdata = blockdata?;
let blkheight = blockdata.blkheight;
let blkhash = blockdata.blkhash;
// stop scanning and return if interrupted
if self.should_interrupt() {
self.save_state()?;
return Ok(());
}
let mut save_to_storage = false;
// always save on last block or after 30 seconds since last save
if blkheight == end || update_time.elapsed() > Duration::from_secs(30) {
save_to_storage = true;
}
let (found_outputs, found_inputs) = self.process_block(blockdata).await?;
if !found_outputs.is_empty() {
save_to_storage = true;
self.record_outputs(blkheight, blkhash, found_outputs)?;
}
if !found_inputs.is_empty() {
save_to_storage = true;
self.record_inputs(blkheight, blkhash, found_inputs)?;
}
// tell the updater we scanned this block
self.record_progress(start, blkheight, end)?;
if save_to_storage {
self.save_state()?;
update_time = Instant::now();
}
}
Ok(())
}
}
#[async_trait::async_trait]
impl<'a> SpScanner for NativeSpScanner<'a> {
async fn scan_blocks(
&mut self,
start: Height,
end: Height,
dust_limit: Amount,
with_cutthrough: bool,
) -> Result<()> {
if start > end {
bail!("bigger start than end: {} > {}", start, end);
}
info!("start: {} end: {}", start, end);
let start_time: Instant = Instant::now();
// get block data stream
let range = start.to_consensus_u32()..=end.to_consensus_u32();
let block_data_stream = self.get_block_data_stream(range, dust_limit, with_cutthrough);
// process blocks using block data stream
self.process_blocks(start, end, block_data_stream).await?;
// time elapsed for the scan
info!(
"Blindbit scan complete in {} seconds",
start_time.elapsed().as_secs()
);
Ok(())
}
async fn process_block(
&mut self,
blockdata: BlockData,
) -> Result<(HashMap<OutPoint, OwnedOutput>, HashSet<OutPoint>)> {
let BlockData {
blkheight,
tweaks,
new_utxo_filter,
spent_filter,
..
} = blockdata;
let outs = self
.process_block_outputs(blkheight, tweaks, new_utxo_filter)
.await?;
// after processing outputs, we add the found outputs to our list
self.owned_outpoints.extend(outs.keys());
let ins = self.process_block_inputs(blkheight, spent_filter).await?;
// after processing inputs, we remove the found inputs
self.owned_outpoints.retain(|item| !ins.contains(item));
Ok((outs, ins))
}
async fn process_block_outputs(
&self,
blkheight: Height,
tweaks: Vec<PublicKey>,
new_utxo_filter: FilterData,
) -> Result<HashMap<OutPoint, OwnedOutput>> {
let mut res = HashMap::new();
if !tweaks.is_empty() {
let secrets_map = self.client.get_script_to_secret_map(tweaks)?;
//last_scan = last_scan.max(n as u32);
let candidate_spks: Vec<&[u8; 34]> = secrets_map.keys().collect();
//get block gcs & check match
let blkfilter = BlockFilter::new(&new_utxo_filter.data);
let blkhash = new_utxo_filter.block_hash;
let matched_outputs = Self::check_block_outputs(blkfilter, blkhash, candidate_spks)?;
//if match: fetch and scan utxos
if matched_outputs {
info!("matched outputs on: {}", blkheight);
let found = self.scan_utxos(blkheight, secrets_map).await?;
if !found.is_empty() {
for (label, utxo, tweak) in found {
let outpoint = OutPoint {
txid: utxo.txid,
vout: utxo.vout,
};
let out = OwnedOutput {
blockheight: blkheight,
tweak: tweak.to_be_bytes(),
amount: utxo.value,
script: utxo.scriptpubkey,
label,
spend_status: OutputSpendStatus::Unspent,
};
res.insert(outpoint, out);
}
}
}
}
Ok(res)
}
async fn process_block_inputs(
&self,
blkheight: Height,
spent_filter: FilterData,
) -> Result<HashSet<OutPoint>> {
let mut res = HashSet::new();
let blkhash = spent_filter.block_hash;
// first get the 8-byte hashes used to construct the input filter
let input_hashes_map = self.get_input_hashes(blkhash)?;
// check against filter
let blkfilter = BlockFilter::new(&spent_filter.data);
let matched_inputs = self.check_block_inputs(
blkfilter,
blkhash,
input_hashes_map.keys().cloned().collect(),
)?;
// if match: download spent data, collect the outpoints that are spent
if matched_inputs {
info!("matched inputs on: {}", blkheight);
let spent = self.backend.spent_index(blkheight).await?.data;
for spent in spent {
let hex: &[u8] = spent.as_ref();
if let Some(outpoint) = input_hashes_map.get(hex) {
res.insert(*outpoint);
}
}
}
Ok(res)
}
fn get_block_data_stream(
&self,
range: std::ops::RangeInclusive<u32>,
dust_limit: Amount,
with_cutthrough: bool,
) -> std::pin::Pin<Box<dyn Stream<Item = Result<BlockData>> + Send>> {
self.backend
.get_block_data_for_range(range, dust_limit, with_cutthrough)
}
fn should_interrupt(&self) -> bool {
!self
.keep_scanning
.load(std::sync::atomic::Ordering::Relaxed)
}
fn save_state(&mut self) -> Result<()> {
self.updater.save_to_persistent_storage()
}
fn record_outputs(
&mut self,
height: Height,
block_hash: BlockHash,
outputs: HashMap<OutPoint, OwnedOutput>,
) -> Result<()> {
self.updater
.record_block_outputs(height, block_hash, outputs)
}
fn record_inputs(
&mut self,
height: Height,
block_hash: BlockHash,
inputs: HashSet<OutPoint>,
) -> Result<()> {
self.updater.record_block_inputs(height, block_hash, inputs)
}
fn record_progress(&mut self, start: Height, current: Height, end: Height) -> Result<()> {
self.updater.record_scan_progress(start, current, end)
}
fn client(&self) -> &SpClient {
&self.client
}
fn backend(&self) -> &dyn ChainBackend {
self.backend.as_ref()
}
fn updater(&mut self) -> &mut dyn Updater {
self.updater.as_mut()
}
// Override the default get_input_hashes implementation to use owned_outpoints
fn get_input_hashes(&self, blkhash: BlockHash) -> Result<HashMap<[u8; 8], OutPoint>> {
let mut map: HashMap<[u8; 8], OutPoint> = HashMap::new();
for outpoint in &self.owned_outpoints {
let mut arr = [0u8; 68];
arr[..32].copy_from_slice(&outpoint.txid.to_raw_hash().to_byte_array());
arr[32..36].copy_from_slice(&outpoint.vout.to_le_bytes());
arr[36..].copy_from_slice(&blkhash.to_byte_array());
let hash = sha256::Hash::hash(&arr);
let mut res = [0u8; 8];
res.copy_from_slice(&hash[..8]);
map.insert(res, outpoint.clone());
}
Ok(map)
}
}
pub async fn scan_blocks(mut n_blocks_to_scan: u32, blindbit_url: &str) -> anyhow::Result<()> {
log::info!("Starting a rescan");
// Get all the data we need upfront, before any async operations
let (sp_wallet, scan_height, tip_height) = {
let sp_wallet = WALLET
.get()
.ok_or(Error::msg("Wallet not initialized"))?
.lock_anyhow()?;
let scan_height = sp_wallet.get_last_scan();
let tip_height: u32 = CHAIN_TIP.load(Ordering::Relaxed).try_into()?;
(sp_wallet.clone(), scan_height, tip_height)
};
// 0 means scan to tip
if n_blocks_to_scan == 0 {
n_blocks_to_scan = tip_height - scan_height;
}
let start = scan_height + 1;
let end = if scan_height + n_blocks_to_scan <= tip_height {
scan_height + n_blocks_to_scan
} else {
tip_height
};
if start > end {
return Ok(());
}
let updater = StateUpdater::new();
let backend = BlindbitBackend::new(blindbit_url.to_string())?;
let owned_outpoints = sp_wallet.get_unspent_outputs().keys().map(|o| *o).collect();
let keep_scanning = Arc::new(AtomicBool::new(true));
log::info!("start: {} end: {}", start, end);
let start_time = Instant::now();
let mut scanner = NativeSpScanner::new(
sp_wallet.get_sp_client().clone(),
Box::new(updater),
Box::new(backend),
owned_outpoints,
&keep_scanning,
);
let dust_limit = Amount::from_sat(0); // We don't really have a dust limit for this use case
scanner
.scan_blocks(
Height::from_consensus(start)?,
Height::from_consensus(end)?,
dust_limit,
WITH_CUTTHROUGH,
)
.await?;
// time elapsed for the scan
log::info!(
"Scan complete in {} seconds",
start_time.elapsed().as_secs()
);
Ok(())
}

47
src/sync.rs Normal file
View File

@ -0,0 +1,47 @@
use anyhow::{Error, Result};
use futures_util::{SinkExt, StreamExt};
use tokio_tungstenite::tungstenite::Message;
use crate::peers;
use sdk_common::network::{AnkFlag, Envelope};
pub async fn bootstrap_connect_and_sync(bootstrap_url: String) -> Result<()> {
log::info!("Connecting to bootstrap: {}", bootstrap_url);
let (ws_stream, _) = tokio_tungstenite::connect_async(&bootstrap_url).await?;
log::info!("Connected to bootstrap successfully");
let (mut write, mut read) = ws_stream.split();
// Send our empty Sync (we don't share, just ask implicitly)
let ask = Envelope { flag: AnkFlag::Sync, content: serde_json::json!({"peers": []}).to_string() };
log::info!("Sending sync request to bootstrap");
write.send(Message::Text(serde_json::to_string(&ask)?)).await?;
// Wait one message for peer list
if let Some(Ok(Message::Text(txt))) = read.next().await {
log::info!("Received response from bootstrap: {}", txt);
if let Ok(env) = serde_json::from_str::<Envelope>(&txt) {
log::info!("Parsed envelope with flag: {:?}", env.flag);
if let AnkFlag::Sync = env.flag {
if let Ok(val) = serde_json::from_str::<serde_json::Value>(&env.content) {
if let Some(arr) = val.get("peers").and_then(|v| v.as_array()) {
for v in arr {
if let Some(url) = v.as_str() {
peers::add_peer(url);
}
}
}
}
} else {
log::warn!("Bootstrap response is not a Sync message, flag: {:?}", env.flag);
}
} else {
log::warn!("Failed to parse bootstrap response as envelope: {}", txt);
}
} else {
log::warn!("No response received from bootstrap");
}
// Optionally request one faucet if configured is handled by higher layer
log::info!("Bootstrap sync completed");
Ok(())
}

23
tests/health_check.sh Executable file
View File

@ -0,0 +1,23 @@
#!/usr/bin/env bash
set -euo pipefail
URL="http://localhost:8091/health"
echo "[tests] Vérification /health: $URL"
http_code=$(curl -s -o /tmp/health_body.txt -w "%{http_code}" "$URL")
body=$(cat /tmp/health_body.txt)
echo "HTTP $http_code"
echo "Body: $body"
if [[ "$http_code" != "200" ]]; then
echo "Échec: code HTTP inattendu" >&2
exit 1
fi
if [[ "$body" != '{"status":"ok"}' ]]; then
echo "Échec: corps inattendu" >&2
exit 1
fi
echo "Succès: endpoint /health opérationnel"

22
tests/ws_connect.rs Normal file
View File

@ -0,0 +1,22 @@
use std::time::Duration;
use tokio::time::timeout;
use tokio_tungstenite::connect_async;
#[tokio::test(flavor = "multi_thread")]
async fn ws_connects_on_localhost_8090() {
if std::env::var("RUN_WS_TEST").ok().as_deref() != Some("1") {
// Test conditionnel: désactivé par défaut pour éviter les échecs en l'absence de serveur WS local
return;
}
let url = std::env::var("SDK_RELAY_WS_URL").unwrap_or_else(|_| "ws://0.0.0.0:8090".to_string());
let connect_fut = connect_async(url);
let res = timeout(Duration::from_secs(3), connect_fut).await;
match res {
Ok(Ok((_stream, _resp))) => {
// Succès si la poignée de main WS passe
}
Ok(Err(e)) => panic!("Échec connexion WebSocket: {e}"),
Err(_) => panic!("Timeout connexion WebSocket"),
}
}