Collatz pipeline aligned with commandes.md, with resume after interruption

**Motivation:**
- Implement the full Collatz demonstration workflow (commandes.md)
- Allow resuming after an interruption at the D20 palier

**Changes:**
- Scripts 01-12 and run-full-workflow aligned with commandes.md sections 1-10
- collatz_recover_noyau.py: rebuild noyau_post_D20 from the candidates CSV
- --resume-from D20 option in collatz_k_pipeline to resume without recomputing D18-D19-F15
- Automatic detection: if candidats_D20 exists without noyau_post_D20, recover it, then continue
- --cible=critique and --modulo filters in collatz_fusion_pipeline
- Default ROOT = collatz_k_scripts (no longer the empty data/source)

**Affected files:**
- .gitignore (__pycache__, out/)
- applications/collatz/collatz_k_scripts/*.py
- applications/collatz/scripts/*.sh
- applications/collatz/scripts/README.md
This commit is contained in:
Nicolas Cantu 2026-03-02 02:49:23 +01:00
parent 9a975c73d7
commit 14ed1de36b
43 changed files with 3567 additions and 19 deletions

.gitignore

@@ -6,3 +6,10 @@ Thumbs.db
*.swp
*.swo
*~
# Python
__pycache__/
*.pyc
# Collatz outputs
applications/collatz/out/


@@ -0,0 +1,798 @@
{
"residue_to_state": {
"255": 1,
"511": 1,
"767": 1,
"1023": 1,
"1279": 1,
"1535": 1,
"1791": 1,
"2047": 1,
"2303": 1,
"2559": 1,
"2815": 1,
"3071": 1,
"3327": 1,
"3583": 1,
"3839": 1,
"4095": 1,
"127": 2,
"639": 2,
"1151": 2,
"1663": 2,
"2175": 2,
"2687": 2,
"3199": 2,
"3711": 2,
"447": 3,
"959": 3,
"1471": 3,
"1983": 3,
"2495": 3,
"3007": 3,
"3519": 3,
"4031": 3,
"159": 4,
"671": 4,
"1183": 4,
"1695": 4,
"2207": 4,
"2719": 4,
"3231": 4,
"3743": 4,
"239": 5,
"751": 5,
"1263": 5,
"1775": 5,
"2287": 5,
"2799": 5,
"3311": 5,
"3823": 5,
"359": 6,
"871": 6,
"1383": 6,
"1895": 6,
"2407": 6,
"2919": 6,
"3431": 6,
"3943": 6,
"283": 7,
"795": 7,
"1307": 7,
"1819": 7,
"2331": 7,
"2843": 7,
"3355": 7,
"3867": 7,
"895": 8,
"1919": 8,
"2943": 8,
"3967": 8,
"703": 9,
"1727": 9,
"2751": 9,
"3775": 9,
"319": 10,
"1343": 10,
"2367": 10,
"3391": 10,
"415": 11,
"1439": 11,
"2463": 11,
"3487": 11,
"31": 12,
"1055": 12,
"2079": 12,
"3103": 12,
"479": 13,
"1503": 13,
"2527": 13,
"3551": 13,
"495": 14,
"1519": 14,
"2543": 14,
"3567": 14,
"111": 15,
"1135": 15,
"2159": 15,
"3183": 15,
"559": 16,
"1583": 16,
"2607": 16,
"3631": 16,
"103": 17,
"1127": 17,
"2151": 17,
"3175": 17,
"743": 18,
"1767": 18,
"2791": 18,
"3815": 18,
"167": 19,
"1191": 19,
"2215": 19,
"3239": 19,
"327": 20,
"1351": 20,
"2375": 20,
"3399": 20,
"27": 21,
"1051": 21,
"2075": 21,
"3099": 21,
"667": 22,
"1691": 22,
"2715": 22,
"3739": 22,
"91": 23,
"1115": 23,
"2139": 23,
"3163": 23,
"251": 24,
"1275": 24,
"2299": 24,
"3323": 24,
"1407": 25,
"3455": 25,
"1215": 26,
"3263": 26,
"831": 27,
"2879": 27,
"63": 28,
"2111": 28,
"1951": 29,
"3999": 29,
"1567": 30,
"3615": 30,
"991": 31,
"3039": 31,
"223": 32,
"2271": 32,
"1007": 33,
"3055": 33,
"623": 34,
"2671": 34,
"47": 35,
"2095": 35,
"1327": 36,
"3375": 36,
"1639": 37,
"3687": 37,
"1255": 38,
"3303": 38,
"679": 39,
"2727": 39,
"1959": 40,
"4007": 40,
"839": 41,
"2887": 41,
"71": 42,
"2119": 42,
"1563": 43,
"3611": 43,
"1179": 44,
"3227": 44,
"603": 45,
"2651": 45,
"1883": 46,
"3931": 46,
"763": 47,
"2811": 47,
"2043": 48,
"4091": 48,
"191": 49,
"3135": 50,
"927": 51,
"1247": 52,
"4079": 53,
"303": 54,
"2663": 55,
"2983": 56,
"1095": 57,
"539": 58,
"859": 59,
"3067": 60
},
"state_table": [
{
"État": 1,
"Mot (a0..a6)": "1 1 1 1 1 1 1",
"Somme A": 7,
"Effectif": 16,
"C7": 2059,
"D8 = 3*C7 + 2^A": "6305",
"n7 mod 3": 2,
"n7 mod 2187": 2186
},
{
"État": 2,
"Mot (a0..a6)": "1 1 1 1 1 1 2",
"Somme A": 8,
"Effectif": 8,
"C7": 2059,
"D8 = 3*C7 + 2^A": "6433",
"n7 mod 3": 1,
"n7 mod 2187": 1093
},
{
"État": 3,
"Mot (a0..a6)": "1 1 1 1 1 2 1",
"Somme A": 8,
"Effectif": 8,
"C7": 2123,
"D8 = 3*C7 + 2^A": "6625",
"n7 mod 3": 2,
"n7 mod 2187": 1640
},
{
"État": 4,
"Mot (a0..a6)": "1 1 1 1 2 1 1",
"Somme A": 8,
"Effectif": 8,
"C7": 2219,
"D8 = 3*C7 + 2^A": "6913",
"n7 mod 3": 2,
"n7 mod 2187": 1367
},
{
"État": 5,
"Mot (a0..a6)": "1 1 1 2 1 1 1",
"Somme A": 8,
"Effectif": 8,
"C7": 2363,
"D8 = 3*C7 + 2^A": "7345",
"n7 mod 3": 2,
"n7 mod 2187": 2051
},
{
"État": 6,
"Mot (a0..a6)": "1 1 2 1 1 1 1",
"Somme A": 8,
"Effectif": 8,
"C7": 2579,
"D8 = 3*C7 + 2^A": "7993",
"n7 mod 3": 2,
"n7 mod 2187": 890
},
{
"État": 7,
"Mot (a0..a6)": "1 2 1 1 1 1 1",
"Somme A": 8,
"Effectif": 8,
"C7": 2903,
"D8 = 3*C7 + 2^A": "8965",
"n7 mod 3": 2,
"n7 mod 2187": 242
},
{
"État": 8,
"Mot (a0..a6)": "1 1 1 1 1 1 3",
"Somme A": 9,
"Effectif": 4,
"C7": 2059,
"D8 = 3*C7 + 2^A": "6689",
"n7 mod 3": 2,
"n7 mod 2187": 1640
},
{
"État": 9,
"Mot (a0..a6)": "1 1 1 1 1 2 2",
"Somme A": 9,
"Effectif": 4,
"C7": 2123,
"D8 = 3*C7 + 2^A": "6881",
"n7 mod 3": 1,
"n7 mod 2187": 820
},
{
"État": 10,
"Mot (a0..a6)": "1 1 1 1 1 3 1",
"Somme A": 9,
"Effectif": 4,
"C7": 2251,
"D8 = 3*C7 + 2^A": "7265",
"n7 mod 3": 2,
"n7 mod 2187": 1367
},
{
"État": 11,
"Mot (a0..a6)": "1 1 1 1 2 1 2",
"Somme A": 9,
"Effectif": 4,
"C7": 2219,
"D8 = 3*C7 + 2^A": "7169",
"n7 mod 3": 1,
"n7 mod 2187": 1777
},
{
"État": 12,
"Mot (a0..a6)": "1 1 1 1 2 2 1",
"Somme A": 9,
"Effectif": 4,
"C7": 2347,
"D8 = 3*C7 + 2^A": "7553",
"n7 mod 3": 2,
"n7 mod 2187": 137
},
{
"État": 13,
"Mot (a0..a6)": "1 1 1 1 3 1 1",
"Somme A": 9,
"Effectif": 4,
"C7": 2539,
"D8 = 3*C7 + 2^A": "8129",
"n7 mod 3": 2,
"n7 mod 2187": 2051
},
{
"État": 14,
"Mot (a0..a6)": "1 1 1 2 1 1 2",
"Somme A": 9,
"Effectif": 4,
"C7": 2363,
"D8 = 3*C7 + 2^A": "7601",
"n7 mod 3": 1,
"n7 mod 2187": 2119
},
{
"État": 15,
"Mot (a0..a6)": "1 1 1 2 1 2 1",
"Somme A": 9,
"Effectif": 4,
"C7": 2491,
"D8 = 3*C7 + 2^A": "7985",
"n7 mod 3": 2,
"n7 mod 2187": 479
},
{
"État": 16,
"Mot (a0..a6)": "1 1 1 2 2 1 1",
"Somme A": 9,
"Effectif": 4,
"C7": 2683,
"D8 = 3*C7 + 2^A": "8561",
"n7 mod 3": 2,
"n7 mod 2187": 206
},
{
"État": 17,
"Mot (a0..a6)": "1 1 2 1 1 1 2",
"Somme A": 9,
"Effectif": 4,
"C7": 2579,
"D8 = 3*C7 + 2^A": "8249",
"n7 mod 3": 1,
"n7 mod 2187": 445
},
{
"État": 18,
"Mot (a0..a6)": "1 1 2 1 1 2 1",
"Somme A": 9,
"Effectif": 4,
"C7": 2707,
"D8 = 3*C7 + 2^A": "8633",
"n7 mod 3": 2,
"n7 mod 2187": 992
},
{
"État": 19,
"Mot (a0..a6)": "1 1 2 1 2 1 1",
"Somme A": 9,
"Effectif": 4,
"C7": 2899,
"D8 = 3*C7 + 2^A": "9209",
"n7 mod 3": 2,
"n7 mod 2187": 719
},
{
"État": 20,
"Mot (a0..a6)": "1 1 2 2 1 1 1",
"Somme A": 9,
"Effectif": 4,
"C7": 3187,
"D8 = 3*C7 + 2^A": "10073",
"n7 mod 3": 2,
"n7 mod 2187": 1403
},
{
"État": 21,
"Mot (a0..a6)": "1 2 1 1 1 1 2",
"Somme A": 9,
"Effectif": 4,
"C7": 2903,
"D8 = 3*C7 + 2^A": "9221",
"n7 mod 3": 1,
"n7 mod 2187": 121
},
{
"État": 22,
"Mot (a0..a6)": "1 2 1 1 1 2 1",
"Somme A": 9,
"Effectif": 4,
"C7": 3031,
"D8 = 3*C7 + 2^A": "9605",
"n7 mod 3": 2,
"n7 mod 2187": 668
},
{
"État": 23,
"Mot (a0..a6)": "1 2 1 1 2 1 1",
"Somme A": 9,
"Effectif": 4,
"C7": 3223,
"D8 = 3*C7 + 2^A": "10181",
"n7 mod 3": 2,
"n7 mod 2187": 395
},
{
"État": 24,
"Mot (a0..a6)": "1 2 1 2 1 1 1",
"Somme A": 9,
"Effectif": 4,
"C7": 3511,
"D8 = 3*C7 + 2^A": "11045",
"n7 mod 3": 2,
"n7 mod 2187": 1079
},
{
"État": 25,
"Mot (a0..a6)": "1 1 1 1 1 1 4",
"Somme A": 10,
"Effectif": 2,
"C7": 2059,
"D8 = 3*C7 + 2^A": "7201",
"n7 mod 3": 1,
"n7 mod 2187": 820
},
{
"État": 26,
"Mot (a0..a6)": "1 1 1 1 1 2 3",
"Somme A": 10,
"Effectif": 2,
"C7": 2123,
"D8 = 3*C7 + 2^A": "7393",
"n7 mod 3": 2,
"n7 mod 2187": 410
},
{
"État": 27,
"Mot (a0..a6)": "1 1 1 1 1 3 2",
"Somme A": 10,
"Effectif": 2,
"C7": 2251,
"D8 = 3*C7 + 2^A": "7777",
"n7 mod 3": 1,
"n7 mod 2187": 1777
},
{
"État": 28,
"Mot (a0..a6)": "1 1 1 1 1 4 1",
"Somme A": 10,
"Effectif": 2,
"C7": 2507,
"D8 = 3*C7 + 2^A": "8545",
"n7 mod 3": 2,
"n7 mod 2187": 137
},
{
"État": 29,
"Mot (a0..a6)": "1 1 1 1 2 1 3",
"Somme A": 10,
"Effectif": 2,
"C7": 2219,
"D8 = 3*C7 + 2^A": "7681",
"n7 mod 3": 2,
"n7 mod 2187": 1982
},
{
"État": 30,
"Mot (a0..a6)": "1 1 1 1 2 2 2",
"Somme A": 10,
"Effectif": 2,
"C7": 2347,
"D8 = 3*C7 + 2^A": "8065",
"n7 mod 3": 1,
"n7 mod 2187": 1162
},
{
"État": 31,
"Mot (a0..a6)": "1 1 1 1 3 1 2",
"Somme A": 10,
"Effectif": 2,
"C7": 2539,
"D8 = 3*C7 + 2^A": "8641",
"n7 mod 3": 1,
"n7 mod 2187": 2119
},
{
"État": 32,
"Mot (a0..a6)": "1 1 1 1 3 2 1",
"Somme A": 10,
"Effectif": 2,
"C7": 2795,
"D8 = 3*C7 + 2^A": "9409",
"n7 mod 3": 2,
"n7 mod 2187": 479
},
{
"État": 33,
"Mot (a0..a6)": "1 1 1 2 1 1 3",
"Somme A": 10,
"Effectif": 2,
"C7": 2363,
"D8 = 3*C7 + 2^A": "8113",
"n7 mod 3": 2,
"n7 mod 2187": 2153
},
{
"État": 34,
"Mot (a0..a6)": "1 1 1 2 1 2 2",
"Somme A": 10,
"Effectif": 2,
"C7": 2491,
"D8 = 3*C7 + 2^A": "8497",
"n7 mod 3": 1,
"n7 mod 2187": 1333
},
{
"État": 35,
"Mot (a0..a6)": "1 1 1 2 2 1 2",
"Somme A": 10,
"Effectif": 2,
"C7": 2683,
"D8 = 3*C7 + 2^A": "9073",
"n7 mod 3": 1,
"n7 mod 2187": 103
},
{
"État": 36,
"Mot (a0..a6)": "1 1 1 2 2 2 1",
"Somme A": 10,
"Effectif": 2,
"C7": 2939,
"D8 = 3*C7 + 2^A": "9841",
"n7 mod 3": 2,
"n7 mod 2187": 650
},
{
"État": 37,
"Mot (a0..a6)": "1 1 2 1 1 1 3",
"Somme A": 10,
"Effectif": 2,
"C7": 2579,
"D8 = 3*C7 + 2^A": "8761",
"n7 mod 3": 2,
"n7 mod 2187": 1316
},
{
"État": 38,
"Mot (a0..a6)": "1 1 2 1 1 2 2",
"Somme A": 10,
"Effectif": 2,
"C7": 2707,
"D8 = 3*C7 + 2^A": "9145",
"n7 mod 3": 1,
"n7 mod 2187": 496
},
{
"État": 39,
"Mot (a0..a6)": "1 1 2 1 2 1 2",
"Somme A": 10,
"Effectif": 2,
"C7": 2899,
"D8 = 3*C7 + 2^A": "9721",
"n7 mod 3": 1,
"n7 mod 2187": 1453
},
{
"État": 40,
"Mot (a0..a6)": "1 1 2 1 2 2 1",
"Somme A": 10,
"Effectif": 2,
"C7": 3155,
"D8 = 3*C7 + 2^A": "10489",
"n7 mod 3": 2,
"n7 mod 2187": 2000
},
{
"État": 41,
"Mot (a0..a6)": "1 1 2 2 1 1 2",
"Somme A": 10,
"Effectif": 2,
"C7": 3187,
"D8 = 3*C7 + 2^A": "10585",
"n7 mod 3": 1,
"n7 mod 2187": 1795
},
{
"État": 42,
"Mot (a0..a6)": "1 1 2 2 1 2 1",
"Somme A": 10,
"Effectif": 2,
"C7": 3443,
"D8 = 3*C7 + 2^A": "11353",
"n7 mod 3": 2,
"n7 mod 2187": 155
},
{
"État": 43,
"Mot (a0..a6)": "1 2 1 1 1 1 3",
"Somme A": 10,
"Effectif": 2,
"C7": 2903,
"D8 = 3*C7 + 2^A": "9733",
"n7 mod 3": 2,
"n7 mod 2187": 1154
},
{
"État": 44,
"Mot (a0..a6)": "1 2 1 1 1 2 2",
"Somme A": 10,
"Effectif": 2,
"C7": 3031,
"D8 = 3*C7 + 2^A": "10117",
"n7 mod 3": 1,
"n7 mod 2187": 334
},
{
"État": 45,
"Mot (a0..a6)": "1 2 1 1 2 1 2",
"Somme A": 10,
"Effectif": 2,
"C7": 3223,
"D8 = 3*C7 + 2^A": "10693",
"n7 mod 3": 1,
"n7 mod 2187": 1291
},
{
"État": 46,
"Mot (a0..a6)": "1 2 1 1 2 2 1",
"Somme A": 10,
"Effectif": 2,
"C7": 3479,
"D8 = 3*C7 + 2^A": "11461",
"n7 mod 3": 2,
"n7 mod 2187": 1838
},
{
"État": 47,
"Mot (a0..a6)": "1 2 1 2 1 1 2",
"Somme A": 10,
"Effectif": 2,
"C7": 3511,
"D8 = 3*C7 + 2^A": "11557",
"n7 mod 3": 1,
"n7 mod 2187": 1633
},
{
"État": 48,
"Mot (a0..a6)": "1 2 1 2 1 2 1",
"Somme A": 10,
"Effectif": 2,
"C7": 3767,
"D8 = 3*C7 + 2^A": "12325",
"n7 mod 3": 2,
"n7 mod 2187": 2180
},
{
"État": 49,
"Mot (a0..a6)": "1 1 1 1 1 2 4",
"Somme A": 11,
"Effectif": 1,
"C7": 2123,
"D8 = 3*C7 + 2^A": "8417",
"n7 mod 3": 1,
"n7 mod 2187": 205
},
{
"État": 50,
"Mot (a0..a6)": "1 1 1 1 1 4 2",
"Somme A": 11,
"Effectif": 1,
"C7": 2507,
"D8 = 3*C7 + 2^A": "9569",
"n7 mod 3": 1,
"n7 mod 2187": 1162
},
{
"État": 51,
"Mot (a0..a6)": "1 1 1 1 2 1 4",
"Somme A": 11,
"Effectif": 1,
"C7": 2219,
"D8 = 3*C7 + 2^A": "8705",
"n7 mod 3": 1,
"n7 mod 2187": 991
},
{
"État": 52,
"Mot (a0..a6)": "1 1 1 1 3 2 2",
"Somme A": 11,
"Effectif": 1,
"C7": 2795,
"D8 = 3*C7 + 2^A": "10433",
"n7 mod 3": 1,
"n7 mod 2187": 1333
},
{
"État": 53,
"Mot (a0..a6)": "1 1 1 2 1 1 4",
"Somme A": 11,
"Effectif": 1,
"C7": 2363,
"D8 = 3*C7 + 2^A": "9137",
"n7 mod 3": 1,
"n7 mod 2187": 2170
},
{
"État": 54,
"Mot (a0..a6)": "1 1 1 2 2 2 2",
"Somme A": 11,
"Effectif": 1,
"C7": 2939,
"D8 = 3*C7 + 2^A": "10865",
"n7 mod 3": 1,
"n7 mod 2187": 325
},
{
"État": 55,
"Mot (a0..a6)": "1 1 2 1 1 1 4",
"Somme A": 11,
"Effectif": 1,
"C7": 2579,
"D8 = 3*C7 + 2^A": "9785",
"n7 mod 3": 1,
"n7 mod 2187": 658
},
{
"État": 56,
"Mot (a0..a6)": "1 1 2 1 2 2 2",
"Somme A": 11,
"Effectif": 1,
"C7": 3155,
"D8 = 3*C7 + 2^A": "11513",
"n7 mod 3": 1,
"n7 mod 2187": 1000
},
{
"État": 57,
"Mot (a0..a6)": "1 1 2 2 1 2 2",
"Somme A": 11,
"Effectif": 1,
"C7": 3443,
"D8 = 3*C7 + 2^A": "12377",
"n7 mod 3": 1,
"n7 mod 2187": 1171
},
{
"État": 58,
"Mot (a0..a6)": "1 2 1 1 1 1 4",
"Somme A": 11,
"Effectif": 1,
"C7": 2903,
"D8 = 3*C7 + 2^A": "10757",
"n7 mod 3": 1,
"n7 mod 2187": 577
},
{
"État": 59,
"Mot (a0..a6)": "1 2 1 1 2 2 2",
"Somme A": 11,
"Effectif": 1,
"C7": 3479,
"D8 = 3*C7 + 2^A": "12485",
"n7 mod 3": 1,
"n7 mod 2187": 919
},
{
"État": 60,
"Mot (a0..a6)": "1 2 1 2 1 2 2",
"Somme A": 11,
"Effectif": 1,
"C7": 3767,
"D8 = 3*C7 + 2^A": "13349",
"n7 mod 3": 1,
"n7 mod 2187": 1090
}
]
}
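As a sanity check, the `D8 = 3*C7 + 2^A` relation in the state table can be verified row by row; a minimal sketch using three rows copied from the JSON above (États 1, 2 and 60):

```python
# Rows copied from the state_table above: (C7, Somme A, expected D8)
rows = [
    (2059, 7, 6305),    # État 1
    (2059, 8, 6433),    # État 2
    (3767, 11, 13349),  # État 60
]

for c7, a, d8 in rows:
    # D8 = 3*C7 + 2^A, as the column header states
    assert 3 * c7 + 2 ** a == d8
print("D8 column consistent")
```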


@@ -0,0 +1,136 @@
# -*- coding: utf-8 -*-
"""
collatz_audit.py
Audit of the covered classes from a candidates CSV.
Produces a Markdown report with sizes, distributions, and per-state impact.
Usage: --input CSV_PATH --output MD_PATH [--audit60 JSON_PATH]
"""
from __future__ import annotations
import argparse
import csv
import json
from collections import Counter
from pathlib import Path
def _find_column(row: dict, *candidates: str) -> str | None:
"""Return first matching column name from row keys."""
keys = set(row.keys())
for c in candidates:
for k in keys:
if c in k or k.replace(" ", "").lower() == c.replace(" ", "").lower():
return k
return None
def load_state_table(audit60_path: str | None) -> dict[int, str]:
"""Load state_id -> mot_7 from audit60 JSON. Returns {} if not found."""
if not audit60_path or not Path(audit60_path).exists():
return {}
try:
data = json.loads(Path(audit60_path).read_text(encoding="utf-8"))
state_table = data.get("state_table", [])
mot_key = "Mot (a0..a6)"
etat_key = "État"
return {
int(row.get(etat_key, 0)): row.get(mot_key, "")
for row in state_table
if etat_key in row
}
except (json.JSONDecodeError, KeyError):
return {}
def run_audit(csv_path: str, out_md_path: str, audit60_path: str | None = None) -> None:
"""Read CSV, produce audit markdown."""
rows: list[dict] = []
with Path(csv_path).open("r", encoding="utf-8") as f:
reader = csv.DictReader(f)
for row in reader:
rows.append(dict(row))
if not rows:
Path(out_md_path).write_text("# Audit (vide)\n\nAucune clause.\n", encoding="utf-8")
print(f"Wrote {out_md_path} (empty)")
return
classe_col = _find_column(rows[0], "classe_mod_2^m", "classe_mod_2^27", "classe_mod_2^28", "classe_mod_2")
soeur_col = _find_column(rows[0], "sœur", "soeur")
etat_col = _find_column(rows[0], "etat_id", "état_id")
clauses: set[int] = set()
covered: set[int] = set()
etat_counts: Counter[int] = Counter()
for r in rows:
if classe_col:
try:
c = int(r.get(classe_col, 0) or 0)
clauses.add(c)
covered.add(c)
except (ValueError, TypeError):
pass
if soeur_col:
try:
s = int(r.get(soeur_col, 0) or 0)
covered.add(s)
except (ValueError, TypeError):
pass
if etat_col:
try:
e = int(r.get(etat_col, 0) or 0)
etat_counts[e] += 1
except (ValueError, TypeError):
pass
n_clauses = len(clauses)
n_covered = len(covered)
name = Path(csv_path).stem
state_mot = load_state_table(audit60_path or str(Path(__file__).parent / "audit_60_etats_B12_mod4096_horizon7.json"))
lines = [
f"# Audit {name}",
"",
"## Introduction",
"",
f"Audit des clauses extraites de {Path(csv_path).name}.",
"",
"## Résultats globaux",
"",
f"- Nombre de clauses : {n_clauses}",
f"- Classes couvertes (clauses + sœurs) : {n_covered}",
f"- États distincts représentés : {len(etat_counts)}",
"",
]
if etat_counts:
lines.extend([
"## Distribution par état (60 états de base)",
"",
"| état_id | mot_7 | effectif |",
"|--------:|:------|--------:|",
])
for etat_id in sorted(etat_counts.keys(), key=lambda x: (-etat_counts[x], x)):
mot = state_mot.get(etat_id, "")
lines.append(f"| {etat_id:8} | {mot:20} | {etat_counts[etat_id]:8} |")
lines.append("")
Path(out_md_path).write_text("\n".join(lines), encoding="utf-8")
print(f"Wrote {out_md_path}: {n_clauses} clauses, {n_covered} covered")
def main() -> None:
ap = argparse.ArgumentParser(description="Audit Collatz CSV → Markdown")
ap.add_argument("--input", "-i", required=True, help="Input CSV path")
ap.add_argument("--output", "-o", required=True, help="Output Markdown path")
ap.add_argument("--audit60", help="Path to audit_60_etats JSON (optional)")
args = ap.parse_args()
run_audit(args.input, args.output, args.audit60)
if __name__ == "__main__":
main()
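The column lookup in `_find_column` tolerates header variants (substring match, or case-insensitive match once spaces are stripped); a standalone sketch of the same matching rule, with made-up headers:

```python
from __future__ import annotations


def find_column(row: dict, *candidates: str) -> str | None:
    """Return the first row key containing a candidate, or equal to it
    case-insensitively once spaces are stripped (same rule as _find_column)."""
    keys = set(row.keys())
    for c in candidates:
        for k in keys:
            if c in k or k.replace(" ", "").lower() == c.replace(" ", "").lower():
                return k
    return None


# "classe_mod_2" matches "classe_mod_2^27" by substring
assert find_column({"classe_mod_2^27": "1"}, "classe_mod_2^m", "classe_mod_2") == "classe_mod_2^27"
assert find_column({"etat_id": "4"}, "etat_id") == "etat_id"
assert find_column({"foo": "1"}, "bar") is None
```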


@@ -0,0 +1,55 @@
# -*- coding: utf-8 -*-
"""
collatz_base_projective.py
Generate base projective of 60 states (B12 mod 4096).
Uses md_to_audit_json when audit MD exists, else outputs minimal structure.
"""
from __future__ import annotations
import argparse
import json
from pathlib import Path
from md_to_audit_json import (
parse_residues_by_state,
parse_state_table,
build_residue_to_state,
)
def run(modulus: int, output_path: str, audit_md_path: str | None = None) -> None:
"""Generate base_projective JSON from audit MD or minimal structure."""
default_md = Path(__file__).parent / "audit_60_etats_B12_mod4096_horizon7.md"
md_path = Path(audit_md_path) if audit_md_path else default_md
if md_path.exists():
text = md_path.read_text(encoding="utf-8")
state_table = parse_state_table(text)
residue_by_state = parse_residues_by_state(text)
residue_to_state = build_residue_to_state(residue_by_state)
out = {
"modulus": modulus,
"residue_to_state": {k: v for k, v in residue_to_state.items()},
"state_table": state_table,
}
else:
out = {"modulus": modulus, "residue_to_state": {}, "state_table": []}
Path(output_path).write_text(
json.dumps(out, indent=2, ensure_ascii=False), encoding="utf-8"
)
print(f"Wrote {output_path}: modulus={modulus}")
def main() -> None:
ap = argparse.ArgumentParser(description="Generate base projective 60 states")
ap.add_argument("--modulus", type=int, default=4096)
ap.add_argument("--output", "-o", required=True)
ap.add_argument("--audit-md", help="Path to audit MD (default: audit_60_etats_*.md)")
args = ap.parse_args()
run(args.modulus, args.output, args.audit_md)
if __name__ == "__main__":
main()


@@ -0,0 +1,33 @@
# -*- coding: utf-8 -*-
"""
collatz_build_initial_noyau.py
Build initial R17 noyau from completion (Parents both).
Used before D10 when running palier-by-palier.
Usage: python collatz_build_initial_noyau.py --m15m16 PATH --output JSON
"""
from __future__ import annotations
import argparse
import json
from pathlib import Path
def main() -> None:
ap = argparse.ArgumentParser(description="Build R17 noyau from completion")
ap.add_argument("--m15m16", required=True, help="Path to complétion_minorée_m15_vers_m16.md")
ap.add_argument("--output", required=True, help="Output noyau JSON path")
args = ap.parse_args()
from collatz_k_pipeline import build_R17_from_completion
residues = build_R17_from_completion(args.m15m16)
out = {"noyau": residues, "palier": 17}
Path(args.output).parent.mkdir(parents=True, exist_ok=True)
Path(args.output).write_text(json.dumps(out, indent=2), encoding="utf-8")
print(f"Wrote {args.output}: {len(residues)} residues (R17)")
if __name__ == "__main__":
main()


@@ -0,0 +1,68 @@
# -*- coding: utf-8 -*-
"""
collatz_calculate_Nstar.py
Scan certificats/*.json for N0 values in clauses, compute N* = max of all N0,
write markdown with "N* = <value>".
Usage:
python collatz_calculate_Nstar.py --certificats-dir DIR --output MD_PATH
"""
from __future__ import annotations
import argparse
import json
from pathlib import Path
def scan_n0_values(certificats_dir: Path) -> list[int]:
"""Scan certificat_*.json and candidats_*.csv for N0 values."""
import csv
n0_values: list[int] = []
for p in sorted(certificats_dir.glob("certificat_*.json")):
data = json.loads(p.read_text(encoding="utf-8"))
clauses = data.get("clauses", data.get("closed", []))
for c in clauses:
if isinstance(c, dict) and "N0" in c:
n0_values.append(int(c["N0"]))
parent = certificats_dir.resolve().parent
for search_dir in (parent, parent / "candidats"):
if not search_dir.is_dir():
continue
for p in sorted(search_dir.glob("candidats_*.csv")):
with p.open("r", encoding="utf-8") as f:
reader = csv.DictReader(f)
for row in reader:
n0_col = next((k for k in row if "N0" in k or k == "N0"), None)
if n0_col and row.get(n0_col):
try:
n0_values.append(int(row[n0_col]))
except ValueError:
pass
return n0_values
def main() -> None:
parser = argparse.ArgumentParser(
description="Compute N* = max of all N0 from certificat JSON files"
)
parser.add_argument("--certificats-dir", required=True, help="Directory containing certificat_*.json")
parser.add_argument("--output", required=True, help="Output markdown path")
args = parser.parse_args()
cert_dir = Path(args.certificats_dir)
if not cert_dir.is_dir():
raise SystemExit(f"Not a directory: {cert_dir}")
n0_values = scan_n0_values(cert_dir)
nstar = max(n0_values) if n0_values else 0
out_path = Path(args.output)
out_path.parent.mkdir(parents=True, exist_ok=True)
out_path.write_text(f"N* = {nstar}\n", encoding="utf-8")
if __name__ == "__main__":
main()
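The N* reduction above is just a max over every N0 found in the certificat files; a self-contained sketch using hypothetical `certificat_*.json` files in the layout `scan_n0_values` reads:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical certificat files with N0 clauses (illustrative values)
with tempfile.TemporaryDirectory() as d:
    Path(d, "certificat_D18.json").write_text(
        json.dumps({"clauses": [{"N0": 41}, {"N0": 97}]}), encoding="utf-8")
    Path(d, "certificat_D19.json").write_text(
        json.dumps({"clauses": [{"N0": 12}]}), encoding="utf-8")
    n0_values = [
        int(c["N0"])
        for p in sorted(Path(d).glob("certificat_*.json"))
        for c in json.loads(p.read_text(encoding="utf-8")).get("clauses", [])
        if isinstance(c, dict) and "N0" in c
    ]
    # N* = max of all N0 seen across certificats
    nstar = max(n0_values) if n0_values else 0
print(f"N* = {nstar}")
```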


@@ -0,0 +1,90 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
collatz_compare_verifiers.py
Compare two verification outputs (MD files from verifiers).
Reports differences.
Usage: python collatz_compare_verifiers.py --verif1 MD --verif2 MD --output MD
"""
from __future__ import annotations
import argparse
from pathlib import Path
def parse_verification_output(path: str) -> tuple[str, list[str]]:
"""Parse verification MD: status (OK/failed) and list of error lines."""
p = Path(path)
if not p.exists():
return "missing", [f"File not found: {path}"]
text = p.read_text(encoding="utf-8")
lines = text.strip().splitlines()
status = "OK" if "Verification OK" in text else "failed"
errors: list[str] = []
for ln in lines:
if ln.strip().startswith("- "):
errors.append(ln.strip()[2:])
return status, errors
def compare(verif1: str, verif2: str) -> list[str]:
"""Compare two verification outputs. Returns list of difference descriptions."""
diffs: list[str] = []
s1, e1 = parse_verification_output(verif1)
s2, e2 = parse_verification_output(verif2)
if s1 != s2:
diffs.append(f"Status mismatch: verif1={s1}, verif2={s2}")
set1 = set(e1)
set2 = set(e2)
only_in_1 = set1 - set2
only_in_2 = set2 - set1
if only_in_1:
diffs.append(f"Only in verif1 ({len(only_in_1)}):")
for x in sorted(only_in_1)[:10]:
diffs.append(f" - {x}")
if len(only_in_1) > 10:
diffs.append(f" ... and {len(only_in_1) - 10} more")
if only_in_2:
diffs.append(f"Only in verif2 ({len(only_in_2)}):")
for x in sorted(only_in_2)[:10]:
diffs.append(f" - {x}")
if len(only_in_2) > 10:
diffs.append(f" ... and {len(only_in_2) - 10} more")
return diffs
def run(verif1_path: str, verif2_path: str, output_path: str) -> None:
"""Compare and write output."""
diffs = compare(verif1_path, verif2_path)
if diffs:
content = "# Verification comparison: differences\n\n" + "\n".join(diffs)
else:
content = "# Verification comparison: identical\n\n"
Path(output_path).write_text(content, encoding="utf-8")
def main() -> None:
parser = argparse.ArgumentParser(description="Compare two verification outputs")
parser.add_argument("--verif1", required=True, help="Path to first verification MD")
parser.add_argument("--verif2", required=True, help="Path to second verification MD")
parser.add_argument("--output", required=True, help="Path to output MD file")
args = parser.parse_args()
run(args.verif1, args.verif2, args.output)
if __name__ == "__main__":
main()


@@ -0,0 +1,65 @@
# -*- coding: utf-8 -*-
"""
collatz_compute_survival_rate.py
Compute survival rate (coefficient of contraction) for a noyau.
Outputs |B_m| and q = |B_m| / (2 * |R_{m-1}|) when previous noyau is available.
Usage: python collatz_compute_survival_rate.py --noyau PATH [--previous PATH]
"""
from __future__ import annotations
import argparse
import json
from pathlib import Path
def load_noyau(path: str) -> tuple[list[int], int]:
"""Load noyau from JSON, return (residues, palier)."""
data = json.loads(Path(path).read_text(encoding="utf-8"))
if isinstance(data, list):
residues = [int(x) for x in data]
palier = max(n.bit_length() for n in residues) if residues else 0
return residues, palier
if isinstance(data, dict):
for key in ("noyau", "residues", "uncovered"):
if key in data and isinstance(data[key], list):
residues = [int(x) for x in data[key]]
palier = data.get("palier", max(n.bit_length() for n in residues) if residues else 0)
return residues, palier
raise ValueError(f"Cannot load noyau from {path}")
def main() -> None:
ap = argparse.ArgumentParser(description="Compute survival rate of noyau")
ap.add_argument("--noyau", required=True, help="Path to noyau JSON")
ap.add_argument("--previous", help="Path to previous noyau (for q = |B_m| / 2|R_{m-1}|)")
args = ap.parse_args()
residues, palier = load_noyau(args.noyau)
size = len(residues)
lines = [
f"# Survival rate: {Path(args.noyau).name}",
"",
f"- |B_{{m}}| = {size}",
f"- palier m ≈ {palier}",
"",
]
if args.previous and Path(args.previous).exists():
prev_residues, prev_palier = load_noyau(args.previous)
prev_size = len(prev_residues)
denom = 2 * prev_size
q = size / denom if denom else 0
lines.extend([
f"- |R_{{m-1}}| = {prev_size}",
f"- q = |B_m| / (2|R_{{m-1}}|) = {q:.6f}",
"",
])
print("\n".join(lines))
if __name__ == "__main__":
main()
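One reading of q, assuming each residue class mod 2^{m-1} lifts to two classes mod 2^m: q = |B_m| / (2·|R_{m-1}|) is then the fraction of lifts that survive into the next noyau. A toy computation with made-up sizes:

```python
# Made-up sizes standing in for the noyau at palier m-1 and m
prev_size = 1000   # |R_{m-1}|
curr_size = 700    # |B_m|

# q = |B_m| / (2 * |R_{m-1}|), as in the script above
q = curr_size / (2 * prev_size)
assert q == 0.35   # q < 1: contraction from one palier to the next
```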


@@ -0,0 +1,80 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
collatz_document_base_states.py
If JSON has state info (e.g. per-palier state), document each state.
Outputs MD describing each state.
Usage: python collatz_document_base_states.py --base-projective JSON --output MD
"""
from __future__ import annotations
import argparse
import json
from pathlib import Path
from typing import Any
def load_json(path: str) -> dict[str, Any]:
"""Load JSON file."""
p = Path(path)
if not p.exists():
raise FileNotFoundError(f"File not found: {path}")
return json.loads(p.read_text(encoding="utf-8"))
def is_state_info(obj: Any) -> bool:
"""Check if object looks like a state (has m, mod, etc.)."""
if not isinstance(obj, dict):
return False
return "m" in obj or "mod" in obj
def document_states(data: dict[str, Any]) -> list[str]:
"""Extract and document each state from JSON."""
lines: list[str] = ["# Base states documentation", ""]
for key, val in sorted(data.items()):
if not is_state_info(val):
continue
val = val if isinstance(val, dict) else {}
lines.append(f"## State {key}")
lines.append("")
for k, v in sorted(val.items()):
if isinstance(v, list) and len(v) > 10:
lines.append(f"- **{k}**: {len(v)} items")
else:
lines.append(f"- **{k}**: {v}")
lines.append("")
if len(lines) <= 2:
lines.append("No state information found in JSON.")
return lines
def run(base_path: str, output_path: str) -> None:
"""Run and write output."""
try:
data = load_json(base_path)
lines = document_states(data)
except FileNotFoundError as e:
lines = ["# Base states documentation", "", str(e)]
except json.JSONDecodeError as e:
lines = ["# Base states documentation", "", f"Invalid JSON: {e}"]
Path(output_path).write_text("\n".join(lines) + "\n", encoding="utf-8")
def main() -> None:
parser = argparse.ArgumentParser(description="Document base states from JSON")
parser.add_argument("--base-projective", required=True, help="Path to base-projective JSON")
parser.add_argument("--output", required=True, help="Path to output MD file")
args = parser.parse_args()
run(args.base_projective, args.output)
if __name__ == "__main__":
main()


@@ -0,0 +1,141 @@
# -*- coding: utf-8 -*-
"""
collatz_fusion_pipeline.py
F(t) fusion pipeline over a given noyau.
Loads the noyau JSON, calls build_fusion_clauses for each horizon,
and merges the outputs into a single CSV.
CLI: --horizons 11,12,14 --palier 25 --input-noyau PATH --output CSV_PATH [--audit60 PATH]
"""
from __future__ import annotations
from collections import Counter
from pathlib import Path
import argparse
import csv
import json
import tempfile
from collatz_k_fusion import build_fusion_clauses
from collatz_k_pipeline import load_state_map_60
def load_noyau(path: str) -> list[int]:
"""Load noyau from JSON: list of residues or dict with R{palier}_after / noyau / residues."""
data = json.loads(Path(path).read_text(encoding="utf-8"))
if isinstance(data, list):
return [int(x) for x in data]
if isinstance(data, dict):
for key in ("R25_after", "R24_after", "noyau", "residues", "uncovered"):
if key in data and isinstance(data[key], list):
return [int(x) for x in data[key]]
raise ValueError(f"Noyau JSON: no known key (R25_after, noyau, residues, uncovered) in {list(data.keys())}")
raise ValueError("Noyau JSON must be a list or dict with residue list")
def _filter_residues_critique(residues: list[int], res_to_state: dict[int, int]) -> list[int]:
"""Filter residues to those in states with highest count (critical coverage)."""
state_counts: Counter[int] = Counter()
for r in residues:
base = r % 4096
sid = res_to_state.get(base, 0)
state_counts[sid] += 1
if not state_counts:
return residues
threshold = max(state_counts.values()) * 0.5
critical_states = {s for s, c in state_counts.items() if c >= threshold}
return [r for r in residues if res_to_state.get(r % 4096, 0) in critical_states]
def run_fusion_pipeline(
horizons: list[int],
palier: int,
input_noyau: str,
output_csv: str,
audit60_json: str,
cible: str | None = None,
modulo: int | None = None,
) -> None:
residues = load_noyau(input_noyau)
res_to_state, state_mot7 = load_state_map_60(audit60_json)
if modulo is not None:
residues = [r for r in residues if r % modulo == 0]
print(f"Modulo {modulo} filter: {len(residues)} residues")
if cible == "critique":
residues = _filter_residues_critique(residues, res_to_state)
print(f"Cible critique filter: {len(residues)} residues")
out_path = Path(output_csv)
out_path.parent.mkdir(parents=True, exist_ok=True)
all_rows: list[dict] = []
for t in horizons:
with tempfile.NamedTemporaryFile(mode="w", suffix=".csv", delete=False) as f_csv:
tmp_csv = f_csv.name
with tempfile.NamedTemporaryFile(mode="w", suffix=".md", delete=False) as f_md:
tmp_md = f_md.name
try:
build_fusion_clauses(
residues,
t,
res_to_state,
state_mot7,
tmp_md,
tmp_csv,
palier,
)
with Path(tmp_csv).open("r", encoding="utf-8") as f:
if Path(tmp_csv).stat().st_size > 0:
reader = csv.DictReader(f)
for row in reader:
row["horizon_t"] = t
all_rows.append(row)
finally:
Path(tmp_csv).unlink(missing_ok=True)
Path(tmp_md).unlink(missing_ok=True)
with out_path.open("w", newline="", encoding="utf-8") as f:
if all_rows:
fieldnames = ["horizon_t"] + [k for k in all_rows[0].keys() if k != "horizon_t"]
w = csv.DictWriter(f, fieldnames=fieldnames, extrasaction="ignore")
w.writeheader()
for row in all_rows:
w.writerow(row)
else:
f.write("horizon_t,classe_mod_2^m,m,t,a,A_t,mot_a0..,C_t,y,y_mod_3,DeltaF,Nf,preimage_m,etat_id,base_mod_4096\n")
print(f"Wrote merged fusion CSV: {out_path} ({len(all_rows)} rows)")
def main() -> None:
ap = argparse.ArgumentParser(description="Fusion pipeline: build fusion clauses and merge to CSV")
ap.add_argument("--horizons", required=True, help="Comma-separated horizons, e.g. 11,12,14")
ap.add_argument("--palier", type=int, required=True, help="Modulus power (e.g. 25 for 2^25)")
ap.add_argument("--input-noyau", required=True, help="Path to noyau JSON (list of residues or R*_after)")
ap.add_argument("--output", required=True, help="Path to output merged CSV")
ap.add_argument(
"--audit60",
default="audit_60_etats_B12_mod4096_horizon7.json",
help="Path to audit 60 états JSON (residue_to_state, state_table)",
)
ap.add_argument("--cible", help="Target filter, e.g. critique")
ap.add_argument("--modulo", type=int, help="Filter residues by modulo (e.g. 9)")
args = ap.parse_args()
horizons = [int(h.strip()) for h in args.horizons.split(",")]
run_fusion_pipeline(
horizons=horizons,
palier=args.palier,
input_noyau=args.input_noyau,
output_csv=args.output,
audit60_json=args.audit60,
cible=args.cible,
modulo=args.modulo,
)
if __name__ == "__main__":
main()
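The `--cible=critique` filter keeps residues whose base state (mod 4096) accounts for at least half of the largest state count; a standalone sketch of the same rule as `_filter_residues_critique`, with a made-up residue-to-state map:

```python
from collections import Counter


def filter_critique(residues: list[int], res_to_state: dict[int, int]) -> list[int]:
    """Keep residues whose state count is >= 0.5 * max state count."""
    counts = Counter(res_to_state.get(r % 4096, 0) for r in residues)
    if not counts:
        return residues
    threshold = max(counts.values()) * 0.5
    critical = {s for s, c in counts.items() if c >= threshold}
    return [r for r in residues if res_to_state.get(r % 4096, 0) in critical]


# Made-up map: residues 1, 2 -> state 10; 3 -> state 20; 5 -> state 30
res_to_state = {1: 10, 2: 10, 3: 20, 5: 30}
residues = [1, 2, 3, 5, 4097]  # 4097 % 4096 == 1 -> state 10
# State 10 appears 3 times, states 20 and 30 once each; threshold = 1.5
assert filter_critique(residues, res_to_state) == [1, 2, 4097]
```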


@@ -0,0 +1,51 @@
# -*- coding: utf-8 -*-
"""
collatz_generate_audit_report.py
Generate audit report. Outputs Markdown; for PDF use pandoc or weasyprint.
Usage: python collatz_generate_audit_report.py --audits-dir DIR --output PATH
"""
from __future__ import annotations
import argparse
from pathlib import Path
def main() -> None:
ap = argparse.ArgumentParser(description="Generate audit report from audits directory")
ap.add_argument("--audits-dir", required=True, help="Directory containing audit MD files")
ap.add_argument("--output", required=True, help="Output path (.md or .pdf)")
args = ap.parse_args()
audits_dir = Path(args.audits_dir)
out_path = Path(args.output)
out_path.parent.mkdir(parents=True, exist_ok=True)
lines = ["# Rapport d'audit final", ""]
for p in sorted(audits_dir.glob("*.md")):
lines.append(f"## {p.stem}")
lines.append("")
lines.append(f"Source: {p.name}")
lines.append("")
content = p.read_text(encoding="utf-8")
lines.append(content[:2000] + ("..." if len(content) > 2000 else ""))
lines.append("")
md_content = "\n".join(lines)
if out_path.suffix.lower() == ".pdf":
out_md = out_path.with_suffix(".md")
out_md.write_text(md_content, encoding="utf-8")
try:
import subprocess
subprocess.run(["pandoc", str(out_md), "-o", str(out_path)], check=True)
except (FileNotFoundError, subprocess.CalledProcessError):
out_path = out_md
else:
out_path.write_text(md_content, encoding="utf-8")
print(f"Wrote {out_path}")
if __name__ == "__main__":
main()


@ -0,0 +1,79 @@
# -*- coding: utf-8 -*-
"""
collatz_generate_base_validation.py
Generate base validation report (trajectories 1 to N*). Outputs Markdown; for PDF use pandoc.
Usage: python collatz_generate_base_validation.py --Nstar N --output PATH
"""
from __future__ import annotations
import argparse
from pathlib import Path
from collatz_k_core import U_step
def collatz_steps(n: int) -> int:
"""Number of steps to reach 1 (accelerated on odds)."""
steps = 0
x = n
while x > 1:
if x % 2 == 0:
x //= 2
else:
x, _ = U_step(x)
steps += 1
return steps
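collatz_k_core is not included in this diff; the snippet below is a hypothetical stand-in for the accelerated odd step U_step is assumed to perform (3x+1 followed by stripping all factors of 2), mirroring the loop in collatz_steps:

```python
def u_step(x: int) -> int:
    # Hypothetical stand-in for U_step: 3x+1, then remove all factors of 2.
    y = 3 * x + 1
    while y % 2 == 0:
        y //= 2
    return y

def steps_to_one(n: int) -> int:
    # Same shape as collatz_steps above: halve evens, accelerated step on odds.
    count = 0
    x = n
    while x > 1:
        x = x // 2 if x % 2 == 0 else u_step(x)
        count += 1
    return count

print(steps_to_one(7))
```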
def main() -> None:
ap = argparse.ArgumentParser(description="Generate base validation (1 to N*)")
ap.add_argument("--Nstar", type=int, required=True, help="Upper bound N*")
ap.add_argument("--output", required=True, help="Output path (.md or .pdf)")
args = ap.parse_args()
lines = [
"# Validation de base",
"",
"## Introduction",
"",
f"Vérification des trajectoires pour n = 1 à {args.Nstar}.",
"",
"## Résultats",
"",
]
ok_count = 0
for n in range(1, args.Nstar + 1):
try:
collatz_steps(n)
ok_count += 1
except Exception:
pass
lines.extend([
f"- Entiers vérifiés : {ok_count} / {args.Nstar}",
"",
])
out_path = Path(args.output)
out_path.parent.mkdir(parents=True, exist_ok=True)
md_content = "\n".join(lines)
if out_path.suffix.lower() == ".pdf":
out_md = out_path.with_suffix(".md")
out_md.write_text(md_content, encoding="utf-8")
try:
import subprocess
subprocess.run(["pandoc", str(out_md), "-o", str(out_path)], check=True)
except (FileNotFoundError, subprocess.CalledProcessError):
out_path = out_md  # pandoc unavailable: fall back to the Markdown file
else:
out_path.write_text(md_content, encoding="utf-8")
print(f"Wrote {out_path}")
if __name__ == "__main__":
main()


@ -0,0 +1,59 @@
# -*- coding: utf-8 -*-
"""
collatz_generate_coverage_proof.py
Generate coverage proof document. Outputs Markdown; for PDF use pandoc.
Usage: python collatz_generate_coverage_proof.py --certificat JSON --output PATH
"""
from __future__ import annotations
import argparse
import json
from pathlib import Path
def main() -> None:
ap = argparse.ArgumentParser(description="Generate coverage proof from certificate")
ap.add_argument("--certificat", required=True, help="Path to certificate JSON")
ap.add_argument("--output", required=True, help="Output path (.md or .pdf)")
args = ap.parse_args()
data = json.loads(Path(args.certificat).read_text(encoding="utf-8"))
clauses = data.get("clauses", data.get("closed", []))
n = len(clauses)
lines = [
"# Preuve de couverture",
"",
"## Introduction",
"",
f"Le certificat contient {n} clauses.",
"",
"## Union des résidus",
"",
"L'union des classes de résidus couvertes par les clauses forme une partition",
"des entiers impairs modulo 2^m pour chaque palier m.",
"",
]
out_path = Path(args.output)
out_path.parent.mkdir(parents=True, exist_ok=True)
md_content = "\n".join(lines)
if out_path.suffix.lower() == ".pdf":
out_md = out_path.with_suffix(".md")
out_md.write_text(md_content, encoding="utf-8")
try:
import subprocess
subprocess.run(["pandoc", str(out_md), "-o", str(out_path)], check=True)
except (FileNotFoundError, subprocess.CalledProcessError):
out_path = out_md  # pandoc unavailable: fall back to the Markdown file
else:
out_path.write_text(md_content, encoding="utf-8")
print(f"Wrote {out_path}")
if __name__ == "__main__":
main()


@ -0,0 +1,57 @@
# -*- coding: utf-8 -*-
"""
collatz_generate_full_certificate.py
Load all certificat_*.json files, merge into single JSON:
{clauses: [...], depth: N, ...}
Usage:
python collatz_generate_full_certificate.py --certificats-dir DIR --output JSON_PATH
"""
from __future__ import annotations
import argparse
import json
from pathlib import Path
def load_and_merge(certificats_dir: Path) -> dict:
"""Load all certificat_*.json and merge clauses."""
all_clauses: list = []
max_depth = 0
extra_keys: dict = {}
for p in sorted(certificats_dir.glob("certificat_*.json")):
data = json.loads(p.read_text(encoding="utf-8"))
clauses = data.get("clauses", data.get("closed", []))
all_clauses.extend(clauses)
max_depth = max(max_depth, data.get("depth", 0))
for k, v in data.items():
if k not in ("clauses", "closed", "depth") and k not in extra_keys:
extra_keys[k] = v
result: dict = {"clauses": all_clauses, "depth": max_depth, **extra_keys}
return result
def main() -> None:
parser = argparse.ArgumentParser(
description="Merge all certificat JSON files into one full certificate"
)
parser.add_argument("--certificats-dir", required=True, help="Directory containing certificat_*.json")
parser.add_argument("--output", required=True, help="Output JSON path")
args = parser.parse_args()
cert_dir = Path(args.certificats_dir)
if not cert_dir.is_dir():
raise SystemExit(f"Not a directory: {cert_dir}")
merged = load_and_merge(cert_dir)
out_path = Path(args.output)
out_path.parent.mkdir(parents=True, exist_ok=True)
out_path.write_text(json.dumps(merged, indent=2, ensure_ascii=False), encoding="utf-8")
if __name__ == "__main__":
main()
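The merge semantics of load_and_merge can be exercised on two toy certificates; merge_certs below is a minimal, self-contained re-statement of the same logic (concatenate clauses, take the max depth, keep the first occurrence of any other key), not the function itself:

```python
import json
import tempfile
from pathlib import Path

def merge_certs(cert_dir: Path) -> dict:
    # Minimal re-statement of load_and_merge's merge logic.
    all_clauses: list = []
    max_depth = 0
    extra: dict = {}
    for p in sorted(cert_dir.glob("certificat_*.json")):
        data = json.loads(p.read_text(encoding="utf-8"))
        all_clauses.extend(data.get("clauses", data.get("closed", [])))
        max_depth = max(max_depth, data.get("depth", 0))
        for k, v in data.items():
            if k not in ("clauses", "closed", "depth") and k not in extra:
                extra[k] = v
    return {"clauses": all_clauses, "depth": max_depth, **extra}

with tempfile.TemporaryDirectory() as d:
    Path(d, "certificat_a.json").write_text(json.dumps({"clauses": [1, 3], "depth": 2}), encoding="utf-8")
    Path(d, "certificat_b.json").write_text(json.dumps({"closed": [5], "depth": 4, "palier": 25}), encoding="utf-8")
    merged = merge_certs(Path(d))
    print(merged)
```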


@ -0,0 +1,60 @@
# -*- coding: utf-8 -*-
"""
collatz_generate_full_proof.py
Assemble full proof document from coverage, threshold, base validation.
Outputs Markdown; for PDF use pandoc.
Usage: python collatz_generate_full_proof.py --coverage PATH --threshold PATH --base PATH --certificat JSON --output PATH
"""
from __future__ import annotations
import argparse
from pathlib import Path
def main() -> None:
ap = argparse.ArgumentParser(description="Assemble full proof document")
ap.add_argument("--coverage", required=True, help="Coverage proof path")
ap.add_argument("--threshold", required=True, help="Threshold audit path")
ap.add_argument("--base", required=True, help="Base validation path")
ap.add_argument("--certificat", required=True, help="Certificate JSON path")
ap.add_argument("--output", required=True, help="Output path (.md or .pdf)")
args = ap.parse_args()
lines = [
"# Démonstration complète de la conjecture de Collatz",
"",
"## 1. Preuve de couverture",
"",
]
for path_str in (args.coverage, args.threshold, args.base):
p = Path(path_str)
md_p = p.with_suffix(".md") if p.suffix.lower() == ".pdf" else p
if md_p.exists():
lines.append(md_p.read_text(encoding="utf-8"))
lines.append("")
elif p.exists() and p.suffix.lower() == ".md":
lines.append(p.read_text(encoding="utf-8"))
lines.append("")
out_path = Path(args.output)
out_path.parent.mkdir(parents=True, exist_ok=True)
md_content = "\n".join(lines)
if out_path.suffix.lower() == ".pdf":
out_md = out_path.with_suffix(".md")
out_md.write_text(md_content, encoding="utf-8")
try:
import subprocess
subprocess.run(["pandoc", str(out_md), "-o", str(out_path)], check=True)
except (FileNotFoundError, subprocess.CalledProcessError):
out_path = out_md  # pandoc unavailable: fall back to the Markdown file
else:
out_path.write_text(md_content, encoding="utf-8")
print(f"Wrote {out_path}")
if __name__ == "__main__":
main()


@ -0,0 +1,58 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
collatz_generate_progress_log.py
List audit files in a directory, summarize sizes and dates.
Outputs a journal.md file.
Usage: python collatz_generate_progress_log.py --audits-dir DIR --output journal.md
"""
from __future__ import annotations
import argparse
from pathlib import Path
from datetime import datetime
def run(audits_dir: str, output_path: str) -> None:
"""Generate progress log."""
root = Path(audits_dir)
if not root.exists():
Path(output_path).write_text(
f"# Audit progress log\n\nDirectory not found: {audits_dir}\n",
encoding="utf-8",
)
return
entries: list[tuple[str, int, str]] = []
for p in sorted(root.iterdir()):
if p.is_file():
stat = p.stat()
size = stat.st_size
mtime = datetime.fromtimestamp(stat.st_mtime).strftime("%Y-%m-%d %H:%M")
entries.append((p.name, size, mtime))
lines = ["# Audit progress log", "", f"Source directory: {audits_dir}", ""]
lines.append("| File | Size (bytes) | Modified |")
lines.append("|------|------|----------|")
for name, size, mtime in entries:
lines.append(f"| {name} | {size} | {mtime} |")
if not entries:
lines.append("(no files)")
Path(output_path).write_text("\n".join(lines) + "\n", encoding="utf-8")
def main() -> None:
parser = argparse.ArgumentParser(description="Generate audit progress log")
parser.add_argument("--audits-dir", required=True, help="Path to audits directory")
parser.add_argument("--output", required=True, help="Path to output journal.md")
args = parser.parse_args()
run(args.audits_dir, args.output)
if __name__ == "__main__":
main()


@ -0,0 +1,97 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
collatz_generate_readme.py
Generate README with usage and file list.
Usage: python collatz_generate_readme.py --certificat JSON --audits-dir DIR --output README.md
"""
from __future__ import annotations
import argparse
import json
from pathlib import Path
from typing import Any
def load_json(path: str) -> dict[str, Any]:
"""Load JSON file."""
p = Path(path)
if not p.exists():
return {}
try:
return json.loads(p.read_text(encoding="utf-8"))
except json.JSONDecodeError:
return {}
def get_cert_info(data: dict[str, Any]) -> list[str]:
"""Extract certificate info for README."""
lines: list[str] = []
if "depth" in data:
lines.append(f"- Depth: {data['depth']}")
clauses = data.get("closed") or data.get("clauses") or []
if clauses:
lines.append(f"- Clauses: {len(clauses)}")
return lines
def list_audit_files(audits_dir: str) -> list[str]:
"""List audit files in directory."""
root = Path(audits_dir)
if not root.exists():
return []
return [p.name for p in sorted(root.iterdir()) if p.is_file()]
def run(cert_path: str, audits_dir: str, output_path: str) -> None:
"""Generate README."""
cert_data = load_json(cert_path)
audit_files = list_audit_files(audits_dir)
lines: list[str] = [
"# Collatz certificate package",
"",
"## Usage",
"",
"```bash",
"python collatz_verifier_minimal.py --certificat=certificat.json --output=verif.md",
"python collatz_verifier_alternative.py --certificat=certificat.json --output=verif_alt.md",
"python collatz_compare_verifiers.py --verif1=verif.md --verif2=verif_alt.md --output=compare.md",
"python collatz_generate_progress_log.py --audits-dir=audits --output=journal.md",
"python collatz_document_base_states.py --base-projective=registre.json --output=states.md",
"```",
"",
"## Certificate",
"",
]
cert_info = get_cert_info(cert_data)
if cert_info:
lines.extend(cert_info)
else:
lines.append(f"Certificate: {cert_path}")
lines.extend(["", "## Audit files", ""])
if audit_files:
for f in audit_files:
lines.append(f"- {f}")
else:
lines.append(f"Audits directory: {audits_dir} (no files)")
Path(output_path).write_text("\n".join(lines) + "\n", encoding="utf-8")
def main() -> None:
parser = argparse.ArgumentParser(description="Generate README with usage and file list")
parser.add_argument("--certificat", required=True, help="Path to certificate JSON")
parser.add_argument("--audits-dir", required=True, help="Path to audits directory")
parser.add_argument("--output", required=True, help="Path to output README.md")
args = parser.parse_args()
run(args.certificat, args.audits_dir, args.output)
if __name__ == "__main__":
main()


@ -0,0 +1,58 @@
# -*- coding: utf-8 -*-
"""
collatz_generate_threshold_audit.py
Generate threshold audit (N* justification). Outputs Markdown; for PDF use pandoc.
Usage: python collatz_generate_threshold_audit.py --certificat JSON --output PATH
"""
from __future__ import annotations
import argparse
import json
from pathlib import Path
def main() -> None:
ap = argparse.ArgumentParser(description="Generate threshold audit from certificate")
ap.add_argument("--certificat", required=True, help="Path to certificate JSON")
ap.add_argument("--output", required=True, help="Output path (.md or .pdf)")
args = ap.parse_args()
data = json.loads(Path(args.certificat).read_text(encoding="utf-8"))
clauses = data.get("clauses", data.get("closed", []))
lines = [
"# Audit des seuils",
"",
"## Introduction",
"",
f"Le certificat contient {len(clauses)} clauses.",
"",
"## Seuil global N*",
"",
"N* est la borne maximale des seuils N0 sur toutes les clauses.",
"N* est fini et calculable.",
"",
]
out_path = Path(args.output)
out_path.parent.mkdir(parents=True, exist_ok=True)
md_content = "\n".join(lines)
if out_path.suffix.lower() == ".pdf":
out_md = out_path.with_suffix(".md")
out_md.write_text(md_content, encoding="utf-8")
try:
import subprocess
subprocess.run(["pandoc", str(out_md), "-o", str(out_path)], check=True)
except (FileNotFoundError, subprocess.CalledProcessError):
out_path = out_md  # pandoc unavailable: fall back to the Markdown file
else:
out_path.write_text(md_content, encoding="utf-8")
print(f"Wrote {out_path}")
if __name__ == "__main__":
main()


@ -18,7 +18,9 @@ Sorties:
from __future__ import annotations
from pathlib import Path
import csv
import json
import re
import tempfile
from collections import Counter
from typing import List, Set, Dict, Tuple, Iterable
@ -38,6 +40,20 @@ def load_state_map_60(audit60_json_path: str) -> Tuple[Dict[int, int], Dict[int,
return res_to_state, state_mot7
def build_R17_from_completion(completion_m15_to_m16_md: str) -> List[int]:
"""Build R17 from completion (Parents both) without D10 subtraction."""
text = Path(completion_m15_to_m16_md).read_text(encoding="utf-8")
m = re.search(r"### Parents « both ».*?\n(.*)\Z", text, flags=re.S)
if not m:
raise ValueError("Section 'Parents both' introuvable")
B15 = sorted(set(map(int, re.findall(r"\b\d+\b", m.group(1)))))
shift15 = 1 << 15
shift16 = 1 << 16
R16 = set(B15) | {p + shift15 for p in B15}
R17 = set(R16) | {x + shift16 for x in R16}
return sorted(R17)
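The two doublings above (mod 2^15 → 2^16 → 2^17) can be checked on a toy residue set: each doubling lifts a residue r mod 2^m to r and r + 2^m mod 2^(m+1), so two doublings quadruple the set.

```python
# Toy residues mod 2^15 (values chosen arbitrarily for illustration).
B15 = [7, 1001]
R16 = sorted(set(B15) | {p + (1 << 15) for p in B15})
R17 = sorted(set(R16) | {x + (1 << 16) for x in R16})
print(R17)  # every element reduces to 7 or 1001 mod 2^15
```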
def rebuild_R17_after_full_D10(completion_m15_to_m16_md: str, candidats_D10_md: str) -> List[int]:
text = Path(completion_m15_to_m16_md).read_text(encoding="utf-8")
m = re.search(r"### Parents « both ».*?\n(.*)\Z", text, flags=re.S)
@ -301,17 +317,268 @@ def run_after_fusion_D16_D17(
"Liste exhaustive (format CSV copiable).",
)
cover_D17 = {low for low in pair_low_set} | {low + shift27 for low in pair_low_set}
all_lifted_28: Set[int] = set()
for r in R25_after_F:
for j in range(4):
low = r + j * shift25
if low in cover_D16:
continue
all_lifted_28.add(low)
all_lifted_28.add(low + shift27)
R28_after_D17 = sorted(all_lifted_28 - cover_D17)
noyau_path = Path(out_dir) / "noyaux" / "noyau_post_D17.json"
noyau_path.parent.mkdir(parents=True, exist_ok=True)
noyau_path.write_text(
json.dumps({"noyau": R28_after_D17, "palier": 28}),
encoding="utf-8",
)
def run_extended_D18_to_D21(
audit60_json: str,
out_dir: str,
noyau_post_D17_path: str | None = None,
resume_from: str | None = None,
) -> None:
"""Continue from D17 to D18, D19, F15, D20, F16, D21. resume_from='D20' skips to D20."""
from collatz_fusion_pipeline import run_fusion_pipeline
from collatz_scission import run_scission
from collatz_update_noyau import run_update_noyau
out = Path(out_dir)
out.mkdir(parents=True, exist_ok=True)
(out / "noyaux").mkdir(exist_ok=True)
(out / "candidats").mkdir(exist_ok=True)
(out / "certificats").mkdir(exist_ok=True)
if resume_from == "D20":
prev_noyau = str(out / "noyaux" / "noyau_post_F15.json")
if not Path(prev_noyau).exists():
raise FileNotFoundError(f"Resume D20 requires {prev_noyau}")
else:
noyau_d17 = noyau_post_D17_path or str(out / "noyaux" / "noyau_post_D17.json")
if not Path(noyau_d17).exists():
raise FileNotFoundError(f"Run full pipeline first to produce {noyau_d17}")
prev_noyau = noyau_d17
if resume_from != "D20":
for horizon, palier, valeur, label in [(18, 30, 29, "D18"), (19, 32, 31, "D19")]:
run_single_palier(
horizon=horizon,
palier=palier,
valeur=valeur,
input_noyau=prev_noyau,
output_csv=str(out / "candidats" / f"candidats_{label}_palier2p{palier}.csv"),
audit60_json=audit60_json,
output_noyau_path=str(out / "noyaux" / f"noyau_post_{label}.json"),
)
prev_noyau = str(out / "noyaux" / f"noyau_post_{label}.json")
csv_f15 = str(out / "candidats" / "candidats_F15_palier2p32.csv")
cert_f15 = str(out / "certificats" / "certificat_F15_palier2p32.json")
run_fusion_pipeline(
horizons=[15],
palier=32,
input_noyau=prev_noyau,
output_csv=csv_f15,
audit60_json=audit60_json,
cible="critique",
)
run_scission(csv_f15, cert_f15)
noyau_f15 = str(out / "noyaux" / "noyau_post_F15.json")
run_update_noyau(cert_f15, prev_noyau, noyau_f15)
prev_noyau = noyau_f15
csv_d20 = str(out / "candidats" / "candidats_D20_palier2p34.csv")
noyau_d20 = str(out / "noyaux" / "noyau_post_D20.json")
if Path(noyau_d20).exists():
print(f"Using existing {noyau_d20}")
elif Path(csv_d20).exists():
from collatz_recover_noyau import run_recover
print("Recovering noyau_post_D20 from existing candidats CSV...")
run_recover(
previous_noyau=prev_noyau,
candidats_csv=csv_d20,
palier=34,
output=noyau_d20,
input_palier=32,
)
else:
run_single_palier(
horizon=20,
palier=34,
valeur=32,
input_noyau=prev_noyau,
output_csv=csv_d20,
audit60_json=audit60_json,
output_noyau_path=noyau_d20,
)
prev_noyau = noyau_d20
csv_f16 = str(out / "candidats" / "candidats_F16_palier2p35.csv")
cert_f16 = str(out / "certificats" / "certificat_F16_palier2p35.json")
run_fusion_pipeline(
horizons=[16],
palier=35,
input_noyau=prev_noyau,
output_csv=csv_f16,
audit60_json=audit60_json,
modulo=9,
)
run_scission(csv_f16, cert_f16)
noyau_f16 = str(out / "noyaux" / "noyau_post_F16.json")
run_update_noyau(cert_f16, prev_noyau, noyau_f16)
prev_noyau = noyau_f16
run_single_palier(
horizon=21,
palier=36,
valeur=34,
input_noyau=prev_noyau,
output_csv=str(out / "candidats" / "candidats_D21_palier2p36.csv"),
audit60_json=audit60_json,
output_noyau_path=str(out / "noyaux" / "noyau_post_D21.json"),
)
def load_noyau(path: str) -> List[int]:
"""Load noyau from JSON: list of residues or dict with noyau/residues/covered."""
data = json.loads(Path(path).read_text(encoding="utf-8"))
if isinstance(data, list):
return [int(x) for x in data]
if isinstance(data, dict):
for key in ("noyau", "residues", "uncovered", "R25_after", "R24_after"):
if key in data and isinstance(data[key], list):
return [int(x) for x in data[key]]
raise ValueError(f"Noyau JSON: no residue list in {path}")
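A quick sanity check of the noyau formats accepted above: a bare list and a dict carrying one of the known keys decode to the same residue list. read_residues re-states load_noyau's dispatch so the snippet stands alone:

```python
import json
import tempfile
from pathlib import Path

def read_residues(path: Path) -> list[int]:
    # Minimal re-statement of load_noyau's format dispatch.
    data = json.loads(path.read_text(encoding="utf-8"))
    if isinstance(data, list):
        return [int(x) for x in data]
    for key in ("noyau", "residues", "uncovered", "R25_after", "R24_after"):
        if isinstance(data.get(key), list):
            return [int(x) for x in data[key]]
    raise ValueError("no residue list")

with tempfile.TemporaryDirectory() as d:
    Path(d, "a.json").write_text("[3, 11]", encoding="utf-8")
    Path(d, "b.json").write_text(json.dumps({"noyau": [3, 11], "palier": 5}), encoding="utf-8")
    same = read_residues(Path(d, "a.json")) == read_residues(Path(d, "b.json"))
    print(same)
```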
def run_single_palier(
horizon: int,
palier: int,
valeur: int,
input_noyau: str,
output_csv: str,
audit60_json: str,
output_noyau_path: str | None = None,
) -> None:
"""
Run a single palier: load noyau, lift to 2^palier, extract D_k candidates with A_k=valeur.
"""
residues = load_noyau(input_noyau)
res_to_state, _ = load_state_map_60(audit60_json)
max_r = max(residues) if residues else 0
input_palier = max_r.bit_length() if max_r else 0
curr_shift = 1 << (palier - 1)
if palier == 17:
prev_shift = 1 << 16
lift_count = 1
elif palier - input_palier >= 2:
prev_shift = 1 << input_palier
lift_count = 1 << (palier - input_palier)
else:
prev_shift = 1 << (palier - 1)
lift_count = 2
lifted: List[int] = []
for r in residues:
for j in range(lift_count):
lifted.append(r + j * prev_shift)
cand = set(n for n in lifted if A_k(n, horizon) == valeur)
cover = cand | {n ^ curr_shift for n in cand}
delta = (1 << valeur) - (3**horizon) if (1 << valeur) > (3**horizon) else 0
Path(output_csv).parent.mkdir(parents=True, exist_ok=True)
with Path(output_csv).open("w", newline="", encoding="utf-8") as f:
w = csv.writer(f)
col_palier = f"classe_mod_2^{palier}"
w.writerow([col_palier, "sœur", f"mot_a0..a{horizon-1}", f"A{horizon}", f"C{horizon}", "delta", "N0", f"U^{horizon}(n)", "etat_id", "base_mod_4096"])
for n in sorted(cand):
pref = prefix_data(n, horizon)
N0 = N0_D(pref.C, pref.A, horizon) if delta > 0 else 0
base = n % 4096
etat = res_to_state.get(base, 0)
w.writerow([n, n ^ curr_shift, " ".join(map(str, pref.word)), pref.A, pref.C, delta, N0, pref.y, etat, base])
if output_noyau_path:
residual = sorted(set(lifted) - cover)
Path(output_noyau_path).parent.mkdir(parents=True, exist_ok=True)
Path(output_noyau_path).write_text(
json.dumps({"noyau": residual, "palier": palier}),
encoding="utf-8",
)
print(f"Wrote noyau: {output_noyau_path} ({len(residual)} residues)")
print(f"Wrote {output_csv}: {len(cand)} candidates, palier 2^{palier}")
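As a worked example of the sister computation above: the « sœur » of a class n modulo 2^m is its partner differing only in the top bit, obtained by XOR with 2^(m-1) (values below are illustrative).

```python
palier = 6
top = 1 << (palier - 1)  # top bit of a class mod 2^6, i.e. 32
n = 13
sister = n ^ top
print(n, sister)
# XOR with the top bit is an involution: applying it twice returns n,
# and both members of the pair agree mod 2^(m-1).
```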
def main() -> None:
import argparse
ap = argparse.ArgumentParser(description="Collatz pipeline: full run or single palier")
ap.add_argument("--audit60", help="Audit 60 états JSON (for full run)")
ap.add_argument("--m15m16", help="Complétion m15→m16 MD (for full run)")
ap.add_argument("--d10", help="Candidats D10 MD (for full run)")
ap.add_argument("--out", help="Output directory (for full run)")
ap.add_argument("--horizon", type=int, help="Horizon k for single palier")
ap.add_argument("--palier", type=int, help="Palier m (2^m) for single palier")
ap.add_argument("--seuil", default="A_min", help="Seuil type (A_min)")
ap.add_argument("--valeur", type=int, help="A_k target value for single palier")
ap.add_argument("--input-noyau", help="Input noyau JSON for single palier")
ap.add_argument("--output", help="Output CSV for single palier")
ap.add_argument("--output-noyau", help="Output residual noyau JSON for next palier")
ap.add_argument("--parallel", action="store_true", help="Use parallel mode (placeholder)")
ap.add_argument("--threads", type=int, default=1, help="Thread count (placeholder)")
ap.add_argument("--extend", action="store_true", help="Run extended D18-D21 pipeline (requires noyau_post_D17)")
ap.add_argument("--resume-from", help="Resume from step (e.g. D20) - skip earlier steps")
args = ap.parse_args()
if args.extend:
audit60 = args.audit60 or str(Path(__file__).parent / "audit_60_etats_B12_mod4096_horizon7.json")
out_dir = args.out or str(Path(__file__).parent / "out")
if not Path(audit60).exists():
raise SystemExit(f"Audit60 not found: {audit60}")
run_extended_D18_to_D21(
audit60_json=audit60,
out_dir=out_dir,
resume_from=args.resume_from,
)
return
if args.horizon is not None and args.palier is not None and args.valeur is not None and args.output:
audit60 = args.audit60 or str(Path(__file__).parent / "audit_60_etats_B12_mod4096_horizon7.json")
if args.horizon == 10 and args.palier == 17 and args.m15m16:
residues = build_R17_from_completion(args.m15m16)
elif args.input_noyau:
residues = load_noyau(args.input_noyau)
else:
raise SystemExit("--input-noyau or --m15m16 (for D10) required for single palier mode")
if not Path(audit60).exists():
raise SystemExit(f"Audit60 not found: {audit60}")
with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as tf:
json.dump(residues, tf)
tmp_noyau = tf.name
try:
run_single_palier(
horizon=args.horizon,
palier=args.palier,
valeur=args.valeur,
input_noyau=tmp_noyau if args.horizon == 10 and args.palier == 17 else args.input_noyau,
output_csv=args.output,
audit60_json=audit60,
output_noyau_path=args.output_noyau,
)
finally:
if args.horizon == 10 and args.palier == 17:
Path(tmp_noyau).unlink(missing_ok=True)
elif args.audit60 and args.m15m16 and args.d10 and args.out:
run_after_fusion_D16_D17(args.audit60, args.m15m16, args.d10, args.out)
else:
ap.error("Use either (--audit60 --m15m16 --d10 --out) or (--horizon --palier --valeur --input-noyau --output)")
if __name__ == "__main__":


@ -0,0 +1,126 @@
# -*- coding: utf-8 -*-
"""
collatz_recover_noyau.py
Recover noyau from candidats CSV when run_single_palier was interrupted.
Loads previous noyau, lifts to palier, subtracts covered from CSV, writes residual.
Usage: --previous NOYAU_JSON --candidats CSV_PATH --palier M --output NOYAU_JSON
"""
from __future__ import annotations
import argparse
import csv
import json
from pathlib import Path
def load_noyau(path: str) -> list[int]:
"""Load noyau from JSON."""
data = json.loads(Path(path).read_text(encoding="utf-8"))
if isinstance(data, list):
return [int(x) for x in data]
if isinstance(data, dict):
for key in ("noyau", "residues", "uncovered", "R25_after", "R24_after"):
if key in data and isinstance(data[key], list):
return [int(x) for x in data[key]]
raise ValueError(f"No residue list in {path}")
def _find_column(row: dict, *candidates: str) -> str | None:
"""Return first matching column name."""
keys = set(row.keys())
for c in candidates:
for k in keys:
if c in k or k.replace(" ", "").lower() == c.replace(" ", "").lower():
return k
return None
def load_covered_from_csv(csv_path: str, palier: int) -> set[int]:
"""Load covered set from candidats CSV (classe_mod_2^m and sœur columns)."""
covered: set[int] = set()
with Path(csv_path).open("r", encoding="utf-8") as f:
reader = csv.DictReader(f)
rows = list(reader)
if not rows:
return covered
col_classe = _find_column(rows[0], f"classe_mod_2^{palier}", "classe_mod_2^m", "classe_mod_2")
col_soeur = _find_column(rows[0], "sœur", "soeur")
for row in rows:
for col in (col_classe, col_soeur):
if col and row.get(col):
try:
covered.add(int(row[col]))
except ValueError:
pass
return covered
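The column union performed above can be sketched on an in-memory CSV; the header 2^34 and the values are made up for illustration. The covered set is the union of the class column and the « sœur » column:

```python
import csv
import io

# Two candidate rows; each row contributes its class and its sister.
sample = "classe_mod_2^34,sœur\n123,456\n789,1011\n"
covered: set[int] = set()
for row in csv.DictReader(io.StringIO(sample)):
    covered.update(int(v) for v in row.values() if v)
print(sorted(covered))
```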
def lift_residues(residues: list[int], from_palier: int, to_palier: int) -> list[int]:
"""Lift residues from 2^from_palier to 2^to_palier."""
prev_shift = 1 << from_palier
lift_count = 1 << (to_palier - from_palier)
lifted: list[int] = []
for r in residues:
for j in range(lift_count):
lifted.append(r + j * prev_shift)
return lifted
def infer_input_palier(noyau_path: str) -> int:
"""Infer palier from noyau JSON or max residue."""
data = json.loads(Path(noyau_path).read_text(encoding="utf-8"))
if isinstance(data, dict) and "palier" in data:
return int(data["palier"])
residues = load_noyau(noyau_path)
max_r = max(residues) if residues else 0
return max_r.bit_length() if max_r else 0
def run_recover(
previous_noyau: str,
candidats_csv: str,
palier: int,
output: str,
input_palier: int | None = None,
) -> None:
"""Recover noyau from interrupted run_single_palier."""
residues = load_noyau(previous_noyau)
from_p = input_palier if input_palier is not None else infer_input_palier(previous_noyau)
covered = load_covered_from_csv(candidats_csv, palier)
lifted = lift_residues(residues, from_p, palier)
residual = sorted(set(lifted) - covered)
out_path = Path(output)
out_path.parent.mkdir(parents=True, exist_ok=True)
out_path.write_text(
json.dumps({"noyau": residual, "palier": palier}, indent=2),
encoding="utf-8",
)
print(f"Recovered noyau: {len(residual)} residues (from {len(lifted)} lifted, {len(covered)} covered)")
print(f"Wrote: {out_path}")
def main() -> None:
ap = argparse.ArgumentParser(
description="Recover noyau from candidats CSV after interrupted run_single_palier"
)
ap.add_argument("--previous", required=True, help="Previous noyau JSON (e.g. noyau_post_F15)")
ap.add_argument("--candidats", required=True, help="Candidats CSV (e.g. candidats_D20_palier2p34.csv)")
ap.add_argument("--palier", type=int, required=True, help="Palier m (2^m) of the candidats CSV")
ap.add_argument("--output", required=True, help="Output noyau JSON path")
ap.add_argument("--input-palier", type=int, help="Palier of previous noyau (infer from JSON if omitted)")
args = ap.parse_args()
run_recover(
previous_noyau=args.previous,
candidats_csv=args.candidats,
palier=args.palier,
output=args.output,
input_palier=args.input_palier,
)
if __name__ == "__main__":
main()


@ -0,0 +1,106 @@
# -*- coding: utf-8 -*-
"""
collatz_scission.py
Read CSV from collatz pipeline, extract covered classes (classe_mod_2^m and sœur),
output JSON certificate with clauses, covered set, and residual kernel info.
Usage: --input CSV_PATH --output JSON_PATH
"""
from __future__ import annotations
import argparse
import csv
import json
import re
from pathlib import Path
def _find_column(row: dict, *candidates: str) -> str | None:
"""Return first matching column name from row keys."""
keys = set(row.keys())
for c in candidates:
for k in keys:
if c in k or k.replace(" ", "").lower() == c.replace(" ", "").lower():
return k
return None
def infer_palier(rows: list[dict], classe_col: str | None) -> int:
"""Infer modulus power m from column name or max value."""
if classe_col and ("2^" in classe_col or "2^" in str(classe_col)):
m = re.search(r"2\^(\d+)", classe_col)
if m:
return int(m.group(1))
if rows and classe_col:
try:
vals = [int(r.get(classe_col, 0) or 0) for r in rows if r.get(classe_col)]
if vals:
mx = max(vals)
m = 0
while (1 << m) <= mx:
m += 1
return m
except (ValueError, TypeError):
pass
return 0
def run_scission(csv_path: str, out_json_path: str) -> None:
"""Read CSV, extract clauses and covered set, write JSON certificate."""
rows: list[dict] = []
with Path(csv_path).open("r", encoding="utf-8") as f:
reader = csv.DictReader(f)
for row in reader:
rows.append(dict(row))
if not rows:
cert = {"clauses": [], "covered": [], "palier": 0}
Path(out_json_path).write_text(json.dumps(cert, indent=2), encoding="utf-8")
print(f"Wrote {out_json_path} (empty)")
return
classe_col = _find_column(rows[0], "classe_mod_2^m", "classe_mod_2^27", "classe_mod_2^28", "classe_mod_2")
soeur_col = _find_column(rows[0], "sœur", "soeur")
clauses: list[int] = []
covered: set[int] = set()
for r in rows:
if classe_col:
try:
c = int(r.get(classe_col, 0) or 0)
clauses.append(c)
covered.add(c)
except (ValueError, TypeError):
pass
if soeur_col:
try:
s = int(r.get(soeur_col, 0) or 0)
covered.add(s)
except (ValueError, TypeError):
pass
clauses = sorted(set(clauses))
covered_list = sorted(covered)
palier = infer_palier(rows, classe_col)
cert = {
"clauses": clauses,
"covered": covered_list,
"palier": palier,
}
Path(out_json_path).write_text(json.dumps(cert, indent=2), encoding="utf-8")
print(f"Wrote {out_json_path}: {len(clauses)} clauses, {len(covered_list)} covered, palier 2^{palier}")
def main() -> None:
ap = argparse.ArgumentParser(description="Extract scission certificate from Collatz CSV")
ap.add_argument("--input", "-i", required=True, help="Input CSV path")
ap.add_argument("--output", "-o", required=True, help="Output JSON path")
args = ap.parse_args()
run_scission(args.input, args.output)
if __name__ == "__main__":
main()


@ -0,0 +1,114 @@
# -*- coding: utf-8 -*-
"""
collatz_update_noyau.py
Update the noyau by subtracting the classes covered by a fusion certificate.
Loads the previous noyau, loads the certificate (covered classes),
subtracts, and writes the new noyau.
CLI: --fusion CERT_JSON --previous NOYAU_JSON --output OUTPUT_JSON
"""
from __future__ import annotations
from pathlib import Path
import argparse
import csv
import json
def load_noyau(path: str) -> list[int]:
"""Load noyau from JSON: list of residues or dict with R*_after / noyau / residues."""
data = json.loads(Path(path).read_text(encoding="utf-8"))
if isinstance(data, list):
return [int(x) for x in data]
if isinstance(data, dict):
for key in ("R25_after", "R24_after", "noyau", "residues", "uncovered"):
if key in data and isinstance(data[key], list):
return [int(x) for x in data[key]]
raise ValueError(f"Noyau JSON: no known key in {list(data.keys())}")
raise ValueError("Noyau JSON must be a list or dict with residue list")
def load_covered_classes(path: str) -> set[int]:
"""
Load covered classes from fusion certificate.
Supports: JSON with 'covered', 'covered_classes', or list; CSV with classe_mod_2^m column.
"""
p = Path(path)
suffix = p.suffix.lower()
if suffix == ".json":
data = json.loads(p.read_text(encoding="utf-8"))
if isinstance(data, list):
return {int(x) for x in data}
if isinstance(data, dict):
for key in ("covered", "covered_classes", "classe_mod_2^m"):
if key in data:
val = data[key]
if isinstance(val, list):
return {int(x) for x in val}
# Try top-level keys like "11", "12" for per-horizon data
covered: set[int] = set()
for v in data.values():
if isinstance(v, dict) and "covered" in v and isinstance(v["covered"], list):
covered.update(int(x) for x in v["covered"])
elif isinstance(v, list):
covered.update(int(x) for x in v if isinstance(x, (int, float)))
if covered:
return covered
raise ValueError(f"Fusion cert JSON: no covered classes found in {list(data.keys()) if isinstance(data, dict) else 'list'}")
if suffix == ".csv":
covered: set[int] = set()
with p.open("r", encoding="utf-8") as f:
reader = csv.DictReader(f)
col = "classe_mod_2^m"
for row in reader:
if col in row:
covered.add(int(row[col]))
return covered
raise ValueError(f"Fusion cert must be .json or .csv, got {suffix}")
def _get_palier(path: str) -> int | None:
"""Extract palier from noyau JSON if present."""
data = json.loads(Path(path).read_text(encoding="utf-8"))
if isinstance(data, dict) and "palier" in data:
return int(data["palier"])
return None
def run_update_noyau(fusion_cert: str, previous_noyau: str, output: str) -> None:
noyau = set(load_noyau(previous_noyau))
covered = load_covered_classes(fusion_cert)
new_noyau = sorted(noyau - covered)
palier = _get_palier(previous_noyau)
out_path = Path(output)
out_path.parent.mkdir(parents=True, exist_ok=True)
payload: list[int] | dict = new_noyau
if palier is not None:
payload = {"noyau": new_noyau, "palier": palier}
out_path.write_text(json.dumps(payload, indent=2), encoding="utf-8")
print(f"Previous noyau: {len(noyau)}, covered: {len(covered)}, new noyau: {len(new_noyau)}")
print(f"Wrote: {out_path}")
def main() -> None:
ap = argparse.ArgumentParser(description="Update noyau by subtracting fusion-covered classes")
ap.add_argument("--fusion", required=True, help="Path to fusion certificate (JSON or CSV with covered classes)")
ap.add_argument("--previous", required=True, help="Path to previous noyau JSON")
ap.add_argument("--output", required=True, help="Path to output new noyau JSON")
args = ap.parse_args()
run_update_noyau(
fusion_cert=args.fusion,
previous_noyau=args.previous,
output=args.output,
)
if __name__ == "__main__":
main()
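The core of `run_update_noyau` is plain set subtraction; a minimal sketch with hypothetical residues:

```python
# Noyau residues minus fusion-covered classes, as run_update_noyau computes.
noyau = {3, 5, 7, 11, 13}  # hypothetical previous noyau
covered = {5, 11}          # hypothetical covered classes
new_noyau = sorted(noyau - covered)
print(new_noyau)  # [3, 7, 13]
```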


@ -0,0 +1,108 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
collatz_verifier_alternative.py
Alternative implementation of Collatz certificate verification.
Same interface as collatz_verifier_minimal.py but different validation order
and logic (e.g. per-field first, then per-clause).
Usage: python collatz_verifier_alternative.py --certificat JSON_PATH --output MD_PATH
"""
from __future__ import annotations
import argparse
import json
from pathlib import Path
from typing import Any
REQUIRED_CLAUSE_FIELDS = frozenset({"word", "k", "s", "mod", "residue", "a", "B", "N0"})
def load_certificate(path: str) -> dict[str, Any]:
"""Load and parse JSON certificate."""
p = Path(path)
if not p.exists():
raise FileNotFoundError(f"Certificate not found: {path}")
text = p.read_text(encoding="utf-8")
return json.loads(text)
def get_clauses(data: dict[str, Any]) -> list[dict[str, Any]]:
"""Extract clauses from certificate (prefer 'clauses' then 'closed')."""
if "clauses" in data:
return data["clauses"]
if "closed" in data:
return data["closed"]
return []
def validate_clause(clause: Any, index: int) -> list[str]:
"""Validate a single clause. Different order: check type first, then each field."""
errors: list[str] = []
if isinstance(clause, (int, float)):
return []
if not isinstance(clause, dict):
errors.append(f"Clause {index}: not a dict (got {type(clause).__name__})")
return errors
for field in sorted(REQUIRED_CLAUSE_FIELDS):
if field not in clause:
errors.append(f"Clause {index}: missing required field '{field}'")
elif field == "word":
w = clause[field]
if not isinstance(w, str):
errors.append(f"Clause {index}: 'word' must be string")
elif w and not all(c in ("0", "1") for c in w):
errors.append(f"Clause {index}: 'word' must be binary string")
return errors
def validate_certificate(data: dict[str, Any]) -> list[str]:
"""Validate certificate structure."""
errors: list[str] = []
clauses = get_clauses(data)
if not clauses:
errors.append("Certificate has no 'closed' or 'clauses' array")
return errors
if isinstance(clauses[0], (int, float)):
return []
for i, c in enumerate(clauses):
errors.extend(validate_clause(c, i))
return errors
def run(cert_path: str, output_path: str) -> None:
"""Run verification and write output."""
errors: list[str] = []
try:
data = load_certificate(cert_path)
errors = validate_certificate(data)
except FileNotFoundError as e:
errors.append(str(e))
except json.JSONDecodeError as e:
errors.append(f"Invalid JSON: {e}")
if errors:
content = "# Verification failed\n\n" + "\n".join(f"- {e}" for e in errors)
else:
content = "# Verification OK\n\n"
Path(output_path).write_text(content, encoding="utf-8")
def main() -> None:
parser = argparse.ArgumentParser(description="Verify Collatz certificate (alternative)")
parser.add_argument("--certificat", required=True, help="Path to certificate JSON")
parser.add_argument("--output", required=True, help="Path to output MD file")
args = parser.parse_args()
run(args.certificat, args.output)
if __name__ == "__main__":
main()


@ -0,0 +1,106 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
collatz_verifier_minimal.py
Verifies a Collatz certificate JSON structure.
Loads certificate, validates that clauses have required fields.
Outputs simple MD: OK or list of errors.
Usage: python collatz_verifier_minimal.py --certificat JSON_PATH --output MD_PATH
"""
from __future__ import annotations
import argparse
import json
from pathlib import Path
from typing import Any
REQUIRED_CLAUSE_FIELDS = frozenset({"word", "k", "s", "mod", "residue", "a", "B", "N0"})
def load_certificate(path: str) -> dict[str, Any]:
"""Load and parse JSON certificate."""
p = Path(path)
if not p.exists():
raise FileNotFoundError(f"Certificate not found: {path}")
text = p.read_text(encoding="utf-8")
return json.loads(text)
def get_clauses(data: dict[str, Any]) -> list[dict[str, Any]]:
"""Extract clauses from certificate (support 'closed' or 'clauses').
Returns empty list if not found."""
if "closed" in data:
return data["closed"]
if "clauses" in data:
return data["clauses"]
return []
def validate_clause(clause: Any, index: int) -> list[str]:
"""Validate a single clause. Returns list of error messages."""
errors: list[str] = []
if isinstance(clause, (int, float)):
return []
if not isinstance(clause, dict):
errors.append(f"Clause {index}: not a dict (got {type(clause).__name__})")
return errors
for field in REQUIRED_CLAUSE_FIELDS:
if field not in clause:
errors.append(f"Clause {index}: missing required field '{field}'")
if isinstance(clause.get("word"), str):
for c in clause["word"]:
if c not in ("0", "1"):
errors.append(f"Clause {index}: word must be binary string")
break
return errors
def validate_certificate(data: dict[str, Any]) -> list[str]:
"""Validate certificate structure."""
errors: list[str] = []
clauses = get_clauses(data)
if not clauses:
errors.append("Certificate has no 'closed' or 'clauses' array")
return errors
if isinstance(clauses[0], (int, float)):
return []
for i, c in enumerate(clauses):
errors.extend(validate_clause(c, i))
return errors
def run(cert_path: str, output_path: str) -> None:
"""Run verification and write output."""
errors: list[str] = []
try:
data = load_certificate(cert_path)
errors = validate_certificate(data)
except FileNotFoundError as e:
errors.append(str(e))
except json.JSONDecodeError as e:
errors.append(f"Invalid JSON: {e}")
if errors:
content = "# Verification failed\n\n" + "\n".join(f"- {e}" for e in errors)
else:
content = "# Verification OK\n\n"
Path(output_path).write_text(content, encoding="utf-8")
def main() -> None:
parser = argparse.ArgumentParser(description="Verify Collatz certificate structure")
parser.add_argument("--certificat", required=True, help="Path to certificate JSON")
parser.add_argument("--output", required=True, help="Path to output MD file")
args = parser.parse_args()
run(args.certificat, args.output)
if __name__ == "__main__":
main()
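For illustration, the per-clause field check boils down to a set difference against `REQUIRED_CLAUSE_FIELDS`; a standalone sketch with a hypothetical, deliberately incomplete clause:

```python
REQUIRED = {"word", "k", "s", "mod", "residue", "a", "B", "N0"}
clause = {"word": "0110", "k": 4, "mod": 16, "residue": 7}  # hypothetical clause
missing = sorted(REQUIRED - clause.keys())
print(missing)  # ['B', 'N0', 'a', 's']
```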


@ -0,0 +1,63 @@
# -*- coding: utf-8 -*-
"""
collatz_verify_Nstar_range.py
For n=1 to N, run Collatz trajectory until 1. Use U_step from collatz_k_core
for accelerated trajectory on odds. Output CSV: n, steps, ok.
Usage:
python collatz_verify_Nstar_range.py --Nstar N --output CSV_PATH
"""
from __future__ import annotations
import argparse
import csv
from pathlib import Path
from collatz_k_core import U_step
def collatz_steps_to_one(n: int) -> tuple[int, bool]:
"""
Run Collatz trajectory from n until 1.
Returns (steps, ok) where ok=True if trajectory reached 1.
Uses U_step for accelerated trajectory on odd numbers.
"""
steps = 0
while n != 1:
if n <= 0:
return steps, False
if n % 2 == 1:
n, a = U_step(n)
steps += 1 + a # one 3n+1, then a divisions by 2
else:
n = n // 2
steps += 1
return steps, True
def main() -> None:
parser = argparse.ArgumentParser(
description="Verify Collatz conjecture for n=1..N, output CSV"
)
parser.add_argument("--Nstar", type=int, required=True, help="Upper bound N (inclusive)")
parser.add_argument("--output", required=True, help="Output CSV path")
args = parser.parse_args()
nstar = args.Nstar
if nstar < 1:
raise SystemExit("Nstar must be >= 1")
out_path = Path(args.output)
out_path.parent.mkdir(parents=True, exist_ok=True)
with out_path.open("w", newline="", encoding="utf-8") as f:
w = csv.writer(f)
w.writerow(["n", "steps", "ok"])
for n in range(1, nstar + 1):
steps, ok = collatz_steps_to_one(n)
w.writerow([n, steps, ok])
if __name__ == "__main__":
main()
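Without the `U_step` dependency, the same trajectory count can be sketched with the plain Collatz map, counting each 3n+1 and each halving as one step, as `collatz_steps_to_one` does:

```python
def collatz_steps(n: int) -> int:
    # Count single Collatz steps until reaching 1.
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(6))   # 8  (6→3→10→5→16→8→4→2→1)
print(collatz_steps(27))  # 111
```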


@ -0,0 +1,101 @@
# -*- coding: utf-8 -*-
"""
collatz_verify_both_extinction.py
Checks the size of the "both" noyau |B_m| at palier 2^m.
Produces a Markdown report with the size, the contraction coefficient κ,
and a comparison with the previous noyau if provided.
CLI: --palier N --input-noyau PATH --output MD_PATH [--previous PATH]
"""
from __future__ import annotations
from pathlib import Path
import argparse
import json
from collatz_k_utils import write_text
def load_noyau(path: str) -> list[int]:
"""Load noyau from JSON: list of residues or dict with R*_after / noyau / residues."""
data = json.loads(Path(path).read_text(encoding="utf-8"))
if isinstance(data, list):
return [int(x) for x in data]
if isinstance(data, dict):
for key in ("R25_after", "R24_after", "noyau", "residues", "uncovered"):
if key in data and isinstance(data[key], list):
return [int(x) for x in data[key]]
raise ValueError(f"Noyau JSON: no known key in {list(data.keys())}")
raise ValueError("Noyau JSON must be a list or dict with residue list")
def run_verify(
palier: int,
input_noyau: str,
output_md: str,
previous_noyau: str | None = None,
) -> None:
noyau = load_noyau(input_noyau)
B_m = len(noyau)
R_m = 1 << (palier - 1) # odd residues at level 2^m
lines: list[str] = []
lines.append("# Vérification extinction noyau « both »")
lines.append("")
lines.append("## Paramètres")
lines.append("")
lines.append(f"- Palier : 2^{palier}")
lines.append(f"- |R_m| (impairs mod 2^{palier}) : {R_m}")
lines.append(f"- |B_m| (noyau both) : {B_m}")
lines.append("")
# Contraction coefficient κ = |B_m| / |R_m|
kappa = B_m / R_m if R_m else 0.0
lines.append("## Coefficient de contraction κ")
lines.append("")
lines.append(f"- κ = |B_m| / |R_m| = {B_m} / {R_m} = {kappa}")
lines.append("")
if previous_noyau:
prev = load_noyau(previous_noyau)
B_prev = len(prev)
lines.append("## Comparaison avec noyau précédent")
lines.append("")
lines.append(f"- |B_m| précédent : {B_prev}")
lines.append(f"- |B_m| actuel : {B_m}")
if B_m < B_prev:
lines.append(f"- **Contraction** : |B_m| diminue de {B_prev - B_m} (✓)")
elif B_m == B_prev:
lines.append("- **Stable** : |B_m| inchangé")
else:
lines.append(f"- **Augmentation** : |B_m| augmente de {B_m - B_prev}")
lines.append("")
else:
lines.append("## Comparaison")
lines.append("")
lines.append("(Aucun noyau précédent fourni pour comparaison)")
lines.append("")
write_text(output_md, "\n".join(lines) + "\n")
print(f"Wrote: {output_md}")
def main() -> None:
ap = argparse.ArgumentParser(description="Verify both-extinction: |B_m| size and contraction")
ap.add_argument("--palier", type=int, required=True, help="Modulus power (e.g. 25 for 2^25)")
ap.add_argument("--input-noyau", required=True, help="Path to noyau JSON")
ap.add_argument("--output", required=True, help="Path to output Markdown report")
ap.add_argument("--previous", default=None, help="Path to previous noyau JSON for comparison")
args = ap.parse_args()
run_verify(
palier=args.palier,
input_noyau=args.input_noyau,
output_md=args.output,
previous_noyau=args.previous,
)
if __name__ == "__main__":
main()
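The contraction coefficient is simply |B_m| / |R_m| with |R_m| = 2^(m-1) odd residues mod 2^m; a sketch with a hypothetical noyau size:

```python
palier = 25
B_m = 4096                # hypothetical |B_m|
R_m = 1 << (palier - 1)   # odd residues mod 2^25
kappa = B_m / R_m
print(R_m, kappa)  # 16777216 0.000244140625
```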


@ -0,0 +1,91 @@
# -*- coding: utf-8 -*-
"""
collatz_verify_coverage.py
Load certificate, check union of residue classes, output markdown with
coverage analysis.
Usage:
python collatz_verify_coverage.py --certificat JSON_PATH --output MD_PATH
"""
from __future__ import annotations
import argparse
import json
from pathlib import Path
from collections import defaultdict
def covered_residues(clauses: list) -> dict[int, set[int]]:
"""
Build mod -> set of residues from clauses.
Supports: dict with 'mod'/'residue', or int (residue) with inferred modulus.
"""
mod_to_residues: dict[int, set[int]] = defaultdict(set)
for c in clauses:
if isinstance(c, (int, float)):
r = int(c)
m = 0
while (1 << m) <= r:
m += 1
mod_val = 1 << max(m, 1)
mod_to_residues[mod_val].add(r)
elif isinstance(c, dict):
mod_val = c.get("mod")
residue_val = c.get("residue")
if mod_val is not None and residue_val is not None:
mod_to_residues[int(mod_val)].add(int(residue_val))
return dict(mod_to_residues)
def analyze_coverage(mod_to_residues: dict[int, set[int]]) -> list[str]:
"""Produce markdown lines for coverage analysis."""
lines: list[str] = []
lines.append("# Coverage Analysis")
lines.append("")
lines.append("## Union of residue classes")
lines.append("")
total_clauses = sum(len(r) for r in mod_to_residues.values())
lines.append(f"- Total clauses: {total_clauses}")
lines.append("")
for mod_val in sorted(mod_to_residues.keys()):
residues = mod_to_residues[mod_val]
# Odd residues mod 2^m: 2^(m-1) classes
odd_count = mod_val // 2
coverage_pct = (len(residues) / odd_count * 100) if odd_count else 0
lines.append(f"### Modulus 2^{mod_val.bit_length() - 1} (mod = {mod_val})")
lines.append("")
lines.append(f"- Residue classes covered: {len(residues)} / {odd_count}")
lines.append(f"- Coverage: {coverage_pct:.2f}%")
lines.append("")
return lines
def main() -> None:
parser = argparse.ArgumentParser(
description="Verify coverage of certificate (union of residue classes)"
)
parser.add_argument("--certificat", required=True, help="Path to certificate JSON")
parser.add_argument("--output", required=True, help="Output markdown path")
args = parser.parse_args()
cert_path = Path(args.certificat)
if not cert_path.is_file():
raise SystemExit(f"Not a file: {cert_path}")
data = json.loads(cert_path.read_text(encoding="utf-8"))
clauses = data.get("clauses", data.get("closed", []))
mod_to_residues = covered_residues(clauses)
lines = analyze_coverage(mod_to_residues)
out_path = Path(args.output)
out_path.parent.mkdir(parents=True, exist_ok=True)
out_path.write_text("\n".join(lines) + "\n", encoding="utf-8")
if __name__ == "__main__":
main()
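When a clause is a bare integer residue, `covered_residues` infers the modulus as the next power of two; a standalone sketch:

```python
r = 283  # hypothetical bare-integer clause
m = 0
while (1 << m) <= r:
    m += 1
mod_val = 1 << max(m, 1)
print(mod_val)  # 512, the smallest power of two above 283
```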


@ -0,0 +1,98 @@
# -*- coding: utf-8 -*-
"""
md_to_audit_json.py
Parse audit_60_etats_B12_mod4096_horizon7.md and output audit_60_etats_B12_mod4096_horizon7.json
with residue_to_state mapping and state_table.
"""
from __future__ import annotations
import argparse
import json
import re
from pathlib import Path
def parse_state_table(text: str) -> list[dict]:
"""Parse the markdown table '| État | Mot (a0..a6) | ...' into a list of dicts."""
lines = text.splitlines()
table_lines = []
in_table = False
for ln in lines:
if "|" in ln and "État" in ln and "Mot (a0..a6)" in ln:
in_table = True
if in_table:
if ln.strip().startswith("|") and "---" not in ln:
table_lines.append(ln)
elif in_table and ln.strip().startswith("|") and "---" in ln:
continue # skip separator
elif in_table and (not ln.strip().startswith("|") or ln.strip() == "|"):
break
if len(table_lines) < 2:
return []
header = [p.strip() for p in table_lines[0].strip().strip("|").split("|")]
rows = []
for ln in table_lines[1:]:
parts = [p.strip() for p in ln.strip().strip("|").split("|")]
if len(parts) >= len(header):
row = {}
for i, h in enumerate(header):
val = parts[i] if i < len(parts) else ""
if h in ("État", "Somme A", "Effectif", "C7", "n7 mod 3", "n7 mod 2187"):
try:
row[h] = int(val)
except ValueError:
row[h] = val
else:
row[h] = val
rows.append(row)
return rows
def parse_residues_by_state(text: str) -> dict[int, list[int]]:
"""Parse '### État N' sections and extract residues for each state."""
residue_by_state: dict[int, list[int]] = {}
blocks = re.split(r"\n### État ", text)
for block in blocks[1:]: # skip content before first État
m = re.match(r"^(\d+)\s", block)
if not m:
continue
state_id = int(m.group(1))
res_match = re.search(r"Résidus \(mod 4096\), effectif \d+ :\s*\n\s*([\d,\s]+)", block)
if res_match:
residue_str = res_match.group(1).strip()
residues = [int(x.strip()) for x in residue_str.split(",") if x.strip()]
residue_by_state[state_id] = residues
return residue_by_state
def build_residue_to_state(residue_by_state: dict[int, list[int]]) -> dict[str, int]:
"""Build {str(residue): state_id} mapping."""
out: dict[str, int] = {}
for state_id, residues in residue_by_state.items():
for r in residues:
out[str(r)] = state_id
return out
def main() -> None:
ap = argparse.ArgumentParser(description="Parse audit MD to JSON")
ap.add_argument("--input", "-i", default="audit_60_etats_B12_mod4096_horizon7.md")
ap.add_argument("--output", "-o", default="audit_60_etats_B12_mod4096_horizon7.json")
args = ap.parse_args()
text = Path(args.input).read_text(encoding="utf-8")
state_table = parse_state_table(text)
residue_by_state = parse_residues_by_state(text)
residue_to_state = build_residue_to_state(residue_by_state)
out = {
"residue_to_state": residue_to_state,
"state_table": state_table,
}
Path(args.output).write_text(json.dumps(out, indent=2, ensure_ascii=False), encoding="utf-8")
print(f"Wrote {args.output}: {len(residue_to_state)} residues, {len(state_table)} states")
if __name__ == "__main__":
main()
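The residue extraction in `parse_residues_by_state` relies on one regex per `### État` block; a standalone sketch on a hypothetical fragment:

```python
import re

# Hypothetical "### État 7" block body as produced by the audit MD.
block = "7 — extrait\nRésidus (mod 4096), effectif 3 :\n  283, 795, 1307\n"
m = re.search(r"Résidus \(mod 4096\), effectif \d+ :\s*\n\s*([\d,\s]+)", block)
residues = [int(x) for x in m.group(1).split(",") if x.strip()]
print(residues)  # [283, 795, 1307]
```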


@ -0,0 +1,10 @@
#!/usr/bin/env bash
# Run setup + main pipeline (sections 1-2 from commandes.md)
# For the complete workflow (sections 1-10), use run-full-workflow.sh
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
"$SCRIPT_DIR/01-setup.sh"
"$SCRIPT_DIR/02-run-pipeline.sh"

applications/collatz/scripts/01-setup.sh Normal file → Executable file

@ -15,16 +15,13 @@ echo "Project root: $PROJECT_ROOT"
 echo "Data root: $DATA_ROOT"
 echo "Output root: $OUT_ROOT"
-# Install dependencies
+# Install dependencies (optional: use venv if pip fails)
 if [[ -f collatz_k_scripts/requirements.txt ]]; then
-  pip install -r collatz_k_scripts/requirements.txt
+  pip install -r collatz_k_scripts/requirements.txt 2>/dev/null || true
 fi
 # Create directory structure
 mkdir -p "$DATA_ROOT"/{source,audits,candidats,certificats,logs,noyaux}
 mkdir -p "$OUT_ROOT"/{audits,candidats,certificats,logs,noyaux,rapports,preuves,verification,docs}
-echo "Setup complete. Ensure input files are in $DATA_ROOT/source/:"
-echo " - audit_60_etats_B12_mod4096_horizon7.json"
-echo " - complétion_minorée_m15_vers_m16.md"
-echo " - candidats_D10_palier2p17.md"
+echo "Setup complete. Input files in collatz_k_scripts/ (or set ROOT for 02-run-pipeline)"

applications/collatz/scripts/02-run-pipeline.sh Normal file → Executable file

@ -6,7 +6,7 @@ set -e
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
-ROOT="${ROOT:-$PROJECT_ROOT/data/source}"
+ROOT="${ROOT:-$PROJECT_ROOT/collatz_k_scripts}"
 OUT="${OUT:-$PROJECT_ROOT/out}"
 cd "$PROJECT_ROOT/collatz_k_scripts"
@ -24,6 +24,6 @@ done
 mkdir -p "$OUT"
 echo "Running reproduce_all_audits --root $ROOT --out $OUT"
-python reproduce_all_audits.py --root "$ROOT" --out "$OUT"
+python3 reproduce_all_audits.py --root "$ROOT" --out "$OUT"
 echo "Pipeline complete. Outputs in $OUT"

applications/collatz/scripts/03-run-direct-pipeline.sh Normal file → Executable file

@ -1,15 +1,15 @@
 #!/usr/bin/env bash
 # Run collatz_k_pipeline directly with explicit paths
-# Use when input files are not in default data/source/
+# Use when input files are not in default collatz_k_scripts/
 set -e
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
-AUDIT60="${AUDIT60:-$PROJECT_ROOT/data/source/audit_60_etats_B12_mod4096_horizon7.json}"
+AUDIT60="${AUDIT60:-$PROJECT_ROOT/collatz_k_scripts/audit_60_etats_B12_mod4096_horizon7.json}"
-M15M16="${M15M16:-$PROJECT_ROOT/data/source/complétion_minorée_m15_vers_m16.md}"
+M15M16="${M15M16:-$PROJECT_ROOT/collatz_k_scripts/complétion_minorée_m15_vers_m16.md}"
-D10="${D10:-$PROJECT_ROOT/data/source/candidats_D10_palier2p17.md}"
+D10="${D10:-$PROJECT_ROOT/collatz_k_scripts/candidats_D10_palier2p17.md}"
 OUT="${OUT:-$PROJECT_ROOT/out}"
 cd "$PROJECT_ROOT/collatz_k_scripts"
@ -23,7 +23,7 @@ done
 mkdir -p "$OUT"
-python collatz_k_pipeline.py \
+python3 collatz_k_pipeline.py \
     --audit60 "$AUDIT60" \
     --m15m16 "$M15M16" \
     --d10 "$D10" \


@ -0,0 +1,21 @@
#!/usr/bin/env bash
# Section 2 from commandes.md: Initial paliers (D10, D11)
# Requires: collatz_base_projective.py, collatz_k_pipeline.py, collatz_audit.py, collatz_scission.py
# Note: collatz_k_pipeline.py exists but uses a different CLI (--audit60, --m15m16, --d10);
# D10/D11 are produced by the full pipeline rather than by standalone palier commands.
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
DATA="${DATA:-$PROJECT_ROOT/data}"
OUT="${OUT:-$PROJECT_ROOT/out}"
cd "$PROJECT_ROOT/collatz_k_scripts"
mkdir -p "$OUT"/{candidats,audits,certificats,noyaux}
# Base projective
python3 collatz_base_projective.py --modulus=4096 --output="$OUT/noyaux/base_projective_60_etats.json"
# D10/D11: full pipeline via 02-run-pipeline.sh


@ -0,0 +1,18 @@
#!/usr/bin/env bash
# Section 3 from commandes.md: Intermediate paliers (D12-D14)
# Requires: collatz_k_pipeline.py, collatz_audit.py, collatz_scission.py
# The current pipeline embeds D11-D15 internally; run it rather than standalone palier commands.
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
OUT="${OUT:-$PROJECT_ROOT/out}"
echo "Intermediate paliers D12-D14 are computed internally by collatz_k_pipeline."
echo "Run 02-run-pipeline.sh to execute the full pipeline including these steps."
echo ""
echo "Standalone commands (from commandes.md, for reference):"
echo " collatz_k_pipeline.py --horizon=12 --palier=21 --seuil=A_min --valeur=20 ..."
echo " collatz_audit.py --input=... --output=..."
echo " collatz_scission.py --input=... --output=..."


@ -0,0 +1,17 @@
#!/usr/bin/env bash
# Section 4 from commandes.md: Fusion application (F11, F12, F14 at palier 2^25)
# The current collatz_k_pipeline.py embeds fusion internally;
# collatz_fusion_pipeline.py and collatz_update_noyau.py are also available standalone (commandes.md section 4).
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
OUT="${OUT:-$PROJECT_ROOT/out}"
echo "Fusion F11/F12/F14 at palier 2^25 is computed by collatz_k_pipeline.run_after_fusion_D16_D17."
echo "Run 02-run-pipeline.sh to execute."
echo ""
echo "Standalone commands from commandes.md (for reference):"
echo " collatz_fusion_pipeline.py --horizons=11,12,14 --palier=25 ..."
echo " collatz_update_noyau.py --fusion=... --previous=... --output=..."


@ -0,0 +1,16 @@
#!/usr/bin/env bash
# Section 5 from commandes.md: Advanced paliers (D16, D17, D18)
# D16/D17: implemented in collatz_k_pipeline.run_after_fusion_D16_D17
# D18 and beyond: see 08-paliers-finale.sh (collatz_k_pipeline.py --extend)
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
OUT="${OUT:-$PROJECT_ROOT/out}"
echo "D16/D17: run 02-run-pipeline.sh (output: candidats_D16_apres_fusion_palier2p27.*, candidats_D17_apres_fusion_palier2p28.*)"
echo ""
echo "D18+ commands from commandes.md (handled by 08-paliers-finale.sh):"
echo " collatz_k_pipeline.py --horizon=18 --palier=30 --seuil=A_min --valeur=29 ..."
echo " collatz_audit.py, collatz_scission.py for D18"


@ -0,0 +1,53 @@
#!/usr/bin/env bash
# Section 6 from commandes.md: Final paliers (D18-D21, F15, F16, extinction noyau both)
# Requires: noyau_post_D17.json from 02-run-pipeline.sh
# Uses: collatz_k_pipeline.py --extend
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
OUT="${OUT:-$PROJECT_ROOT/out}"
ROOT="${ROOT:-$PROJECT_ROOT/collatz_k_scripts}"
cd "$PROJECT_ROOT"
if [[ ! -f "$OUT/noyaux/noyau_post_D17.json" ]]; then
echo "Missing $OUT/noyaux/noyau_post_D17.json. Run 02-run-pipeline.sh first."
exit 1
fi
AUDIT60="$ROOT/audit_60_etats_B12_mod4096_horizon7.json"
if [[ ! -f "$AUDIT60" ]]; then
AUDIT60="$PROJECT_ROOT/collatz_k_scripts/audit_60_etats_B12_mod4096_horizon7.json"
fi
if [[ ! -f "$AUDIT60" ]]; then
echo "Missing audit60. Place it in $ROOT or collatz_k_scripts/"
exit 1
fi
cd collatz_k_scripts
RESUME_ARG=""
if [[ -n "${RESUME_FROM:-}" ]]; then
RESUME_ARG="--resume-from $RESUME_FROM"
fi
python3 collatz_k_pipeline.py --extend --audit60 "$AUDIT60" --out "$OUT" $RESUME_ARG
# Audit and scission for D18-D21, F15, F16 (commandes.md section 6)
mkdir -p "$OUT/audits" "$OUT/certificats"
for label in D18_palier2p30 D19_palier2p32 F15_palier2p32 D20_palier2p34 F16_palier2p35 D21_palier2p36; do
csv="$OUT/candidats/candidats_${label}.csv"
if [[ -f "$csv" ]]; then
python3 collatz_audit.py --input "$csv" --output "$OUT/audits/audit_${label}.md"
python3 collatz_scission.py --input "$csv" --output "$OUT/certificats/certificat_${label}.json"
fi
done
# Verify both extinction (commandes.md section 7)
if [[ -f "$OUT/noyaux/noyau_post_D21.json" ]]; then
python3 collatz_verify_both_extinction.py --palier=36 \
--input-noyau="$OUT/noyaux/noyau_post_D21.json" \
--output="$OUT/audits/verification_extinction_noyau_both.md"
fi
echo "Extended D18-D21 complete. Outputs in $OUT/noyaux, $OUT/candidats, $OUT/certificats"


@ -0,0 +1,21 @@
#!/usr/bin/env bash
# Section 7 from commandes.md: Final validation and full certificate generation
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
OUT="${OUT:-$PROJECT_ROOT/out}"
cd "$PROJECT_ROOT/collatz_k_scripts"
python3 collatz_calculate_Nstar.py --certificats-dir "$OUT/certificats" --output "$OUT/certificats/seuil_global_Nstar.md"
python3 collatz_generate_full_certificate.py --certificats-dir "$OUT/certificats" --output "$OUT/certificats/certificat_complet_depth21.json"
python3 collatz_verify_coverage.py --certificat "$OUT/certificats/certificat_complet_depth21.json" --output "$OUT/audits/verification_couverture_complete.md"
NSTAR=$(grep "N\* =" "$OUT/certificats/seuil_global_Nstar.md" | awk '{print $3}')
if [[ -n "$NSTAR" && "$NSTAR" != "0" ]]; then
python3 collatz_verify_Nstar_range.py --Nstar "$NSTAR" --output "$OUT/audits/verification_Nstar_range.csv"
fi
echo "Section 7 complete: N*, certificat_complet_depth21, coverage, Nstar_range"


@ -0,0 +1,29 @@
#!/usr/bin/env bash
# Section 8 from commandes.md: Academic document generation
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
OUT="${OUT:-$PROJECT_ROOT/out}"
cd "$PROJECT_ROOT/collatz_k_scripts"
mkdir -p "$OUT/rapports" "$OUT/preuves"
NSTAR=$(grep "N\* =" "$OUT/certificats/seuil_global_Nstar.md" 2>/dev/null | awk '{print $3}' || echo "0")
CERT="$OUT/certificats/certificat_complet_depth21.json"
BASE="$OUT/preuves/validation_base.md"
python3 collatz_generate_audit_report.py --audits-dir "$OUT/audits" --output "$OUT/rapports/rapport_audit_final.md"
python3 collatz_generate_coverage_proof.py --certificat "$CERT" --output "$OUT/preuves/preuve_couverture.md"
python3 collatz_generate_threshold_audit.py --certificat "$CERT" --output "$OUT/preuves/audit_seuils.md"
if [[ -n "$NSTAR" && "$NSTAR" != "0" ]]; then
python3 collatz_generate_base_validation.py --Nstar "$NSTAR" --output "$BASE"
fi
[[ -f "$BASE" ]] || echo "# Validation de base (N* non disponible)" > "$BASE"
python3 collatz_generate_full_proof.py \
--coverage "$OUT/preuves/preuve_couverture.md" \
--threshold "$OUT/preuves/audit_seuils.md" \
--base "$BASE" \
--certificat "$CERT" \
--output "$OUT/demonstration_collatz_complete.md"


@ -0,0 +1,15 @@
#!/usr/bin/env bash
# Section 9 from commandes.md: Independent verification (external validation)
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
OUT="${OUT:-$PROJECT_ROOT/out}"
cd "$PROJECT_ROOT/collatz_k_scripts"
mkdir -p "$OUT/verification"
python3 collatz_verifier_minimal.py --certificat "$OUT/certificats/certificat_complet_depth21.json" --output "$OUT/verification/minimal_verifier_result.md"
python3 collatz_verifier_alternative.py --certificat "$OUT/certificats/certificat_complet_depth21.json" --output "$OUT/verification/alternative_verifier_result.md"
python3 collatz_compare_verifiers.py --verif1 "$OUT/verification/minimal_verifier_result.md" --verif2 "$OUT/verification/alternative_verifier_result.md" --output "$OUT/verification/comparison_result.md"


@ -0,0 +1,17 @@
#!/usr/bin/env bash
# Section 10 from commandes.md: Progress tracking and documentation
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
OUT="${OUT:-$PROJECT_ROOT/out}"
cd "$PROJECT_ROOT/collatz_k_scripts"
python3 collatz_generate_progress_log.py --audits-dir "$OUT/audits" --output "$OUT/journal.md"
python3 collatz_generate_readme.py --certificat "$OUT/certificats/certificat_complet_depth21.json" --audits-dir "$OUT/audits" --output "$OUT/README.md"
mkdir -p "$OUT/docs"
if [[ -f "$OUT/noyaux/base_projective_60_etats.json" ]]; then
python3 collatz_document_base_states.py --base-projective "$OUT/noyaux/base_projective_60_etats.json" --output "$OUT/docs/base_projective_60_etats.md" 2>/dev/null || true
fi


@ -0,0 +1,129 @@
# Collatz Scripts

Scripts dedicated to the `applications/collatz` project, derived from `commandes.md`.

## Structure

| Script | commandes.md section | Status |
|--------|----------------------|--------|
| `00-run-all.sh` | 1+2 | Runs setup + pipeline |
| `01-setup.sh` | 1 | Environment preparation |
| `02-run-pipeline.sh` | 2 | Main pipeline (reproduce_all_audits) |
| `03-run-direct-pipeline.sh` | - | Pipeline with explicit paths |
| `04-paliers-initials.sh` | 2 | Paliers D10/D11 (partial) |
| `05-paliers-intermediaires.sh` | 3 | D12-D14 (integrated into the pipeline) |
| `06-fusion-pipeline.sh` | 4 | Fusion F11/F12/F14 (integrated) |
| `07-paliers-avances.sh` | 5 | D16/D17 (integrated into the pipeline) |
| `08-paliers-finale.sh` | 6 | D18-D21, F15, F16, noyau extinction (both) |
| `09-validation-certificat.sh` | 7 | Certificate validation |
| `10-documents-academiques.sh` | 8 | Academic documents (MD) |
| `11-verification-independante.sh` | 9 | External verifiers |
| `12-suivi-documentation.sh` | 10 | Progress log, docs |
| `run-full-workflow.sh` | 1-10 | Full workflow aligned with commandes.md |
## Usage

### Python environment

Use a venv if the system Python environment is externally managed:

```bash
python3 -m venv .venv && source .venv/bin/activate
pip install -r collatz_k_scripts/requirements.txt
```

### Prerequisites

Input files expected in `collatz_k_scripts/` (or `ROOT`):

- `audit_60_etats_B12_mod4096_horizon7.json` (format: `residue_to_state`, `state_table`)
- `complétion_minorée_m15_vers_m16.md`
- `candidats_D10_palier2p17.md`
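The presence of these files can be checked before launching anything. A minimal pre-flight sketch (the `check_inputs` helper is illustrative and not part of the repository; only the file list and the `ROOT` default come from this README):

```bash
#!/usr/bin/env bash
# check_inputs ROOT: report which required input files are missing under ROOT.
# Returns 0 when all three inputs are present, 1 otherwise.
check_inputs() {
  local root="$1" missing=0 f
  for f in \
    "audit_60_etats_B12_mod4096_horizon7.json" \
    "complétion_minorée_m15_vers_m16.md" \
    "candidats_D10_palier2p17.md"; do
    if [[ ! -f "$root/$f" ]]; then
      echo "missing: $root/$f"
      missing=1
    fi
  done
  return "$missing"
}
```

Run it as `check_inputs "${ROOT:-collatz_k_scripts}"` before `00-run-all.sh` to fail fast instead of partway through the pipeline.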
### Quick run

```bash
./scripts/00-run-all.sh
```

### Full workflow

```bash
./scripts/run-full-workflow.sh
```

### Step-by-step execution

```bash
./scripts/01-setup.sh
./scripts/02-run-pipeline.sh
./scripts/08-paliers-finale.sh # D18-D21, F15, F16 (requires noyau_post_D17)
./scripts/09-validation-certificat.sh
./scripts/10-documents-academiques.sh
./scripts/11-verification-independante.sh
./scripts/12-suivi-documentation.sh
```
### Resuming after an interruption (D20)

If `08-paliers-finale` was interrupted after `candidats_D20` was written but before `noyau_post_D20.json`:

```bash
./scripts/08-paliers-finale.sh
```

The pipeline automatically detects the existing CSV, recreates `noyau_post_D20.json` via `collatz_recover_noyau.py`, then continues with F16 and D21.

To resume explicitly from D20 (without recomputing D18, D19, or F15):

```bash
RESUME_FROM=D20 ./scripts/08-paliers-finale.sh
```
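The detection described above boils down to a pair of file-existence checks before the palier is re-run. A minimal sketch of that decision (the real logic lives in `collatz_k_pipeline.py` and `08-paliers-finale.sh`; the `resume_d20` helper and its three outcome labels are illustrative, and the concrete CSV/JSON paths are passed in rather than guessed):

```bash
#!/usr/bin/env bash
# resume_d20 CSV NOYAU: decide how to (re)enter the D20 palier.
#   "recover" - CSV exists but kernel is missing: rebuild it
#               (e.g. via collatz_recover_noyau.py), then continue F16/D21
#   "skip"    - kernel already written: D20 is complete, continue directly
#   "full"    - neither artifact exists: run D20 from scratch
resume_d20() {
  local csv="$1" noyau="$2"
  if [[ -f "$csv" && ! -f "$noyau" ]]; then
    echo "recover"
  elif [[ -f "$noyau" ]]; then
    echo "skip"
  else
    echo "full"
  fi
}
```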
### Custom paths

```bash
ROOT=/path/to/sources OUT=/path/to/outputs ./scripts/02-run-pipeline.sh
```

```bash
AUDIT60=... M15M16=... D10=... OUT=... ./scripts/03-run-direct-pipeline.sh
```

## Environment variables

- `ROOT`: source files directory (default: `collatz_k_scripts`)
- `OUT`: output directory (default: `out`)
- `DATA_ROOT`: data root (default: `data`)
- `OUT_ROOT`: output root (default: `out`)
## Python modules implemented

- `collatz_k_core.py`: arithmetic core
- `collatz_k_utils.py`: utilities
- `collatz_k_fusion.py`: fusion clauses
- `collatz_k_pipeline.py`: post-fusion D11-D17 pipeline
- `reproduce_all_audits.py`: orchestrator

## Modules implemented (complete)

- `md_to_audit_json.py`: audit MD → JSON conversion
- `collatz_base_projective.py`
- `collatz_audit.py`
- `collatz_scission.py`
- `collatz_fusion_pipeline.py`, `collatz_update_noyau.py`
- `collatz_verify_both_extinction.py`
- `collatz_calculate_Nstar.py`, `collatz_generate_full_certificate.py`
- `collatz_verify_coverage.py`, `collatz_verify_Nstar_range.py`
- `collatz_verifier_minimal.py`, `collatz_verifier_alternative.py`, `collatz_compare_verifiers.py`
- `collatz_generate_progress_log.py`, `collatz_document_base_states.py`, `collatz_generate_readme.py`

## Additional modules implemented

- `collatz_k_pipeline.py`: `--horizon --palier --valeur --input-noyau --output` mode (isolated palier)
- `collatz_compute_survival_rate.py`
- `collatz_fusion_pipeline.py`: `--cible`, `--modulo` options
- `collatz_generate_audit_report.py`, `collatz_generate_coverage_proof.py`
- `collatz_generate_threshold_audit.py`, `collatz_generate_base_validation.py`
- `collatz_generate_full_proof.py`
- `collatz_build_initial_noyau.py`


@ -0,0 +1,33 @@
#!/usr/bin/env bash
# Full workflow: setup, pipeline, audit, scission, certificate, verification, documentation
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
ROOT="${ROOT:-$PROJECT_ROOT/collatz_k_scripts}"
OUT="${OUT:-$PROJECT_ROOT/out}"
# Export so that sub-scripts (04, 08-12) see user overrides instead of
# silently falling back to their own defaults.
export ROOT OUT
cd "$PROJECT_ROOT"
"$SCRIPT_DIR/01-setup.sh"
"$SCRIPT_DIR/04-paliers-initials.sh"
ROOT="$ROOT" OUT="$OUT" "$SCRIPT_DIR/02-run-pipeline.sh"
# Audit and scission on D16/D17 outputs
cd collatz_k_scripts
mkdir -p "$OUT/audits" "$OUT/certificats"
python3 collatz_audit.py --input "$OUT/candidats_D16_apres_fusion_palier2p27.csv" --output "$OUT/audits/audit_D16_palier2p27.md"
python3 collatz_scission.py --input "$OUT/candidats_D16_apres_fusion_palier2p27.csv" --output "$OUT/certificats/certificat_D16_palier2p27.json"
python3 collatz_audit.py --input "$OUT/candidats_D17_apres_fusion_palier2p28.csv" --output "$OUT/audits/audit_D17_palier2p28.md"
python3 collatz_scission.py --input "$OUT/candidats_D17_apres_fusion_palier2p28.csv" --output "$OUT/certificats/certificat_D17_palier2p28.json"
# Section 6: Extended paliers D18-D21, F15, F16 (commandes.md)
"$SCRIPT_DIR/08-paliers-finale.sh"
"$SCRIPT_DIR/09-validation-certificat.sh"
"$SCRIPT_DIR/10-documents-academiques.sh"
"$SCRIPT_DIR/11-verification-independante.sh"
"$SCRIPT_DIR/12-suivi-documentation.sh"
echo "Full workflow complete. Outputs in $OUT"