Compare commits

..

27 Commits

Author SHA1 Message Date
Matt Hill
5ba68a3124 translations 2026-02-16 00:34:41 -07:00
Matt Hill
bb68c3b91c update binding for API types, add ARCHITECTURE 2026-02-16 00:05:24 -07:00
Aiden McClelland
3518eccc87 feat: add port_forwards field to Host for tracking gateway forwarding rules 2026-02-14 16:40:21 -07:00
Matt Hill
2f19188dae looking good 2026-02-14 16:37:04 -07:00
Aiden McClelland
3a63f3b840 feat: add mdns hostname metadata variant and fix vhost routing
- Add HostnameMetadata::Mdns variant to distinguish mDNS from private domains
- Mark mDNS addresses as private (public: false) since mDNS is local-only
- Fall back to null SNI entry when hostname not found in vhost mapping
- Simplify public detection in ProxyTarget filter
- Pass hostname to update_addresses for mDNS domain name generation
2026-02-14 15:34:48 -07:00
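A rough Rust sketch of the distinction this commit describes — mDNS (`.local`) names resolve only on the local link, so they are always treated as non-public. The real `HostnameMetadata` type in `core/` surely has more variants and fields; the shapes below are hypothetical:

```rust
// Illustrative only: variant/field shapes are assumptions, not the
// actual StartOS definitions.
enum HostnameMetadata {
    /// A private domain configured by the user.
    Domain { public: bool },
    /// A .local name resolved via multicast DNS -- reachable only on
    /// the local link, so never advertised as public.
    Mdns,
}

fn is_public(meta: &HostnameMetadata) -> bool {
    match meta {
        HostnameMetadata::Domain { public } => *public,
        // mDNS is local-only: always `public: false`
        HostnameMetadata::Mdns => false,
    }
}
```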
Matt Hill
098d9275f4 new service interface page 2026-02-14 12:24:16 -07:00
Matt Hill
d5c74bc22e re-arrange (#3123) 2026-02-14 08:15:50 -07:00
Aiden McClelland
49d4da03ca feat: refactor NetService to watch DB and reconcile network state
- NetService sync task now uses PatchDB DbWatch instead of being called
  directly after DB mutations
- Read gateways from DB instead of network interface context when
  updating host addresses
- gateway sync updates all host addresses in the DB
- Add Watch<u64> channel for callers to wait on sync completion
- Fix ts-rs codegen bug with #[ts(skip)] on flattened Plugin field
- Update SDK getServiceInterface.ts for new HostnameInfo shape
- Remove unnecessary HTTPS redirect in static_server.rs
- Fix tunnel/api.rs to filter for WAN IPv4 address
2026-02-13 16:21:57 -07:00
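The `Watch<u64>` completion channel mentioned in the commit follows a common sequence-number pattern: the reconciler bumps a counter after each sync pass, and a caller that just mutated the DB waits until the counter moves past the value it observed. A minimal std-only sketch of that pattern (StartOS uses its own `Watch` type; everything below is illustrative):

```rust
use std::sync::{Arc, Condvar, Mutex};

// Stand-in for the Watch<u64> "sync sequence" idea; not StartOS code.
#[derive(Clone)]
struct SyncWatch {
    inner: Arc<(Mutex<u64>, Condvar)>,
}

impl SyncWatch {
    fn new() -> Self {
        Self { inner: Arc::new((Mutex::new(0), Condvar::new())) }
    }
    /// Called by the reconciler after it finishes applying DB state.
    fn mark_synced(&self) {
        let (lock, cvar) = &*self.inner;
        *lock.lock().unwrap() += 1;
        cvar.notify_all();
    }
    /// Called by a mutator: block until at least one full sync pass has
    /// completed after the sequence number `since`.
    fn wait_past(&self, since: u64) -> u64 {
        let (lock, cvar) = &*self.inner;
        let mut seq = lock.lock().unwrap();
        while *seq <= since {
            seq = cvar.wait(seq).unwrap();
        }
        *seq
    }
    fn current(&self) -> u64 {
        *self.inner.0.lock().unwrap()
    }
}
```

The point of the pattern is that mutators no longer call the sync task directly — they only wait on its progress, which is what lets the sync task hang off a DB watch instead.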
Aiden McClelland
3765465618 chore: update ts bindings for preferred port design 2026-02-13 14:23:48 -07:00
Aiden McClelland
61f820d09e Merge branch 'feat/preferred-port-design' of github.com:Start9Labs/start-os into feat/preferred-port-design 2026-02-13 13:39:25 -07:00
Aiden McClelland
db7f3341ac wip refactor 2026-02-12 14:51:33 -07:00
Matt Hill
4decf9335c fix license display in marketplace 2026-02-12 13:07:19 -07:00
Matt Hill
339e5f799a build ts types and fix i18n 2026-02-12 11:32:29 -07:00
Aiden McClelland
89d3e0cf35 Merge branch 'feat/preferred-port-design' of github.com:Start9Labs/start-os into feat/preferred-port-design 2026-02-12 10:51:32 -07:00
Aiden McClelland
638ed27599 feat: replace SourceFilter with IpNet, add policy routing, remove MASQUERADE 2026-02-12 10:51:26 -07:00
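The core of the `SourceFilter` → `IpNet` change above is that a forward rule carries a plain network (e.g. `10.0.0.0/8`) and source addresses are matched against it. The `ipnet` crate provides this for real; the mask arithmetic below is the whole idea, shown for IPv4 only:

```rust
use std::net::Ipv4Addr;

// Does `addr` fall inside the network `net/prefix`? Illustrative
// stand-in for IpNet::contains().
fn in_subnet(addr: Ipv4Addr, net: Ipv4Addr, prefix: u8) -> bool {
    // Build the netmask; prefix 0 matches everything (avoid a 32-bit
    // overflowing shift).
    let mask = if prefix == 0 { 0 } else { u32::MAX << (32 - prefix) };
    (u32::from(addr) & mask) == (u32::from(net) & mask)
}
```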
Matt Hill
da75b8498e Merge branch 'next/major' of github.com:Start9Labs/start-os into feat/preferred-port-design 2026-02-12 08:28:36 -07:00
Matt Hill
8ef4ecf5ac outbound gateway support (#3120)
* Multiple (#3111)

* fix alerts i18n, fix status display, better, remove usb media, hide shutdown for install complete

* trigger change detection for localize pipe and round out implementing localize pipe for consistency even though not needed

* Fix PackageInfoShort to handle LocaleString on releaseNotes (#3112)

* Fix PackageInfoShort to handle LocaleString on releaseNotes

* fix: filter by target_version in get_matching_models and pass otherVersions from install

* chore: add exver documentation for ai agents

* frontend plus some be types

---------

Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2026-02-12 08:27:09 -07:00
Aiden McClelland
2a54625f43 feat: replace InterfaceFilter with ForwardRequirements, add WildcardListener, complete alpha.20 bump
- Replace DynInterfaceFilter with ForwardRequirements for per-IP forward
  precision with source-subnet iptables filtering for private forwards
- Add WildcardListener (binds [::]:port) to replace the per-gateway
  NetworkInterfaceListener/SelfContainedNetworkInterfaceListener/
  UpgradableListener infrastructure
- Update forward-port script with src_subnet and excluded_src env vars
- Remove unused filter types and listener infrastructure from gateway.rs
- Add availablePorts migration (IdPool -> BTreeMap<u16, bool>) to alpha.20
- Complete version bump to 0.4.0-alpha.20 in SDK and web
2026-02-11 18:10:27 -07:00
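The `WildcardListener` idea above — one socket on `[::]:port` instead of a listener per gateway interface — reduces to a single bind call. Per-source filtering then happens elsewhere (the commit moves it into source-subnet iptables rules). A minimal sketch, noting that whether `[::]` also accepts IPv4-mapped connections depends on the host's dual-stack configuration:

```rust
use std::net::{Ipv6Addr, SocketAddr, TcpListener};

// Bind one wildcard socket for all interfaces. On dual-stack hosts
// this also accepts IPv4 connections as v4-mapped addresses.
fn bind_wildcard(port: u16) -> std::io::Result<TcpListener> {
    TcpListener::bind(SocketAddr::from((Ipv6Addr::UNSPECIFIED, port)))
}
```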
Aiden McClelland
4e638fb58e feat: implement preferred port allocation and per-address enable/disable
- Add AvailablePorts::try_alloc() with SSL tracking (BTreeMap<u16, bool>)
- Add DerivedAddressInfo on BindInfo with private_disabled/public_enabled/possible sets
- Add Bindings wrapper with Map impl for patchdb indexed access
- Flatten HostAddress from single-variant enum to struct
- Replace set-gateway-enabled RPC with set-address-enabled
- Remove hostname_info from Host; computed addresses now in BindInfo.addresses.possible
- Compute possible addresses inline in NetServiceData::update()
- Update DB migration, SDK types, frontend, and container-runtime
2026-02-10 17:38:51 -07:00
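One plausible reading of the `AvailablePorts::try_alloc()` design above, sketched in Rust: a `BTreeMap<u16, bool>` records each allocated port and whether it was allocated for SSL, and allocation prefers the requested port before falling back to the lowest free one. The actual signature and semantics in `core/` may differ; names here are assumptions:

```rust
use std::collections::BTreeMap;
use std::ops::RangeInclusive;

// Hypothetical shape, based only on the commit description.
struct AvailablePorts {
    range: RangeInclusive<u16>,
    allocated: BTreeMap<u16, bool>, // port -> allocated for SSL?
}

impl AvailablePorts {
    /// Hand out `preferred` if it is free; otherwise the lowest free
    /// port in range. Records the SSL flag alongside the port.
    fn try_alloc(&mut self, preferred: u16, ssl: bool) -> Option<u16> {
        let port = if self.range.contains(&preferred)
            && !self.allocated.contains_key(&preferred)
        {
            preferred
        } else {
            self.range.clone().find(|p| !self.allocated.contains_key(p))?
        };
        self.allocated.insert(port, ssl);
        Some(port)
    }
}
```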
Aiden McClelland
73274ef6e0 docs: update TODO.md with DerivedAddressInfo design, remove completed tor task 2026-02-10 14:45:50 -07:00
Aiden McClelland
e1915bf497 chore: format RPCSpec.md markdown table 2026-02-10 13:38:40 -07:00
Aiden McClelland
8204074bdf chore: flatten HostnameInfo from enum to struct
HostnameInfo only had one variant (Ip) after removing Tor. Flatten it
into a plain struct with fields gateway, public, hostname. Remove all
kind === 'ip' type guards and narrowing across SDK, frontend, and
container runtime. Update DB migration to strip the kind field.
2026-02-10 13:38:12 -07:00
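The flattening described above, in miniature — a one-variant enum forces every consumer through a redundant tag check, while the struct does not. Field types are simplified for illustration:

```rust
// Before: a single-variant enum left over after the Tor removal.
enum HostnameInfoOld {
    Ip { gateway: String, public: bool, hostname: String },
}

// After: a plain struct; consumers no longer match on a `kind` tag.
struct HostnameInfo {
    gateway: String,
    public: bool,
    hostname: String,
}

fn flatten(old: HostnameInfoOld) -> HostnameInfo {
    match old {
        HostnameInfoOld::Ip { gateway, public, hostname } => {
            HostnameInfo { gateway, public, hostname }
        }
    }
}
```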
Aiden McClelland
2ee403e7de chore: remove tor from startos core
Tor is being moved from a built-in OS feature to a service. This removes
the Arti-based Tor client, onion address management, hidden service
creation, and all related code from the core backend, frontend, and SDK.

- Delete core/src/net/tor/ module (~2060 lines)
- Remove OnionAddress, TorSecretKey, TorController from all consumers
- Remove HostnameInfo::Onion and HostAddress::Onion variants
- Remove onion CRUD RPC endpoints and tor subcommand
- Remove tor key handling from account and backup/restore
- Remove ~12 tor-related Cargo dependencies (arti-client, torut, etc.)
- Remove tor UI components, API methods, mock data, and routes
- Remove OnionHostname and tor patterns/regexes from SDK
- Add v0_4_0_alpha_20 database migration to strip onion data
- Bump version to 0.4.0-alpha.20
2026-02-10 13:28:24 -07:00
Aiden McClelland
1974dfd66f docs: move address enable/disable to overflow menu, add SSL indicator, defer UI placement decisions 2026-02-09 13:29:49 -07:00
Aiden McClelland
2e03a95e47 docs: overhaul interfaces page design with view/manage split and per-address controls 2026-02-09 13:10:57 -07:00
Aiden McClelland
8f809dab21 docs: add user-controlled public/private and port forward mapping to design 2026-02-08 11:17:43 -07:00
Aiden McClelland
c0b2cbe1c8 docs: update preferred external port design in TODO 2026-02-06 09:30:35 -07:00
1121 changed files with 12605 additions and 43791 deletions


@@ -47,18 +47,18 @@ runs:
        sudo rm -rf /usr/share/swift
        sudo rm -rf "$AGENT_TOOLSDIRECTORY"
-    # Some runners lack /opt/hostedtoolcache, which setup-python and setup-qemu expect
+    # BuildJet runners lack /opt/hostedtoolcache, which setup-python and setup-qemu expect
     - name: Ensure hostedtoolcache exists
       shell: bash
       run: sudo mkdir -p /opt/hostedtoolcache && sudo chown $USER:$USER /opt/hostedtoolcache
     - name: Set up Python
       if: inputs.setup-python == 'true'
-      uses: actions/setup-python@v6
+      uses: actions/setup-python@v5
       with:
         python-version: "3.x"
-    - uses: actions/setup-node@v6
+    - uses: actions/setup-node@v4
       with:
         node-version: ${{ inputs.nodejs-version }}
         cache: npm
@@ -66,15 +66,15 @@ runs:
     - name: Set up Docker QEMU
       if: inputs.setup-docker == 'true'
-      uses: docker/setup-qemu-action@v4
+      uses: docker/setup-qemu-action@v3
     - name: Set up Docker Buildx
       if: inputs.setup-docker == 'true'
-      uses: docker/setup-buildx-action@v4
+      uses: docker/setup-buildx-action@v3
     - name: Configure sccache
       if: inputs.setup-sccache == 'true'
-      uses: actions/github-script@v8
+      uses: actions/github-script@v7
       with:
         script: |
           core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');


@@ -63,12 +63,12 @@ jobs:
           "ALL": ["x86_64-unknown-linux-musl", "x86_64-apple-darwin", "aarch64-unknown-linux-musl", "aarch64-apple-darwin", "riscv64gc-unknown-linux-musl"]
         }')[github.event.inputs.platform || 'ALL']
       }}
-    runs-on: ${{ fromJson('["ubuntu-latest", "ubuntu-24.04-32-cores"]')[github.event.inputs.runner == 'fast'] }}
+    runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
     steps:
       - name: Mount tmpfs
         if: ${{ github.event.inputs.runner == 'fast' }}
         run: sudo mount -t tmpfs tmpfs .
-      - uses: actions/checkout@v6
+      - uses: actions/checkout@v4
         with:
           submodules: recursive
       - uses: ./.github/actions/setup-build
@@ -82,7 +82,7 @@ jobs:
           SCCACHE_GHA_ENABLED: on
           SCCACHE_GHA_VERSION: 0
-      - uses: actions/upload-artifact@v7
+      - uses: actions/upload-artifact@v4
         with:
           name: start-cli_${{ matrix.triple }}
           path: core/target/${{ matrix.triple }}/release/start-cli


@@ -59,12 +59,12 @@ jobs:
           "ALL": ["x86_64", "aarch64", "riscv64"]
         }')[github.event.inputs.platform || 'ALL']
       }}
-    runs-on: ${{ fromJson('["ubuntu-latest", "ubuntu-24.04-32-cores"]')[github.event.inputs.runner == 'fast'] }}
+    runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
     steps:
       - name: Mount tmpfs
         if: ${{ github.event.inputs.runner == 'fast' }}
         run: sudo mount -t tmpfs tmpfs .
-      - uses: actions/checkout@v6
+      - uses: actions/checkout@v4
         with:
           submodules: recursive
       - uses: ./.github/actions/setup-build
@@ -78,7 +78,7 @@ jobs:
          SCCACHE_GHA_ENABLED: on
          SCCACHE_GHA_VERSION: 0
-      - uses: actions/upload-artifact@v7
+      - uses: actions/upload-artifact@v4
        with:
          name: start-registry_${{ matrix.arch }}.deb
          path: results/start-registry-*_${{ matrix.arch }}.deb
@@ -89,7 +89,7 @@ jobs:
     permissions:
       contents: read
       packages: write
-    runs-on: ${{ fromJson('["ubuntu-latest", "ubuntu-24.04-32-cores"]')[github.event.inputs.runner == 'fast'] }}
+    runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
     steps:
       - name: Cleaning up unnecessary files
         run: |
@@ -102,13 +102,13 @@ jobs:
         if: ${{ github.event.inputs.runner == 'fast' }}
       - name: Set up docker QEMU
-        uses: docker/setup-qemu-action@v4
+        uses: docker/setup-qemu-action@v3
       - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v4
+        uses: docker/setup-buildx-action@v3
       - name: "Login to GitHub Container Registry"
-        uses: docker/login-action@v4
+        uses: docker/login-action@v3
         with:
           registry: ghcr.io
           username: ${{github.actor}}
@@ -116,14 +116,14 @@ jobs:
       - name: Docker meta
         id: meta
-        uses: docker/metadata-action@v6
+        uses: docker/metadata-action@v5
         with:
           images: ghcr.io/Start9Labs/startos-registry
           tags: |
             type=raw,value=${{ github.ref_name }}
       - name: Download debian package
-        uses: actions/download-artifact@v8
+        uses: actions/download-artifact@v4
         with:
           pattern: start-registry_*.deb
@@ -162,7 +162,7 @@ jobs:
     ADD *.deb .
-    RUN apt-get update && apt-get install -y ./*_$(uname -m).deb && rm -rf *.deb /var/lib/apt/lists/*
+    RUN apt-get install -y ./*_$(uname -m).deb && rm *.deb
     VOLUME /var/lib/startos


@@ -59,12 +59,12 @@ jobs:
           "ALL": ["x86_64", "aarch64", "riscv64"]
         }')[github.event.inputs.platform || 'ALL']
       }}
-    runs-on: ${{ fromJson('["ubuntu-latest", "ubuntu-24.04-32-cores"]')[github.event.inputs.runner == 'fast'] }}
+    runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
     steps:
       - name: Mount tmpfs
         if: ${{ github.event.inputs.runner == 'fast' }}
         run: sudo mount -t tmpfs tmpfs .
-      - uses: actions/checkout@v6
+      - uses: actions/checkout@v4
         with:
           submodules: recursive
       - uses: ./.github/actions/setup-build
@@ -78,7 +78,7 @@ jobs:
          SCCACHE_GHA_ENABLED: on
          SCCACHE_GHA_VERSION: 0
-      - uses: actions/upload-artifact@v7
+      - uses: actions/upload-artifact@v4
        with:
          name: start-tunnel_${{ matrix.arch }}.deb
          path: results/start-tunnel-*_${{ matrix.arch }}.deb


@@ -25,13 +25,10 @@ on:
           - ALL
           - x86_64
           - x86_64-nonfree
-          - x86_64-nvidia
           - aarch64
           - aarch64-nonfree
-          - aarch64-nvidia
           # - raspberrypi
           - riscv64
-          - riscv64-nonfree
       deploy:
         type: choice
         description: Deploy
@@ -68,13 +65,10 @@ jobs:
             fromJson('{
               "x86_64": ["x86_64"],
               "x86_64-nonfree": ["x86_64"],
-              "x86_64-nvidia": ["x86_64"],
               "aarch64": ["aarch64"],
               "aarch64-nonfree": ["aarch64"],
-              "aarch64-nvidia": ["aarch64"],
               "raspberrypi": ["aarch64"],
               "riscv64": ["riscv64"],
-              "riscv64-nonfree": ["riscv64"],
               "ALL": ["x86_64", "aarch64", "riscv64"]
             }')[github.event.inputs.platform || 'ALL']
           }}
@@ -89,9 +83,9 @@ jobs:
             "riscv64": "ubuntu-latest"
           }')[matrix.arch],
           fromJson('{
-            "x86_64": "amd64-fast",
-            "aarch64": "aarch64-fast",
-            "riscv64": "amd64-fast"
+            "x86_64": "buildjet-32vcpu-ubuntu-2204",
+            "aarch64": "buildjet-32vcpu-ubuntu-2204-arm",
+            "riscv64": "buildjet-32vcpu-ubuntu-2204"
           }')[matrix.arch]
         )
       )[github.event.inputs.runner == 'fast']
@@ -100,7 +94,7 @@ jobs:
     steps:
       - name: Mount tmpfs
         if: ${{ github.event.inputs.runner == 'fast' }}
         run: sudo mount -t tmpfs tmpfs .
-      - uses: actions/checkout@v6
+      - uses: actions/checkout@v4
        with:
          submodules: recursive
      - uses: ./.github/actions/setup-build
@@ -114,7 +108,7 @@ jobs:
          SCCACHE_GHA_ENABLED: on
          SCCACHE_GHA_VERSION: 0
-      - uses: actions/upload-artifact@v7
+      - uses: actions/upload-artifact@v4
        with:
          name: compiled-${{ matrix.arch }}.tar
          path: compiled-${{ matrix.arch }}.tar
@@ -124,13 +118,14 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
+        # TODO: re-add "raspberrypi" to the platform list below
         platform: >-
           ${{
             fromJson(
               format(
                 '[
                   ["{0}"],
-                  ["x86_64", "x86_64-nonfree", "x86_64-nvidia", "aarch64", "aarch64-nonfree", "aarch64-nvidia", "raspberrypi", "riscv64", "riscv64-nonfree"]
+                  ["x86_64", "x86_64-nonfree", "aarch64", "aarch64-nonfree", "riscv64"]
                 ]',
                 github.event.inputs.platform || 'ALL'
               )
@@ -144,24 +139,18 @@ jobs:
           fromJson('{
             "x86_64": "ubuntu-latest",
             "x86_64-nonfree": "ubuntu-latest",
-            "x86_64-nvidia": "ubuntu-latest",
             "aarch64": "ubuntu-24.04-arm",
             "aarch64-nonfree": "ubuntu-24.04-arm",
-            "aarch64-nvidia": "ubuntu-24.04-arm",
             "raspberrypi": "ubuntu-24.04-arm",
             "riscv64": "ubuntu-24.04-arm",
-            "riscv64-nonfree": "ubuntu-24.04-arm",
           }')[matrix.platform],
           fromJson('{
-            "x86_64": "amd64-fast",
-            "x86_64-nonfree": "amd64-fast",
-            "x86_64-nvidia": "amd64-fast",
-            "aarch64": "aarch64-fast",
-            "aarch64-nonfree": "aarch64-fast",
-            "aarch64-nvidia": "aarch64-fast",
-            "raspberrypi": "aarch64-fast",
-            "riscv64": "amd64-fast",
-            "riscv64-nonfree": "amd64-fast",
+            "x86_64": "buildjet-8vcpu-ubuntu-2204",
+            "x86_64-nonfree": "buildjet-8vcpu-ubuntu-2204",
+            "aarch64": "buildjet-8vcpu-ubuntu-2204-arm",
+            "aarch64-nonfree": "buildjet-8vcpu-ubuntu-2204-arm",
+            "raspberrypi": "buildjet-8vcpu-ubuntu-2204-arm",
+            "riscv64": "buildjet-8vcpu-ubuntu-2204",
           }')[matrix.platform]
         )
       )[github.event.inputs.runner == 'fast']
@@ -172,13 +161,10 @@ jobs:
           fromJson('{
             "x86_64": "x86_64",
             "x86_64-nonfree": "x86_64",
-            "x86_64-nvidia": "x86_64",
             "aarch64": "aarch64",
             "aarch64-nonfree": "aarch64",
-            "aarch64-nvidia": "aarch64",
             "raspberrypi": "aarch64",
             "riscv64": "riscv64",
-            "riscv64-nonfree": "riscv64",
           }')[matrix.platform]
         }}
     steps:
@@ -203,19 +189,19 @@ jobs:
           sudo rm -rf "$AGENT_TOOLSDIRECTORY" # Pre-cached tool cache (Go, Node, etc.)
         if: ${{ github.event.inputs.runner != 'fast' }}
-      # Some runners lack /opt/hostedtoolcache, which setup-qemu expects
+      # BuildJet runners lack /opt/hostedtoolcache, which setup-qemu expects
       - name: Ensure hostedtoolcache exists
         run: sudo mkdir -p /opt/hostedtoolcache && sudo chown $USER:$USER /opt/hostedtoolcache
       - name: Set up docker QEMU
-        uses: docker/setup-qemu-action@v4
+        uses: docker/setup-qemu-action@v3
-      - uses: actions/checkout@v6
+      - uses: actions/checkout@v4
        with:
          submodules: recursive
      - name: Download compiled artifacts
-        uses: actions/download-artifact@v8
+        uses: actions/download-artifact@v4
        with:
          name: compiled-${{ env.ARCH }}.tar
@@ -252,124 +238,19 @@ jobs:
         run: PLATFORM=${{ matrix.platform }} make img
         if: ${{ matrix.platform == 'raspberrypi' }}
-      - uses: actions/upload-artifact@v7
+      - uses: actions/upload-artifact@v4
        with:
          name: ${{ matrix.platform }}.squashfs
          path: results/*.squashfs
-      - uses: actions/upload-artifact@v7
+      - uses: actions/upload-artifact@v4
        with:
          name: ${{ matrix.platform }}.iso
          path: results/*.iso
          if: ${{ matrix.platform != 'raspberrypi' }}
-      - uses: actions/upload-artifact@v7
+      - uses: actions/upload-artifact@v4
        with:
          name: ${{ matrix.platform }}.img
          path: results/*.img
          if: ${{ matrix.platform == 'raspberrypi' }}
-  deploy:
-    name: Deploy
-    needs: [image]
-    if: github.event_name == 'workflow_dispatch' && github.event.inputs.deploy != 'NONE'
-    runs-on: ubuntu-latest
-    env:
-      REGISTRY: >-
-        ${{
-          fromJson('{
-            "alpha": "https://alpha-registry-x.start9.com",
-            "beta": "https://beta-registry.start9.com"
-          }')[github.event.inputs.deploy]
-        }}
-      S3_BUCKET: s3://startos-images
-      S3_CDN: https://startos-images.nyc3.cdn.digitaloceanspaces.com
-    steps:
-      - uses: actions/checkout@v6
-        with:
-          sparse-checkout: web/package.json
-      - name: Determine version
-        id: version
-        run: |
-          VERSION=$(sed -n 's/.*"version": *"\([^"]*\)".*/\1/p' web/package.json | head -1)
-          echo "version=$VERSION" >> "$GITHUB_OUTPUT"
-          echo "Version: $VERSION"
-      - name: Download squashfs artifacts
-        uses: actions/download-artifact@v8
-        with:
-          pattern: "*.squashfs"
-          path: artifacts/
-          merge-multiple: true
-      - name: Download ISO artifacts
-        uses: actions/download-artifact@v8
-        with:
-          pattern: "*.iso"
-          path: artifacts/
-          merge-multiple: true
-      - name: Install start-cli
-        run: |
-          ARCH=$(uname -m)
-          OS=$(uname -s | tr '[:upper:]' '[:lower:]')
-          ASSET_NAME="start-cli_${ARCH}-${OS}"
-          DOWNLOAD_URL=$(curl -fsS \
-            -H "Authorization: token ${{ github.token }}" \
-            https://api.github.com/repos/Start9Labs/start-os/releases \
-            | jq -r '[.[].assets[] | select(.name=="'"$ASSET_NAME"'")] | first | .browser_download_url')
-          curl -fsSL \
-            -H "Authorization: token ${{ github.token }}" \
-            -H "Accept: application/octet-stream" \
-            "$DOWNLOAD_URL" -o /tmp/start-cli
-          sudo install -m 755 /tmp/start-cli /usr/local/bin/start-cli
-          echo "start-cli: $(start-cli --version)"
-      - name: Configure S3
-        run: |
-          sudo apt-get install -y -qq s3cmd > /dev/null
-          cat > ~/.s3cfg <<EOF
-          [default]
-          access_key = ${{ secrets.S3_ACCESS_KEY }}
-          secret_key = ${{ secrets.S3_SECRET_KEY }}
-          host_base = nyc3.digitaloceanspaces.com
-          host_bucket = %(bucket)s.nyc3.digitaloceanspaces.com
-          use_https = True
-          EOF
-      - name: Set up developer key
-        run: |
-          mkdir -p ~/.startos
-          printf '%s' "${{ secrets.DEV_KEY }}" > ~/.startos/developer.key.pem
-      - name: Upload to S3
-        run: |
-          VERSION="${{ steps.version.outputs.version }}"
-          cd artifacts
-          for file in *.iso *.squashfs; do
-            [ -f "$file" ] || continue
-            echo "Uploading $file..."
-            s3cmd put -P "$file" "${{ env.S3_BUCKET }}/v${VERSION}/$file"
-          done
-      - name: Register OS version
-        run: |
-          VERSION="${{ steps.version.outputs.version }}"
-          start-cli --registry="${{ env.REGISTRY }}" registry os version add \
-            "$VERSION" "v${VERSION}" '' ">=0.3.5 <=${VERSION}"
-      - name: Index assets in registry
-        run: |
-          VERSION="${{ steps.version.outputs.version }}"
-          cd artifacts
-          for file in *.squashfs *.iso; do
-            [ -f "$file" ] || continue
-            PLATFORM=$(echo "$file" | sed 's/.*_\([^.]*\)\.\(squashfs\|iso\)$/\1/')
-            echo "Indexing $file for platform $PLATFORM..."
-            start-cli --registry="${{ env.REGISTRY }}" registry os asset add \
-              --platform="$PLATFORM" \
-              --version="$VERSION" \
-              "$file" \
-              "${{ env.S3_CDN }}/v${VERSION}/$file"
-          done


@@ -24,7 +24,7 @@ jobs:
     if: github.event.pull_request.draft != true
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v6
+      - uses: actions/checkout@v4
        with:
          submodules: recursive
      - uses: ./.github/actions/setup-build

.gitignore

@@ -22,5 +22,3 @@ secrets.db
 tmp
 web/.i18n-checked
 docs/USER.md
-*.s9pk
-/build/lib/migration-images


@@ -5,7 +5,7 @@ StartOS is an open-source Linux distribution for running personal servers. It ma
 ## Tech Stack
 - Backend: Rust (async/Tokio, Axum web framework)
-- Frontend: Angular 21 + TypeScript + Taiga UI 5
+- Frontend: Angular 20 + TypeScript + TaigaUI
 - Container runtime: Node.js/TypeScript with LXC
 - Database/State: Patch-DB (git submodule) - storage layer with reactive frontend sync
 - API: JSON-RPC via rpc-toolkit (see `core/rpc-toolkit.md`)
@@ -30,7 +30,7 @@ StartOS is an open-source Linux distribution for running personal servers. It ma
 - **`core/`** — Rust backend daemon. Produces a single binary `startbox` that is symlinked as `startd` (main daemon), `start-cli` (CLI), `start-container` (runs inside LXC containers), `registrybox` (package registry), and `tunnelbox` (VPN/tunnel). Handles all backend logic: RPC API, service lifecycle, networking (DNS, ACME, WiFi, Tor, WireGuard), backups, and database state management. See [core/ARCHITECTURE.md](core/ARCHITECTURE.md).
-- **`web/`** — Angular 21 + TypeScript workspace using Taiga UI 5. Contains three applications (admin UI, setup wizard, VPN management) and two shared libraries (common components/services, marketplace). Communicates with the backend exclusively via JSON-RPC. See [web/ARCHITECTURE.md](web/ARCHITECTURE.md).
+- **`web/`** — Angular 20 + TypeScript workspace using Taiga UI. Contains three applications (admin UI, setup wizard, VPN management) and two shared libraries (common components/services, marketplace). Communicates with the backend exclusively via JSON-RPC. See [web/ARCHITECTURE.md](web/ARCHITECTURE.md).
 - **`container-runtime/`** — Node.js runtime that runs inside each service's LXC container. Loads the service's JavaScript from its S9PK package and manages subcontainers. Communicates with the host daemon via JSON-RPC over Unix socket. See [container-runtime/CLAUDE.md](container-runtime/CLAUDE.md).


@@ -11,14 +11,12 @@ Each major component has its own `CLAUDE.md` with detailed guidance: `core/`, `w
 ## Build & Development
 See [CONTRIBUTING.md](CONTRIBUTING.md) for:
 - Environment setup and requirements
 - Build commands and make targets
 - Testing and formatting commands
 - Environment variables
 **Quick reference:**
 ```bash
 . ./devmode.sh # Enable dev mode
 make update-startbox REMOTE=start9@<ip> # Fastest iteration (binary + UI)
@@ -28,10 +26,8 @@ make test-core # Run Rust tests
 ## Operating Rules
 - Always verify cross-layer changes using the order described in [ARCHITECTURE.md](ARCHITECTURE.md#cross-layer-verification)
-- Check component-level CLAUDE.md files for component-specific conventions. ALWAYS read it before operating on that component.
+- Check component-level CLAUDE.md files for component-specific conventions
 - Follow existing patterns before inventing new ones
-- Always use `make` recipes when they exist for testing builds rather than manually invoking build commands
-- **Commit signing:** Never push unsigned commits. Before pushing, check all unpushed commits for signatures with `git log --show-signature @{upstream}..HEAD`. If any are unsigned, prompt the user to sign them with `git rebase --exec 'git commit --amend -S --no-edit' @{upstream}`.
 ## Supplementary Documentation


@@ -7,7 +7,7 @@ GIT_HASH_FILE := $(shell ./build/env/check-git-hash.sh)
 VERSION_FILE := $(shell ./build/env/check-version.sh)
 BASENAME := $(shell PROJECT=startos ./build/env/basename.sh)
 PLATFORM := $(shell if [ -f $(PLATFORM_FILE) ]; then cat $(PLATFORM_FILE); else echo unknown; fi)
-ARCH := $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo aarch64; elif [ "$(PLATFORM)" = "rockchip64" ]; then echo aarch64; else echo $(PLATFORM) | sed 's/-nonfree$$//g; s/-nvidia$$//g'; fi)
+ARCH := $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo aarch64; else echo $(PLATFORM) | sed 's/-nonfree$$//g'; fi)
 RUST_ARCH := $(shell if [ "$(ARCH)" = "riscv64" ]; then echo riscv64gc; else echo $(ARCH); fi)
 REGISTRY_BASENAME := $(shell PROJECT=start-registry PLATFORM=$(ARCH) ./build/env/basename.sh)
 TUNNEL_BASENAME := $(shell PROJECT=start-tunnel PLATFORM=$(ARCH) ./build/env/basename.sh)
@@ -15,7 +15,7 @@ IMAGE_TYPE=$(shell if [ "$(PLATFORM)" = raspberrypi ]; then echo img; else echo
 WEB_UIS := web/dist/raw/ui/index.html web/dist/raw/setup-wizard/index.html
 COMPRESSED_WEB_UIS := web/dist/static/ui/index.html web/dist/static/setup-wizard/index.html
 FIRMWARE_ROMS := build/lib/firmware/$(PLATFORM) $(shell jq --raw-output '.[] | select(.platform[] | contains("$(PLATFORM)")) | "./build/lib/firmware/$(PLATFORM)/" + .id + ".rom.gz"' build/lib/firmware.json)
-BUILD_SRC := $(call ls-files, build/lib) build/lib/depends build/lib/conflicts $(FIRMWARE_ROMS) build/lib/migration-images/.done
+BUILD_SRC := $(call ls-files, build/lib) build/lib/depends build/lib/conflicts $(FIRMWARE_ROMS)
 IMAGE_RECIPE_SRC := $(call ls-files, build/image-recipe/)
 STARTD_SRC := core/startd.service $(BUILD_SRC)
 CORE_SRC := $(call ls-files, core) $(shell git ls-files --recurse-submodules patch-db) $(GIT_HASH_FILE)
@@ -89,7 +89,6 @@ clean:
 	rm -rf container-runtime/node_modules
 	rm -f container-runtime/*.squashfs
 	(cd sdk && make clean)
-	rm -rf build/lib/migration-images
 	rm -f env/*.txt
 format:
@@ -106,10 +105,6 @@ test-sdk: $(call ls-files, sdk) sdk/base/lib/osBindings/index.ts
 test-container-runtime: container-runtime/node_modules/.package-lock.json $(call ls-files, container-runtime/src) container-runtime/package.json container-runtime/tsconfig.json
 	cd container-runtime && npm test
-build/lib/migration-images/.done: build/save-migration-images.sh
-	ARCH=$(ARCH) ./build/save-migration-images.sh build/lib/migration-images
-	touch $@
 install-cli: $(GIT_HASH_FILE)
 	./core/build/build-cli.sh --install
@@ -144,11 +139,6 @@ install-tunnel: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox
 	$(call mkdir,$(DESTDIR)/usr/lib/startos/scripts)
 	$(call cp,build/lib/scripts/forward-port,$(DESTDIR)/usr/lib/startos/scripts/forward-port)
-	$(call mkdir,$(DESTDIR)/etc/apt/sources.list.d)
-	$(call cp,apt/start9.list,$(DESTDIR)/etc/apt/sources.list.d/start9.list)
-	$(call mkdir,$(DESTDIR)/usr/share/keyrings)
-	$(call cp,apt/start9.gpg,$(DESTDIR)/usr/share/keyrings/start9.gpg)
 core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox: $(CORE_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) web/dist/static/start-tunnel/index.html
ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build/build-tunnelbox.sh ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build/build-tunnelbox.sh
@@ -160,7 +150,7 @@ results/$(BASENAME).deb: debian/dpkg-build.sh $(call ls-files,debian/startos) $(
registry-deb: results/$(REGISTRY_BASENAME).deb registry-deb: results/$(REGISTRY_BASENAME).deb
results/$(REGISTRY_BASENAME).deb: debian/dpkg-build.sh $(call ls-files,debian/start-registry) $(REGISTRY_TARGETS) results/$(REGISTRY_BASENAME).deb: debian/dpkg-build.sh $(call ls-files,debian/start-registry) $(REGISTRY_TARGETS)
PROJECT=start-registry PLATFORM=$(ARCH) REQUIRES=debian DEPENDS=ca-certificates ./build/os-compat/run-compat.sh ./debian/dpkg-build.sh PROJECT=start-registry PLATFORM=$(ARCH) REQUIRES=debian ./build/os-compat/run-compat.sh ./debian/dpkg-build.sh
tunnel-deb: results/$(TUNNEL_BASENAME).deb tunnel-deb: results/$(TUNNEL_BASENAME).deb
@@ -193,9 +183,6 @@ install: $(STARTOS_TARGETS)
$(call mkdir,$(DESTDIR)/lib/systemd/system) $(call mkdir,$(DESTDIR)/lib/systemd/system)
$(call cp,core/startd.service,$(DESTDIR)/lib/systemd/system/startd.service) $(call cp,core/startd.service,$(DESTDIR)/lib/systemd/system/startd.service)
if /bin/bash -c '[[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]'; then \
sed -i '/^Environment=/a Environment=RUST_BACKTRACE=full' $(DESTDIR)/lib/systemd/system/startd.service; \
fi
$(call mkdir,$(DESTDIR)/usr/lib) $(call mkdir,$(DESTDIR)/usr/lib)
$(call rm,$(DESTDIR)/usr/lib/startos) $(call rm,$(DESTDIR)/usr/lib/startos)
@@ -255,10 +242,10 @@ update-deb: results/$(BASENAME).deb # better than update, but only available fro
update-squashfs: results/$(BASENAME).squashfs update-squashfs: results/$(BASENAME).squashfs
@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi @if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
$(eval SQFS_SUM := $(shell b3sum results/$(BASENAME).squashfs | head -c 32)) $(eval SQFS_SUM := $(shell b3sum results/$(BASENAME).squashfs))
$(eval SQFS_SIZE := $(shell du -s --bytes results/$(BASENAME).squashfs | awk '{print $$1}')) $(eval SQFS_SIZE := $(shell du -s --bytes results/$(BASENAME).squashfs | awk '{print $$1}'))
$(call ssh,'sudo /usr/lib/startos/scripts/prune-images $(SQFS_SIZE)') $(call ssh,'/usr/lib/startos/scripts/prune-images $(SQFS_SIZE)')
$(call ssh,'sudo /usr/lib/startos/scripts/prune-boot') $(call ssh,'/usr/lib/startos/scripts/prune-boot')
$(call cp,results/$(BASENAME).squashfs,/media/startos/images/next.rootfs) $(call cp,results/$(BASENAME).squashfs,/media/startos/images/next.rootfs)
$(call ssh,'sudo CHECKSUM=$(SQFS_SUM) /usr/lib/startos/scripts/upgrade /media/startos/images/next.rootfs') $(call ssh,'sudo CHECKSUM=$(SQFS_SUM) /usr/lib/startos/scripts/upgrade /media/startos/images/next.rootfs')
@@ -291,11 +278,7 @@ core/bindings/index.ts: $(call ls-files, core) $(ENVIRONMENT_FILE)
rm -rf core/bindings rm -rf core/bindings
./core/build/build-ts.sh ./core/build/build-ts.sh
ls core/bindings/*.ts | sed 's/core\/bindings\/\([^.]*\)\.ts/export { \1 } from ".\/\1";/g' | grep -v '"./index"' | tee core/bindings/index.ts ls core/bindings/*.ts | sed 's/core\/bindings\/\([^.]*\)\.ts/export { \1 } from ".\/\1";/g' | grep -v '"./index"' | tee core/bindings/index.ts
if [ -d core/bindings/tunnel ]; then \ npm --prefix sdk exec -- prettier --config ./sdk/base/package.json -w ./core/bindings/*.ts
ls core/bindings/tunnel/*.ts | sed 's/core\/bindings\/tunnel\/\([^.]*\)\.ts/export { \1 } from ".\/\1";/g' | grep -v '"./index"' > core/bindings/tunnel/index.ts; \
echo 'export * as Tunnel from "./tunnel";' >> core/bindings/index.ts; \
fi
npm --prefix sdk/base exec -- prettier --config=./sdk/base/package.json -w './core/bindings/**/*.ts'
touch core/bindings/index.ts touch core/bindings/index.ts
sdk/dist/package.json sdk/baseDist/package.json: $(call ls-files, sdk) sdk/base/lib/osBindings/index.ts sdk/dist/package.json sdk/baseDist/package.json: $(call ls-files, sdk) sdk/base/lib/osBindings/index.ts


@@ -52,7 +52,7 @@ The easiest path. [Buy a server](https://store.start9.com) from Start9 and plug
### Build your own ### Build your own
Follow the [install guide](https://docs.start9.com/start-os/installing.html) to install StartOS on your own hardware. . Reasons to go this route: Install StartOS on your own hardware. Follow one of the [DIY guides](https://start9.com/latest/diy). Reasons to go this route:
1. You already have compatible hardware 1. You already have compatible hardware
2. You want to save on shipping costs 2. You want to save on shipping costs

TODO.md Normal file

@@ -0,0 +1,215 @@
# AI Agent TODOs
Pending tasks for AI agents. Remove items when completed.
## Unreviewed CLAUDE.md Sections
- [ ] Architecture - Web (`/web`) - @MattDHill
## Features
- [ ] Support preferred external ports besides 443 - @dr-bonez
**Problem**: Currently, port 443 is the only preferred external port that is actually honored. When a
service requests `preferred_external_port: 8443` (or any non-443 value) for SSL, the system ignores
the preference and assigns a dynamic-range port (49152-65535). The `preferred_external_port` is only
used as a label for Tor mappings and as a trigger for the port-443 special case in `update()`.
**Goal**: Honor `preferred_external_port` for both SSL and non-SSL binds when the requested port is
available, with proper conflict resolution and fallback to dynamic-range allocation.
### Design
**Key distinction**: There are two separate concepts for SSL port usage:
1. **Port ownership** (`assigned_ssl_port`) — A port exclusively owned by a binding, allocated from
`AvailablePorts`. Used for server hostnames (`.local`, mDNS, etc.) and iptables forwards.
2. **Domain SSL port** — The port used for domain-based vhost entries. A binding does NOT need to own
a port to have a domain vhost on it. The VHostController already supports multiple hostnames on the
same port via SNI. Any binding can create a domain vhost entry on any SSL port that the
VHostController has a listener for, regardless of who "owns" that port.
For example: the OS owns port 443 as its `assigned_ssl_port`. A service with
`preferred_external_port: 443` won't get 443 as its `assigned_ssl_port` (it's taken), but it CAN
still have domain vhost entries on port 443 — SNI routes by hostname.
#### 1. Preferred Port Allocation for Ownership ✅ DONE
`AvailablePorts::try_alloc(port) -> Option<u16>` added to `forward.rs`. `BindInfo::new()` and
`BindInfo::update()` attempt the preferred port first, falling back to dynamic-range allocation.
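The allocation order above can be sketched as follows (a minimal sketch — the `AvailablePorts` internals and method signatures here are assumptions, not the real `forward.rs` code):

```rust
use std::collections::BTreeSet;

// Start of the dynamic/ephemeral range per RFC 6335.
const DYNAMIC_START: u16 = 49152;

// Minimal stand-in for AvailablePorts; the real struct tracks more state.
#[derive(Default)]
pub struct AvailablePorts {
    allocated: BTreeSet<u16>,
}

impl AvailablePorts {
    /// Claim a specific preferred port; None if it is already taken.
    pub fn try_alloc(&mut self, port: u16) -> Option<u16> {
        if port == 0 || self.allocated.contains(&port) {
            return None;
        }
        self.allocated.insert(port);
        Some(port)
    }

    /// Claim the lowest free port in the dynamic range (49152-65535).
    pub fn alloc_dynamic(&mut self) -> Option<u16> {
        for p in DYNAMIC_START..=u16::MAX {
            if !self.allocated.contains(&p) {
                self.allocated.insert(p);
                return Some(p);
            }
        }
        None
    }

    /// The BindInfo::new()/update() pattern: preferred port first,
    /// dynamic-range fallback if it is unavailable.
    pub fn alloc_preferred_or_dynamic(&mut self, preferred: Option<u16>) -> Option<u16> {
        match preferred.and_then(|p| self.try_alloc(p)) {
            Some(p) => Some(p),
            None => self.alloc_dynamic(),
        }
    }
}

fn main() {
    let mut ports = AvailablePorts::default();
    // First requester gets its preferred port...
    assert_eq!(ports.alloc_preferred_or_dynamic(Some(8443)), Some(8443));
    // ...a second requester of the same port falls back to the dynamic range.
    assert_eq!(ports.alloc_preferred_or_dynamic(Some(8443)), Some(49152));
}
```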
#### 2. Per-Address Enable/Disable ✅ DONE
Gateway-level `private_disabled`/`public_enabled` on `NetInfo` replaced with per-address
`DerivedAddressInfo` on `BindInfo`. `hostname_info` removed from `Host` — computed addresses now
live in `BindInfo.addresses.possible`.
**`DerivedAddressInfo` struct** (on `BindInfo`):
```rust
pub struct DerivedAddressInfo {
pub private_disabled: BTreeSet<HostnameInfo>,
pub public_enabled: BTreeSet<HostnameInfo>,
pub possible: BTreeSet<HostnameInfo>, // COMPUTED by update()
}
```
`DerivedAddressInfo::enabled()` returns `possible` filtered by the two sets. `HostnameInfo` derives
`Ord` so it can be stored in `BTreeSet`s. `AddressFilter` (implementing `InterfaceFilter`) derives
the enabled gateway set from `DerivedAddressInfo` for vhost/forward filtering.
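A minimal sketch of the `enabled()` filter, with `String` standing in for `HostnameInfo` (the real type derives `Ord` for exactly this `BTreeSet` usage); the rule for combining the two sets below is one plausible reading, not verified behavior:

```rust
use std::collections::BTreeSet;

#[derive(Default)]
pub struct DerivedAddressInfo {
    pub private_disabled: BTreeSet<String>,
    pub public_enabled: BTreeSet<String>,
    pub possible: BTreeSet<String>, // computed by update()
}

impl DerivedAddressInfo {
    /// One plausible reading: a possible address is served unless it was
    /// disabled for private use, with an explicit public enable overriding.
    pub fn enabled(&self) -> BTreeSet<String> {
        self.possible
            .iter()
            .filter(|a| !self.private_disabled.contains(*a) || self.public_enabled.contains(*a))
            .cloned()
            .collect()
    }
}

fn main() {
    let mut info = DerivedAddressInfo::default();
    info.possible.insert("adjective-noun.local".to_string());
    info.possible.insert("example.com".to_string());
    // User toggles off the .local address via set-address-enabled.
    info.private_disabled.insert("adjective-noun.local".to_string());
    let enabled = info.enabled();
    assert!(!enabled.contains("adjective-noun.local"));
    assert!(enabled.contains("example.com"));
}
```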
**RPC endpoint**: `set-gateway-enabled` replaced with `set-address-enabled` (on both
`server.host.binding` and `package.host.binding`).
**How disabling works per address type** (enforcement deferred to Section 3):
- **WAN/LAN IP:port**: Will be enforced via **source-IP gating** in the vhost layer (Section 3).
- **Hostname-based addresses** (`.local`, domains): Disabled by **not creating the vhost/SNI
entry** for that hostname.
#### 3. Eliminate the Port 5443 Hack: Source-IP-Based WAN Blocking (`vhost.rs`, `net_controller.rs`)
**Current problem**: The `if ssl.preferred_external_port == 443` branch (line 341 of
`net_controller.rs`) creates a bespoke dual-vhost setup: port 5443 for private-only access and port
443 for public (or public+private). This exists because both public and private traffic arrive on the
same port 443 listener, and the current `InterfaceFilter`/`PublicFilter` model distinguishes
public/private by which _network interface_ the connection arrived on — which doesn't work when both
traffic types share a listener.
**Solution**: Determine public vs private based on **source IP** at the vhost level. Traffic arriving
from the gateway IP should be treated as public (the gateway may MASQUERADE/NAT internet traffic, so
anything from the gateway is potentially public). Traffic from LAN IPs is private.
This applies to **all** vhost targets, not just port 443:
- **Add a `public` field to `ProxyTarget`** (or an enum: `Public`, `Private`, `Both`) indicating
what traffic this target accepts, derived from the binding's user-controlled `public` field.
- **Modify `VHostTarget::filter()`** (`vhost.rs:342`): Instead of (or in addition to) checking the
network interface via `GatewayInfo`, check the source IP of the TCP connection against known gateway
IPs. If the source IP matches a gateway IP, or falls outside the local subnet, the connection is
public; otherwise it is private. Use this result to gate against the target's `public` field.
- **Eliminate the 5443 port entirely**: A single vhost entry on port 443 (or any shared SSL port) can
serve both public and private traffic, with per-target source-IP gating determining which backend
handles which connections.
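The source-IP classification might look like the following sketch (assumed logic and helper names, not the actual `vhost.rs` code):

```rust
use std::net::{IpAddr, Ipv4Addr};

/// Traffic from a known gateway IP (which may MASQUERADE/NAT internet
/// traffic), or from outside the local subnet, is treated as public.
pub fn is_public_source(src: IpAddr, gateway_ips: &[IpAddr], subnet: (Ipv4Addr, u8)) -> bool {
    if gateway_ips.contains(&src) {
        return true; // anything arriving from the gateway is potentially public
    }
    match src {
        IpAddr::V4(v4) => !in_subnet(v4, subnet.0, subnet.1),
        IpAddr::V6(_) => true, // conservative assumption for unknown v6 sources
    }
}

fn in_subnet(ip: Ipv4Addr, net: Ipv4Addr, prefix: u8) -> bool {
    let mask = if prefix == 0 { 0 } else { u32::MAX << (32 - prefix) };
    u32::from(ip) & mask == u32::from(net) & mask
}

fn main() {
    let gw = IpAddr::V4(Ipv4Addr::new(192, 168, 1, 1));
    let lan = (Ipv4Addr::new(192, 168, 1, 0), 24);
    assert!(is_public_source(gw, &[gw], lan)); // from the gateway -> public
    assert!(!is_public_source("192.168.1.50".parse().unwrap(), &[gw], lan)); // LAN peer -> private
    assert!(is_public_source("8.8.8.8".parse().unwrap(), &[gw], lan)); // off-subnet -> public
}
```

A `ProxyTarget` whose `public` field is false would then reject connections for which this returns true, and vice versa for public-only targets.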
#### 4. Port Forward Mapping in Patch-DB
When a binding is marked `public = true`, StartOS must record the required port forwards in patch-db
so the frontend can display them to the user. The user then configures these on their router manually.
For each public binding, store:
- The external port the router should forward (the actual vhost port used for domains, or the
`assigned_port` / `assigned_ssl_port` for non-domain access)
- The protocol (TCP/UDP)
- The StartOS LAN IP as the forward target
- Which service/binding this forward is for (for display purposes)
This mapping should be in the public database model so the frontend can read and display it.
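A sketch of what such a record might look like (all field and type names here are assumptions; the real public DB model may differ):

```rust
use std::net::IpAddr;

/// One port forward the user must configure on their router.
#[derive(Debug, Clone, PartialEq)]
pub struct PortForward {
    pub external_port: u16,  // port the router should forward
    pub protocol: Protocol,  // TCP or UDP
    pub target_ip: IpAddr,   // the StartOS LAN IP
    pub target_port: u16,    // the assigned_port / assigned_ssl_port
    pub label: String,       // which service/binding, for display
}

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Protocol {
    Tcp,
    Udp,
}

fn main() {
    let fwd = PortForward {
        external_port: 8443,
        protocol: Protocol::Tcp,
        target_ip: "192.168.1.9".parse().unwrap(),
        target_port: 8443,
        label: "nextcloud / web-ui".to_string(),
    };
    assert_eq!(fwd.protocol, Protocol::Tcp);
}
```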
#### 5. Simplify `update()` Domain Vhost Logic (`net_controller.rs`)
With source-IP gating in the vhost controller:
- **Remove the `== 443` special case** and the 5443 secondary vhost.
- For **server hostnames** (`.local`, mDNS, embassy, startos, localhost): use `assigned_ssl_port`
(the port the binding owns).
- For **domain-based vhost entries**: attempt to use `preferred_external_port` as the vhost port.
This succeeds if the port is either unused or already has an SSL listener (SNI handles sharing).
It fails only if the port is already in use by a non-SSL binding, or is a restricted port. On
failure, fall back to `assigned_ssl_port`.
- The binding's `public` field determines the `ProxyTarget`'s public/private gating.
- Hostname info must exactly match the actual vhost port used: for server hostnames, report
`ssl_port: assigned_ssl_port`. For domains, report `ssl_port: preferred_external_port` if it was
successfully used for the domain vhost, otherwise report `ssl_port: assigned_ssl_port`.
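The fallback decision can be sketched as follows (names and the restricted-port rule are assumptions; the real logic lives in `update()`):

```rust
/// How a candidate port is currently being used.
#[derive(Clone, Copy, PartialEq)]
pub enum PortUse {
    Free,
    SslListener, // SNI lets additional hostnames share this listener
    NonSsl,
}

/// Pick the vhost port for a domain entry: the preferred port if it is
/// free or already an SSL listener; otherwise the owned assigned_ssl_port.
pub fn domain_vhost_port(
    preferred: Option<u16>,
    assigned_ssl_port: u16,
    port_use: impl Fn(u16) -> PortUse,
    is_restricted: impl Fn(u16) -> bool,
) -> u16 {
    match preferred {
        Some(p) if !is_restricted(p) && port_use(p) != PortUse::NonSsl => p,
        _ => assigned_ssl_port,
    }
}

fn main() {
    let in_use = |p: u16| match p {
        443 => PortUse::SslListener, // OS-owned 443; SNI can share it
        80 => PortUse::NonSsl,
        _ => PortUse::Free,
    };
    let restricted = |p: u16| p < 1024 && p != 443 && p != 80;
    // Preferred 443 succeeds: SNI shares the existing SSL listener.
    assert_eq!(domain_vhost_port(Some(443), 49153, &in_use, &restricted), 443);
    // Preferred 80 fails (non-SSL binding holds it): fall back.
    assert_eq!(domain_vhost_port(Some(80), 49153, &in_use, &restricted), 49153);
}
```

The value this returns is also exactly what the hostname info should report as `ssl_port` for domain addresses.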
#### 6. Reachability Test Endpoint
New RPC endpoint that tests whether an address is actually reachable, with diagnostic info on
failure.
**RPC endpoint** (`binding.rs` or new file):
- **`test-address`** — Test reachability of a specific address.
```ts
interface BindingTestAddressParams {
internalPort: number;
address: HostnameInfo;
}
```
The backend simply performs the raw checks and returns the results. The **frontend** owns all
interpretation — it already knows the address type, expected IP, expected port, etc. from the
`HostnameInfo` data, so it can compare against the backend results and construct fix messaging.
```ts
interface TestAddressResult {
dns: string[] | null; // resolved IPs, null if not a domain address or lookup failed
portOpen: boolean | null; // TCP connect result, null if not applicable
}
```
This yields two RPC methods:
- `server.host.binding.test-address`
- `package.host.binding.test-address`
The frontend already has the full `HostnameInfo` context (expected IP, domain, port, gateway,
public/private). It compares the backend's raw results against the expected state and constructs
localized fix instructions. For example:
- `dns` returned but doesn't contain the expected WAN IP → "Update DNS A record for {domain}
to {wanIp}"
- `dns` is `null` for a domain address → "DNS lookup failed for {domain}"
- `portOpen` is `false` → "Configure port forward on your router: external {port} TCP →
{lanIp}:{port}"
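The backend's raw checks can be sketched with only the standard library (the real implementation would be async and live behind the RPC layer; names here are assumptions):

```rust
use std::net::{IpAddr, SocketAddr, TcpStream, ToSocketAddrs};
use std::time::Duration;

pub struct TestAddressResult {
    pub dns: Option<Vec<IpAddr>>, // resolved IPs; None if not a domain or lookup failed
    pub port_open: Option<bool>,  // TCP connect result; None if not applicable
}

/// Perform the raw checks; interpretation is left entirely to the frontend.
pub fn test_address(host: &str, port: u16, is_domain: bool) -> TestAddressResult {
    // DNS resolution is only meaningful for domain-based addresses.
    let dns = if is_domain {
        (host, port)
            .to_socket_addrs()
            .ok()
            .map(|addrs| addrs.map(|a| a.ip()).collect::<Vec<_>>())
    } else {
        None
    };
    // TCP connect against the literal IP, or the first resolved address.
    let target: Option<SocketAddr> = match host.parse::<IpAddr>() {
        Ok(ip) => Some(SocketAddr::new(ip, port)),
        Err(_) => dns
            .as_ref()
            .and_then(|ips| ips.first().map(|ip| SocketAddr::new(*ip, port))),
    };
    let port_open =
        target.map(|addr| TcpStream::connect_timeout(&addr, Duration::from_secs(3)).is_ok());
    TestAddressResult { dns, port_open }
}

fn main() {
    // Bind an ephemeral local listener so port_open is deterministically true.
    let listener = std::net::TcpListener::bind("127.0.0.1:0").unwrap();
    let port = listener.local_addr().unwrap().port();
    let res = test_address("127.0.0.1", port, false);
    assert_eq!(res.dns, None); // IP literal: no DNS check applies
    assert_eq!(res.port_open, Some(true));
}
```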
### Key Files
| File | Role |
| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `core/src/net/forward.rs` | `AvailablePorts` — port pool allocation, `try_alloc()` for preferred ports |
| `core/src/net/host/binding.rs` | `Bindings` (Map wrapper for patchdb), `BindInfo`/`NetInfo`/`DerivedAddressInfo`/`AddressFilter` — per-address enable/disable, `set-address-enabled` RPC |
| `core/src/net/net_controller.rs:259` | `NetServiceData::update()` — computes `DerivedAddressInfo.possible`, vhost/forward/DNS reconciliation, 5443 hack removal |
| `core/src/net/vhost.rs` | `VHostController` / `ProxyTarget` — source-IP gating for public/private |
| `core/src/net/gateway.rs` | `InterfaceFilter` trait and filter types (`AddressFilter`, `PublicFilter`, etc.) |
| `core/src/net/service_interface.rs` | `HostnameInfo` — derives `Ord` for `BTreeSet` usage |
| `core/src/net/host/address.rs` | `HostAddress` (flattened struct), domain CRUD endpoints |
| `sdk/base/lib/interfaces/Host.ts` | SDK `MultiHost.bindPort()` — no changes needed |
| `core/src/db/model/public.rs` | Public DB model — port forward mapping |
- [ ] Extract TS-exported types into a lightweight sub-crate for fast binding generation
**Problem**: `make ts-bindings` compiles the entire `start-os` crate (with all dependencies: tokio,
axum, openssl, etc.) just to run test functions that serialize type definitions to `.ts` files.
Even in debug mode, this takes minutes. The generated output is pure type info — no runtime code
is needed.
**Goal**: Generate TS bindings in seconds by isolating exported types in a small crate with minimal
dependencies.
**Approach**: Create a `core/bindings-types/` sub-crate containing (or re-exporting) all 168
`#[ts(export)]` types. This crate depends only on `serde`, `ts-rs`, `exver`, and other type-only
crates — not on tokio, axum, openssl, etc. Then `build-ts.sh` runs `cargo test -p bindings-types`
instead of `cargo test -p start-os`.
**Challenge**: The exported types are scattered across `core/src/` and reference each other and
other crate types. Extracting them requires either moving the type definitions into the sub-crate
(and importing them back into `start-os`) or restructuring to share a common types crate.
- [ ] Use auto-generated RPC types in the frontend instead of manual duplicates
**Problem**: The web frontend manually defines ~755 lines of API request/response types in
`web/projects/ui/src/app/services/api/api.types.ts` that can drift from the actual Rust types.
**Current state**: The Rust backend already has `#[ts(export)]` on RPC param types (e.g.
`AddTunnelParams`, `SetWifiEnabledParams`, `LoginParams`), and they are generated into
`core/bindings/`. However, commit `71b83245b` ("Chore/unexport api ts #2585", April 2024)
deliberately stopped building them into the SDK and had the frontend maintain its own types.
**Goal**: Reverse that decision — pipe the generated RPC types through the SDK into the frontend
so `api.types.ts` can import them instead of duplicating them. This eliminates drift between
backend and frontend API contracts.
- [ ] Auto-configure port forwards via UPnP/NAT-PMP/PCP - @dr-bonez
**Blocked by**: "Support preferred external ports besides 443" (must be implemented and tested
end-to-end first).
**Goal**: When a binding is marked public, automatically configure port forwards on the user's router
using UPnP, NAT-PMP, or PCP, instead of requiring manual router configuration. Fall back to
displaying manual instructions (the port forward mapping from patch-db) when auto-configuration is
unavailable or fails.

Binary file not shown.


@@ -1 +0,0 @@
deb [arch=amd64,arm64,riscv64 signed-by=/usr/share/keyrings/start9.gpg] https://start9-debs.nyc3.cdn.digitaloceanspaces.com stable main


@@ -1,139 +0,0 @@
#!/bin/bash
#
# Publish .deb files to an S3-hosted apt repository.
#
# Usage: publish-deb.sh <deb-file-or-directory> [<deb-file-or-directory> ...]
#
# Environment variables:
# GPG_PRIVATE_KEY - Armored GPG private key (imported if set)
# GPG_KEY_ID - GPG key ID for signing
# S3_ACCESS_KEY - S3 access key
# S3_SECRET_KEY - S3 secret key
# S3_ENDPOINT - S3 endpoint (default: https://nyc3.digitaloceanspaces.com)
# S3_BUCKET - S3 bucket name (default: start9-debs)
# SUITE - Apt suite name (default: stable)
# COMPONENT - Apt component name (default: main)
set -e
if [ $# -eq 0 ]; then
echo "Usage: $0 <deb-file-or-directory> [...]" >&2
exit 1
fi
BUCKET="${S3_BUCKET:-start9-debs}"
ENDPOINT="${S3_ENDPOINT:-https://nyc3.digitaloceanspaces.com}"
GPG_KEY_ID="${GPG_KEY_ID:-5259ADFC2D63C217}"
SUITE="${SUITE:-stable}"
COMPONENT="${COMPONENT:-main}"
REPO_DIR="$(mktemp -d)"
cleanup() {
rm -rf "$REPO_DIR"
}
trap cleanup EXIT
# Import GPG key if provided
if [ -n "$GPG_PRIVATE_KEY" ]; then
echo "$GPG_PRIVATE_KEY" | gpg --batch --import 2>/dev/null
fi
# Configure s3cmd
if [ -n "$S3_ACCESS_KEY" ] && [ -n "$S3_SECRET_KEY" ]; then
S3CMD_CONFIG="$(mktemp)"
cat > "$S3CMD_CONFIG" <<EOF
[default]
access_key = ${S3_ACCESS_KEY}
secret_key = ${S3_SECRET_KEY}
host_base = $(echo "$ENDPOINT" | sed 's|https://||')
host_bucket = %(bucket)s.$(echo "$ENDPOINT" | sed 's|https://||')
use_https = True
EOF
s3() {
s3cmd -c "$S3CMD_CONFIG" "$@"
}
else
# Fall back to default ~/.s3cfg
S3CMD_CONFIG=""
s3() {
s3cmd "$@"
}
fi
# Sync existing repo from S3
echo "Syncing existing repo from s3://${BUCKET}/ ..."
s3 sync --no-mime-magic "s3://${BUCKET}/" "$REPO_DIR/" 2>/dev/null || true
# Collect all .deb files from arguments
DEB_FILES=()
for arg in "$@"; do
if [ -d "$arg" ]; then
while IFS= read -r -d '' f; do
DEB_FILES+=("$f")
done < <(find "$arg" -name '*.deb' -print0)
elif [ -f "$arg" ]; then
DEB_FILES+=("$arg")
else
echo "Warning: $arg is not a file or directory, skipping" >&2
fi
done
if [ ${#DEB_FILES[@]} -eq 0 ]; then
echo "No .deb files found" >&2
exit 1
fi
# Copy each deb to the pool, renaming to standard format
for deb in "${DEB_FILES[@]}"; do
PKG_NAME="$(dpkg-deb --field "$deb" Package)"
POOL_DIR="$REPO_DIR/pool/${COMPONENT}/${PKG_NAME:0:1}/${PKG_NAME}"
mkdir -p "$POOL_DIR"
cp "$deb" "$POOL_DIR/"
dpkg-name -o "$POOL_DIR/$(basename "$deb")" 2>/dev/null || true
echo "Added: $(basename "$deb") -> pool/${COMPONENT}/${PKG_NAME:0:1}/${PKG_NAME}/"
done
# Generate Packages indices for each architecture
for arch in amd64 arm64 riscv64; do
BINARY_DIR="$REPO_DIR/dists/${SUITE}/${COMPONENT}/binary-${arch}"
mkdir -p "$BINARY_DIR"
(
cd "$REPO_DIR"
dpkg-scanpackages --multiversion --arch "$arch" pool/ > "$BINARY_DIR/Packages"
gzip -k -f "$BINARY_DIR/Packages"
)
echo "Generated Packages index for ${arch}"
done
# Generate Release file
(
cd "$REPO_DIR/dists/${SUITE}"
apt-ftparchive release \
-o "APT::FTPArchive::Release::Origin=Start9" \
-o "APT::FTPArchive::Release::Label=Start9" \
-o "APT::FTPArchive::Release::Suite=${SUITE}" \
-o "APT::FTPArchive::Release::Codename=${SUITE}" \
-o "APT::FTPArchive::Release::Architectures=amd64 arm64 riscv64" \
-o "APT::FTPArchive::Release::Components=${COMPONENT}" \
. > Release
)
echo "Generated Release file"
# Sign if GPG key is available
if [ -n "$GPG_KEY_ID" ]; then
(
cd "$REPO_DIR/dists/${SUITE}"
gpg --default-key "$GPG_KEY_ID" --batch --yes --detach-sign -o Release.gpg Release
gpg --default-key "$GPG_KEY_ID" --batch --yes --clearsign -o InRelease Release
)
echo "Signed Release file with key ${GPG_KEY_ID}"
else
echo "Warning: GPG_KEY_ID not set, Release file is unsigned" >&2
fi
# Upload to S3
echo "Uploading to s3://${BUCKET}/ ..."
s3 sync --acl-public --no-mime-magic "$REPO_DIR/" "s3://${BUCKET}/"
[ -n "$S3CMD_CONFIG" ] && rm -f "$S3CMD_CONFIG"
echo "Done."


@@ -11,7 +11,6 @@ cifs-utils
conntrack conntrack
cryptsetup cryptsetup
curl curl
dkms
dmidecode dmidecode
dnsutils dnsutils
dosfstools dosfstools
@@ -37,7 +36,6 @@ lvm2
lxc lxc
magic-wormhole magic-wormhole
man-db man-db
mokutil
ncdu ncdu
net-tools net-tools
network-manager network-manager
@@ -57,7 +55,6 @@ socat
sqlite3 sqlite3
squashfs-tools squashfs-tools
squashfs-tools-ng squashfs-tools-ng
ssl-cert
sudo sudo
systemd systemd
systemd-resolved systemd-resolved


@@ -1 +0,0 @@
+ nmap


@@ -12,10 +12,6 @@ fi
if [[ "$PLATFORM" =~ -nonfree$ ]]; then if [[ "$PLATFORM" =~ -nonfree$ ]]; then
FEATURES+=("nonfree") FEATURES+=("nonfree")
fi fi
if [[ "$PLATFORM" =~ -nvidia$ ]]; then
FEATURES+=("nonfree")
FEATURES+=("nvidia")
fi
feature_file_checker=' feature_file_checker='
/^#/ { next } /^#/ { next }


@@ -5,3 +5,6 @@
+ firmware-libertas + firmware-libertas
+ firmware-misc-nonfree + firmware-misc-nonfree
+ firmware-realtek + firmware-realtek
+ nvidia-container-toolkit
# + nvidia-driver
# + nvidia-kernel-dkms


@@ -1 +0,0 @@
+ nvidia-container-toolkit


@@ -1,6 +1,5 @@
+ gdisk - grub-efi
+ parted + parted
+ u-boot-rpi
+ raspberrypi-net-mods + raspberrypi-net-mods
+ raspberrypi-sys-mods + raspberrypi-sys-mods
+ raspi-config + raspi-config


@@ -23,8 +23,6 @@ RUN apt-get update && \
squashfs-tools \ squashfs-tools \
rsync \ rsync \
b3sum \ b3sum \
btrfs-progs \
gdisk \
dpkg-dev dpkg-dev


@@ -1,6 +1,7 @@
#!/bin/bash #!/bin/bash
set -e set -e
MAX_IMG_LEN=$((4 * 1024 * 1024 * 1024)) # 4GB
echo "==== StartOS Image Build ====" echo "==== StartOS Image Build ===="
@@ -33,14 +34,14 @@ fi
IMAGE_BASENAME=startos-${VERSION_FULL}_${IB_TARGET_PLATFORM} IMAGE_BASENAME=startos-${VERSION_FULL}_${IB_TARGET_PLATFORM}
BOOTLOADERS=grub-efi BOOTLOADERS=grub-efi
if [ "$IB_TARGET_PLATFORM" = "x86_64" ] || [ "$IB_TARGET_PLATFORM" = "x86_64-nonfree" ] || [ "$IB_TARGET_PLATFORM" = "x86_64-nvidia" ]; then if [ "$IB_TARGET_PLATFORM" = "x86_64" ] || [ "$IB_TARGET_PLATFORM" = "x86_64-nonfree" ]; then
IB_TARGET_ARCH=amd64 IB_TARGET_ARCH=amd64
QEMU_ARCH=x86_64 QEMU_ARCH=x86_64
BOOTLOADERS=grub-efi,syslinux BOOTLOADERS=grub-efi,syslinux
elif [ "$IB_TARGET_PLATFORM" = "aarch64" ] || [ "$IB_TARGET_PLATFORM" = "aarch64-nonfree" ] || [ "$IB_TARGET_PLATFORM" = "aarch64-nvidia" ] || [ "$IB_TARGET_PLATFORM" = "raspberrypi" ] || [ "$IB_TARGET_PLATFORM" = "rockchip64" ]; then elif [ "$IB_TARGET_PLATFORM" = "aarch64" ] || [ "$IB_TARGET_PLATFORM" = "aarch64-nonfree" ] || [ "$IB_TARGET_PLATFORM" = "raspberrypi" ] || [ "$IB_TARGET_PLATFORM" = "rockchip64" ]; then
IB_TARGET_ARCH=arm64 IB_TARGET_ARCH=arm64
QEMU_ARCH=aarch64 QEMU_ARCH=aarch64
elif [ "$IB_TARGET_PLATFORM" = "riscv64" ] || [ "$IB_TARGET_PLATFORM" = "riscv64-nonfree" ]; then elif [ "$IB_TARGET_PLATFORM" = "riscv64" ]; then
IB_TARGET_ARCH=riscv64 IB_TARGET_ARCH=riscv64
QEMU_ARCH=riscv64 QEMU_ARCH=riscv64
else else
@@ -59,13 +60,9 @@ mkdir -p $prep_results_dir
cd $prep_results_dir cd $prep_results_dir
NON_FREE= NON_FREE=
if [[ "${IB_TARGET_PLATFORM}" =~ -nonfree$ ]] || [[ "${IB_TARGET_PLATFORM}" =~ -nvidia$ ]] || [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then if [[ "${IB_TARGET_PLATFORM}" =~ -nonfree$ ]] || [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then
NON_FREE=1 NON_FREE=1
fi fi
NVIDIA=
if [[ "${IB_TARGET_PLATFORM}" =~ -nvidia$ ]]; then
NVIDIA=1
fi
IMAGE_TYPE=iso IMAGE_TYPE=iso
if [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ] || [ "${IB_TARGET_PLATFORM}" = "rockchip64" ]; then if [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ] || [ "${IB_TARGET_PLATFORM}" = "rockchip64" ]; then
IMAGE_TYPE=img IMAGE_TYPE=img
@@ -104,7 +101,7 @@ lb config \
--iso-preparer "START9 LABS; HTTPS://START9.COM" \ --iso-preparer "START9 LABS; HTTPS://START9.COM" \
--iso-publisher "START9 LABS; HTTPS://START9.COM" \ --iso-publisher "START9 LABS; HTTPS://START9.COM" \
--backports true \ --backports true \
--bootappend-live "boot=live noautologin console=tty0" \ --bootappend-live "boot=live noautologin" \
--bootloaders $BOOTLOADERS \ --bootloaders $BOOTLOADERS \
--cache false \ --cache false \
--mirror-bootstrap "https://deb.debian.org/debian/" \ --mirror-bootstrap "https://deb.debian.org/debian/" \
@@ -131,15 +128,6 @@ ff02::1 ip6-allnodes
ff02::2 ip6-allrouters ff02::2 ip6-allrouters
EOT EOT
if [[ "${IB_OS_ENV}" =~ (^|-)dev($|-) ]]; then
mkdir -p config/includes.chroot/etc/ssh/sshd_config.d
echo "PasswordAuthentication yes" > config/includes.chroot/etc/ssh/sshd_config.d/dev-password-auth.conf
fi
# Installer marker file (used by installed GRUB to detect the live USB)
mkdir -p config/includes.binary
touch config/includes.binary/.startos-installer
if [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then if [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then
mkdir -p config/includes.chroot mkdir -p config/includes.chroot
git clone --depth=1 --branch=stable https://github.com/raspberrypi/rpi-firmware.git config/includes.chroot/boot git clone --depth=1 --branch=stable https://github.com/raspberrypi/rpi-firmware.git config/includes.chroot/boot
@@ -180,13 +168,7 @@ sed -i -e '2i set timeout=5' config/bootloaders/grub-pc/config.cfg
mkdir -p config/archives mkdir -p config/archives
if [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then if [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then
# Fetch the keyring package (not the old raspberrypi.gpg.key, which has curl -fsSL https://archive.raspberrypi.com/debian/raspberrypi.gpg.key | gpg --dearmor -o config/archives/raspi.key
# SHA1-only binding signatures that sqv on Trixie rejects).
KEYRING_DEB=$(mktemp)
curl -fsSL -o "$KEYRING_DEB" https://archive.raspberrypi.com/debian/pool/main/r/raspberrypi-archive-keyring/raspberrypi-archive-keyring_2025.1+rpt1_all.deb
dpkg-deb -x "$KEYRING_DEB" "$KEYRING_DEB.d"
cp "$KEYRING_DEB.d/usr/share/keyrings/raspberrypi-archive-keyring.gpg" config/archives/raspi.key
rm -rf "$KEYRING_DEB" "$KEYRING_DEB.d"
echo "deb [arch=${IB_TARGET_ARCH} signed-by=/etc/apt/trusted.gpg.d/raspi.key.gpg] https://archive.raspberrypi.com/debian/ ${IB_SUITE} main" > config/archives/raspi.list echo "deb [arch=${IB_TARGET_ARCH} signed-by=/etc/apt/trusted.gpg.d/raspi.key.gpg] https://archive.raspberrypi.com/debian/ ${IB_SUITE} main" > config/archives/raspi.list
fi fi
@@ -195,7 +177,7 @@ if [ "${IB_TARGET_PLATFORM}" = "rockchip64" ]; then
echo "deb https://apt.armbian.com/ ${IB_SUITE} main" > config/archives/armbian.list echo "deb https://apt.armbian.com/ ${IB_SUITE} main" > config/archives/armbian.list
fi fi
if [ "$NVIDIA" = 1 ]; then if [ "$NON_FREE" = 1 ]; then
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o config/archives/nvidia-container-toolkit.key curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o config/archives/nvidia-container-toolkit.key
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \ curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
| sed 's#deb https://#deb [signed-by=/etc/apt/trusted.gpg.d/nvidia-container-toolkit.key.gpg] https://#g' \ | sed 's#deb https://#deb [signed-by=/etc/apt/trusted.gpg.d/nvidia-container-toolkit.key.gpg] https://#g' \
@@ -223,15 +205,11 @@ cat > config/hooks/normal/9000-install-startos.hook.chroot << EOF
set -e set -e
if [ "${IB_TARGET_PLATFORM}" != "raspberrypi" ]; then if [ "${NON_FREE}" = "1" ] && [ "${IB_TARGET_PLATFORM}" != "raspberrypi" ]; then
/usr/lib/startos/scripts/enable-kiosk
fi
if [ "${NVIDIA}" = "1" ]; then
# install a specific NVIDIA driver version # install a specific NVIDIA driver version
# ---------------- configuration ---------------- # ---------------- configuration ----------------
NVIDIA_DRIVER_VERSION="\${NVIDIA_DRIVER_VERSION:-580.126.09}" NVIDIA_DRIVER_VERSION="\${NVIDIA_DRIVER_VERSION:-580.119.02}"
BASE_URL="https://download.nvidia.com/XFree86/Linux-${QEMU_ARCH}" BASE_URL="https://download.nvidia.com/XFree86/Linux-${QEMU_ARCH}"
@@ -254,7 +232,7 @@ if [ "${NVIDIA}" = "1" ]; then
echo "[nvidia-hook] Target kernel version: \${KVER}" >&2 echo "[nvidia-hook] Target kernel version: \${KVER}" >&2
# Ensure kernel headers are present # Ensure kernel headers are present
TEMP_APT_DEPS=(build-essential pkg-config) TEMP_APT_DEPS=(build-essential)
if [ ! -e "/lib/modules/\${KVER}/build" ]; then if [ ! -e "/lib/modules/\${KVER}/build" ]; then
TEMP_APT_DEPS+=(linux-headers-\${KVER}) TEMP_APT_DEPS+=(linux-headers-\${KVER})
fi fi
@@ -281,15 +259,12 @@ if [ "${NVIDIA}" = "1" ]; then
echo "[nvidia-hook] Running NVIDIA installer for kernel \${KVER}" >&2 echo "[nvidia-hook] Running NVIDIA installer for kernel \${KVER}" >&2
if ! sh "\${RUN_PATH}" \ sh "\${RUN_PATH}" \
--silent \ --silent \
--kernel-name="\${KVER}" \ --kernel-name="\${KVER}" \
--no-x-check \ --no-x-check \
--no-nouveau-check \ --no-nouveau-check \
--no-runlevel-check; then --no-runlevel-check
cat /var/log/nvidia-installer.log
exit 1
fi
# Rebuild module metadata # Rebuild module metadata
echo "[nvidia-hook] Running depmod for \${KVER}" >&2 echo "[nvidia-hook] Running depmod for \${KVER}" >&2
@@ -297,32 +272,12 @@ if [ "${NVIDIA}" = "1" ]; then
echo "[nvidia-hook] NVIDIA \${NVIDIA_DRIVER_VERSION} installation complete for kernel \${KVER}" >&2
-echo "[nvidia-hook] Removing .run installer..." >&2
-rm -f "\${RUN_PATH}"
-echo "[nvidia-hook] Blacklisting nouveau..." >&2
-echo "blacklist nouveau" > /etc/modprobe.d/blacklist-nouveau.conf
-echo "options nouveau modeset=0" >> /etc/modprobe.d/blacklist-nouveau.conf
-echo "[nvidia-hook] Rebuilding initramfs..." >&2
-update-initramfs -u -k "\${KVER}"
echo "[nvidia-hook] Removing build dependencies..." >&2
apt-get purge -y nvidia-depends
apt-get autoremove -y
echo "[nvidia-hook] Removed build dependencies." >&2
fi
# Install linux-kbuild for sign-file (Secure Boot module signing)
KVER_ALL="\$(ls -1t /boot/vmlinuz-* 2>/dev/null | head -n1 | sed 's|.*/vmlinuz-||')"
if [ -n "\${KVER_ALL}" ]; then
KBUILD_VER="\$(echo "\${KVER_ALL}" | grep -oP '^\d+\.\d+')"
if [ -n "\${KBUILD_VER}" ]; then
echo "[build] Installing linux-kbuild-\${KBUILD_VER} for Secure Boot support" >&2
apt-get install -y "linux-kbuild-\${KBUILD_VER}" || echo "[build] WARNING: linux-kbuild-\${KBUILD_VER} not available" >&2
fi
fi
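The removed hunk above derives the `linux-kbuild` package name from the newest kernel image in `/boot` by keeping only the leading major.minor of the release string. A minimal sketch of that extraction step, using an invented release string (`6.12.9-amd64` is an assumed example, not taken from the build):

```shell
# A Debian kernel release like "6.12.9-amd64" maps to the "linux-kbuild-6.12"
# package that ships sign-file, so only the leading major.minor is kept.
kver="6.12.9-amd64"                              # hypothetical example input
kbuild_ver="$(echo "$kver" | grep -oP '^\d+\.\d+')"
echo "linux-kbuild-${kbuild_ver}"
```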
cp /etc/resolv.conf /etc/resolv.conf.bak
if [ "${IB_SUITE}" = trixie ] && [ "${IB_TARGET_ARCH}" != riscv64 ]; then
@@ -336,10 +291,9 @@ fi
if [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then
  ln -sf /usr/bin/pi-beep /usr/local/bin/beep
-  sh /boot/firmware/config.sh > /boot/firmware/config.txt
+  KERNEL_VERSION=${RPI_KERNEL_VERSION} sh /boot/config.sh > /boot/config.txt
  mkinitramfs -c gzip -o /boot/initrd.img-${RPI_KERNEL_VERSION}-rpi-v8 ${RPI_KERNEL_VERSION}-rpi-v8
  mkinitramfs -c gzip -o /boot/initrd.img-${RPI_KERNEL_VERSION}-rpi-2712 ${RPI_KERNEL_VERSION}-rpi-2712
-  cp /usr/lib/u-boot/rpi_arm64/u-boot.bin /boot/firmware/u-boot.bin
fi
useradd --shell /bin/bash -G startos -m start9
@@ -349,16 +303,14 @@ usermod -aG systemd-journal start9
echo "start9 ALL=(ALL:ALL) NOPASSWD: ALL" | sudo tee "/etc/sudoers.d/010_start9-nopasswd"
-if [ "${IB_TARGET_PLATFORM}" != "raspberrypi" ]; then
-  /usr/lib/startos/scripts/enable-kiosk
-fi
if ! [[ "${IB_OS_ENV}" =~ (^|-)dev($|-) ]]; then
  passwd -l start9
fi
-mkdir -p /media/startos
-chmod 750 /media/startos
-chown root:startos /media/startos
-start-cli --registry=https://alpha-registry-x.start9.com registry package download tor -d /usr/lib/startos/tor_${QEMU_ARCH}.s9pk -a "${QEMU_ARCH}"
EOF
SOURCE_DATE_EPOCH="${SOURCE_DATE_EPOCH:-$(date '+%s')}"
@@ -411,85 +363,38 @@ if [ "${IMAGE_TYPE}" = iso ]; then
elif [ "${IMAGE_TYPE}" = img ]; then
SECTOR_LEN=512
-FW_START=$((1024 * 1024)) # 1MiB (sector 2048) — Pi-specific
-FW_LEN=$((128 * 1024 * 1024)) # 128MiB (Pi firmware + U-Boot + DTBs)
-FW_END=$((FW_START + FW_LEN - 1))
-ESP_START=$((FW_END + 1)) # 100MB EFI System Partition (matches os_install)
-ESP_LEN=$((100 * 1024 * 1024))
-ESP_END=$((ESP_START + ESP_LEN - 1))
-BOOT_START=$((ESP_END + 1)) # 2GB /boot (matches os_install)
-BOOT_LEN=$((2 * 1024 * 1024 * 1024))
+BOOT_START=$((1024 * 1024)) # 1MiB
+BOOT_LEN=$((512 * 1024 * 1024)) # 512MiB
BOOT_END=$((BOOT_START + BOOT_LEN - 1))
ROOT_START=$((BOOT_END + 1))
-# Size root partition to fit the squashfs + 256MB overhead for btrfs
-# metadata and config overlay, avoiding the need for btrfs resize
-SQUASHFS_SIZE=$(stat -c %s $prep_results_dir/binary/live/filesystem.squashfs)
-ROOT_LEN=$(( SQUASHFS_SIZE + 256 * 1024 * 1024 ))
-# Align to sector boundary
-ROOT_LEN=$(( (ROOT_LEN + SECTOR_LEN - 1) / SECTOR_LEN * SECTOR_LEN ))
-# Total image: partitions + GPT backup header (34 sectors)
-IMG_LEN=$((ROOT_START + ROOT_LEN + 34 * SECTOR_LEN))
-# Fixed GPT partition UUIDs (deterministic, based on old MBR disk ID cb15ae4d)
-FW_UUID=cb15ae4d-0001-4000-8000-000000000001
-ESP_UUID=cb15ae4d-0002-4000-8000-000000000002
-BOOT_UUID=cb15ae4d-0003-4000-8000-000000000003
-ROOT_UUID=cb15ae4d-0004-4000-8000-000000000004
+ROOT_LEN=$((MAX_IMG_LEN - ROOT_START))
+ROOT_END=$((MAX_IMG_LEN - 1))
TARGET_NAME=$prep_results_dir/${IMAGE_BASENAME}.img
-truncate -s $IMG_LEN $TARGET_NAME
+truncate -s $MAX_IMG_LEN $TARGET_NAME
sfdisk $TARGET_NAME <<-EOF
-label: gpt
-${TARGET_NAME}1 : start=$((FW_START / SECTOR_LEN)), size=$((FW_LEN / SECTOR_LEN)), type=EBD0A0A2-B9E5-4433-87C0-68B6B72699C7, uuid=${FW_UUID}, name="firmware"
-${TARGET_NAME}2 : start=$((ESP_START / SECTOR_LEN)), size=$((ESP_LEN / SECTOR_LEN)), type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B, uuid=${ESP_UUID}, name="efi"
-${TARGET_NAME}3 : start=$((BOOT_START / SECTOR_LEN)), size=$((BOOT_LEN / SECTOR_LEN)), type=0FC63DAF-8483-4772-8E79-3D69D8477DE4, uuid=${BOOT_UUID}, name="boot"
-${TARGET_NAME}4 : start=$((ROOT_START / SECTOR_LEN)), size=$((ROOT_LEN / SECTOR_LEN)), type=B921B045-1DF0-41C3-AF44-4C6F280D3FAE, uuid=${ROOT_UUID}, name="root"
+label: dos
+label-id: 0xcb15ae4d
+unit: sectors
+sector-size: 512
+${TARGET_NAME}1 : start=$((BOOT_START / SECTOR_LEN)), size=$((BOOT_LEN / SECTOR_LEN)), type=c, bootable
+${TARGET_NAME}2 : start=$((ROOT_START / SECTOR_LEN)), size=$((ROOT_LEN / SECTOR_LEN)), type=83
EOF
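The sfdisk input above converts byte offsets and lengths into sectors by dividing by the sector length. A standalone sketch of that arithmetic for the MBR layout (values mirror the script; no disk is touched):

```shell
# Byte layout: boot partition at 1MiB, 512MiB long; root follows immediately.
SECTOR_LEN=512
BOOT_START=$((1024 * 1024))
BOOT_LEN=$((512 * 1024 * 1024))
BOOT_END=$((BOOT_START + BOOT_LEN - 1))
ROOT_START=$((BOOT_END + 1))
# sfdisk wants sector units, hence the divisions below.
echo "boot: start sector $((BOOT_START / SECTOR_LEN)), $((BOOT_LEN / SECTOR_LEN)) sectors"
echo "root: start sector $((ROOT_START / SECTOR_LEN))"
```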
-# Create named loop device nodes (high minor numbers to avoid conflicts)
-# and detach any stale ones from previous failed builds
-FW_DEV=/dev/startos-loop-fw
-ESP_DEV=/dev/startos-loop-esp
-BOOT_DEV=/dev/startos-loop-boot
-ROOT_DEV=/dev/startos-loop-root
-for dev in $FW_DEV:200 $ESP_DEV:201 $BOOT_DEV:202 $ROOT_DEV:203; do
-  name=${dev%:*}
-  minor=${dev#*:}
-  [ -e $name ] || mknod $name b 7 $minor
-  losetup -d $name 2>/dev/null || true
-done
-losetup $FW_DEV --offset $FW_START --sizelimit $FW_LEN $TARGET_NAME
-losetup $ESP_DEV --offset $ESP_START --sizelimit $ESP_LEN $TARGET_NAME
-losetup $BOOT_DEV --offset $BOOT_START --sizelimit $BOOT_LEN $TARGET_NAME
-losetup $ROOT_DEV --offset $ROOT_START --sizelimit $ROOT_LEN $TARGET_NAME
-mkfs.vfat -F32 -n firmware $FW_DEV
-mkfs.vfat -F32 -n efi $ESP_DEV
-mkfs.vfat -F32 -n boot $BOOT_DEV
-mkfs.btrfs -f -L rootfs $ROOT_DEV
+BOOT_DEV=$(losetup --show -f --offset $BOOT_START --sizelimit $BOOT_LEN $TARGET_NAME)
+ROOT_DEV=$(losetup --show -f --offset $ROOT_START --sizelimit $ROOT_LEN $TARGET_NAME)
+mkfs.vfat -F32 $BOOT_DEV
+mkfs.ext4 $ROOT_DEV
TMPDIR=$(mktemp -d)
-# Extract boot files from squashfs to staging area
-BOOT_STAGING=$(mktemp -d)
-unsquashfs -n -f -d $BOOT_STAGING $prep_results_dir/binary/live/filesystem.squashfs boot
-# Mount partitions (nested: firmware and efi inside boot)
mkdir -p $TMPDIR/boot $TMPDIR/root
-mount $BOOT_DEV $TMPDIR/boot
-mkdir -p $TMPDIR/boot/firmware $TMPDIR/boot/efi
-mount $FW_DEV $TMPDIR/boot/firmware
-mount $ESP_DEV $TMPDIR/boot/efi
mount $ROOT_DEV $TMPDIR/root
+mount $BOOT_DEV $TMPDIR/boot
-# Copy boot files — nested mounts route firmware/* to the firmware partition
-cp -a $BOOT_STAGING/boot/. $TMPDIR/boot/
-rm -rf $BOOT_STAGING
+unsquashfs -n -f -d $TMPDIR $prep_results_dir/binary/live/filesystem.squashfs boot
mkdir $TMPDIR/root/images $TMPDIR/root/config
B3SUM=$(b3sum $prep_results_dir/binary/live/filesystem.squashfs | head -c 16)
@@ -502,46 +407,40 @@ elif [ "${IMAGE_TYPE}" = img ]; then
mount -t overlay -o lowerdir=$TMPDIR/lower,workdir=$TMPDIR/root/config/work,upperdir=$TMPDIR/root/config/overlay overlay $TMPDIR/next
if [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then
+sed -i 's| boot=startos| boot=startos init=/usr/lib/startos/scripts/init_resize\.sh|' $TMPDIR/boot/cmdline.txt
rsync -a $SOURCE_DIR/raspberrypi/img/ $TMPDIR/next/
-# Install GRUB: ESP at /boot/efi (Part 2), /boot (Part 3)
-mkdir -p $TMPDIR/next/boot \
-  $TMPDIR/next/dev $TMPDIR/next/proc $TMPDIR/next/sys $TMPDIR/next/media/startos/root
-mount --rbind $TMPDIR/boot $TMPDIR/next/boot
-mount --bind /dev $TMPDIR/next/dev
-mount -t proc proc $TMPDIR/next/proc
-mount -t sysfs sysfs $TMPDIR/next/sys
-mount --bind $TMPDIR/root $TMPDIR/next/media/startos/root
-chroot $TMPDIR/next grub-install --target=arm64-efi --removable --efi-directory=/boot/efi --boot-directory=/boot --no-nvram
-chroot $TMPDIR/next update-grub
-umount $TMPDIR/next/media/startos/root
-umount $TMPDIR/next/sys
-umount $TMPDIR/next/proc
-umount $TMPDIR/next/dev
-umount -l $TMPDIR/next/boot
-# Fix root= in grub.cfg: update-grub sees loop devices, but the
-# real device uses a fixed GPT PARTUUID for root (Part 4).
-sed -i "s|root=[^ ]*|root=PARTUUID=${ROOT_UUID}|g" $TMPDIR/boot/grub/grub.cfg
-# Inject first-boot resize script into GRUB config
-sed -i 's| boot=startos| boot=startos init=/usr/lib/startos/scripts/init_resize\.sh|' $TMPDIR/boot/grub/grub.cfg
fi
umount $TMPDIR/next
umount $TMPDIR/lower
-umount $TMPDIR/boot/firmware
-umount $TMPDIR/boot/efi
umount $TMPDIR/boot
umount $TMPDIR/root
+e2fsck -fy $ROOT_DEV
+resize2fs -M $ROOT_DEV
+BLOCK_COUNT=$(dumpe2fs -h $ROOT_DEV | awk '/^Block count:/ { print $3 }')
+BLOCK_SIZE=$(dumpe2fs -h $ROOT_DEV | awk '/^Block size:/ { print $3 }')
+ROOT_LEN=$((BLOCK_COUNT * BLOCK_SIZE))
losetup -d $ROOT_DEV
losetup -d $BOOT_DEV
-losetup -d $ESP_DEV
-losetup -d $FW_DEV
+# Recreate partition 2 with the new size using sfdisk
+sfdisk $TARGET_NAME <<-EOF
+label: dos
+label-id: 0xcb15ae4d
+unit: sectors
+sector-size: 512
+${TARGET_NAME}1 : start=$((BOOT_START / SECTOR_LEN)), size=$((BOOT_LEN / SECTOR_LEN)), type=c, bootable
+${TARGET_NAME}2 : start=$((ROOT_START / SECTOR_LEN)), size=$((ROOT_LEN / SECTOR_LEN)), type=83
+EOF
+TARGET_SIZE=$((ROOT_START + ROOT_LEN))
+truncate -s $TARGET_SIZE $TARGET_NAME
mv $TARGET_NAME $RESULTS_DIR/$IMAGE_BASENAME.img
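The shrink-to-fit step added above reads the minimized filesystem geometry back from `dumpe2fs -h` and multiplies block count by block size to get the partition length in bytes. A sketch of that parsing against a hypothetical transcript (the block figures here are invented for illustration, not from a real image):

```shell
# Stand-in for `dumpe2fs -h $ROOT_DEV` output after `resize2fs -M`.
dump="Block count:              262144
Block size:               4096"
BLOCK_COUNT=$(echo "$dump" | awk '/^Block count:/ { print $3 }')
BLOCK_SIZE=$(echo "$dump" | awk '/^Block size:/ { print $3 }')
ROOT_LEN=$((BLOCK_COUNT * BLOCK_SIZE))
echo "$ROOT_LEN"
```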


@@ -1,4 +1,2 @@
-PARTUUID=cb15ae4d-0001-4000-8000-000000000001 /boot/firmware vfat umask=0077 0 2
-PARTUUID=cb15ae4d-0002-4000-8000-000000000002 /boot/efi vfat umask=0077 0 1
-PARTUUID=cb15ae4d-0003-4000-8000-000000000003 /boot vfat umask=0077 0 2
-PARTUUID=cb15ae4d-0004-4000-8000-000000000004 / btrfs defaults 0 1
+/dev/mmcblk0p1 /boot vfat umask=0077 0 2
+/dev/mmcblk0p2 / ext4 defaults 0 1


@@ -12,16 +12,15 @@ get_variables () {
BOOT_DEV_NAME=$(echo /sys/block/*/"${BOOT_PART_NAME}" | cut -d "/" -f 4)
BOOT_PART_NUM=$(cat "/sys/block/${BOOT_DEV_NAME}/${BOOT_PART_NAME}/partition")
+OLD_DISKID=$(fdisk -l "$ROOT_DEV" | sed -n 's/Disk identifier: 0x\([^ ]*\)/\1/p')
ROOT_DEV_SIZE=$(cat "/sys/block/${ROOT_DEV_NAME}/size")
-# GPT backup header/entries occupy last 33 sectors
-USABLE_END=$((ROOT_DEV_SIZE - 34))
-if [ "$USABLE_END" -le 67108864 ]; then
-TARGET_END=$USABLE_END
+if [ "$ROOT_DEV_SIZE" -le 67108864 ]; then
+TARGET_END=$((ROOT_DEV_SIZE - 1))
else
TARGET_END=$((33554432 - 1))
DATA_PART_START=33554432
-DATA_PART_END=$USABLE_END
+DATA_PART_END=$((ROOT_DEV_SIZE - 1))
fi
PARTITION_TABLE=$(parted -m "$ROOT_DEV" unit s print | tr -d 's')
@@ -58,30 +57,37 @@ check_variables () {
main () {
get_variables
-# Fix GPT backup header first — the image was built with a tight root
-# partition, so the backup GPT is not at the end of the SD card. parted
-# will prompt interactively if this isn't fixed before we use it.
-sgdisk -e "$ROOT_DEV" 2>/dev/null || true
if ! check_variables; then
return 1
fi
-# if [ "$ROOT_PART_END" -eq "$TARGET_END" ]; then
-# reboot_pi
-# fi
if ! echo Yes | parted -m --align=optimal "$ROOT_DEV" ---pretend-input-tty u s resizepart "$ROOT_PART_NUM" "$TARGET_END" ; then
FAIL_REASON="Root partition resize failed"
return 1
fi
if [ -n "$DATA_PART_START" ]; then
-if ! parted -ms --align=optimal "$ROOT_DEV" u s mkpart data "$DATA_PART_START" "$DATA_PART_END"; then
+if ! parted -ms --align=optimal "$ROOT_DEV" u s mkpart primary "$DATA_PART_START" "$DATA_PART_END"; then
FAIL_REASON="Data partition creation failed"
return 1
fi
fi
+(
+echo x
+echo i
+echo "0xcb15ae4d"
+echo r
+echo w
+) | fdisk $ROOT_DEV
mount / -o remount,rw
-btrfs filesystem resize max /media/startos/root
+resize2fs $ROOT_PART_DEV
if ! systemd-machine-id-setup --root=/media/startos/config/overlay/; then
FAIL_REASON="systemd-machine-id-setup failed"
@@ -105,7 +111,7 @@ mount / -o remount,ro
beep
if main; then
-sed -i 's| init=/usr/lib/startos/scripts/init_resize\.sh||' /boot/grub/grub.cfg
+sed -i 's| init=/usr/lib/startos/scripts/init_resize\.sh||' /boot/cmdline.txt
echo "Resized root filesystem. Rebooting in 5 seconds..."
sleep 5
else
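The parenthesized block added above drives fdisk's expert mode through a pipe (`x` enter expert mode, `i` change the disk identifier, `r` return, `w` write) to restore the deterministic MBR disk id. A dry-run sketch that only generates that input stream (the function name is hypothetical; nothing is written to disk):

```shell
# Emits the keystroke sequence the script pipes into `fdisk $ROOT_DEV`.
fdisk_set_diskid_input() {
  # x = expert mode, i = change disk identifier, r = return, w = write
  printf '%s\n' x i "$1" r w
}
fdisk_set_diskid_input "0xcb15ae4d"
```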


@@ -0,0 +1 @@
usb-storage.quirks=152d:0562:u,14cd:121c:u,0781:cfcb:u console=serial0,115200 console=tty1 root=PARTUUID=cb15ae4d-02 rootfstype=ext4 fsck.repair=yes rootwait cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory boot=startos


@@ -27,18 +27,20 @@ disable_overscan=1
# (e.g. for USB device mode) or if USB support is not required.
otg_mode=1
+[all]
[pi4]
# Run as fast as firmware / board allows
arm_boost=1
+kernel=vmlinuz-${KERNEL_VERSION}-rpi-v8
+initramfs initrd.img-${KERNEL_VERSION}-rpi-v8 followkernel
+[pi5]
+kernel=vmlinuz-${KERNEL_VERSION}-rpi-2712
+initramfs initrd.img-${KERNEL_VERSION}-rpi-2712 followkernel
[all]
gpu_mem=16
dtoverlay=pwm-2chan,disable-bt
-# Enable UART for U-Boot and serial console
-enable_uart=1
-# Load U-Boot as the bootloader (GRUB is chainloaded from U-Boot)
-kernel=u-boot.bin
EOF


@@ -84,8 +84,4 @@ arm_boost=1
gpu_mem=16
dtoverlay=pwm-2chan,disable-bt
-# Enable UART for U-Boot and serial console
-enable_uart=1
-# Load U-Boot as the bootloader (GRUB is chainloaded from U-Boot)
-kernel=u-boot.bin
+auto_initramfs=1


@@ -1,4 +0,0 @@
# Raspberry Pi-specific GRUB overrides
# Overrides GRUB_CMDLINE_LINUX from /etc/default/grub with Pi-specific
# console devices and hardware quirks.
GRUB_CMDLINE_LINUX="boot=startos console=serial0,115200 console=tty1 usb-storage.quirks=152d:0562:u,14cd:121c:u,0781:cfcb:u cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory"


@@ -1,3 +1,6 @@
os-partitions:
boot: /dev/mmcblk0p1
root: /dev/mmcblk0p2
ethernet-interface: end0
wifi-interface: wlan0
disable-encryption: true


@@ -118,6 +118,6 @@ else
fi
printf "\n \033[1;37m┌──────────────────────────────────────────────────── QUICK ACCESS ─┐\033[0m\n"
printf " \033[1;37m│\033[0m Web Interface: \033[0;36m%-50s\033[0m \033[1;37m│\033[0m\n" "$web_url"
-printf " \033[1;37m│\033[0m Documentation: \033[0;36m%-50s\033[0m \033[1;37m│\033[0m\n" "https://docs.start9.com"
+printf " \033[1;37m│\033[0m Documentation: \033[0;36m%-50s\033[0m \033[1;37m│\033[0m\n" "https://staging.docs.start9.com"
printf " \033[1;37m│\033[0m Support: \033[0;36m%-50s\033[0m \033[1;37m│\033[0m\n" "https://start9.com/contact"
printf " \033[1;37m└───────────────────────────────────────────────────────────────────┘\033[0m\n\n"


@@ -34,7 +34,7 @@ set -- "${POSITIONAL_ARGS[@]}" # restore positional parameters
if [ -z "$NO_SYNC" ]; then
echo 'Syncing...'
-umount -l /media/startos/next 2> /dev/null
+umount -R /media/startos/next 2> /dev/null
umount /media/startos/upper 2> /dev/null
rm -rf /media/startos/upper /media/startos/next
mkdir /media/startos/upper
@@ -55,16 +55,16 @@ mkdir -p /media/startos/next/sys
mkdir -p /media/startos/next/proc
mkdir -p /media/startos/next/boot
mkdir -p /media/startos/next/media/startos/root
-mount -t tmpfs tmpfs /media/startos/next/run
-mount -t tmpfs tmpfs /media/startos/next/tmp
+mount --bind /run /media/startos/next/run
+mount --bind /tmp /media/startos/next/tmp
mount --bind /dev /media/startos/next/dev
-mount -t sysfs sysfs /media/startos/next/sys
-mount -t proc proc /media/startos/next/proc
+mount --bind /sys /media/startos/next/sys
+mount --bind /proc /media/startos/next/proc
mount --bind /boot /media/startos/next/boot
mount --bind /media/startos/root /media/startos/next/media/startos/root
if mountpoint /sys/firmware/efi/efivars 2>&1 > /dev/null; then
-mount -t efivarfs efivarfs /media/startos/next/sys/firmware/efi/efivars
+mount --bind /sys/firmware/efi/efivars /media/startos/next/sys/firmware/efi/efivars
fi
if [ -z "$*" ]; then
@@ -79,13 +79,13 @@ if mountpoint /media/startos/next/sys/firmware/efi/efivars 2>&1 > /dev/null; the
umount /media/startos/next/sys/firmware/efi/efivars
fi
-umount -l /media/startos/next/run
-umount -l /media/startos/next/tmp
-umount -l /media/startos/next/dev
-umount -l /media/startos/next/sys
-umount -l /media/startos/next/proc
-umount -l /media/startos/next/boot
-umount -l /media/startos/next/media/startos/root
+umount /media/startos/next/run
+umount /media/startos/next/tmp
+umount /media/startos/next/dev
+umount /media/startos/next/sys
+umount /media/startos/next/proc
+umount /media/startos/next/boot
+umount /media/startos/next/media/startos/root
if [ "$CHROOT_RES" -eq 0 ]; then
@@ -111,6 +111,6 @@ if [ "$CHROOT_RES" -eq 0 ]; then
reboot
fi
-umount -l /media/startos/next
-umount -l /media/startos/upper
+umount /media/startos/next
+umount /media/startos/upper
rm -rf /media/startos/upper /media/startos/next
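The unmount sequence above has to release the chroot's bind mounts in the reverse of the order they were created, or later umounts fail on busy parents. A dry-run sketch of that pattern, tracking mountpoints in an array and echoing commands instead of actually mounting (paths mirror the script; `bind_mount`/`teardown` are hypothetical helper names):

```shell
# Record each bind mount as it is "made", then replay the list backwards.
mounts=()
bind_mount() { echo "mount --bind $1 $2"; mounts+=("$2"); }
teardown() {
  for ((i=${#mounts[@]}-1; i>=0; i--)); do
    echo "umount ${mounts[$i]}"
  done
}
bind_mount /dev  /media/startos/next/dev
bind_mount /proc /media/startos/next/proc
teardown
```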


@@ -13,7 +13,7 @@ for kind in INPUT FORWARD ACCEPT; do
iptables -A $kind -j "${NAME}_${kind}"
fi
done
-for kind in PREROUTING OUTPUT POSTROUTING; do
+for kind in PREROUTING OUTPUT; do
if ! iptables -t nat -C $kind -j "${NAME}_${kind}" 2> /dev/null; then
iptables -t nat -N "${NAME}_${kind}" 2> /dev/null
iptables -t nat -A $kind -j "${NAME}_${kind}"
@@ -26,7 +26,7 @@ trap 'err=1' ERR
for kind in INPUT FORWARD ACCEPT; do
iptables -F "${NAME}_${kind}" 2> /dev/null
done
-for kind in PREROUTING OUTPUT POSTROUTING; do
+for kind in PREROUTING OUTPUT; do
iptables -t nat -F "${NAME}_${kind}" 2> /dev/null
done
if [ "$UNDO" = 1 ]; then
@@ -40,11 +40,6 @@ fi
if [ -n "$src_subnet" ]; then
iptables -t nat -A ${NAME}_PREROUTING -s "$src_subnet" -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
iptables -t nat -A ${NAME}_PREROUTING -s "$src_subnet" -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
-# Also allow containers on the bridge subnet to reach this forward
-if [ -n "$bridge_subnet" ]; then
-iptables -t nat -A ${NAME}_PREROUTING -s "$bridge_subnet" -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
-iptables -t nat -A ${NAME}_PREROUTING -s "$bridge_subnet" -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
-fi
else
iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
@@ -58,15 +53,4 @@ iptables -t nat -A ${NAME}_OUTPUT -d "$sip" -p udp --dport "$sport" -j DNAT --to
iptables -A ${NAME}_FORWARD -d $dip -p tcp --dport $dport -m state --state NEW -j ACCEPT
iptables -A ${NAME}_FORWARD -d $dip -p udp --dport $dport -m state --state NEW -j ACCEPT
-# NAT hairpin: masquerade traffic from the bridge subnet or host to the DNAT
-# target, so replies route back through the host for proper NAT reversal.
-# Container-to-container hairpin (source is on the bridge subnet)
-if [ -n "$bridge_subnet" ]; then
-iptables -t nat -A ${NAME}_POSTROUTING -s "$bridge_subnet" -d "$dip" -p tcp --dport "$dport" -j MASQUERADE
-iptables -t nat -A ${NAME}_POSTROUTING -s "$bridge_subnet" -d "$dip" -p udp --dport "$dport" -j MASQUERADE
-fi
-# Host-to-container hairpin (host connects to its own gateway IP, source is sip)
-iptables -t nat -A ${NAME}_POSTROUTING -s "$sip" -d "$dip" -p tcp --dport "$dport" -j MASQUERADE
-iptables -t nat -A ${NAME}_POSTROUTING -s "$sip" -d "$dip" -p udp --dport "$dport" -j MASQUERADE
exit $err
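Each forward in the script above is installed twice, once per protocol, which is why every DNAT appears as a tcp/udp pair. A dry-run sketch that prints the rule pair instead of invoking iptables (the chain name `FWD_PREROUTING` stands in for the script's `${NAME}_PREROUTING`; addresses are invented examples):

```shell
# Print, rather than apply, the tcp+udp DNAT pair for one forward.
dnat_rules() {
  local sip=$1 sport=$2 dip=$3 dport=$4 proto
  for proto in tcp udp; do
    echo "iptables -t nat -A FWD_PREROUTING -d $sip -p $proto --dport $sport -j DNAT --to-destination $dip:$dport"
  done
}
dnat_rules 10.0.0.1 8080 172.18.0.2 80
```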


@@ -1,76 +0,0 @@
#!/bin/bash
# sign-unsigned-modules [--source <dir> --dest <dir>] [--sign-file <path>]
# [--mok-key <path>] [--mok-pub <path>]
#
# Signs all unsigned kernel modules using the DKMS MOK key.
#
# Default (install) mode:
# Run inside a chroot. Finds and signs unsigned modules in /lib/modules in-place.
# sign-file and MOK key are auto-detected from standard paths.
#
# Overlay mode (--source/--dest):
# Finds unsigned modules in <source>, copies to <dest>, signs the copies.
# Clears old signed modules in <dest> first. Used during upgrades where the
# overlay upper is tmpfs and writes would be lost.
set -e
SOURCE=""
DEST=""
SIGN_FILE=""
MOK_KEY="/var/lib/dkms/mok.key"
MOK_PUB="/var/lib/dkms/mok.pub"
while [[ $# -gt 0 ]]; do
case $1 in
--source) SOURCE="$2"; shift 2;;
--dest) DEST="$2"; shift 2;;
--sign-file) SIGN_FILE="$2"; shift 2;;
--mok-key) MOK_KEY="$2"; shift 2;;
--mok-pub) MOK_PUB="$2"; shift 2;;
*) echo "Unknown option: $1" >&2; exit 1;;
esac
done
# Auto-detect sign-file if not specified
if [ -z "$SIGN_FILE" ]; then
SIGN_FILE="$(ls -1 /usr/lib/linux-kbuild-*/scripts/sign-file 2>/dev/null | head -1)"
fi
if [ -z "$SIGN_FILE" ] || [ ! -x "$SIGN_FILE" ]; then
exit 0
fi
if [ ! -f "$MOK_KEY" ] || [ ! -f "$MOK_PUB" ]; then
exit 0
fi
COUNT=0
if [ -n "$SOURCE" ] && [ -n "$DEST" ]; then
# Overlay mode: find unsigned in source, copy to dest, sign in dest
rm -rf "${DEST}"/lib/modules
for ko in $(find "${SOURCE}"/lib/modules -name '*.ko' 2>/dev/null); do
if ! modinfo "$ko" 2>/dev/null | grep -q '^sig_id:'; then
rel_path="${ko#${SOURCE}}"
mkdir -p "${DEST}$(dirname "$rel_path")"
cp "$ko" "${DEST}${rel_path}"
"$SIGN_FILE" sha256 "$MOK_KEY" "$MOK_PUB" "${DEST}${rel_path}"
COUNT=$((COUNT + 1))
fi
done
else
# In-place mode: sign modules directly
for ko in $(find /lib/modules -name '*.ko' 2>/dev/null); do
if ! modinfo "$ko" 2>/dev/null | grep -q '^sig_id:'; then
"$SIGN_FILE" sha256 "$MOK_KEY" "$MOK_PUB" "$ko"
COUNT=$((COUNT + 1))
fi
done
fi
if [ $COUNT -gt 0 ]; then
echo "[sign-modules] Signed $COUNT unsigned kernel modules"
fi
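In overlay mode, the script above maps each unsigned module path from `<source>` to `<dest>` by stripping the source prefix with `${ko#${SOURCE}}`. A sketch of that path arithmetic on a hypothetical module path (no modules or keys involved):

```shell
# Strip the source prefix, then re-root the relative path under the overlay.
SOURCE=/media/startos/lower
DEST=/media/startos/config/overlay
ko="$SOURCE/lib/modules/6.12.9/kernel/net/foo.ko"   # hypothetical module
rel_path="${ko#${SOURCE}}"
echo "${DEST}${rel_path}"
```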


@@ -104,7 +104,6 @@ local_mount_root()
-olowerdir=/startos/config/overlay:/lower,upperdir=/upper/data,workdir=/upper/work \
overlay ${rootmnt}
-mkdir -m 750 -p ${rootmnt}/media/startos
mkdir -p ${rootmnt}/media/startos/config
mount --bind /startos/config ${rootmnt}/media/startos/config
mkdir -p ${rootmnt}/media/startos/images


@@ -24,7 +24,7 @@ fi
unsquashfs -f -d / $1 boot
-umount -l /media/startos/next 2> /dev/null || true
+umount -R /media/startos/next 2> /dev/null || true
umount /media/startos/upper 2> /dev/null || true
umount /media/startos/lower 2> /dev/null || true
@@ -45,13 +45,18 @@ mkdir -p /media/startos/next/media/startos/root
mount --bind /run /media/startos/next/run
mount --bind /tmp /media/startos/next/tmp
mount --bind /dev /media/startos/next/dev
-mount -t sysfs sysfs /media/startos/next/sys
-mount -t proc proc /media/startos/next/proc
-mount --rbind /boot /media/startos/next/boot
+mount --bind /sys /media/startos/next/sys
+mount --bind /proc /media/startos/next/proc
+mount --bind /boot /media/startos/next/boot
mount --bind /media/startos/root /media/startos/next/media/startos/root
-if mountpoint /boot/efi 2>&1 > /dev/null; then
-mkdir -p /media/startos/next/boot/efi
-mount --bind /boot/efi /media/startos/next/boot/efi
-fi
if mountpoint /sys/firmware/efi/efivars 2>&1 > /dev/null; then
-mount -t efivarfs efivarfs /media/startos/next/sys/firmware/efi/efivars
+mount --bind /sys/firmware/efi/efivars /media/startos/next/sys/firmware/efi/efivars
fi
@@ -63,18 +68,9 @@ fi
EOF
-# Sign unsigned kernel modules for Secure Boot
-SIGN_FILE="$(ls -1 /media/startos/next/usr/lib/linux-kbuild-*/scripts/sign-file 2>/dev/null | head -1)"
-/media/startos/next/usr/lib/startos/scripts/sign-unsigned-modules \
-  --source /media/startos/lower \
-  --dest /media/startos/config/overlay \
-  --sign-file "$SIGN_FILE" \
-  --mok-key /media/startos/config/overlay/var/lib/dkms/mok.key \
-  --mok-pub /media/startos/config/overlay/var/lib/dkms/mok.pub
sync
-umount -l /media/startos/next
+umount -Rl /media/startos/next
umount /media/startos/upper
umount /media/startos/lower


@@ -1,367 +0,0 @@
#!/bin/bash
set -e
REPO="Start9Labs/start-os"
REGISTRY="https://alpha-registry-x.start9.com"
S3_BUCKET="s3://startos-images"
S3_CDN="https://startos-images.nyc3.cdn.digitaloceanspaces.com"
START9_GPG_KEY="2D63C217"
ARCHES="aarch64 aarch64-nonfree aarch64-nvidia riscv64 riscv64-nonfree x86_64 x86_64-nonfree x86_64-nvidia"
CLI_ARCHES="aarch64 riscv64 x86_64"
parse_run_id() {
local val="$1"
if [[ "$val" =~ /actions/runs/([0-9]+) ]]; then
echo "${BASH_REMATCH[1]}"
else
echo "$val"
fi
}
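`parse_run_id` accepts either a bare run id or a full GitHub Actions URL and normalizes both to the numeric id. A usage sketch (function body reproduced from the script above; the URL is an invented example):

```shell
# Extract the numeric run id from an Actions URL, or pass a bare id through.
parse_run_id() {
  local val="$1"
  if [[ "$val" =~ /actions/runs/([0-9]+) ]]; then
    echo "${BASH_REMATCH[1]}"
  else
    echo "$val"
  fi
}
parse_run_id "https://github.com/Start9Labs/start-os/actions/runs/1234567890"
parse_run_id 42
```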
require_version() {
if [ -z "${VERSION:-}" ]; then
read -rp "VERSION: " VERSION
if [ -z "$VERSION" ]; then
>&2 echo '$VERSION required'
exit 2
fi
fi
}
release_dir() {
echo "$HOME/Downloads/v$VERSION"
}
ensure_release_dir() {
local dir
dir=$(release_dir)
if [ "$CLEAN" = "1" ]; then
rm -rf "$dir"
fi
mkdir -p "$dir"
cd "$dir"
}
enter_release_dir() {
local dir
dir=$(release_dir)
if [ ! -d "$dir" ]; then
>&2 echo "Release directory $dir does not exist. Run 'download' or 'pull' first."
exit 1
fi
cd "$dir"
}
cli_target_for() {
local arch=$1 os=$2
local pair="${arch}-${os}"
if [ "$pair" = "riscv64-linux" ]; then
echo "riscv64gc-unknown-linux-musl"
elif [ "$pair" = "riscv64-macos" ]; then
return 1
elif [ "$os" = "linux" ]; then
echo "${arch}-unknown-linux-musl"
elif [ "$os" = "macos" ]; then
echo "${arch}-apple-darwin"
fi
}
release_files() {
for file in *.iso *.squashfs *.deb; do
[ -f "$file" ] && echo "$file"
done
for file in start-cli_*; do
[[ "$file" == *.asc ]] && continue
[ -f "$file" ] && echo "$file"
done
}
resolve_gh_user() {
GH_USER=${GH_USER:-$(gh api user -q .login 2>/dev/null || true)}
GH_GPG_KEY=$(git config user.signingkey 2>/dev/null || true)
}
# --- Subcommands ---
cmd_download() {
require_version
if [ -z "${RUN_ID:-}" ]; then
read -rp "RUN_ID (OS images, leave blank to skip): " RUN_ID
fi
RUN_ID=$(parse_run_id "${RUN_ID:-}")
if [ -z "${ST_RUN_ID:-}" ]; then
read -rp "ST_RUN_ID (start-tunnel, leave blank to skip): " ST_RUN_ID
fi
ST_RUN_ID=$(parse_run_id "${ST_RUN_ID:-}")
if [ -z "${CLI_RUN_ID:-}" ]; then
read -rp "CLI_RUN_ID (start-cli, leave blank to skip): " CLI_RUN_ID
fi
CLI_RUN_ID=$(parse_run_id "${CLI_RUN_ID:-}")
ensure_release_dir
if [ -n "$RUN_ID" ]; then
for arch in $ARCHES; do
while ! gh run download -R $REPO "$RUN_ID" -n "$arch.squashfs" -D "$(pwd)"; do sleep 1; done
done
for arch in $ARCHES; do
while ! gh run download -R $REPO "$RUN_ID" -n "$arch.iso" -D "$(pwd)"; do sleep 1; done
done
fi
if [ -n "$ST_RUN_ID" ]; then
for arch in $CLI_ARCHES; do
while ! gh run download -R $REPO "$ST_RUN_ID" -n "start-tunnel_$arch.deb" -D "$(pwd)"; do sleep 1; done
done
fi
if [ -n "$CLI_RUN_ID" ]; then
for arch in $CLI_ARCHES; do
for os in linux macos; do
local target
target=$(cli_target_for "$arch" "$os") || continue
while ! gh run download -R $REPO "$CLI_RUN_ID" -n "start-cli_$target" -D "$(pwd)"; do sleep 1; done
mv start-cli "start-cli_${arch}-${os}"
done
done
fi
}
cmd_pull() {
require_version
ensure_release_dir
echo "Downloading release assets from tag v$VERSION..."
# Download debs and CLI binaries from the GH release
for file in $(gh release view -R $REPO "v$VERSION" --json assets -q '.assets[].name' | grep -E '\.(deb)$|^start-cli_'); do
gh release download -R $REPO "v$VERSION" -p "$file" -D "$(pwd)" --clobber
done
# Download ISOs and squashfs from S3 CDN
for arch in $ARCHES; do
for ext in squashfs iso; do
# Get the actual filename from the GH release asset list or body
local filename
filename=$(gh release view -R $REPO "v$VERSION" --json assets -q ".assets[].name" | grep "_${arch}\\.${ext}$" || true)
if [ -z "$filename" ]; then
filename=$(gh release view -R $REPO "v$VERSION" --json body -q .body | grep -oP "[^ ]*_${arch}\\.${ext}" | head -1 || true)
fi
if [ -n "$filename" ]; then
echo "Downloading $filename from S3..."
curl -fSL -o "$filename" "$S3_CDN/v$VERSION/$filename"
fi
done
done
}
cmd_register() {
require_version
enter_release_dir
start-cli --registry=$REGISTRY registry os version add "$VERSION" "v$VERSION" '' ">=0.3.5 <=$VERSION"
}
cmd_upload() {
require_version
enter_release_dir
for file in $(release_files); do
case "$file" in
*.iso|*.squashfs)
s3cmd put -P "$file" "$S3_BUCKET/v$VERSION/$file"
;;
*)
gh release upload -R $REPO "v$VERSION" "$file"
;;
esac
done
}
cmd_index() {
require_version
enter_release_dir
for arch in $ARCHES; do
for file in *_"$arch".squashfs *_"$arch".iso; do
start-cli --registry=$REGISTRY registry os asset add --platform="$arch" --version="$VERSION" "$file" "$S3_CDN/v$VERSION/$file"
done
done
}
cmd_sign() {
require_version
enter_release_dir
resolve_gh_user
mkdir -p signatures
for file in $(release_files); do
gpg -u $START9_GPG_KEY --detach-sign --armor -o "signatures/${file}.start9.asc" "$file"
if [ -n "$GH_USER" ] && [ -n "$GH_GPG_KEY" ]; then
gpg -u "$GH_GPG_KEY" --detach-sign --armor -o "signatures/${file}.${GH_USER}.asc" "$file"
fi
done
gpg --export -a $START9_GPG_KEY > signatures/start9.key.asc
if [ -n "$GH_USER" ] && [ -n "$GH_GPG_KEY" ]; then
gpg --export -a "$GH_GPG_KEY" > "signatures/${GH_USER}.key.asc"
else
>&2 echo 'Warning: could not determine GitHub user or GPG signing key, skipping personal signature'
fi
tar -czvf signatures.tar.gz -C signatures .
gh release upload -R $REPO "v$VERSION" signatures.tar.gz --clobber
}
cmd_cosign() {
require_version
enter_release_dir
resolve_gh_user
if [ -z "$GH_USER" ] || [ -z "$GH_GPG_KEY" ]; then
>&2 echo 'Error: could not determine GitHub user or GPG signing key'
>&2 echo "Set GH_USER and/or configure git user.signingkey"
exit 1
fi
echo "Downloading existing signatures..."
gh release download -R $REPO "v$VERSION" -p "signatures.tar.gz" -D "$(pwd)" --clobber
mkdir -p signatures
tar -xzf signatures.tar.gz -C signatures
echo "Adding personal signatures as $GH_USER..."
for file in $(release_files); do
gpg -u "$GH_GPG_KEY" --detach-sign --armor -o "signatures/${file}.${GH_USER}.asc" "$file"
done
gpg --export -a "$GH_GPG_KEY" > "signatures/${GH_USER}.key.asc"
echo "Re-packing signatures..."
tar -czvf signatures.tar.gz -C signatures .
gh release upload -R $REPO "v$VERSION" signatures.tar.gz --clobber
echo "Done. Personal signatures for $GH_USER added to v$VERSION."
}
cmd_notes() {
require_version
enter_release_dir
cat << EOF
# ISO Downloads
- [x86_64/AMD64]($S3_CDN/v$VERSION/$(ls *_x86_64-nonfree.iso))
- [x86_64/AMD64 + NVIDIA]($S3_CDN/v$VERSION/$(ls *_x86_64-nvidia.iso))
- [x86_64/AMD64-slim (FOSS-only)]($S3_CDN/v$VERSION/$(ls *_x86_64.iso) "Without proprietary software or drivers")
- [aarch64/ARM64]($S3_CDN/v$VERSION/$(ls *_aarch64-nonfree.iso))
- [aarch64/ARM64 + NVIDIA]($S3_CDN/v$VERSION/$(ls *_aarch64-nvidia.iso))
- [aarch64/ARM64-slim (FOSS-Only)]($S3_CDN/v$VERSION/$(ls *_aarch64.iso) "Without proprietary software or drivers")
- [RISCV64 (RVA23)]($S3_CDN/v$VERSION/$(ls *_riscv64-nonfree.iso))
- [RISCV64 (RVA23)-slim (FOSS-only)]($S3_CDN/v$VERSION/$(ls *_riscv64.iso) "Without proprietary software or drivers")
EOF
cat << 'EOF'
# StartOS Checksums
## SHA-256
```
EOF
sha256sum *.iso *.squashfs
cat << 'EOF'
```
## BLAKE-3
```
EOF
b3sum *.iso *.squashfs
cat << 'EOF'
```
# Start-Tunnel Checksums
## SHA-256
```
EOF
sha256sum start-tunnel*.deb
cat << 'EOF'
```
## BLAKE-3
```
EOF
b3sum start-tunnel*.deb
cat << 'EOF'
```
# start-cli Checksums
## SHA-256
```
EOF
release_files | grep '^start-cli_' | xargs sha256sum
cat << 'EOF'
```
## BLAKE-3
```
EOF
release_files | grep '^start-cli_' | xargs b3sum
cat << 'EOF'
```
EOF
}
cmd_full_release() {
cmd_download
cmd_register
cmd_upload
cmd_index
cmd_sign
cmd_notes
}
usage() {
cat << 'EOF'
Usage: manage-release.sh <subcommand>
Subcommands:
download Download artifacts from GitHub Actions runs
Uses RUN_ID, ST_RUN_ID, CLI_RUN_ID (prompts for any that are unset; leave blank to skip)
pull Download an existing release from the GH tag and S3
register Register the version in the Start9 registry
upload Upload artifacts to GitHub Releases and S3
index Add assets to the registry index
sign Sign all artifacts with Start9 org key (+ personal key if available)
and upload signatures.tar.gz
cosign Add personal GPG signature to an existing release's signatures
(requires 'pull' first so you can verify assets before signing)
notes Print release notes with download links and checksums
full-release Run: download → register → upload → index → sign → notes
Environment variables:
VERSION (required) Release version
RUN_ID GitHub Actions run ID for OS images (download subcommand)
ST_RUN_ID GitHub Actions run ID for start-tunnel (download subcommand)
CLI_RUN_ID GitHub Actions run ID for start-cli (download subcommand)
GH_USER Override GitHub username (default: autodetected via gh cli)
CLEAN Set to 1 to wipe and recreate the release directory
EOF
}
case "${1:-}" in
download) cmd_download ;;
pull) cmd_pull ;;
register) cmd_register ;;
upload) cmd_upload ;;
index) cmd_index ;;
sign) cmd_sign ;;
cosign) cmd_cosign ;;
notes) cmd_notes ;;
full-release) cmd_full_release ;;
*) usage; exit 1 ;;
esac
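Typical invocations, shown as a hedged sketch: the run IDs are placeholders, and it is assumed the script is saved as `manage-release.sh` with `gh`, `s3cmd`, `start-cli`, and `gpg` already configured.

```shell
# Cut a new release end to end (placeholder run IDs):
#   VERSION=0.4.0 RUN_ID=123456 ST_RUN_ID=123457 CLI_RUN_ID=123458 \
#     ./manage-release.sh full-release

# Verify and co-sign an existing release:
#   VERSION=0.4.0 ./manage-release.sh pull
#   VERSION=0.4.0 ./manage-release.sh cosign
```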


@@ -1,36 +0,0 @@
#!/bin/bash
# Save Docker images needed by the 0.3.6-alpha.0 migration as tarballs
# so they can be bundled into the OS and loaded without internet access.
set -e
ARCH="${ARCH:-x86_64}"
DESTDIR="${1:-build/lib/migration-images}"
if [ "$ARCH" = "x86_64" ]; then
DOCKER_PLATFORM="linux/amd64"
elif [ "$ARCH" = "aarch64" ]; then
DOCKER_PLATFORM="linux/arm64"
else
DOCKER_PLATFORM="linux/$ARCH"
fi
IMAGES=("tonistiigi/binfmt:latest")
if [ "$ARCH" != "riscv64" ]; then
IMAGES=("start9/compat:latest" "start9/utils:latest" "${IMAGES[@]}")
fi
mkdir -p "$DESTDIR"
for IMAGE in "${IMAGES[@]}"; do
FILENAME=$(echo "$IMAGE" | sed 's|/|_|g; s/:/_/g').tar
if [ -f "$DESTDIR/$FILENAME" ]; then
echo "Skipping $IMAGE (already saved)"
continue
fi
echo "Pulling $IMAGE for $DOCKER_PLATFORM..."
docker pull --platform "$DOCKER_PLATFORM" "$IMAGE"
echo "Saving $IMAGE to $DESTDIR/$FILENAME..."
docker save "$IMAGE" -o "$DESTDIR/$FILENAME"
done
echo "Migration images saved to $DESTDIR"
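The tarball naming above (`sed 's|/|_|g; s/:/_/g'`) can be checked in isolation — `image_to_filename` below is a hypothetical helper name, not part of the script:

```shell
# Derive a tarball filename from a Docker image ref by replacing "/" and ":"
# with "_", mirroring the FILENAME= line in the script above.
image_to_filename() {
  echo "$1" | sed 's|/|_|g; s/:/_/g'
}

name="$(image_to_filename "start9/compat:latest").tar"
echo "$name"
```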

build/upload-ota.sh Executable file

@@ -0,0 +1,142 @@
#!/bin/bash
if [ -z "$VERSION" ]; then
>&2 echo '$VERSION required'
exit 2
fi
set -e
if [ "$SKIP_DL" != "1" ]; then
if [ "$SKIP_CLEAN" != "1" ]; then
rm -rf ~/Downloads/v$VERSION
mkdir ~/Downloads/v$VERSION
cd ~/Downloads/v$VERSION
fi
if [ -n "$RUN_ID" ]; then
for arch in aarch64 aarch64-nonfree riscv64 x86_64 x86_64-nonfree; do
while ! gh run download -R Start9Labs/start-os $RUN_ID -n $arch.squashfs -D $(pwd); do sleep 1; done
done
for arch in aarch64 aarch64-nonfree riscv64 x86_64 x86_64-nonfree; do
while ! gh run download -R Start9Labs/start-os $RUN_ID -n $arch.iso -D $(pwd); do sleep 1; done
done
fi
if [ -n "$ST_RUN_ID" ]; then
for arch in aarch64 riscv64 x86_64; do
while ! gh run download -R Start9Labs/start-os $ST_RUN_ID -n start-tunnel_$arch.deb -D $(pwd); do sleep 1; done
done
fi
if [ -n "$CLI_RUN_ID" ]; then
for arch in aarch64 riscv64 x86_64; do
for os in linux macos; do
pair=${arch}-${os}
if [ "${pair}" = "riscv64-linux" ]; then
target=riscv64gc-unknown-linux-musl
elif [ "${pair}" = "riscv64-macos" ]; then
continue
elif [ "${os}" = "linux" ]; then
target="${arch}-unknown-linux-musl"
elif [ "${os}" = "macos" ]; then
target="${arch}-apple-darwin"
fi
while ! gh run download -R Start9Labs/start-os $CLI_RUN_ID -n start-cli_$target -D $(pwd); do sleep 1; done
mv start-cli "start-cli_${pair}"
done
done
fi
else
cd ~/Downloads/v$VERSION
fi
start-cli --registry=https://alpha-registry-x.start9.com registry os version add $VERSION "v$VERSION" '' ">=0.3.5 <=$VERSION"
if [ "$SKIP_UL" = "2" ]; then
exit 2
elif [ "$SKIP_UL" != "1" ]; then
for file in *.deb start-cli_*; do
gh release upload -R Start9Labs/start-os v$VERSION $file
done
for file in *.iso *.squashfs; do
s3cmd put -P $file s3://startos-images/v$VERSION/$file
done
fi
if [ "$SKIP_INDEX" != "1" ]; then
for arch in aarch64 aarch64-nonfree riscv64 x86_64 x86_64-nonfree; do
for file in *_$arch.squashfs *_$arch.iso; do
start-cli --registry=https://alpha-registry-x.start9.com registry os asset add --platform=$arch --version=$VERSION $file https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$file
done
done
fi
for file in *.iso *.squashfs *.deb start-cli_*; do
gpg -u 7CFFDA41CA66056A --detach-sign --armor -o "${file}.asc" "$file"
done
gpg --export -a 7CFFDA41CA66056A > dr-bonez.key.asc
tar -czvf signatures.tar.gz *.asc
gh release upload -R Start9Labs/start-os v$VERSION signatures.tar.gz
cat << EOF
# ISO Downloads
- [x86_64/AMD64](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_x86_64-nonfree.iso))
- [x86_64/AMD64-slim (FOSS-only)](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_x86_64.iso) "Without proprietary software or drivers")
- [aarch64/ARM64](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_aarch64-nonfree.iso))
- [aarch64/ARM64-slim (FOSS-Only)](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_aarch64.iso) "Without proprietary software or drivers")
- [RISCV64 (RVA23)](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_riscv64.iso))
EOF
cat << 'EOF'
# StartOS Checksums
## SHA-256
```
EOF
sha256sum *.iso *.squashfs
cat << 'EOF'
```
## BLAKE-3
```
EOF
b3sum *.iso *.squashfs
cat << 'EOF'
```
# Start-Tunnel Checksums
## SHA-256
```
EOF
sha256sum start-tunnel*.deb
cat << 'EOF'
```
## BLAKE-3
```
EOF
b3sum start-tunnel*.deb
cat << 'EOF'
```
# start-cli Checksums
## SHA-256
```
EOF
sha256sum start-cli_*
cat << 'EOF'
```
## BLAKE-3
```
EOF
b3sum start-cli_*
cat << 'EOF'
```
EOF


@@ -1,30 +0,0 @@
// Mock for ESM-only mime package — Jest's module loader doesn't support require(esm)
const types = {
".png": "image/png",
".jpg": "image/jpeg",
".jpeg": "image/jpeg",
".gif": "image/gif",
".svg": "image/svg+xml",
".webp": "image/webp",
".ico": "image/x-icon",
".json": "application/json",
".js": "application/javascript",
".html": "text/html",
".css": "text/css",
".txt": "text/plain",
".md": "text/markdown",
}
module.exports = {
default: {
getType(path) {
const ext = "." + path.split(".").pop()
return types[ext] || null
},
getExtension(type) {
const entry = Object.entries(types).find(([, v]) => v === type)
return entry ? entry[0].slice(1) : null
},
},
__esModule: true,
}


@@ -5,7 +5,7 @@ OnFailure=container-runtime-failure.service
 [Service]
 Type=simple
 Environment=RUST_LOG=startos=debug
-ExecStart=/usr/bin/start-container pipe-wrap /usr/bin/node --experimental-detect-module --trace-warnings /usr/lib/startos/init/index.js
+ExecStart=/usr/bin/node --experimental-detect-module --trace-warnings --unhandled-rejections=warn /usr/lib/startos/init/index.js
 Restart=no

 [Install]


@@ -5,7 +5,4 @@ module.exports = {
   testEnvironment: "node",
   rootDir: "./src/",
   modulePathIgnorePatterns: ["./dist/"],
-  moduleNameMapper: {
-    "^mime$": "<rootDir>/../__mocks__/mime.js",
-  },
 }


@@ -19,6 +19,7 @@
 "lodash.merge": "^4.6.2",
 "mime": "^4.0.7",
 "node-fetch": "^3.1.0",
+"ts-matches": "^6.3.2",
 "tslib": "^2.5.3",
 "typescript": "^5.1.3",
 "yaml": "^2.3.1"
@@ -37,7 +38,7 @@
 },
 "../sdk/dist": {
   "name": "@start9labs/start-sdk",
-  "version": "0.4.0-beta.66",
+  "version": "0.4.0-beta.48",
   "license": "MIT",
   "dependencies": {
     "@iarna/toml": "^3.0.0",
@@ -45,13 +46,11 @@
 "@noble/hashes": "^1.7.2",
 "@types/ini": "^4.1.1",
 "deep-equality-data-structures": "^2.0.0",
-"fast-xml-parser": "^5.5.6",
 "ini": "^5.0.0",
 "isomorphic-fetch": "^3.0.0",
 "mime": "^4.0.7",
-"yaml": "^2.7.1",
-"zod": "^4.3.6",
-"zod-deep-partial": "^1.2.0"
+"ts-matches": "^6.3.2",
+"yaml": "^2.7.1"
 },
 "devDependencies": {
   "@types/jest": "^29.4.0",
@@ -6495,6 +6494,12 @@
   }
 }
 },
+"node_modules/ts-matches": {
+  "version": "6.3.2",
+  "resolved": "https://registry.npmjs.org/ts-matches/-/ts-matches-6.3.2.tgz",
+  "integrity": "sha512-UhSgJymF8cLd4y0vV29qlKVCkQpUtekAaujXbQVc729FezS8HwqzepqvtjzQ3HboatIqN/Idor85O2RMwT7lIQ==",
+  "license": "MIT"
+},
 "node_modules/tslib": {
   "version": "2.8.1",
   "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz",


@@ -28,6 +28,7 @@
 "lodash.merge": "^4.6.2",
 "mime": "^4.0.7",
 "node-fetch": "^3.1.0",
+"ts-matches": "^6.3.2",
 "tslib": "^2.5.3",
 "typescript": "^5.1.3",
 "yaml": "^2.3.1"


@@ -3,39 +3,33 @@ import {
   types as T,
   utils,
   VersionRange,
-  z,
 } from "@start9labs/start-sdk"
 import * as net from "net"
+import { object, string, number, literals, some, unknown } from "ts-matches"
 import { Effects } from "../Models/Effects"
 import { CallbackHolder } from "../Models/CallbackHolder"
 import { asError } from "@start9labs/start-sdk/base/lib/util"

-const matchRpcError = z.object({
-  error: z.object({
-    code: z.number(),
-    message: z.string(),
-    data: z
-      .union([
-        z.string(),
-        z.object({
-          details: z.string(),
-          debug: z.string().nullable().optional(),
-        }),
-      ])
-      .nullable()
-      .optional(),
-  }),
-})
-function testRpcError(v: unknown): v is RpcError {
-  return matchRpcError.safeParse(v).success
-}
-const matchRpcResult = z.object({
-  result: z.unknown(),
-})
-function testRpcResult(v: unknown): v is z.infer<typeof matchRpcResult> {
-  return matchRpcResult.safeParse(v).success
-}
-type RpcError = z.infer<typeof matchRpcError>
+const matchRpcError = object({
+  error: object({
+    code: number,
+    message: string,
+    data: some(
+      string,
+      object({
+        details: string,
+        debug: string.nullable().optional(),
+      }),
+    )
+      .nullable()
+      .optional(),
+  }),
+})
+const testRpcError = matchRpcError.test
+const testRpcResult = object({
+  result: unknown,
+}).test
+type RpcError = typeof matchRpcError._TYPE

 const SOCKET_PATH = "/media/startos/rpc/host.sock"
 let hostSystemId = 0
@@ -77,7 +71,7 @@ const rpcRoundFor =
         "Error in host RPC:",
         utils.asError({ method, params, error: res.error }),
       )
-      if (typeof res.error.data === "string") {
+      if (string.test(res.error.data)) {
         message += ": " + res.error.data
         console.error(`Details: ${res.error.data}`)
       } else {
@@ -187,10 +181,9 @@ export function makeEffects(context: EffectContext): Effects {
     getServiceManifest(
       ...[options]: Parameters<T.Effects["getServiceManifest"]>
     ) {
-      return rpcRound("get-service-manifest", {
-        ...options,
-        callback: context.callbacks?.addCallback(options.callback) || null,
-      }) as ReturnType<T.Effects["getServiceManifest"]>
+      return rpcRound("get-service-manifest", options) as ReturnType<
+        T.Effects["getServiceManifest"]
+      >
     },
     subcontainer: {
       createFs(options: { imageId: string; name: string }) {
@@ -212,10 +205,9 @@ export function makeEffects(context: EffectContext): Effects {
       >
     }) as Effects["exportServiceInterface"],
     getContainerIp(...[options]: Parameters<T.Effects["getContainerIp"]>) {
-      return rpcRound("get-container-ip", {
-        ...options,
-        callback: context.callbacks?.addCallback(options.callback) || null,
-      }) as ReturnType<T.Effects["getContainerIp"]>
+      return rpcRound("get-container-ip", options) as ReturnType<
+        T.Effects["getContainerIp"]
+      >
     },
     getOsIp(...[]: Parameters<T.Effects["getOsIp"]>) {
       return rpcRound("get-os-ip", {}) as ReturnType<T.Effects["getOsIp"]>
@@ -246,10 +238,9 @@ export function makeEffects(context: EffectContext): Effects {
       >
     },
     getSslCertificate(options: Parameters<T.Effects["getSslCertificate"]>[0]) {
-      return rpcRound("get-ssl-certificate", {
-        ...options,
-        callback: context.callbacks?.addCallback(options.callback) || null,
-      }) as ReturnType<T.Effects["getSslCertificate"]>
+      return rpcRound("get-ssl-certificate", options) as ReturnType<
+        T.Effects["getSslCertificate"]
+      >
     },
     getSslKey(options: Parameters<T.Effects["getSslKey"]>[0]) {
       return rpcRound("get-ssl-key", options) as ReturnType<
@@ -262,14 +253,6 @@ export function makeEffects(context: EffectContext): Effects {
         callback: context.callbacks?.addCallback(options.callback) || null,
       }) as ReturnType<T.Effects["getSystemSmtp"]>
     },
-    getOutboundGateway(
-      ...[options]: Parameters<T.Effects["getOutboundGateway"]>
-    ) {
-      return rpcRound("get-outbound-gateway", {
-        ...options,
-        callback: context.callbacks?.addCallback(options.callback) || null,
-      }) as ReturnType<T.Effects["getOutboundGateway"]>
-    },
     listServiceInterfaces(
       ...[options]: Parameters<T.Effects["listServiceInterfaces"]>
     ) {
@@ -311,10 +294,7 @@ export function makeEffects(context: EffectContext): Effects {
     },
     getStatus(...[o]: Parameters<T.Effects["getStatus"]>) {
-      return rpcRound("get-status", {
-        ...o,
-        callback: context.callbacks?.addCallback(o.callback) || null,
-      }) as ReturnType<T.Effects["getStatus"]>
+      return rpcRound("get-status", o) as ReturnType<T.Effects["getStatus"]>
     },
     /// DEPRECATED
     setMainStatus(o: { status: "running" | "stopped" }): Promise<null> {
@@ -336,31 +316,6 @@ export function makeEffects(context: EffectContext): Effects {
         T.Effects["setDataVersion"]
       >
     },
-    plugin: {
-      url: {
-        register(
-          ...[options]: Parameters<T.Effects["plugin"]["url"]["register"]>
-        ) {
-          return rpcRound("plugin.url.register", options) as ReturnType<
-            T.Effects["plugin"]["url"]["register"]
-          >
-        },
-        exportUrl(
-          ...[options]: Parameters<T.Effects["plugin"]["url"]["exportUrl"]>
-        ) {
-          return rpcRound("plugin.url.export-url", options) as ReturnType<
-            T.Effects["plugin"]["url"]["exportUrl"]
-          >
-        },
-        clearUrls(
-          ...[options]: Parameters<T.Effects["plugin"]["url"]["clearUrls"]>
-        ) {
-          return rpcRound("plugin.url.clear-urls", options) as ReturnType<
-            T.Effects["plugin"]["url"]["clearUrls"]
-          >
-        },
-      },
-    },
   }
   if (context.callbacks?.onLeaveContext)
     self.onLeaveContext(() => {


@@ -1,13 +1,25 @@
 // @ts-check
 import * as net from "net"
+import {
+  object,
+  some,
+  string,
+  literal,
+  array,
+  number,
+  matches,
+  any,
+  shape,
+  anyOf,
+  literals,
+} from "ts-matches"
 import {
   ExtendedVersion,
   types as T,
   utils,
   VersionRange,
-  z,
 } from "@start9labs/start-sdk"

 import * as fs from "fs"
@@ -17,92 +29,89 @@ import { jsonPath, unNestPath } from "../Models/JsonPath"
 import { System } from "../Interfaces/System"
 import { makeEffects } from "./EffectCreator"

 type MaybePromise<T> = T | Promise<T>
-export const matchRpcResult = z.union([
-  z.object({ result: z.any() }),
-  z.object({
-    error: z.object({
-      code: z.number(),
-      message: z.string(),
-      data: z
-        .object({
-          details: z.string().optional(),
-          debug: z.any().optional(),
-        })
-        .nullable()
-        .optional(),
-    }),
-  }),
-])
-export type RpcResult = z.infer<typeof matchRpcResult>
+export const matchRpcResult = anyOf(
+  object({ result: any }),
+  object({
+    error: object({
+      code: number,
+      message: string,
+      data: object({
+        details: string.optional(),
+        debug: any.optional(),
+      })
+        .nullable()
+        .optional(),
+    }),
+  }),
+)
+export type RpcResult = typeof matchRpcResult._TYPE
 type SocketResponse = ({ jsonrpc: "2.0"; id: IdType } & RpcResult) | null

 const SOCKET_PARENT = "/media/startos/rpc"
 const SOCKET_PATH = "/media/startos/rpc/service.sock"
 const jsonrpc = "2.0" as const

-const isResultSchema = z.object({ result: z.any() })
-const isResult = (v: unknown): v is z.infer<typeof isResultSchema> =>
-  isResultSchema.safeParse(v).success
+const isResult = object({ result: any }).test

-const idType = z.union([z.string(), z.number(), z.literal(null)])
+const idType = some(string, number, literal(null))
 type IdType = null | string | number | undefined
-const runType = z.object({
+const runType = object({
   id: idType.optional(),
-  method: z.literal("execute"),
-  params: z.object({
-    id: z.string(),
-    procedure: z.string(),
-    input: z.any(),
-    timeout: z.number().nullable().optional(),
+  method: literal("execute"),
+  params: object({
+    id: string,
+    procedure: string,
+    input: any,
+    timeout: number.nullable().optional(),
   }),
 })
-const sandboxRunType = z.object({
+const sandboxRunType = object({
   id: idType.optional(),
-  method: z.literal("sandbox"),
-  params: z.object({
-    id: z.string(),
-    procedure: z.string(),
-    input: z.any(),
-    timeout: z.number().nullable().optional(),
+  method: literal("sandbox"),
+  params: object({
+    id: string,
+    procedure: string,
+    input: any,
+    timeout: number.nullable().optional(),
   }),
 })
-const callbackType = z.object({
-  method: z.literal("callback"),
-  params: z.object({
-    id: z.number(),
-    args: z.array(z.unknown()),
+const callbackType = object({
+  method: literal("callback"),
+  params: object({
+    id: number,
+    args: array,
   }),
 })
-const initType = z.object({
+const initType = object({
   id: idType.optional(),
-  method: z.literal("init"),
-  params: z.object({
-    id: z.string(),
-    kind: z.enum(["install", "update", "restore"]).nullable(),
+  method: literal("init"),
+  params: object({
+    id: string,
+    kind: literals("install", "update", "restore").nullable(),
   }),
 })
-const startType = z.object({
+const startType = object({
   id: idType.optional(),
-  method: z.literal("start"),
+  method: literal("start"),
 })
-const stopType = z.object({
+const stopType = object({
   id: idType.optional(),
-  method: z.literal("stop"),
+  method: literal("stop"),
 })
-const exitType = z.object({
+const exitType = object({
   id: idType.optional(),
-  method: z.literal("exit"),
-  params: z.object({
-    id: z.string(),
-    target: z.string().nullable(),
+  method: literal("exit"),
+  params: object({
+    id: string,
+    target: string.nullable(),
   }),
 })
-const evalType = z.object({
+const evalType = object({
   id: idType.optional(),
-  method: z.literal("eval"),
-  params: z.object({
-    script: z.string(),
+  method: literal("eval"),
+  params: object({
+    script: string,
   }),
 })
@@ -135,9 +144,7 @@ const handleRpc = (id: IdType, result: Promise<RpcResult>) =>
   },
 }))

-const hasIdSchema = z.object({ id: idType })
-const hasId = (v: unknown): v is z.infer<typeof hasIdSchema> =>
-  hasIdSchema.safeParse(v).success
+const hasId = object({ id: idType }).test

 export class RpcListener {
   shouldExit = false
   unixSocketServer = net.createServer(async (server) => {})
@@ -239,52 +246,40 @@ export class RpcListener {
   }

   private dealWithInput(input: unknown): MaybePromise<SocketResponse> {
-    const parsed = z.object({ method: z.string() }).safeParse(input)
-    if (!parsed.success) {
-      console.warn(
-        `Couldn't parse the following input ${JSON.stringify(input)}`,
-      )
-      return {
-        jsonrpc,
-        id: (input as any)?.id,
-        error: {
-          code: -32602,
-          message: "invalid params",
-          data: {
-            details: JSON.stringify(input),
-          },
-        },
-      }
-    }
-    switch (parsed.data.method) {
-      case "execute": {
-        const { id, params } = runType.parse(input)
+    return matches(input)
+      .when(runType, async ({ id, params }) => {
         const system = this.system
-        const procedure = jsonPath.parse(params.procedure)
-        const { input: inp, timeout, id: eventId } = params
-        const result = this.getResult(procedure, system, eventId, timeout, inp)
+        const procedure = jsonPath.unsafeCast(params.procedure)
+        const { input, timeout, id: eventId } = params
+        const result = this.getResult(
+          procedure,
+          system,
+          eventId,
+          timeout,
+          input,
+        )
         return handleRpc(id, result)
-      }
-      case "sandbox": {
-        const { id, params } = sandboxRunType.parse(input)
+      })
+      .when(sandboxRunType, async ({ id, params }) => {
         const system = this.system
-        const procedure = jsonPath.parse(params.procedure)
-        const { input: inp, timeout, id: eventId } = params
-        const result = this.getResult(procedure, system, eventId, timeout, inp)
+        const procedure = jsonPath.unsafeCast(params.procedure)
+        const { input, timeout, id: eventId } = params
+        const result = this.getResult(
+          procedure,
+          system,
+          eventId,
+          timeout,
+          input,
+        )
         return handleRpc(id, result)
-      }
-      case "callback": {
-        const {
-          params: { id, args },
-        } = callbackType.parse(input)
+      })
+      .when(callbackType, async ({ params: { id, args } }) => {
         this.callCallback(id, args)
         return null
-      }
-      case "start": {
-        const { id } = startType.parse(input)
+      })
+      .when(startType, async ({ id }) => {
         const callbacks =
           this.callbacks?.getChild("main") || this.callbacks?.child("main")
         const effects = makeEffects({
@@ -295,17 +290,18 @@ export class RpcListener {
           id,
           this.system.start(effects).then((result) => ({ result })),
         )
-      }
-      case "stop": {
-        const { id } = stopType.parse(input)
-        this.callbacks?.removeChild("main")
+      })
+      .when(stopType, async ({ id }) => {
         return handleRpc(
           id,
-          this.system.stop().then((result) => ({ result })),
+          this.system.stop().then((result) => {
+            this.callbacks?.removeChild("main")
+            return { result }
+          }),
         )
-      }
-      case "exit": {
-        const { id, params } = exitType.parse(input)
+      })
+      .when(exitType, async ({ id, params }) => {
         return handleRpc(
           id,
           (async () => {
@@ -327,9 +323,8 @@ export class RpcListener {
           })().then((result) => ({ result })),
         )
-      }
-      case "init": {
-        const { id, params } = initType.parse(input)
+      })
+      .when(initType, async ({ id, params }) => {
         return handleRpc(
           id,
           (async () => {
@@ -354,9 +349,8 @@ export class RpcListener {
           })().then((result) => ({ result })),
         )
-      }
-      case "eval": {
-        const { id, params } = evalType.parse(input)
+      })
+      .when(evalType, async ({ id, params }) => {
         return handleRpc(
           id,
           (async () => {
@@ -381,28 +375,41 @@ export class RpcListener {
           })(),
         )
-      }
-      default: {
-        const { id, method } = z
-          .object({ id: idType.optional(), method: z.string() })
-          .passthrough()
-          .parse(input)
-        return {
-          jsonrpc,
-          id,
-          error: {
-            code: -32601,
-            message: "Method not found",
-            data: {
-              details: method,
-            },
-          },
-        }
-      }
-    }
+      })
+      .when(
+        shape({ id: idType.optional(), method: string }),
+        ({ id, method }) => ({
+          jsonrpc,
+          id,
+          error: {
+            code: -32601,
+            message: `Method not found`,
+            data: {
+              details: method,
+            },
+          },
+        }),
+      )
+      .defaultToLazy(() => {
+        console.warn(
+          `Couldn't parse the following input ${JSON.stringify(input)}`,
+        )
+        return {
+          jsonrpc,
+          id: (input as any)?.id,
+          error: {
+            code: -32602,
+            message: "invalid params",
+            data: {
+              details: JSON.stringify(input),
+            },
+          },
+        }
+      })
   }

   private getResult(
-    procedure: z.infer<typeof jsonPath>,
+    procedure: typeof jsonPath._TYPE,
     system: System,
     eventId: string,
     timeout: number | null | undefined,
@@ -430,7 +437,6 @@ export class RpcListener {
         return system.getActionInput(
           effects,
           procedures[2],
-          input?.prefill ?? null,
           timeout || null,
         )
       case procedures[1] === "actions" && procedures[3] === "run":
@@ -442,18 +448,26 @@ export class RpcListener {
           )
         }
       }
-    })().then(ensureResultTypeShape, (error) => {
-      const errorSchema = z.object({
-        error: z.string(),
-        code: z.number().default(0),
-      })
-      const parsed = errorSchema.safeParse(error)
-      if (parsed.success) {
-        return {
-          error: { code: parsed.data.code, message: parsed.data.error },
-        }
-      }
-      return { error: { code: 0, message: String(error) } }
-    })
+    })().then(ensureResultTypeShape, (error) =>
+      matches(error)
+        .when(
+          object({
+            error: string,
+            code: number.defaultTo(0),
+          }),
+          (error) => ({
+            error: {
+              code: error.code,
+              message: error.error,
+            },
+          }),
+        )
+        .defaultToLazy(() => ({
+          error: {
+            code: 0,
+            message: String(error),
+          },
+        })),
+    )
   }
 }
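The `dealWithInput` refactor above replaces a zod `safeParse`-plus-`switch` with a chained matcher that tries each request parser in turn and falls back to an "invalid params" reply. A self-contained sketch of that dispatch shape — `Matcher`, `match`, and `otherwise` are toy stand-ins written here, NOT the ts-matches API:

```typescript
// Toy matcher: try guards in order, run the first matching handler,
// and fall back lazily when nothing matches (like defaultToLazy).
class Matcher {
  private result: unknown
  private matched = false
  constructor(private input: unknown) {}

  when<T>(guard: (v: unknown) => v is T, handler: (value: T) => unknown): this {
    const v = this.input
    if (!this.matched && guard(v)) {
      this.matched = true
      this.result = handler(v)
    }
    return this
  }

  otherwise(fallback: (v: unknown) => unknown): unknown {
    return this.matched ? this.result : fallback(this.input)
  }
}

const match = (input: unknown) => new Matcher(input)

// Guards standing in for the runType/stopType parsers.
const isExecute = (v: unknown): v is { method: "execute"; id: number } =>
  typeof v === "object" && v !== null && (v as { method?: unknown }).method === "execute"
const isStop = (v: unknown): v is { method: "stop"; id: number } =>
  typeof v === "object" && v !== null && (v as { method?: unknown }).method === "stop"

const reply = match({ method: "stop", id: 7 })
  .when(isExecute, ({ id }) => ({ id, result: "ran" }))
  .when(isStop, ({ id }) => ({ id, result: "stopped" }))
  .otherwise(() => ({ error: { code: -32601, message: "Method not found" } }))

console.log(reply)
```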


@@ -2,7 +2,7 @@ import * as fs from "fs/promises"
 import * as cp from "child_process"
 import { SubContainer, types as T } from "@start9labs/start-sdk"
 import { promisify } from "util"
-import { DockerProcedure } from "../../../Models/DockerProcedure"
+import { DockerProcedure, VolumeId } from "../../../Models/DockerProcedure"
 import { Volume } from "./matchVolume"
 import {
   CommandOptions,
@@ -28,7 +28,7 @@ export class DockerProcedureContainer extends Drop {
     effects: T.Effects,
     packageId: string,
     data: DockerProcedure,
-    volumes: { [id: string]: Volume },
+    volumes: { [id: VolumeId]: Volume },
     name: string,
     options: { subcontainer?: SubContainer<SDKManifest> } = {},
   ) {
@@ -47,7 +47,7 @@ export class DockerProcedureContainer extends Drop {
     effects: T.Effects,
     packageId: string,
     data: DockerProcedure,
-    volumes: { [id: string]: Volume },
+    volumes: { [id: VolumeId]: Volume },
     name: string,
   ) {
     const subcontainer = await SubContainerOwned.of(
@@ -64,7 +64,7 @@ export class DockerProcedureContainer extends Drop {
           ? `${subcontainer.rootfs}${mounts[mount]}`
           : `${subcontainer.rootfs}/${mounts[mount]}`
         await fs.mkdir(path, { recursive: true })
-        const volumeMount: Volume = volumes[mount]
+        const volumeMount = volumes[mount]
         if (volumeMount.type === "data") {
           await subcontainer.mount(
             Mounts.of().mountVolume({
@@ -90,7 +90,7 @@ export class DockerProcedureContainer extends Drop {
       ...new Set(
         Object.values(hostInfo?.bindings || {})
           .flatMap((b) => b.addresses.available)
-          .map((h) => h.hostname),
+          .map((h) => h.host),
       ).values(),
     ]
     const certChain = await effects.getSslCertificate({
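Side note on the `volumes` hunks above: keying the map by `VolumeId` instead of a bare `string` documents which id namespace the key belongs to, while runtime behavior is unchanged. A minimal self-contained sketch (assuming `VolumeId` is a plain string alias, as the import from `Models/DockerProcedure` suggests; the `Volume` shape here is simplified for illustration):

```typescript
// Hypothetical simplified types for illustration only; the real
// VolumeId and Volume live in Models/DockerProcedure and matchVolume.
type VolumeId = string

interface Volume {
  type: "data" | "assets" | "backup"
}

// Keying the map by VolumeId makes the intended key namespace explicit
// to readers and call sites, even though it erases to string.
const volumes: { [id: VolumeId]: Volume } = {
  main: { type: "data" },
  embassy: { type: "backup" },
}

// A lookup of an unknown id still yields undefined at runtime, so the
// caller has to handle the missing case.
function getVolume(id: VolumeId): Volume | undefined {
  return volumes[id]
}

console.log(getVolume("main")?.type) // "data"
```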


@@ -15,11 +15,26 @@ import { System } from "../../../Interfaces/System"
 import { matchManifest, Manifest } from "./matchManifest"
 import * as childProcess from "node:child_process"
 import { DockerProcedureContainer } from "./DockerProcedureContainer"
-import { DockerProcedure } from "../../../Models/DockerProcedure"
 import { promisify } from "node:util"
 import * as U from "./oldEmbassyTypes"
 import { MainLoop } from "./MainLoop"
-import { z } from "@start9labs/start-sdk"
+import {
+  matches,
+  boolean,
+  dictionary,
+  literal,
+  literals,
+  object,
+  string,
+  unknown,
+  any,
+  tuple,
+  number,
+  anyOf,
+  deferred,
+  Parser,
+  array,
+} from "ts-matches"
 import { AddSslOptions } from "@start9labs/start-sdk/base/lib/osBindings"
 import {
   BindOptionsByProtocol,
@@ -42,83 +57,6 @@ function todo(): never {
   throw new Error("Not implemented")
 }
-function getStatus(
-  effects: Effects,
-  options: Omit<Parameters<Effects["getStatus"]>[0], "callback"> = {},
-) {
-  async function* watch(abort?: AbortSignal) {
-    const resolveCell = { resolve: () => {} }
-    effects.onLeaveContext(() => {
-      resolveCell.resolve()
-    })
-    abort?.addEventListener("abort", () => resolveCell.resolve())
-    while (effects.isInContext && !abort?.aborted) {
-      let callback: () => void = () => {}
-      const waitForNext = new Promise<void>((resolve) => {
-        callback = resolve
-        resolveCell.resolve = resolve
-      })
-      yield await effects.getStatus({ ...options, callback })
-      await waitForNext
-    }
-  }
-  return {
-    const: () =>
-      effects.getStatus({
-        ...options,
-        callback:
-          effects.constRetry &&
-          (() => effects.constRetry && effects.constRetry()),
-      }),
-    once: () => effects.getStatus(options),
-    watch: (abort?: AbortSignal) => {
-      const ctrl = new AbortController()
-      abort?.addEventListener("abort", () => ctrl.abort())
-      return watch(ctrl.signal)
-    },
-    onChange: (
-      callback: (
-        value: T.StatusInfo | null,
-        error?: Error,
-      ) => { cancel: boolean } | Promise<{ cancel: boolean }>,
-    ) => {
-      ;(async () => {
-        const ctrl = new AbortController()
-        for await (const value of watch(ctrl.signal)) {
-          try {
-            const res = await callback(value)
-            if (res.cancel) {
-              ctrl.abort()
-              break
-            }
-          } catch (e) {
-            console.error(
-              "callback function threw an error @ getStatus.onChange",
-              e,
-            )
-          }
-        }
-      })()
-        .catch((e) => callback(null, e as Error))
-        .catch((e) =>
-          console.error(
-            "callback function threw an error @ getStatus.onChange",
-            e,
-          ),
-        )
-    },
-  }
-}
-/**
- * Local type for procedure values from the manifest.
- * The manifest's zod schemas use ZodTypeAny casts that produce `unknown` in zod v4.
- * This type restores the expected shape for type-safe property access.
- */
-type Procedure =
-  | (DockerProcedure & { type: "docker" })
-  | { type: "script"; args: unknown[] | null }
 const MANIFEST_LOCATION = "/usr/lib/startos/package/embassyManifest.json"
 export const EMBASSY_JS_LOCATION = "/usr/lib/startos/package/embassy.js"
@@ -127,24 +65,26 @@ const configFile = FileHelper.json(
     base: new Volume("embassy"),
     subpath: "config.json",
   },
-  z.any(),
+  matches.any,
 )
 const dependsOnFile = FileHelper.json(
   {
     base: new Volume("embassy"),
     subpath: "dependsOn.json",
   },
-  z.record(z.string(), z.array(z.string())),
+  dictionary([string, array(string)]),
 )
-const matchResult = z.object({
-  result: z.any(),
+const matchResult = object({
+  result: any,
 })
-const matchError = z.object({
-  error: z.string(),
+const matchError = object({
+  error: string,
 })
-const matchErrorCode = z.object({
-  "error-code": z.tuple([z.number(), z.string()]),
+const matchErrorCode = object<{
+  "error-code": [number, string] | readonly [number, string]
+}>({
+  "error-code": tuple(number, string),
 })
 const assertNever = (
@@ -156,34 +96,29 @@ const assertNever = (
 /**
  Should be changing the type for specific properties, and this is mostly a transformation for the old return types to the newer one.
  */
-function isMatchResult(a: unknown): a is z.infer<typeof matchResult> {
-  return matchResult.safeParse(a).success
-}
-function isMatchError(a: unknown): a is z.infer<typeof matchError> {
-  return matchError.safeParse(a).success
-}
-function isMatchErrorCode(a: unknown): a is z.infer<typeof matchErrorCode> {
-  return matchErrorCode.safeParse(a).success
-}
 const fromReturnType = <A>(a: U.ResultType<A>): A => {
-  if (isMatchResult(a)) {
+  if (matchResult.test(a)) {
     return a.result
   }
-  if (isMatchError(a)) {
+  if (matchError.test(a)) {
     console.info({ passedErrorStack: new Error().stack, error: a.error })
     throw { error: a.error }
   }
-  if (isMatchErrorCode(a)) {
+  if (matchErrorCode.test(a)) {
     const [code, message] = a["error-code"]
     throw { error: message, code }
   }
-  return assertNever(a as never)
+  return assertNever(a)
 }
-const matchSetResult = z.object({
-  "depends-on": z.record(z.string(), z.array(z.string())).nullable().optional(),
-  dependsOn: z.record(z.string(), z.array(z.string())).nullable().optional(),
-  signal: z.enum([
+const matchSetResult = object({
+  "depends-on": dictionary([string, array(string)])
+    .nullable()
+    .optional(),
+  dependsOn: dictionary([string, array(string)])
+    .nullable()
+    .optional(),
+  signal: literals(
     "SIGTERM",
     "SIGHUP",
     "SIGINT",
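The `fromReturnType` hunk above drops the hand-rolled `isMatch*` helpers in favor of the parsers' own `.test` method, which ts-matches exposes as a TypeScript type guard. A standalone sketch of that pattern (illustrative only, not the real ts-matches implementation) shows why `.test` narrows the value without casts:

```typescript
// Illustrative sketch: a parser object whose .test method is a type
// guard, so calling matchError.test(a) narrows `a` in the true branch.
type Parser<T> = {
  test: (value: unknown) => value is T
}

const matchError: Parser<{ error: string }> = {
  test: (value: unknown): value is { error: string } =>
    typeof value === "object" &&
    value !== null &&
    typeof (value as { error?: unknown }).error === "string",
}

function describe(a: unknown): string {
  if (matchError.test(a)) {
    // `a` is narrowed to { error: string } here; no `as` cast needed.
    return `error: ${a.error}`
  }
  return "ok"
}

console.log(describe({ error: "boom" })) // "error: boom"
console.log(describe(42)) // "ok"
```

This is why the diff can delete three `isMatch*` wrappers without losing any type safety at the call sites.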
@@ -216,7 +151,7 @@ const matchSetResult = z.object({
     "SIGPWR",
     "SIGSYS",
     "SIGINFO",
-  ]),
+  ),
 })
 type OldGetConfigRes = {
@@ -298,29 +233,33 @@ const asProperty = (x: PackagePropertiesV2): PropertiesReturn =>
   Object.fromEntries(
     Object.entries(x).map(([key, value]) => [key, asProperty_(value)]),
   )
-const matchPackagePropertyObject: z.ZodType<PackagePropertyObject> = z.object({
-  value: z.lazy(() => matchPackageProperties),
-  type: z.literal("object"),
-  description: z.string(),
-})
-const matchPackagePropertyString: z.ZodType<PackagePropertyString> = z.object({
-  type: z.literal("string"),
-  description: z.string().nullable().optional(),
-  value: z.string(),
-  copyable: z.boolean().nullable().optional(),
-  qr: z.boolean().nullable().optional(),
-  masked: z.boolean().nullable().optional(),
-})
-const matchPackageProperties: z.ZodType<PackagePropertiesV2> = z.lazy(() =>
-  z.record(
-    z.string(),
-    z.union([matchPackagePropertyObject, matchPackagePropertyString]),
-  ),
-)
-const matchProperties = z.object({
-  version: z.literal(2),
+const [matchPackageProperties, setMatchPackageProperties] =
+  deferred<PackagePropertiesV2>()
+const matchPackagePropertyObject: Parser<unknown, PackagePropertyObject> =
+  object({
+    value: matchPackageProperties,
+    type: literal("object"),
+    description: string,
+  })
+const matchPackagePropertyString: Parser<unknown, PackagePropertyString> =
+  object({
+    type: literal("string"),
+    description: string.nullable().optional(),
+    value: string,
+    copyable: boolean.nullable().optional(),
+    qr: boolean.nullable().optional(),
+    masked: boolean.nullable().optional(),
+  })
+setMatchPackageProperties(
+  dictionary([
+    string,
+    anyOf(matchPackagePropertyObject, matchPackagePropertyString),
+  ]),
+)
+const matchProperties = object({
+  version: literal(2),
   data: matchPackageProperties,
 })
@@ -364,7 +303,7 @@ export class SystemForEmbassy implements System {
     })
     const manifestData = await fs.readFile(manifestLocation, "utf-8")
     return new SystemForEmbassy(
-      matchManifest.parse(JSON.parse(manifestData)),
+      matchManifest.unsafeCast(JSON.parse(manifestData)),
       moduleCode,
     )
   }
@@ -445,14 +384,13 @@ export class SystemForEmbassy implements System {
   }
   callCallback(_callback: number, _args: any[]): void {}
   async stop(): Promise<void> {
-    const clean = this.currentRunning?.clean({
-      timeout: fromDuration(
-        (this.manifest.main["sigterm-timeout"] as any) || "30s",
-      ),
-    })
+    const { currentRunning } = this
+    this.currentRunning?.clean()
     delete this.currentRunning
-    if (clean) {
-      await clean
+    if (currentRunning) {
+      await currentRunning.clean({
+        timeout: fromDuration(this.manifest.main["sigterm-timeout"] || "30s"),
+      })
     }
   }
@@ -494,7 +432,7 @@ export class SystemForEmbassy implements System {
     const host = new MultiHost({ effects, id })
     const internalPorts = new Set(
       Object.values(interfaceValue["tor-config"]?.["port-mapping"] ?? {})
-        .map((v) => parseInt(v))
+        .map(Number.parseInt)
         .concat(
           ...Object.values(interfaceValue["lan-config"] ?? {}).map(
             (c) => c.internal,
@@ -572,7 +510,6 @@ export class SystemForEmbassy implements System {
   async getActionInput(
     effects: Effects,
     actionId: string,
-    _prefill: Record<string, unknown> | null,
     timeoutMs: number | null,
   ): Promise<T.ActionInput | null> {
     if (actionId === "config") {
@@ -685,7 +622,7 @@ export class SystemForEmbassy implements System {
     effects: Effects,
     timeoutMs: number | null,
   ): Promise<void> {
-    const backup = this.manifest.backup.create as Procedure
+    const backup = this.manifest.backup.create
     if (backup.type === "docker") {
       const commands = [backup.entrypoint, ...backup.args]
       const container = await DockerProcedureContainer.of(
@@ -718,7 +655,7 @@ export class SystemForEmbassy implements System {
         encoding: "utf-8",
       })
       .catch((_) => null)
-    const restoreBackup = this.manifest.backup.restore as Procedure
+    const restoreBackup = this.manifest.backup.restore
     if (restoreBackup.type === "docker") {
       const commands = [restoreBackup.entrypoint, ...restoreBackup.args]
       const container = await DockerProcedureContainer.of(
@@ -751,7 +688,7 @@ export class SystemForEmbassy implements System {
     effects: Effects,
     timeoutMs: number | null,
   ): Promise<OldGetConfigRes> {
-    const config = this.manifest.config?.get as Procedure | undefined
+    const config = this.manifest.config?.get
     if (!config) return { spec: {} }
     if (config.type === "docker") {
       const commands = [config.entrypoint, ...config.args]
@@ -793,7 +730,7 @@ export class SystemForEmbassy implements System {
     )
     await updateConfig(effects, this.manifest, spec, newConfig)
     await configFile.write(effects, newConfig)
-    const setConfigValue = this.manifest.config?.set as Procedure | undefined
+    const setConfigValue = this.manifest.config?.set
     if (!setConfigValue) return
     if (setConfigValue.type === "docker") {
       const commands = [
@@ -808,7 +745,7 @@ export class SystemForEmbassy implements System {
         this.manifest.volumes,
         `Set Config - ${commands.join(" ")}`,
       )
-      const answer = matchSetResult.parse(
+      const answer = matchSetResult.unsafeCast(
         JSON.parse(
           (await container.execFail(commands, timeoutMs)).stdout.toString(),
         ),
@@ -821,7 +758,7 @@ export class SystemForEmbassy implements System {
       const method = moduleCode.setConfig
       if (!method) throw new Error("Expecting that the method setConfig exists")
-      const answer = matchSetResult.parse(
+      const answer = matchSetResult.unsafeCast(
         await method(
           polyfillEffects(effects, this.manifest),
           newConfig as U.Config,
@@ -850,11 +787,7 @@ export class SystemForEmbassy implements System {
     const requiredDeps = {
       ...Object.fromEntries(
         Object.entries(this.manifest.dependencies ?? {})
-          .filter(
-            ([k, v]) =>
-              (v?.requirement as { type: string } | undefined)?.type ===
-              "required",
-          )
+          .filter(([k, v]) => v?.requirement.type === "required")
           .map((x) => [x[0], []]) || [],
       ),
     }
@@ -922,7 +855,7 @@ export class SystemForEmbassy implements System {
     }
     if (migration) {
-      const [_, procedure] = migration as readonly [unknown, Procedure]
+      const [_, procedure] = migration
       if (procedure.type === "docker") {
         const commands = [procedure.entrypoint, ...procedure.args]
         const container = await DockerProcedureContainer.of(
@@ -960,10 +893,7 @@ export class SystemForEmbassy implements System {
     effects: Effects,
     timeoutMs: number | null,
   ): Promise<PropertiesReturn> {
-    const setConfigValue = this.manifest.properties as
-      | Procedure
-      | null
-      | undefined
+    const setConfigValue = this.manifest.properties
     if (!setConfigValue) throw new Error("There is no properties")
     if (setConfigValue.type === "docker") {
       const commands = [setConfigValue.entrypoint, ...setConfigValue.args]
@@ -974,7 +904,7 @@ export class SystemForEmbassy implements System {
         this.manifest.volumes,
         `Properties - ${commands.join(" ")}`,
       )
-      const properties = matchProperties.parse(
+      const properties = matchProperties.unsafeCast(
         JSON.parse(
           (await container.execFail(commands, timeoutMs)).stdout.toString(),
         ),
@@ -985,7 +915,7 @@ export class SystemForEmbassy implements System {
       const method = moduleCode.properties
       if (!method)
         throw new Error("Expecting that the method properties exists")
-      const properties = matchProperties.parse(
+      const properties = matchProperties.unsafeCast(
         await method(polyfillEffects(effects, this.manifest)).then(
           fromReturnType,
         ),
@@ -1000,8 +930,7 @@ export class SystemForEmbassy implements System {
     formData: unknown,
     timeoutMs: number | null,
   ): Promise<T.ActionResult> {
-    const actionProcedure = this.manifest.actions?.[actionId]
-      ?.implementation as Procedure | undefined
+    const actionProcedure = this.manifest.actions?.[actionId]?.implementation
     const toActionResult = ({
       message,
       value,
@@ -1068,9 +997,7 @@ export class SystemForEmbassy implements System {
     oldConfig: unknown,
     timeoutMs: number | null,
   ): Promise<object> {
-    const actionProcedure = this.manifest.dependencies?.[id]?.config?.check as
-      | Procedure
-      | undefined
+    const actionProcedure = this.manifest.dependencies?.[id]?.config?.check
     if (!actionProcedure) return { message: "Action not found", value: null }
     if (actionProcedure.type === "docker") {
       const commands = [
@@ -1113,26 +1040,16 @@ export class SystemForEmbassy implements System {
     timeoutMs: number | null,
   ): Promise<void> {
     // TODO: docker
-    const status = await getStatus(effects, { packageId: id }).const()
-    if (!status) return
-    try {
-      await effects.mount({
-        location: `/media/embassy/${id}`,
-        target: {
-          packageId: id,
-          volumeId: "embassy",
-          subpath: null,
-          readonly: true,
-          idmap: [],
-        },
-      })
-    } catch (e) {
-      console.error(
-        `Failed to mount dependency volume for ${id}, skipping autoconfig:`,
-        e,
-      )
-      return
-    }
+    await effects.mount({
+      location: `/media/embassy/${id}`,
+      target: {
+        packageId: id,
+        volumeId: "embassy",
+        subpath: null,
+        readonly: true,
+        idmap: [],
+      },
+    })
     configFile
       .withPath(`/media/embassy/${id}/config.json`)
       .read()
@@ -1172,50 +1089,40 @@ export class SystemForEmbassy implements System {
   }
 }
-const matchPointer = z.object({
-  type: z.literal("pointer"),
+const matchPointer = object({
+  type: literal("pointer"),
 })
-const matchPointerPackage = z.object({
-  subtype: z.literal("package"),
-  target: z.enum(["tor-key", "tor-address", "lan-address"]),
-  "package-id": z.string(),
-  interface: z.string(),
+const matchPointerPackage = object({
+  subtype: literal("package"),
+  target: literals("tor-key", "tor-address", "lan-address"),
+  "package-id": string,
+  interface: string,
 })
-const matchPointerConfig = z.object({
-  subtype: z.literal("package"),
-  target: z.enum(["config"]),
-  "package-id": z.string(),
-  selector: z.string(),
-  multi: z.boolean(),
+const matchPointerConfig = object({
+  subtype: literal("package"),
+  target: literals("config"),
+  "package-id": string,
+  selector: string,
+  multi: boolean,
 })
-const matchSpec = z.object({
-  spec: z.record(z.string(), z.unknown()),
+const matchSpec = object({
+  spec: object,
 })
-const matchVariants = z.object({ variants: z.record(z.string(), z.unknown()) })
-function isMatchPointer(v: unknown): v is z.infer<typeof matchPointer> {
-  return matchPointer.safeParse(v).success
-}
-function isMatchSpec(v: unknown): v is z.infer<typeof matchSpec> {
-  return matchSpec.safeParse(v).success
-}
-function isMatchVariants(v: unknown): v is z.infer<typeof matchVariants> {
-  return matchVariants.safeParse(v).success
-}
+const matchVariants = object({ variants: dictionary([string, unknown]) })
 function cleanSpecOfPointers<T>(mutSpec: T): T {
-  if (typeof mutSpec !== "object" || mutSpec === null) return mutSpec
+  if (!object.test(mutSpec)) return mutSpec
   for (const key in mutSpec) {
     const value = mutSpec[key]
-    if (isMatchSpec(value))
-      value.spec = cleanSpecOfPointers(value.spec) as Record<string, unknown>
-    if (isMatchVariants(value))
+    if (matchSpec.test(value)) value.spec = cleanSpecOfPointers(value.spec)
+    if (matchVariants.test(value))
       value.variants = Object.fromEntries(
         Object.entries(value.variants).map(([key, value]) => [
           key,
           cleanSpecOfPointers(value),
         ]),
       )
-    if (!isMatchPointer(value)) continue
+    if (!matchPointer.test(value)) continue
     delete mutSpec[key]
     // // if (value.target === )
   }
@@ -1281,11 +1188,6 @@ async function updateConfig(
         if (specValue.target === "config") {
           const jp = require("jsonpath")
           const depId = specValue["package-id"]
-          const depStatus = await getStatus(effects, { packageId: depId }).const()
-          if (!depStatus) {
-            mutConfigValue[key] = null
-            continue
-          }
           await effects.mount({
             location: `/media/embassy/${depId}`,
             target: {
@@ -1343,7 +1245,7 @@ async function updateConfig(
             : catchFn(
                 () =>
                   filled.addressInfo!.filter({ kind: "mdns" })!.hostnames[0]
-                    .hostname,
+                    .host,
               ) || ""
           mutConfigValue[key] = url
         }
@@ -1366,7 +1268,7 @@ function extractServiceInterfaceId(manifest: Manifest, specInterface: string) {
 }
 async function convertToNewConfig(value: OldGetConfigRes) {
   try {
-    const valueSpec: OldConfigSpec = matchOldConfigSpec.parse(value.spec)
+    const valueSpec: OldConfigSpec = matchOldConfigSpec.unsafeCast(value.spec)
     const spec = transformConfigSpec(valueSpec)
     if (!value.config) return { spec, config: null }
     const config = transformOldConfigToNew(valueSpec, value.config) ?? null
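Worth noting for reviewers: the `matchPackageProperties` change in this file uses ts-matches' `deferred<T>()`, which returns a placeholder parser plus a setter so a recursive schema can reference itself before its definition is complete, replacing zod's `z.lazy`. A self-contained sketch of the idea (simplified: the real `deferred` returns a full `Parser`, this one returns a bare predicate):

```typescript
// Sketch of the deferred-parser pattern: hand back a placeholder that
// delegates to an implementation filled in later, breaking the cycle
// that a directly-recursive definition would create.
type Validate<T> = (value: unknown) => value is T

function deferred<T>(): [Validate<T>, (impl: Validate<T>) => void] {
  let impl: Validate<T> | null = null
  const parser: Validate<T> = (value): value is T => {
    if (!impl) throw new Error("deferred parser used before being set")
    return impl(value)
  }
  return [parser, (i) => void (impl = i)]
}

// Recursive shape: a tree of string leaves and nested objects, like the
// PackagePropertiesV2 structure above.
type Props = { [key: string]: string | Props }
const [isProps, setIsProps] = deferred<Props>()
setIsProps((value: unknown): value is Props => {
  if (typeof value !== "object" || value === null) return false
  return Object.values(value).every(
    (v) => typeof v === "string" || isProps(v),
  )
})

console.log(isProps({ a: "x", nested: { b: "y" } })) // true
console.log(isProps({ a: 1 })) // false
```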


@@ -4,9 +4,9 @@ import synapseManifest from "./__fixtures__/synapseManifest"
 describe("matchManifest", () => {
   test("gittea", () => {
-    matchManifest.parse(giteaManifest)
+    matchManifest.unsafeCast(giteaManifest)
   })
   test("synapse", () => {
-    matchManifest.parse(synapseManifest)
+    matchManifest.unsafeCast(synapseManifest)
   })
 })


@@ -1,123 +1,126 @@
-import { z } from "@start9labs/start-sdk"
+import {
+  object,
+  literal,
+  string,
+  array,
+  boolean,
+  dictionary,
+  literals,
+  number,
+  unknown,
+  some,
+  every,
+} from "ts-matches"
 import { matchVolume } from "./matchVolume"
 import { matchDockerProcedure } from "../../../Models/DockerProcedure"
-const matchJsProcedure = z.object({
-  type: z.literal("script"),
-  args: z.array(z.unknown()).nullable().optional().default([]),
+const matchJsProcedure = object({
+  type: literal("script"),
+  args: array(unknown).nullable().optional().defaultTo([]),
 })
-const matchProcedure = z.union([matchDockerProcedure, matchJsProcedure])
+const matchProcedure = some(matchDockerProcedure, matchJsProcedure)
-export type Procedure = z.infer<typeof matchProcedure>
+export type Procedure = typeof matchProcedure._TYPE
-const healthCheckFields = {
-  name: z.string(),
-  "success-message": z.string().nullable().optional(),
-}
-const matchAction = z.object({
-  name: z.string(),
-  description: z.string(),
-  warning: z.string().nullable().optional(),
+const matchAction = object({
+  name: string,
+  description: string,
+  warning: string.nullable().optional(),
   implementation: matchProcedure,
-  "allowed-statuses": z.array(z.enum(["running", "stopped"])),
-  "input-spec": z.unknown().nullable().optional(),
+  "allowed-statuses": array(literals("running", "stopped")),
+  "input-spec": unknown.nullable().optional(),
 })
-export const matchManifest = z.object({
-  id: z.string(),
-  title: z.string(),
-  version: z.string(),
+export const matchManifest = object({
+  id: string,
+  title: string,
+  version: string,
   main: matchDockerProcedure,
-  assets: z
-    .object({
-      assets: z.string().nullable().optional(),
-      scripts: z.string().nullable().optional(),
-    })
+  assets: object({
+    assets: string.nullable().optional(),
+    scripts: string.nullable().optional(),
+  })
     .nullable()
     .optional(),
-  "health-checks": z.record(
-    z.string(),
-    z.union([
-      matchDockerProcedure.extend(healthCheckFields),
-      matchJsProcedure.extend(healthCheckFields),
-    ]),
-  ),
-  config: z
-    .object({
-      get: matchProcedure,
-      set: matchProcedure,
-    })
+  "health-checks": dictionary([
+    string,
+    every(
+      matchProcedure,
+      object({
+        name: string,
+        ["success-message"]: string.nullable().optional(),
+      }),
+    ),
+  ]),
+  config: object({
+    get: matchProcedure,
+    set: matchProcedure,
+  })
     .nullable()
     .optional(),
   properties: matchProcedure.nullable().optional(),
-  volumes: z.record(z.string(), matchVolume),
-  interfaces: z.record(
-    z.string(),
-    z.object({
-      name: z.string(),
-      description: z.string(),
-      "tor-config": z
-        .object({
-          "port-mapping": z.record(z.string(), z.string()),
-        })
+  volumes: dictionary([string, matchVolume]),
+  interfaces: dictionary([
+    string,
+    object({
+      name: string,
+      description: string,
+      "tor-config": object({
+        "port-mapping": dictionary([string, string]),
+      })
         .nullable()
         .optional(),
-      "lan-config": z
-        .record(
-          z.string(),
-          z.object({
-            ssl: z.boolean(),
-            internal: z.number(),
-          }),
-        )
+      "lan-config": dictionary([
+        string,
+        object({
+          ssl: boolean,
+          internal: number,
+        }),
+      ])
         .nullable()
         .optional(),
-      ui: z.boolean(),
-      protocols: z.array(z.string()),
+      ui: boolean,
+      protocols: array(string),
     }),
-  ),
-  backup: z.object({
+  ]),
+  backup: object({
     create: matchProcedure,
     restore: matchProcedure,
   }),
-  migrations: z
-    .object({
-      to: z.record(z.string(), matchProcedure),
-      from: z.record(z.string(), matchProcedure),
-    })
+  migrations: object({
+    to: dictionary([string, matchProcedure]),
+    from: dictionary([string, matchProcedure]),
+  })
     .nullable()
     .optional(),
-  dependencies: z.record(
-    z.string(),
-    z
-      .object({
-        version: z.string(),
-        requirement: z.union([
-          z.object({
-            type: z.literal("opt-in"),
-            how: z.string(),
-          }),
-          z.object({
-            type: z.literal("opt-out"),
-            how: z.string(),
-          }),
-          z.object({
-            type: z.literal("required"),
-          }),
-        ]),
-        description: z.string().nullable().optional(),
-        config: z
-          .object({
-            check: matchProcedure,
-            "auto-configure": matchProcedure,
-          })
-          .nullable()
-          .optional(),
-      })
+  dependencies: dictionary([
+    string,
+    object({
+      version: string,
+      requirement: some(
+        object({
+          type: literal("opt-in"),
+          how: string,
+        }),
+        object({
+          type: literal("opt-out"),
+          how: string,
+        }),
+        object({
+          type: literal("required"),
+        }),
+      ),
+      description: string.nullable().optional(),
+      config: object({
+        check: matchProcedure,
+        "auto-configure": matchProcedure,
+      })
+        .nullable()
+        .optional(),
+    })
       .nullable()
       .optional(),
-  ),
-  actions: z.record(z.string(), matchAction),
+  ]),
+  actions: dictionary([string, matchAction]),
 })
-export type Manifest = z.infer<typeof matchManifest>
+export type Manifest = typeof matchManifest._TYPE


@@ -1,32 +1,32 @@
-import { z } from "@start9labs/start-sdk"
+import { object, literal, string, boolean, some } from "ts-matches"
-const matchDataVolume = z.object({
-  type: z.literal("data"),
-  readonly: z.boolean().optional(),
+const matchDataVolume = object({
+  type: literal("data"),
+  readonly: boolean.optional(),
 })
-const matchAssetVolume = z.object({
-  type: z.literal("assets"),
+const matchAssetVolume = object({
+  type: literal("assets"),
 })
-const matchPointerVolume = z.object({
-  type: z.literal("pointer"),
-  "package-id": z.string(),
-  "volume-id": z.string(),
-  path: z.string(),
-  readonly: z.boolean(),
+const matchPointerVolume = object({
+  type: literal("pointer"),
+  "package-id": string,
+  "volume-id": string,
+  path: string,
+  readonly: boolean,
 })
-const matchCertificateVolume = z.object({
-  type: z.literal("certificate"),
-  "interface-id": z.string(),
+const matchCertificateVolume = object({
+  type: literal("certificate"),
+  "interface-id": string,
 })
-const matchBackupVolume = z.object({
-  type: z.literal("backup"),
-  readonly: z.boolean(),
+const matchBackupVolume = object({
+  type: literal("backup"),
+  readonly: boolean,
 })
-export const matchVolume = z.union([
+export const matchVolume = some(
   matchDataVolume,
   matchAssetVolume,
   matchPointerVolume,
   matchCertificateVolume,
   matchBackupVolume,
-])
+)
-export type Volume = z.infer<typeof matchVolume>
+export type Volume = typeof matchVolume._TYPE
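The `matchVolume` rewrite above turns a `z.union([...])` into a `some(...)` union over object shapes discriminated by `type`. A simplified standalone sketch of how a `some`-style combinator accepts a value when any member parser does (illustrative only; ts-matches' real `some` also reports which branches failed and why):

```typescript
// Sketch of a some()-style union combinator over type-guard parsers.
type Validate<T> = (value: unknown) => value is T

function some<A, B>(a: Validate<A>, b: Validate<B>): Validate<A | B> {
  return (value): value is A | B => a(value) || b(value)
}

// Two simplified volume shapes, discriminated by the `type` field.
type DataVolume = { type: "data"; readonly?: boolean }
type BackupVolume = { type: "backup"; readonly: boolean }

const isRecord = (v: unknown): v is Record<string, unknown> =>
  typeof v === "object" && v !== null

const isDataVolume = (v: unknown): v is DataVolume =>
  isRecord(v) && v.type === "data"
const isBackupVolume = (v: unknown): v is BackupVolume =>
  isRecord(v) && v.type === "backup" && typeof v.readonly === "boolean"

const isVolume = some(isDataVolume, isBackupVolume)

console.log(isVolume({ type: "data" })) // true
console.log(isVolume({ type: "backup" })) // false (missing readonly)
```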


@@ -12,43 +12,43 @@ import nostrConfig2 from "./__fixtures__/nostrConfig2"
describe("transformConfigSpec", () => { describe("transformConfigSpec", () => {
test("matchOldConfigSpec(embassyPages.homepage.variants[web-page])", () => { test("matchOldConfigSpec(embassyPages.homepage.variants[web-page])", () => {
matchOldConfigSpec.parse( matchOldConfigSpec.unsafeCast(
 fixtureEmbassyPagesConfig.homepage.variants["web-page"],
     )
   })
   test("matchOldConfigSpec(embassyPages)", () => {
-    matchOldConfigSpec.parse(fixtureEmbassyPagesConfig)
+    matchOldConfigSpec.unsafeCast(fixtureEmbassyPagesConfig)
   })
   test("transformConfigSpec(embassyPages)", () => {
-    const spec = matchOldConfigSpec.parse(fixtureEmbassyPagesConfig)
+    const spec = matchOldConfigSpec.unsafeCast(fixtureEmbassyPagesConfig)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })
   test("matchOldConfigSpec(RTL.nodes)", () => {
-    matchOldValueSpecList.parse(fixtureRTLConfig.nodes)
+    matchOldValueSpecList.unsafeCast(fixtureRTLConfig.nodes)
   })
   test("matchOldConfigSpec(RTL)", () => {
-    matchOldConfigSpec.parse(fixtureRTLConfig)
+    matchOldConfigSpec.unsafeCast(fixtureRTLConfig)
   })
   test("transformConfigSpec(RTL)", () => {
-    const spec = matchOldConfigSpec.parse(fixtureRTLConfig)
+    const spec = matchOldConfigSpec.unsafeCast(fixtureRTLConfig)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })
   test("transformConfigSpec(searNXG)", () => {
-    const spec = matchOldConfigSpec.parse(searNXG)
+    const spec = matchOldConfigSpec.unsafeCast(searNXG)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })
   test("transformConfigSpec(bitcoind)", () => {
-    const spec = matchOldConfigSpec.parse(bitcoind)
+    const spec = matchOldConfigSpec.unsafeCast(bitcoind)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })
   test("transformConfigSpec(nostr)", () => {
-    const spec = matchOldConfigSpec.parse(nostr)
+    const spec = matchOldConfigSpec.unsafeCast(nostr)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })
   test("transformConfigSpec(nostr2)", () => {
-    const spec = matchOldConfigSpec.parse(nostrConfig2)
+    const spec = matchOldConfigSpec.unsafeCast(nostrConfig2)
     expect(transformConfigSpec(spec)).toMatchSnapshot()
   })
 })
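A note on the mechanical rename running through these tests: zod's `.parse` and ts-matches' `.unsafeCast` share the same contract — validate the value, then return it typed or throw on mismatch. A self-contained sketch of that contract, using a hypothetical `MiniParser` rather than either library's real implementation:

```typescript
// Hypothetical stand-in for the shared contract between zod's `.parse`
// and ts-matches' `.unsafeCast`: validate, then return or throw.
type Validator<T> = (value: unknown) => value is T

class MiniParser<T> {
  constructor(private readonly check: Validator<T>) {}

  // Like zod's `.parse` / ts-matches' `.unsafeCast`:
  // returns the value typed as T, or throws on mismatch.
  unsafeCast(value: unknown): T {
    if (this.check(value)) return value
    throw new Error(`value did not match expected shape: ${JSON.stringify(value)}`)
  }

  // Like ts-matches' `.test` (or checking zod's `.safeParse().success`).
  test(value: unknown): boolean {
    return this.check(value)
  }
}

const isRecord = (v: unknown): v is Record<string, unknown> =>
  typeof v === "object" && v !== null && !Array.isArray(v)

const miniConfigSpec = new MiniParser(isRecord)

console.log(miniConfigSpec.test({ type: "string" })) // true
console.log(miniConfigSpec.test("not a spec")) // false
```

Since both methods throw on bad input, the snapshot tests above are unaffected by the rename: a fixture that parsed before still parses, and a malformed one still fails the test.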

View File

@@ -1,4 +1,19 @@
-import { IST, z } from "@start9labs/start-sdk"
+import { IST } from "@start9labs/start-sdk"
+import {
+  dictionary,
+  object,
+  anyOf,
+  string,
+  literals,
+  array,
+  number,
+  boolean,
+  Parser,
+  deferred,
+  every,
+  nill,
+  literal,
+} from "ts-matches"
 export function transformConfigSpec(oldSpec: OldConfigSpec): IST.InputSpec {
   return Object.entries(oldSpec).reduce((inputSpec, [key, oldVal]) => {
@@ -67,7 +82,7 @@ export function transformConfigSpec(oldSpec: OldConfigSpec): IST.InputSpec {
       name: oldVal.name,
       description: oldVal.description || null,
       warning: oldVal.warning || null,
-      spec: transformConfigSpec(matchOldConfigSpec.parse(oldVal.spec)),
+      spec: transformConfigSpec(matchOldConfigSpec.unsafeCast(oldVal.spec)),
     }
   } else if (oldVal.type === "string") {
     newVal = {
@@ -106,7 +121,7 @@ export function transformConfigSpec(oldSpec: OldConfigSpec): IST.InputSpec {
         ...obj,
         [id]: {
           name: oldVal.tag["variant-names"][id] || id,
-          spec: transformConfigSpec(matchOldConfigSpec.parse(spec)),
+          spec: transformConfigSpec(matchOldConfigSpec.unsafeCast(spec)),
         },
       }),
       {} as Record<string, { name: string; spec: IST.InputSpec }>,
@@ -138,7 +153,7 @@ export function transformOldConfigToNew(
   if (isObject(val)) {
     newVal = transformOldConfigToNew(
-      matchOldConfigSpec.parse(val.spec),
+      matchOldConfigSpec.unsafeCast(val.spec),
       config[key],
     )
   }
@@ -157,7 +172,7 @@ export function transformOldConfigToNew(
     newVal = {
       selection,
       value: transformOldConfigToNew(
-        matchOldConfigSpec.parse(val.variants[selection]),
+        matchOldConfigSpec.unsafeCast(val.variants[selection]),
         config[key],
       ),
     }
@@ -168,7 +183,10 @@ export function transformOldConfigToNew(
   if (isObjectList(val)) {
     newVal = (config[key] as object[]).map((obj) =>
-      transformOldConfigToNew(matchOldConfigSpec.parse(val.spec.spec), obj),
+      transformOldConfigToNew(
+        matchOldConfigSpec.unsafeCast(val.spec.spec),
+        obj,
+      ),
     )
   } else if (isUnionList(val)) return obj
 }
@@ -194,7 +212,7 @@ export function transformNewConfigToOld(
   if (isObject(val)) {
     newVal = transformNewConfigToOld(
-      matchOldConfigSpec.parse(val.spec),
+      matchOldConfigSpec.unsafeCast(val.spec),
       config[key],
     )
   }
@@ -203,7 +221,7 @@ export function transformNewConfigToOld(
     newVal = {
       [val.tag.id]: config[key].selection,
       ...transformNewConfigToOld(
-        matchOldConfigSpec.parse(val.variants[config[key].selection]),
+        matchOldConfigSpec.unsafeCast(val.variants[config[key].selection]),
        config[key].value,
      ),
    }
@@ -212,7 +230,10 @@ export function transformNewConfigToOld(
   if (isList(val)) {
     if (isObjectList(val)) {
       newVal = (config[key] as object[]).map((obj) =>
-        transformNewConfigToOld(matchOldConfigSpec.parse(val.spec.spec), obj),
+        transformNewConfigToOld(
+          matchOldConfigSpec.unsafeCast(val.spec.spec),
+          obj,
+        ),
       )
     } else if (isUnionList(val)) return obj
   }
@@ -316,7 +337,9 @@ function getListSpec(
     default: oldVal.default as Record<string, unknown>[],
     spec: {
       type: "object",
-      spec: transformConfigSpec(matchOldConfigSpec.parse(oldVal.spec.spec)),
+      spec: transformConfigSpec(
+        matchOldConfigSpec.unsafeCast(oldVal.spec.spec),
+      ),
       uniqueBy: oldVal.spec["unique-by"] || null,
       displayAs: oldVal.spec["display-as"] || null,
     },
@@ -370,281 +393,211 @@ function isUnionList(
 }
 export type OldConfigSpec = Record<string, OldValueSpec>
-export const matchOldConfigSpec: z.ZodType<OldConfigSpec> = z.lazy(() =>
-  z.record(z.string(), matchOldValueSpec),
-)
-export const matchOldDefaultString = z.union([
-  z.string(),
-  z.object({ charset: z.string(), len: z.number() }),
-])
-type OldDefaultString = z.infer<typeof matchOldDefaultString>
+const [_matchOldConfigSpec, setMatchOldConfigSpec] = deferred<unknown>()
+export const matchOldConfigSpec = _matchOldConfigSpec as Parser<
+  unknown,
+  OldConfigSpec
+>
+export const matchOldDefaultString = anyOf(
+  string,
+  object({ charset: string, len: number }),
+)
+type OldDefaultString = typeof matchOldDefaultString._TYPE
-export const matchOldValueSpecString = z.object({
-  type: z.enum(["string"]),
-  name: z.string(),
-  masked: z.boolean().nullable().optional(),
-  copyable: z.boolean().nullable().optional(),
-  nullable: z.boolean().nullable().optional(),
-  placeholder: z.string().nullable().optional(),
-  pattern: z.string().nullable().optional(),
-  "pattern-description": z.string().nullable().optional(),
+export const matchOldValueSpecString = object({
+  type: literals("string"),
+  name: string,
+  masked: boolean.nullable().optional(),
+  copyable: boolean.nullable().optional(),
+  nullable: boolean.nullable().optional(),
+  placeholder: string.nullable().optional(),
+  pattern: string.nullable().optional(),
+  "pattern-description": string.nullable().optional(),
   default: matchOldDefaultString.nullable().optional(),
-  textarea: z.boolean().nullable().optional(),
-  description: z.string().nullable().optional(),
-  warning: z.string().nullable().optional(),
+  textarea: boolean.nullable().optional(),
+  description: string.nullable().optional(),
+  warning: string.nullable().optional(),
 })
-export const matchOldValueSpecNumber = z.object({
-  type: z.enum(["number"]),
-  nullable: z.boolean(),
-  name: z.string(),
-  range: z.string(),
-  integral: z.boolean(),
-  default: z.number().nullable().optional(),
-  description: z.string().nullable().optional(),
-  warning: z.string().nullable().optional(),
-  units: z.string().nullable().optional(),
-  placeholder: z.union([z.number(), z.string()]).nullable().optional(),
+export const matchOldValueSpecNumber = object({
+  type: literals("number"),
+  nullable: boolean,
+  name: string,
+  range: string,
+  integral: boolean,
+  default: number.nullable().optional(),
+  description: string.nullable().optional(),
+  warning: string.nullable().optional(),
+  units: string.nullable().optional(),
+  placeholder: anyOf(number, string).nullable().optional(),
 })
-type OldValueSpecNumber = z.infer<typeof matchOldValueSpecNumber>
+type OldValueSpecNumber = typeof matchOldValueSpecNumber._TYPE
-export const matchOldValueSpecBoolean = z.object({
-  type: z.enum(["boolean"]),
-  default: z.boolean(),
-  name: z.string(),
-  description: z.string().nullable().optional(),
-  warning: z.string().nullable().optional(),
+export const matchOldValueSpecBoolean = object({
+  type: literals("boolean"),
+  default: boolean,
+  name: string,
+  description: string.nullable().optional(),
+  warning: string.nullable().optional(),
 })
-type OldValueSpecBoolean = z.infer<typeof matchOldValueSpecBoolean>
+type OldValueSpecBoolean = typeof matchOldValueSpecBoolean._TYPE
-type OldValueSpecObject = {
-  type: "object"
-  spec: OldConfigSpec
-  name: string
-  description?: string | null
-  warning?: string | null
-}
-const matchOldValueSpecObject: z.ZodType<OldValueSpecObject> = z.object({
-  type: z.enum(["object"]),
-  spec: z.lazy(() => matchOldConfigSpec),
-  name: z.string(),
-  description: z.string().nullable().optional(),
-  warning: z.string().nullable().optional(),
-})
+const matchOldValueSpecObject = object({
+  type: literals("object"),
+  spec: _matchOldConfigSpec,
+  name: string,
+  description: string.nullable().optional(),
+  warning: string.nullable().optional(),
+})
+type OldValueSpecObject = typeof matchOldValueSpecObject._TYPE
-const matchOldValueSpecEnum = z.object({
-  values: z.array(z.string()),
-  "value-names": z.record(z.string(), z.string()),
-  type: z.enum(["enum"]),
-  default: z.string(),
-  name: z.string(),
-  description: z.string().nullable().optional(),
-  warning: z.string().nullable().optional(),
+const matchOldValueSpecEnum = object({
+  values: array(string),
+  "value-names": dictionary([string, string]),
+  type: literals("enum"),
+  default: string,
+  name: string,
+  description: string.nullable().optional(),
+  warning: string.nullable().optional(),
 })
-type OldValueSpecEnum = z.infer<typeof matchOldValueSpecEnum>
+type OldValueSpecEnum = typeof matchOldValueSpecEnum._TYPE
-const matchOldUnionTagSpec = z.object({
-  id: z.string(), // The name of the field containing one of the union variants
-  "variant-names": z.record(z.string(), z.string()), // The name of each variant
-  name: z.string(),
-  description: z.string().nullable().optional(),
-  warning: z.string().nullable().optional(),
+const matchOldUnionTagSpec = object({
+  id: string, // The name of the field containing one of the union variants
+  "variant-names": dictionary([string, string]), // The name of each variant
+  name: string,
+  description: string.nullable().optional(),
+  warning: string.nullable().optional(),
 })
-type OldValueSpecUnion = {
-  type: "union"
-  tag: z.infer<typeof matchOldUnionTagSpec>
-  variants: Record<string, OldConfigSpec>
-  default: string
-}
-const matchOldValueSpecUnion: z.ZodType<OldValueSpecUnion> = z.object({
-  type: z.enum(["union"]),
-  tag: matchOldUnionTagSpec,
-  variants: z.record(
-    z.string(),
-    z.lazy(() => matchOldConfigSpec),
-  ),
-  default: z.string(),
-})
+const matchOldValueSpecUnion = object({
+  type: literals("union"),
+  tag: matchOldUnionTagSpec,
+  variants: dictionary([string, _matchOldConfigSpec]),
+  default: string,
+})
+type OldValueSpecUnion = typeof matchOldValueSpecUnion._TYPE
+const [matchOldUniqueBy, setOldUniqueBy] = deferred<OldUniqueBy>()
 type OldUniqueBy =
   | null
   | string
   | { any: OldUniqueBy[] }
   | { all: OldUniqueBy[] }
-const matchOldUniqueBy: z.ZodType<OldUniqueBy> = z.lazy(() =>
-  z.union([
-    z.null(),
-    z.string(),
-    z.object({ any: z.array(matchOldUniqueBy) }),
-    z.object({ all: z.array(matchOldUniqueBy) }),
-  ]),
-)
-type OldListValueSpecObject = {
-  spec: OldConfigSpec
-  "unique-by"?: OldUniqueBy | null
-  "display-as"?: string | null
-}
-const matchOldListValueSpecObject: z.ZodType<OldListValueSpecObject> = z.object(
-  {
-    spec: z.lazy(() => matchOldConfigSpec), // this is a mapped type of the config object at this level, replacing the object's values with specs on those values
-    "unique-by": matchOldUniqueBy.nullable().optional(), // indicates whether duplicates can be permitted in the list
-    "display-as": z.string().nullable().optional(), // this should be a handlebars template which can make use of the entire config which corresponds to 'spec'
-  },
-)
-type OldListValueSpecUnion = {
-  "unique-by"?: OldUniqueBy | null
-  "display-as"?: string | null
-  tag: z.infer<typeof matchOldUnionTagSpec>
-  variants: Record<string, OldConfigSpec>
-}
-const matchOldListValueSpecUnion: z.ZodType<OldListValueSpecUnion> = z.object({
-  "unique-by": matchOldUniqueBy.nullable().optional(),
-  "display-as": z.string().nullable().optional(),
-  tag: matchOldUnionTagSpec,
-  variants: z.record(
-    z.string(),
-    z.lazy(() => matchOldConfigSpec),
-  ),
-})
-const matchOldListValueSpecString = z.object({
-  masked: z.boolean().nullable().optional(),
-  copyable: z.boolean().nullable().optional(),
-  pattern: z.string().nullable().optional(),
-  "pattern-description": z.string().nullable().optional(),
-  placeholder: z.string().nullable().optional(),
-})
-const matchOldListValueSpecEnum = z.object({
-  values: z.array(z.string()),
-  "value-names": z.record(z.string(), z.string()),
-})
-const matchOldListValueSpecNumber = z.object({
-  range: z.string(),
-  integral: z.boolean(),
-  units: z.string().nullable().optional(),
-  placeholder: z.union([z.number(), z.string()]).nullable().optional(),
-})
+setOldUniqueBy(
+  anyOf(
+    nill,
+    string,
+    object({ any: array(matchOldUniqueBy) }),
+    object({ all: array(matchOldUniqueBy) }),
+  ),
+)
+const matchOldListValueSpecObject = object({
+  spec: _matchOldConfigSpec, // this is a mapped type of the config object at this level, replacing the object's values with specs on those values
+  "unique-by": matchOldUniqueBy.nullable().optional(), // indicates whether duplicates can be permitted in the list
+  "display-as": string.nullable().optional(), // this should be a handlebars template which can make use of the entire config which corresponds to 'spec'
+})
+const matchOldListValueSpecUnion = object({
+  "unique-by": matchOldUniqueBy.nullable().optional(),
+  "display-as": string.nullable().optional(),
+  tag: matchOldUnionTagSpec,
+  variants: dictionary([string, _matchOldConfigSpec]),
+})
+const matchOldListValueSpecString = object({
+  masked: boolean.nullable().optional(),
+  copyable: boolean.nullable().optional(),
+  pattern: string.nullable().optional(),
+  "pattern-description": string.nullable().optional(),
+  placeholder: string.nullable().optional(),
+})
+const matchOldListValueSpecEnum = object({
+  values: array(string),
+  "value-names": dictionary([string, string]),
+})
+const matchOldListValueSpecNumber = object({
+  range: string,
+  integral: boolean,
+  units: string.nullable().optional(),
+  placeholder: anyOf(number, string).nullable().optional(),
+})
-type OldValueSpecListBase = {
-  type: "list"
-  range: string
-  default: string[] | number[] | OldDefaultString[] | Record<string, unknown>[]
-  name: string
-  description?: string | null
-  warning?: string | null
-}
-type OldValueSpecList = OldValueSpecListBase &
-  (
-    | { subtype: "string"; spec: z.infer<typeof matchOldListValueSpecString> }
-    | { subtype: "enum"; spec: z.infer<typeof matchOldListValueSpecEnum> }
-    | { subtype: "object"; spec: OldListValueSpecObject }
-    | { subtype: "number"; spec: z.infer<typeof matchOldListValueSpecNumber> }
-    | { subtype: "union"; spec: OldListValueSpecUnion }
-  )
 // represents a spec for a list
-export const matchOldValueSpecList: z.ZodType<OldValueSpecList> =
-  z.intersection(
-    z.object({
-      type: z.enum(["list"]),
-      range: z.string(), // '[0,1]' (inclusive) OR '[0,*)' (right unbounded), normal math rules
-      default: z.union([
-        z.array(z.string()),
-        z.array(z.number()),
-        z.array(matchOldDefaultString),
-        z.array(z.object({}).passthrough()),
-      ]),
-      name: z.string(),
-      description: z.string().nullable().optional(),
-      warning: z.string().nullable().optional(),
-    }),
-    z.union([
-      z.object({
-        subtype: z.enum(["string"]),
-        spec: matchOldListValueSpecString,
-      }),
-      z.object({
-        subtype: z.enum(["enum"]),
-        spec: matchOldListValueSpecEnum,
-      }),
-      z.object({
-        subtype: z.enum(["object"]),
-        spec: matchOldListValueSpecObject,
-      }),
-      z.object({
-        subtype: z.enum(["number"]),
-        spec: matchOldListValueSpecNumber,
-      }),
-      z.object({
-        subtype: z.enum(["union"]),
-        spec: matchOldListValueSpecUnion,
-      }),
-    ]),
-  ) as unknown as z.ZodType<OldValueSpecList>
+export const matchOldValueSpecList = every(
+  object({
+    type: literals("list"),
+    range: string, // '[0,1]' (inclusive) OR '[0,*)' (right unbounded), normal math rules
+    default: anyOf(
+      array(string),
+      array(number),
+      array(matchOldDefaultString),
+      array(object),
+    ),
+    name: string,
+    description: string.nullable().optional(),
+    warning: string.nullable().optional(),
+  }),
+  anyOf(
+    object({
+      subtype: literals("string"),
+      spec: matchOldListValueSpecString,
+    }),
+    object({
+      subtype: literals("enum"),
+      spec: matchOldListValueSpecEnum,
+    }),
+    object({
+      subtype: literals("object"),
+      spec: matchOldListValueSpecObject,
+    }),
+    object({
+      subtype: literals("number"),
+      spec: matchOldListValueSpecNumber,
+    }),
+    object({
+      subtype: literals("union"),
+      spec: matchOldListValueSpecUnion,
+    }),
+  ),
+)
+type OldValueSpecList = typeof matchOldValueSpecList._TYPE
-type OldValueSpecPointer = {
-  type: "pointer"
-} & (
-  | {
-      subtype: "package"
-      target: "tor-key" | "tor-address" | "lan-address"
-      "package-id": string
-      interface: string
-    }
-  | {
-      subtype: "package"
-      target: "config"
-      "package-id": string
-      selector: string
-      multi: boolean
-    }
-)
-const matchOldValueSpecPointer: z.ZodType<OldValueSpecPointer> = z.intersection(
-  z.object({
-    type: z.literal("pointer"),
-  }),
-  z.union([
-    z.object({
-      subtype: z.literal("package"),
-      target: z.enum(["tor-key", "tor-address", "lan-address"]),
-      "package-id": z.string(),
-      interface: z.string(),
-    }),
-    z.object({
-      subtype: z.literal("package"),
-      target: z.enum(["config"]),
-      "package-id": z.string(),
-      selector: z.string(),
-      multi: z.boolean(),
-    }),
-  ]),
-) as unknown as z.ZodType<OldValueSpecPointer>
-type OldValueSpecString = z.infer<typeof matchOldValueSpecString>
+const matchOldValueSpecPointer = every(
+  object({
+    type: literal("pointer"),
+  }),
+  anyOf(
+    object({
+      subtype: literal("package"),
+      target: literals("tor-key", "tor-address", "lan-address"),
+      "package-id": string,
+      interface: string,
+    }),
+    object({
+      subtype: literal("package"),
+      target: literals("config"),
+      "package-id": string,
+      selector: string,
+      multi: boolean,
+    }),
+  ),
+)
+type OldValueSpecPointer = typeof matchOldValueSpecPointer._TYPE
-type OldValueSpec =
-  | OldValueSpecString
-  | OldValueSpecNumber
-  | OldValueSpecBoolean
-  | OldValueSpecObject
-  | OldValueSpecEnum
-  | OldValueSpecList
-  | OldValueSpecUnion
-  | OldValueSpecPointer
-export const matchOldValueSpec: z.ZodType<OldValueSpec> = z.union([
+export const matchOldValueSpec = anyOf(
   matchOldValueSpecString,
   matchOldValueSpecNumber,
   matchOldValueSpecBoolean,
-  matchOldValueSpecObject as z.ZodType<OldValueSpecObject>,
+  matchOldValueSpecObject,
   matchOldValueSpecEnum,
-  matchOldValueSpecList as z.ZodType<OldValueSpecList>,
-  matchOldValueSpecUnion as z.ZodType<OldValueSpecUnion>,
-  matchOldValueSpecPointer as z.ZodType<OldValueSpecPointer>,
-])
+  matchOldValueSpecList,
+  matchOldValueSpecUnion,
+  matchOldValueSpecPointer,
+)
+type OldValueSpec = typeof matchOldValueSpec._TYPE
+setMatchOldConfigSpec(dictionary([string, matchOldValueSpec]))
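The `deferred()` / `setMatchOldConfigSpec(...)` pair above replaces zod's `z.lazy` as the way to define the mutually recursive parsers (a config spec contains value specs, which can themselves contain config specs). A self-contained sketch of that forward-declaration pattern; the names and the toy validators here are hypothetical, not the ts-matches internals:

```typescript
// Forward-declare a recursive validator, then wire in the real
// implementation once its dependencies exist -- the same shape as
// `deferred()` + `setMatchOldConfigSpec(...)` above.
type Check = (value: unknown) => boolean

function deferredCheck(): [Check, (impl: Check) => void] {
  let impl: Check = () => {
    throw new Error("deferred check used before it was set")
  }
  // The returned function delegates to `impl`, so it can be handed out
  // (and referenced by other validators) before `impl` is assigned.
  const check: Check = (value) => impl(value)
  return [check, (real) => (impl = real)]
}

const [isSpec, setIsSpec] = deferredCheck()

// A value spec is either a leaf ({ type: "string" }) or an object spec
// whose `spec` field is recursively a config spec.
const isValueSpec: Check = (v) => {
  if (typeof v !== "object" || v === null) return false
  const o = v as Record<string, unknown>
  if (o.type === "string") return true
  return o.type === "object" && isSpec(o.spec)
}

// Close the loop: a config spec is a record of value specs.
setIsSpec((v) => {
  if (typeof v !== "object" || v === null || Array.isArray(v)) return false
  return Object.values(v as Record<string, unknown>).every(isValueSpec)
})

console.log(isSpec({ name: { type: "string" } })) // true
console.log(isSpec({ bad: { type: "object", spec: 42 } })) // false
```

The design tradeoff versus `z.lazy` is that the recursion is resolved eagerly at module load (the `set…` call at the bottom of the file) instead of lazily on first parse, which is why the `Parser<unknown, OldConfigSpec>` cast on `matchOldConfigSpec` is needed.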
 export class Range {
   min?: number

View File

@@ -47,12 +47,11 @@ export class SystemForStartOs implements System {
   getActionInput(
     effects: Effects,
     id: string,
-    prefill: Record<string, unknown> | null,
     timeoutMs: number | null,
   ): Promise<T.ActionInput | null> {
     const action = this.abi.actions.get(id)
     if (!action) throw new Error(`Action ${id} not found`)
-    return action.getInput({ effects, prefill })
+    return action.getInput({ effects })
   }
   runAction(
     effects: Effects,
@@ -71,7 +70,7 @@ export class SystemForStartOs implements System {
     this.starting = true
     effects.constRetry = utils.once(() => {
       console.debug(".const() triggered")
-      if (effects.isInContext) effects.restart()
+      effects.restart()
     })
     let mainOnTerm: () => Promise<void> | undefined
     const daemons = await (

View File

@@ -33,7 +33,6 @@ export type System = {
   getActionInput(
     effects: Effects,
     actionId: string,
-    prefill: Record<string, unknown> | null,
     timeoutMs: number | null,
   ): Promise<T.ActionInput | null>

View File

@@ -1,19 +1,41 @@
-import { z } from "@start9labs/start-sdk"
+import {
+  object,
+  literal,
+  string,
+  boolean,
+  array,
+  dictionary,
+  literals,
+  number,
+  Parser,
+  some,
+} from "ts-matches"
 import { matchDuration } from "./Duration"
+const VolumeId = string
+const Path = string
+
+export type VolumeId = string
+export type Path = string
-export const matchDockerProcedure = z.object({
-  type: z.literal("docker"),
-  image: z.string(),
-  system: z.boolean().optional(),
-  entrypoint: z.string(),
-  args: z.array(z.string()).default([]),
-  mounts: z.record(z.string(), z.string()).optional(),
-  "io-format": z
-    .enum(["json", "json-pretty", "yaml", "cbor", "toml", "toml-pretty"])
+export const matchDockerProcedure = object({
+  type: literal("docker"),
+  image: string,
+  system: boolean.optional(),
+  entrypoint: string,
+  args: array(string).defaultTo([]),
+  mounts: dictionary([VolumeId, Path]).optional(),
+  "io-format": literals(
+    "json",
+    "json-pretty",
+    "yaml",
+    "cbor",
+    "toml",
+    "toml-pretty",
+  )
     .nullable()
     .optional(),
-  "sigterm-timeout": z.union([z.number(), matchDuration]).catch(30),
-  inject: z.boolean().default(false),
+  "sigterm-timeout": some(number, matchDuration).onMismatch(30),
+  inject: boolean.defaultTo(false),
 })
-export type DockerProcedure = z.infer<typeof matchDockerProcedure>
+export type DockerProcedure = typeof matchDockerProcedure._TYPE
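Two easy-to-miss semantic details in the hunk above: `.defaultTo([])` supplies a value when the field is absent, while `.onMismatch(30)` (the ts-matches counterpart of zod's `.catch(30)`) substitutes a fallback when a value is present but fails to parse. A self-contained sketch of the distinction, with hypothetical helper names:

```typescript
// Hypothetical helpers mirroring the semantics of ts-matches'
// `.defaultTo` (field absent -> default) and `.onMismatch`
// (field present but invalid -> fallback, like zod's `.catch`).
function withDefault<T>(value: T | undefined, fallback: T): T {
  return value === undefined ? fallback : value
}

function withMismatchFallback<T>(
  value: unknown,
  isValid: (v: unknown) => v is T,
  fallback: T,
): T {
  return isValid(value) ? value : fallback
}

const isTimeout = (v: unknown): v is number =>
  typeof v === "number" && v >= 0

// `args: array(string).defaultTo([])` -- absent args become [].
console.log(withDefault(undefined as string[] | undefined, [])) // []

// `"sigterm-timeout": some(number, matchDuration).onMismatch(30)` --
// an invalid timeout falls back to 30 instead of failing the parse.
console.log(withMismatchFallback("not-a-timeout", isTimeout, 30)) // 30
console.log(withMismatchFallback(15, isTimeout, 30)) // 15
```

The practical effect is that a malformed `sigterm-timeout` in an old manifest degrades to the 30-second default rather than rejecting the whole procedure.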

View File

@@ -1,11 +1,11 @@
-import { z } from "@start9labs/start-sdk"
+import { string } from "ts-matches"
 export type TimeUnit = "d" | "h" | "s" | "ms" | "m" | "µs" | "ns"
 export type Duration = `${number}${TimeUnit}`
 const durationRegex = /^([0-9]*(\.[0-9]+)?)(ns|µs|ms|s|m|d)$/
-export const matchDuration = z.string().refine(isDuration)
+export const matchDuration = string.refine(isDuration)
 export function isDuration(value: string): value is Duration {
   return durationRegex.test(value)
 }
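The `Duration` module keeps its template-literal type and regex type guard unchanged; only `matchDuration` switches from `z.string().refine` to ts-matches' `string.refine`. The guard itself stands alone as plain TypeScript. Note one detail worth flagging: `TimeUnit` includes `"h"`, but the regex in the hunk above does not list `h`; this sketch adds it so the guard covers every member of the type:

```typescript
// Self-contained version of the Duration guard: a template-literal
// type narrowed by a regex-backed user-defined type guard.
type TimeUnit = "d" | "h" | "s" | "ms" | "m" | "µs" | "ns"
type Duration = `${number}${TimeUnit}`

// `h` added here to match TimeUnit (the source regex omits it).
const durationRegex = /^([0-9]*(\.[0-9]+)?)(ns|µs|ms|s|m|h|d)$/

function isDuration(value: string): value is Duration {
  return durationRegex.test(value)
}

function parseDuration(value: string): Duration {
  if (!isDuration(value)) throw new Error(`invalid duration: ${value}`)
  return value // narrowed to Duration by the guard above
}

console.log(isDuration("30s")) // true
console.log(isDuration("1.5m")) // true
console.log(isDuration("soon")) // false
```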

View File

@@ -1,4 +1,4 @@
-import { z } from "@start9labs/start-sdk"
+import { literals, some, string } from "ts-matches"
 type NestedPath<A extends string, B extends string> = `/${A}/${string}/${B}`
 type NestedPaths = NestedPath<"actions", "run" | "getInput">
@@ -17,14 +17,14 @@ function isNestedPath(path: string): path is NestedPaths {
   return true
   return false
 }
-export const jsonPath = z.union([
-  z.enum([
+export const jsonPath = some(
+  literals(
     "/packageInit",
     "/packageUninit",
     "/backup/create",
     "/backup/restore",
-  ]),
-  z.string().refine(isNestedPath),
-])
+  ),
+  string.refine(isNestedPath, "isNestedPath"),
+)
-export type JsonPath = z.infer<typeof jsonPath>
+export type JsonPath = typeof jsonPath._TYPE
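`jsonPath` above accepts either one of four fixed endpoints or any string passing `isNestedPath` (whose body is only partially visible in this hunk). A plain-TypeScript sketch of that union-of-guards shape; the split-based guard logic here is an assumption for illustration, not the project's actual implementation:

```typescript
// Plain-TS analog of `some(literals(...), string.refine(isNestedPath))`:
// accept a fixed set of paths, or any string of the nested shape
// /actions/<id>/run or /actions/<id>/getInput.
const fixedPaths = [
  "/packageInit",
  "/packageUninit",
  "/backup/create",
  "/backup/restore",
] as const

type FixedPath = (typeof fixedPaths)[number]
type NestedPath = `/actions/${string}/${"run" | "getInput"}`
type JsonPath = FixedPath | NestedPath

function isNestedPath(path: string): path is NestedPath {
  const parts = path.split("/")
  // Expect ["", "actions", "<id>", "run" | "getInput"]
  return (
    parts.length === 4 &&
    parts[1] === "actions" &&
    parts[2].length > 0 &&
    (parts[3] === "run" || parts[3] === "getInput")
  )
}

function isJsonPath(path: string): path is JsonPath {
  return (fixedPaths as readonly string[]).includes(path) || isNestedPath(path)
}

console.log(isJsonPath("/packageInit")) // true
console.log(isJsonPath("/actions/my-action/run")) // true
console.log(isJsonPath("/actions/run")) // false
```

The second argument in `string.refine(isNestedPath, "isNestedPath")` only names the refinement so mismatch errors are readable; it does not change matching behavior.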

View File

@@ -1,4 +1,5 @@
 import { RpcListener } from "./Adapters/RpcListener"
+import { SystemForEmbassy } from "./Adapters/Systems/SystemForEmbassy"
 import { AllGetDependencies } from "./Interfaces/AllGetDependencies"
 import { getSystem } from "./Adapters/Systems"
@@ -6,18 +7,6 @@ const getDependencies: AllGetDependencies = {
   system: getSystem,
 }
-process.on("unhandledRejection", (reason) => {
-  if (
-    reason instanceof Error &&
-    "muteUnhandled" in reason &&
-    reason.muteUnhandled
-  ) {
-    // mute
-  } else {
-    console.error("Unhandled promise rejection", reason)
-  }
-})
 for (let s of ["SIGTERM", "SIGINT", "SIGHUP"]) {
   process.on(s, (s) => {
     console.log(`Caught ${s}`)

View File

@@ -16,6 +16,6 @@ case $ARCH in
 esac
 docker run --rm $USE_TTY --platform=$DOCKER_PLATFORM -eARCH --privileged -v "$(pwd):/root/start-os" start9/build-env /root/start-os/container-runtime/update-image.sh
-if [ "$(ls -nd "container-runtime/rootfs.${ARCH}.squashfs" | awk '{ print $3 }')" != "$UID" ]; then
+if [ "$(ls -nd "rootfs.${ARCH}.squashfs" | awk '{ print $3 }')" != "$UID" ]; then
 	docker run --rm $USE_TTY -v "$(pwd):/root/start-os" start9/build-env chown -R $UID:$UID /root/start-os/container-runtime
 fi

View File

@@ -53,8 +53,6 @@ Patch-DB provides diff-based state synchronization. Changes to `db/model/public.
 - `.mutate(|v| ...)` — Deserialize, mutate, reserialize
 - For maps: `.keys()`, `.as_idx(&key)`, `.as_idx_mut(&key)`, `.insert()`, `.remove()`, `.contains_key()`
-See [patchdb.md](patchdb.md) for `TypedDbWatch<T>` construction, API, and usage patterns.
 ## i18n
 See [i18n-patterns.md](i18n-patterns.md) for internationalization key conventions and the `t!()` macro.
@@ -66,7 +64,6 @@ See [core-rust-patterns.md](core-rust-patterns.md) for common utilities (Invoke
 ## Related Documentation
 - [rpc-toolkit.md](rpc-toolkit.md) — JSON-RPC handler patterns
-- [patchdb.md](patchdb.md) — Patch-DB watch patterns and TypedDbWatch
 - [i18n-patterns.md](i18n-patterns.md) — Internationalization conventions
 - [core-rust-patterns.md](core-rust-patterns.md) — Common Rust utilities
 - [s9pk-structure.md](s9pk-structure.md) — S9PK package format

View File

@@ -22,7 +22,4 @@ cd sdk && make baseDist dist # Rebuild SDK after ts-bindings
 - Always run `cargo check -p start-os` after modifying Rust code
 - When adding RPC endpoints, follow the patterns in [rpc-toolkit.md](rpc-toolkit.md)
 - When modifying `#[ts(export)]` types, regenerate bindings and rebuild the SDK (see [ARCHITECTURE.md](../ARCHITECTURE.md#build-pipeline))
-- **i18n is mandatory** — any user-facing string must go in `core/locales/i18n.yaml` with all 5 locales (`en_US`, `de_DE`, `es_ES`, `fr_FR`, `pl_PL`). This includes CLI subcommand descriptions (`about.<name>`), CLI arg help (`help.arg.<name>`), error messages (`error.<name>`), notifications, setup messages, and any other text shown to users. Entries are alphabetically ordered within their section. See [i18n-patterns.md](i18n-patterns.md)
+- When adding i18n keys, add all 5 locales in `core/locales/i18n.yaml` (see [i18n-patterns.md](i18n-patterns.md))
-- When using DB watches, follow the `TypedDbWatch<T>` patterns in [patchdb.md](patchdb.md)
-- **Always use `.invoke(ErrorKind::...)` instead of `.status()` when running CLI commands** via `tokio::process::Command`. The `Invoke` trait (from `crate::util::Invoke`) captures stdout/stderr and checks exit codes properly. Using `.status()` leaks stderr directly to system logs, creating noise. For check-then-act patterns (e.g. `iptables -C`), use `.invoke(...).await.is_ok()` / `.is_err()` instead of `.status().await.map_or(false, |s| s.success())`.
-- Always use file utils in util::io instead of tokio::fs when available

core/Cargo.lock (generated, 751 changed lines; diff suppressed because it is too large)

View File

@@ -15,7 +15,7 @@ license = "MIT"
 name = "start-os"
 readme = "README.md"
 repository = "https://github.com/Start9Labs/start-os"
-version = "0.4.0-alpha.23" # VERSION_BUMP
+version = "0.4.0-alpha.20" # VERSION_BUMP
 [lib]
 name = "startos"
@@ -63,7 +63,7 @@ async-compression = { version = "0.4.32", features = [
 ] }
 async-stream = "0.3.5"
 async-trait = "0.1.74"
-axum = { version = "0.8.4", features = ["http2", "ws"] }
+axum = { version = "0.8.4", features = ["ws", "http2"] }
 backtrace-on-stack-overflow = { version = "0.3.0", optional = true }
 base32 = "0.5.0"
 base64 = "0.22.1"
@@ -100,7 +100,6 @@ fd-lock-rs = "0.1.4"
 form_urlencoded = "1.2.1"
 futures = "0.3.28"
 gpt = "4.1.0"
-hashing-serializer = "0.1.1"
 hex = "0.4.3"
 hickory-server = { version = "0.25.2", features = ["resolver"] }
 hmac = "0.12.1"
@@ -171,7 +170,9 @@ once_cell = "1.19.0"
 openssh-keys = "0.6.2"
 openssl = { version = "0.10.57", features = ["vendored"] }
 p256 = { version = "0.13.2", features = ["pem"] }
-patch-db = { version = "*", path = "../patch-db/core", features = ["trace"] }
+patch-db = { version = "*", path = "../patch-db/patch-db", features = [
+    "trace",
+] }
 pbkdf2 = "0.12.2"
 pin-project = "1.1.3"
 pkcs8 = { version = "0.10.2", features = ["std"] }
@@ -183,16 +184,16 @@ r3bl_tui = "0.7.6"
 rand = "0.9.2"
 regex = "1.10.2"
 reqwest = { version = "0.12.25", features = [
-    "http2",
     "json",
     "socks",
     "stream",
+    "http2",
 ] }
 reqwest_cookie_store = "0.9.0"
 rpassword = "7.2.0"
-rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git" }
 rust-argon2 = "3.0.0"
 rust-i18n = "3.1.5"
+rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git" }
 semver = { version = "1.0.20", features = ["serde"] }
 serde = { version = "1.0", features = ["derive", "rc"] }
 serde_cbor = { package = "ciborium", version = "0.2.1" }
@@ -201,7 +202,6 @@ serde_toml = { package = "toml", version = "0.9.9+spec-1.0.0" }
 serde_yaml = { package = "serde_yml", version = "0.0.12" }
 sha-crypt = "0.5.0"
 sha2 = "0.10.2"
-sha3 = "0.10"
 signal-hook = "0.3.17"
 socket2 = { version = "0.6.0", features = ["all"] }
 socks5-impl = { version = "0.7.2", features = ["client", "server"] }
@@ -233,9 +233,7 @@ uuid = { version = "1.4.1", features = ["v4"] }
 visit-rs = "0.1.1"
 x25519-dalek = { version = "2.0.1", features = ["static_secrets"] }
 zbus = "5.1.1"
+hashing-serializer = "0.1.1"
-[dev-dependencies]
-clap_mangen = "0.2.33"
 [target.'cfg(target_os = "linux")'.dependencies]
 procfs = "0.18.0"


@@ -67,10 +67,6 @@ if [[ "${ENVIRONMENT:-}" =~ (^|-)console($|-) ]]; then
     RUSTFLAGS="--cfg tokio_unstable"
 fi
-if [[ "${ENVIRONMENT:-}" =~ (^|-)unstable($|-) ]]; then
-    RUSTFLAGS="$RUSTFLAGS -C debuginfo=1"
-fi
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
 rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin start-cli --target=$TARGET


@@ -1,44 +0,0 @@
-#!/bin/bash
-cd "$(dirname "${BASH_SOURCE[0]}")"
-source ./builder-alias.sh
-set -ea
-shopt -s expand_aliases
-PROFILE=${PROFILE:-debug}
-if [ "${PROFILE}" = "release" ]; then
-    BUILD_FLAGS="--release"
-else
-    if [ "$PROFILE" != "debug" ]; then
-        >&2 echo "Unknown profile $PROFILE: falling back to debug..."
-        PROFILE=debug
-    fi
-fi
-if [ -z "$ARCH" ]; then
-    ARCH=$(uname -m)
-fi
-if [ "$ARCH" = "arm64" ]; then
-    ARCH="aarch64"
-fi
-RUST_ARCH="$ARCH"
-if [ "$ARCH" = "riscv64" ]; then
-    RUST_ARCH="riscv64gc"
-fi
-cd ../..
-FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
-RUSTFLAGS=""
-if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
-    RUSTFLAGS="--cfg tokio_unstable"
-fi
-echo "FEATURES=\"$FEATURES\""
-echo "RUSTFLAGS=\"$RUSTFLAGS\""
-rust-zig-builder cargo test --manifest-path=./core/Cargo.toml --lib $BUILD_FLAGS --features test,$FEATURES --locked 'export_manpage_'
-if [ "$(ls -nd "core/man" | awk '{ print $3 }')" != "$UID" ]; then
-    rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID core/man && chown -R $UID:$UID /usr/local/cargo"
-fi


@@ -38,10 +38,6 @@ if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
     RUSTFLAGS="--cfg tokio_unstable"
 fi
-if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
-    RUSTFLAGS="$RUSTFLAGS -C debuginfo=1"
-fi
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
 rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin registrybox --target=$RUST_ARCH-unknown-linux-musl


@@ -38,10 +38,6 @@ if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
     RUSTFLAGS="--cfg tokio_unstable"
 fi
-if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
-    RUSTFLAGS="$RUSTFLAGS -C debuginfo=1"
-fi
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
 rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin start-container --target=$RUST_ARCH-unknown-linux-musl


@@ -38,10 +38,6 @@ if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
     RUSTFLAGS="--cfg tokio_unstable"
 fi
-if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
-    RUSTFLAGS="$RUSTFLAGS -C debuginfo=1"
-fi
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
 rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin startbox --target=$RUST_ARCH-unknown-linux-musl


@@ -38,10 +38,6 @@ if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
     RUSTFLAGS="--cfg tokio_unstable"
 fi
-if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
-    RUSTFLAGS="$RUSTFLAGS -C debuginfo=1"
-fi
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
 rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin tunnelbox --target=$RUST_ARCH-unknown-linux-musl


@@ -197,13 +197,6 @@ setup.transferring-data:
   fr_FR: "Transfert de données"
   pl_PL: "Przesyłanie danych"
-setup.password-required:
-  en_US: "Password is required for fresh setup"
-  de_DE: "Passwort ist für die Ersteinrichtung erforderlich"
-  es_ES: "Se requiere contraseña para la configuración inicial"
-  fr_FR: "Le mot de passe est requis pour la première configuration"
-  pl_PL: "Hasło jest wymagane do nowej konfiguracji"
 # system.rs
 system.governor-not-available:
   en_US: "Governor %{governor} not available"
@@ -857,13 +850,6 @@ error.set-sys-info:
   fr_FR: "Erreur de Définition des Infos Système"
   pl_PL: "Błąd Ustawiania Informacji o Systemie"
-error.bios:
-  en_US: "BIOS/UEFI Error"
-  de_DE: "BIOS/UEFI-Fehler"
-  es_ES: "Error de BIOS/UEFI"
-  fr_FR: "Erreur BIOS/UEFI"
-  pl_PL: "Błąd BIOS/UEFI"
 # disk/main.rs
 disk.main.disk-not-found:
   en_US: "StartOS disk not found."
@@ -872,13 +858,6 @@ disk.main.disk-not-found:
   fr_FR: "Disque StartOS non trouvé."
   pl_PL: "Nie znaleziono dysku StartOS."
-disk.main.converting-to-btrfs:
-  en_US: "Performing file system conversion to btrfs. This can take many hours, please be patient and DO NOT unplug the server."
-  de_DE: "Dateisystemkonvertierung zu btrfs wird durchgeführt. Dies kann viele Stunden dauern, bitte haben Sie Geduld und trennen Sie den Server NICHT vom Strom."
-  es_ES: "Realizando conversión del sistema de archivos a btrfs. Esto puede tardar muchas horas, tenga paciencia y NO desconecte el servidor."
-  fr_FR: "Conversion du système de fichiers vers btrfs en cours. Cela peut prendre de nombreuses heures, soyez patient et NE débranchez PAS le serveur."
-  pl_PL: "Wykonywanie konwersji systemu plików na btrfs. To może potrwać wiele godzin, prosimy o cierpliwość i NIE odłączaj serwera od zasilania."
 disk.main.incorrect-disk:
   en_US: "A StartOS disk was found, but it is not the correct disk for this device."
   de_DE: "Eine StartOS-Festplatte wurde gefunden, aber es ist nicht die richtige Festplatte für dieses Gerät."
@@ -1015,27 +994,6 @@ disk.mount.binding:
   fr_FR: "Liaison de %{src} à %{dst}"
   pl_PL: "Wiązanie %{src} do %{dst}"
-hostname.empty:
-  en_US: "Hostname cannot be empty"
-  de_DE: "Der Hostname darf nicht leer sein"
-  es_ES: "El nombre de host no puede estar vacío"
-  fr_FR: "Le nom d'hôte ne peut pas être vide"
-  pl_PL: "Nazwa hosta nie może być pusta"
-hostname.invalid-character:
-  en_US: "Invalid character in hostname: %{char}"
-  de_DE: "Ungültiges Zeichen im Hostnamen: %{char}"
-  es_ES: "Carácter no válido en el nombre de host: %{char}"
-  fr_FR: "Caractère invalide dans le nom d'hôte : %{char}"
-  pl_PL: "Nieprawidłowy znak w nazwie hosta: %{char}"
-hostname.must-provide-name-or-hostname:
-  en_US: "Must provide at least one of: name, hostname"
-  de_DE: "Es muss mindestens eines angegeben werden: name, hostname"
-  es_ES: "Se debe proporcionar al menos uno de: name, hostname"
-  fr_FR: "Vous devez fournir au moins l'un des éléments suivants : name, hostname"
-  pl_PL: "Należy podać co najmniej jedno z: name, hostname"
 # init.rs
 init.running-preinit:
   en_US: "Running preinit.sh"
@@ -1262,13 +1220,6 @@ backup.bulk.leaked-reference:
   fr_FR: "référence fuitée vers BackupMountGuard"
   pl_PL: "wyciekła referencja do BackupMountGuard"
-backup.bulk.service-not-ready:
-  en_US: "Cannot create a backup of a service that is still initializing or in an error state"
-  de_DE: "Es kann keine Sicherung eines Dienstes erstellt werden, der noch initialisiert wird oder sich im Fehlerzustand befindet"
-  es_ES: "No se puede crear una copia de seguridad de un servicio que aún se está inicializando o está en estado de error"
-  fr_FR: "Impossible de créer une sauvegarde d'un service encore en cours d'initialisation ou en état d'erreur"
-  pl_PL: "Nie można utworzyć kopii zapasowej usługi, która jest jeszcze inicjalizowana lub znajduje się w stanie błędu"
 # backup/restore.rs
 backup.restore.package-error:
   en_US: "Error restoring package %{id}: %{error}"
@@ -1292,21 +1243,6 @@ backup.target.cifs.target-not-found-id:
   fr_FR: "ID de cible de sauvegarde %{id} non trouvé"
   pl_PL: "Nie znaleziono ID celu kopii zapasowej %{id}"
-# service/effects/net/plugin.rs
-net.plugin.manifest-missing-plugin:
-  en_US: "manifest does not declare the \"%{plugin}\" plugin"
-  de_DE: "Manifest deklariert das Plugin \"%{plugin}\" nicht"
-  es_ES: "el manifiesto no declara el plugin \"%{plugin}\""
-  fr_FR: "le manifeste ne déclare pas le plugin \"%{plugin}\""
-  pl_PL: "manifest nie deklaruje wtyczki \"%{plugin}\""
-net.plugin.binding-not-found:
-  en_US: "binding not found: %{binding}"
-  de_DE: "Bindung nicht gefunden: %{binding}"
-  es_ES: "enlace no encontrado: %{binding}"
-  fr_FR: "liaison introuvable : %{binding}"
-  pl_PL: "powiązanie nie znalezione: %{binding}"
 # net/ssl.rs
 net.ssl.unreachable:
   en_US: "unreachable"
@@ -1393,21 +1329,6 @@ net.tor.client-error:
   fr_FR: "Erreur du client Tor : %{error}"
   pl_PL: "Błąd klienta Tor: %{error}"
-# net/tunnel.rs
-net.tunnel.timeout-waiting-for-add:
-  en_US: "timed out waiting for gateway %{gateway} to appear in database"
-  de_DE: "Zeitüberschreitung beim Warten auf das Erscheinen von Gateway %{gateway} in der Datenbank"
-  es_ES: "se agotó el tiempo esperando que la puerta de enlace %{gateway} aparezca en la base de datos"
-  fr_FR: "délai d'attente dépassé pour l'apparition de la passerelle %{gateway} dans la base de données"
-  pl_PL: "upłynął limit czasu oczekiwania na pojawienie się bramy %{gateway} w bazie danych"
-net.tunnel.timeout-waiting-for-remove:
-  en_US: "timed out waiting for gateway %{gateway} to be removed from database"
-  de_DE: "Zeitüberschreitung beim Warten auf das Entfernen von Gateway %{gateway} aus der Datenbank"
-  es_ES: "se agotó el tiempo esperando que la puerta de enlace %{gateway} sea eliminada de la base de datos"
-  fr_FR: "délai d'attente dépassé pour la suppression de la passerelle %{gateway} de la base de données"
-  pl_PL: "upłynął limit czasu oczekiwania na usunięcie bramy %{gateway} z bazy danych"
 # net/wifi.rs
 net.wifi.ssid-no-special-characters:
   en_US: "SSID may not have special characters"
@@ -1621,13 +1542,6 @@ net.gateway.cannot-delete-without-connection:
   fr_FR: "Impossible de supprimer l'appareil sans connexion active"
   pl_PL: "Nie można usunąć urządzenia bez aktywnego połączenia"
-net.gateway.no-configured-echoip-urls:
-  en_US: "No configured echoip URLs"
-  de_DE: "Keine konfigurierten EchoIP-URLs"
-  es_ES: "No hay URLs de echoip configuradas"
-  fr_FR: "Aucune URL echoip configurée"
-  pl_PL: "Brak skonfigurowanych adresów URL echoip"
 # net/dns.rs
 net.dns.timeout-updating-catalog:
   en_US: "timed out waiting to update dns catalog"
@@ -1721,14 +1635,6 @@ lxc.mod.cleaned-up-containers:
   fr_FR: "Conteneurs LXC orphelins nettoyés avec succès"
   pl_PL: "Pomyślnie wyczyszczono wiszące kontenery LXC"
-# version/v0_3_6_alpha_0.rs
-migration.migrating-package:
-  en_US: "Migrating package %{package}..."
-  de_DE: "Paket %{package} wird migriert..."
-  es_ES: "Migrando paquete %{package}..."
-  fr_FR: "Migration du paquet %{package}..."
-  pl_PL: "Migracja pakietu %{package}..."
 # registry/admin.rs
 registry.admin.unknown-signer:
   en_US: "Unknown signer"
@@ -1884,28 +1790,6 @@ registry.package.remove-mirror.unauthorized:
   fr_FR: "Non autorisé"
   pl_PL: "Brak autoryzacji"
-# registry/package/index.rs
-registry.package.index.metadata-mismatch:
-  en_US: "package metadata mismatch: remove the existing version first, then re-add"
-  de_DE: "Paketmetadaten stimmen nicht überein: vorhandene Version zuerst entfernen, dann erneut hinzufügen"
-  es_ES: "discrepancia de metadatos del paquete: elimine la versión existente primero, luego vuelva a agregarla"
-  fr_FR: "discordance des métadonnées du paquet : supprimez d'abord la version existante, puis ajoutez-la à nouveau"
-  pl_PL: "niezgodność metadanych pakietu: najpierw usuń istniejącą wersję, a następnie dodaj ponownie"
-registry.package.index.icon-mismatch:
-  en_US: "package icon mismatch: remove the existing version first, then re-add"
-  de_DE: "Paketsymbol stimmt nicht überein: vorhandene Version zuerst entfernen, dann erneut hinzufügen"
-  es_ES: "discrepancia del icono del paquete: elimine la versión existente primero, luego vuelva a agregarla"
-  fr_FR: "discordance de l'icône du paquet : supprimez d'abord la version existante, puis ajoutez-la à nouveau"
-  pl_PL: "niezgodność ikony pakietu: najpierw usuń istniejącą wersję, a następnie dodaj ponownie"
-registry.package.index.dependency-metadata-mismatch:
-  en_US: "dependency metadata mismatch: remove the existing version first, then re-add"
-  de_DE: "Abhängigkeitsmetadaten stimmen nicht überein: vorhandene Version zuerst entfernen, dann erneut hinzufügen"
-  es_ES: "discrepancia de metadatos de dependencia: elimine la versión existente primero, luego vuelva a agregarla"
-  fr_FR: "discordance des métadonnées de dépendance : supprimez d'abord la version existante, puis ajoutez-la à nouveau"
-  pl_PL: "niezgodność metadanych zależności: najpierw usuń istniejącą wersję, a następnie dodaj ponownie"
 # registry/package/get.rs
 registry.package.get.version-not-found:
   en_US: "Could not find a version of %{id} that satisfies %{version}"
@@ -2657,13 +2541,6 @@ help.arg.add-signer-key:
   fr_FR: "Ajouter une clé publique au signataire"
   pl_PL: "Dodaj klucz publiczny do sygnatariusza"
-help.arg.address:
-  en_US: "Network address"
-  de_DE: "Netzwerkadresse"
-  es_ES: "Dirección de red"
-  fr_FR: "Adresse réseau"
-  pl_PL: "Adres sieciowy"
 help.arg.allow-model-mismatch:
   en_US: "Allow database model mismatch"
   de_DE: "Datenbankmodell-Abweichung erlauben"
@@ -2678,13 +2555,6 @@ help.arg.allow-partial-backup:
   fr_FR: "Laisser le média monté même si backupfs échoue à monter"
   pl_PL: "Pozostaw nośnik zamontowany nawet jeśli backupfs nie może się zamontować"
-help.arg.architecture:
-  en_US: "Target CPU architecture (e.g. x86_64, aarch64)"
-  de_DE: "Ziel-CPU-Architektur (z.B. x86_64, aarch64)"
-  es_ES: "Arquitectura de CPU objetivo (ej. x86_64, aarch64)"
-  fr_FR: "Architecture CPU cible (ex. x86_64, aarch64)"
-  pl_PL: "Docelowa architektura CPU (np. x86_64, aarch64)"
 help.arg.architecture-mask:
   en_US: "Filter by CPU architecture"
   de_DE: "Nach CPU-Architektur filtern"
@@ -2811,20 +2681,6 @@ help.arg.download-directory:
   fr_FR: "Chemin du répertoire de téléchargement"
   pl_PL: "Ścieżka katalogu do pobrania"
-help.arg.echoip-urls:
-  en_US: "Echo IP service URLs for external IP detection"
-  de_DE: "Echo-IP-Dienst-URLs zur externen IP-Erkennung"
-  es_ES: "URLs del servicio Echo IP para detección de IP externa"
-  fr_FR: "URLs du service Echo IP pour la détection d'IP externe"
-  pl_PL: "Adresy URL usługi Echo IP do wykrywania zewnętrznego IP"
-help.arg.ed25519:
-  en_US: "Use Ed25519 instead of NIST P-256"
-  de_DE: "Ed25519 anstelle von NIST P-256 verwenden"
-  es_ES: "Usar Ed25519 en lugar de NIST P-256"
-  fr_FR: "Utiliser Ed25519 au lieu de NIST P-256"
-  pl_PL: "Użyj Ed25519 zamiast NIST P-256"
 help.arg.emulate-missing-arch:
   en_US: "Emulate missing architecture using this one"
   de_DE: "Fehlende Architektur mit dieser emulieren"
@@ -2909,13 +2765,6 @@ help.arg.host-url:
   fr_FR: "URL du serveur StartOS"
   pl_PL: "URL serwera StartOS"
-help.arg.hostnames:
-  en_US: "Hostnames to include in the certificate"
-  de_DE: "Hostnamen, die in das Zertifikat aufgenommen werden sollen"
-  es_ES: "Nombres de host para incluir en el certificado"
-  fr_FR: "Noms d'hôtes à inclure dans le certificat"
-  pl_PL: "Nazwy hostów do uwzględnienia w certyfikacie"
 help.arg.icon-path:
   en_US: "Path to service icon file"
   de_DE: "Pfad zur Service-Icon-Datei"
@@ -3000,13 +2849,6 @@ help.arg.log-limit:
   fr_FR: "Nombre maximum d'entrées de journal"
   pl_PL: "Maksymalna liczba wpisów logu"
-help.arg.merge:
-  en_US: "Merge with existing version range instead of replacing"
-  de_DE: "Mit vorhandenem Versionsbereich zusammenführen statt ersetzen"
-  es_ES: "Combinar con el rango de versiones existente en lugar de reemplazar"
-  fr_FR: "Fusionner avec la plage de versions existante au lieu de remplacer"
-  pl_PL: "Połącz z istniejącym zakresem wersji zamiast zastępować"
 help.arg.mirror-url:
   en_US: "URL of the mirror"
   de_DE: "URL des Spiegels"
@@ -3119,13 +2961,6 @@ help.arg.platform:
   fr_FR: "Identifiant de la plateforme cible"
   pl_PL: "Identyfikator platformy docelowej"
-help.arg.port:
-  en_US: "Port number"
-  de_DE: "Portnummer"
-  es_ES: "Número de puerto"
-  fr_FR: "Numéro de port"
-  pl_PL: "Numer portu"
 help.arg.postgres-connection-url:
   en_US: "PostgreSQL connection URL"
   de_DE: "PostgreSQL-Verbindungs-URL"
@@ -3210,13 +3045,6 @@ help.arg.server-id:
   fr_FR: "Identifiant unique du serveur"
   pl_PL: "Unikalny identyfikator serwera"
-help.arg.set-as-default-outbound:
-  en_US: "Set as the default outbound gateway"
-  de_DE: "Als Standard-Ausgangs-Gateway festlegen"
-  es_ES: "Establecer como puerta de enlace de salida predeterminada"
-  fr_FR: "Définir comme passerelle de sortie par défaut"
-  pl_PL: "Ustaw jako domyślną bramę wychodzącą"
 help.arg.set-signer-name:
   en_US: "Set the signer name"
   de_DE: "Unterzeichnernamen festlegen"
@@ -3259,7 +3087,7 @@ help.arg.smtp-from:
   fr_FR: "Adresse de l'expéditeur"
   pl_PL: "Adres nadawcy e-mail"
-help.arg.smtp-username:
+help.arg.smtp-login:
   en_US: "SMTP authentication username"
   de_DE: "SMTP-Authentifizierungsbenutzername"
   es_ES: "Nombre de usuario de autenticación SMTP"
@@ -3280,20 +3108,13 @@ help.arg.smtp-port:
   fr_FR: "Port du serveur SMTP"
   pl_PL: "Port serwera SMTP"
-help.arg.smtp-host:
+help.arg.smtp-server:
   en_US: "SMTP server hostname"
   de_DE: "SMTP-Server-Hostname"
   es_ES: "Nombre de host del servidor SMTP"
   fr_FR: "Nom d'hôte du serveur SMTP"
   pl_PL: "Nazwa hosta serwera SMTP"
-help.arg.smtp-security:
-  en_US: "Connection security mode (starttls or tls)"
-  de_DE: "Verbindungssicherheitsmodus (starttls oder tls)"
-  es_ES: "Modo de seguridad de conexión (starttls o tls)"
-  fr_FR: "Mode de sécurité de connexion (starttls ou tls)"
-  pl_PL: "Tryb zabezpieczeń połączenia (starttls lub tls)"
 help.arg.smtp-to:
   en_US: "Email recipient address"
   de_DE: "E-Mail-Empfängeradresse"
@@ -3581,13 +3402,6 @@ help.arg.gateway-name:
   fr_FR: "Nom de la passerelle"
   pl_PL: "Nazwa bramy"
-help.arg.gateway-type:
-  en_US: "Type of gateway"
-  de_DE: "Typ des Gateways"
-  es_ES: "Tipo de puerta de enlace"
-  fr_FR: "Type de passerelle"
-  pl_PL: "Typ bramy"
 help.arg.governor-name:
   en_US: "CPU governor name"
   de_DE: "CPU-Governor-Name"
@@ -3798,13 +3612,6 @@ help.arg.s9pk-file-path:
   fr_FR: "Chemin vers le fichier de paquet s9pk"
   pl_PL: "Ścieżka do pliku pakietu s9pk"
-help.arg.s9pk-file-paths:
-  en_US: "Paths to s9pk package files"
-  de_DE: "Pfade zu s9pk-Paketdateien"
-  es_ES: "Rutas a los archivos de paquete s9pk"
-  fr_FR: "Chemins vers les fichiers de paquet s9pk"
-  pl_PL: "Ścieżki do plików pakietów s9pk"
 help.arg.session-ids:
   en_US: "Session identifiers"
   de_DE: "Sitzungskennungen"
@@ -4114,13 +3921,6 @@ about.add-version-signer:
   fr_FR: "Ajouter un signataire de version"
   pl_PL: "Dodaj sygnatariusza wersji"
-about.add-vhost-passthrough:
-  en_US: "Add vhost passthrough"
-  de_DE: "Vhost-Passthrough hinzufügen"
-  es_ES: "Agregar passthrough de vhost"
-  fr_FR: "Ajouter un passthrough vhost"
-  pl_PL: "Dodaj passthrough vhost"
 about.add-wifi-ssid-password:
   en_US: "Add wifi ssid and password"
   de_DE: "WLAN-SSID und Passwort hinzufügen"
@@ -4135,13 +3935,6 @@ about.allow-gateway-infer-inbound-access-from-wan:
   fr_FR: "Permettre à cette passerelle de déduire si elle a un accès entrant depuis le WAN en fonction de son adresse IPv4"
   pl_PL: "Pozwól tej bramce wywnioskować, czy ma dostęp przychodzący z WAN na podstawie adresu IPv4"
-about.apply-available-update:
-  en_US: "Apply available update"
-  de_DE: "Verfügbares Update anwenden"
-  es_ES: "Aplicar actualización disponible"
-  fr_FR: "Appliquer la mise à jour disponible"
-  pl_PL: "Zastosuj dostępną aktualizację"
 about.calculate-blake3-hash-for-file:
   en_US: "Calculate blake3 hash for a file"
   de_DE: "Blake3-Hash für eine Datei berechnen"
@@ -4156,27 +3949,6 @@ about.cancel-install-package:
   fr_FR: "Annuler l'installation d'un paquet"
   pl_PL: "Anuluj instalację pakietu"
-about.check-dns-configuration:
-  en_US: "Check DNS configuration for a gateway"
-  de_DE: "DNS-Konfiguration für ein Gateway prüfen"
-  es_ES: "Verificar la configuración DNS de un gateway"
-  fr_FR: "Vérifier la configuration DNS d'une passerelle"
-  pl_PL: "Sprawdź konfigurację DNS bramy"
-about.check-for-updates:
-  en_US: "Check for available updates"
-  de_DE: "Nach verfügbaren Updates suchen"
-  es_ES: "Buscar actualizaciones disponibles"
-  fr_FR: "Vérifier les mises à jour disponibles"
-  pl_PL: "Sprawdź dostępne aktualizacje"
-about.check-port-reachability:
-  en_US: "Check if a port is reachable from the WAN"
-  de_DE: "Prüfen, ob ein Port vom WAN erreichbar ist"
-  es_ES: "Comprobar si un puerto es accesible desde la WAN"
-  fr_FR: "Vérifier si un port est accessible depuis le WAN"
-  pl_PL: "Sprawdź, czy port jest osiągalny z WAN"
 about.check-update-startos:
   en_US: "Check a given registry for StartOS updates and update if available"
   de_DE: "Ein bestimmtes Registry auf StartOS-Updates prüfen und bei Verfügbarkeit aktualisieren"
@@ -4275,13 +4047,6 @@ about.commands-authentication:
   fr_FR: "Commandes liées à l'authentification, comme connexion, déconnexion"
   pl_PL: "Polecenia związane z uwierzytelnianiem, np. logowanie, wylogowanie"
-about.commands-authorized-keys:
-  en_US: "Commands for managing authorized keys"
-  de_DE: "Befehle zur Verwaltung autorisierter Schlüssel"
-  es_ES: "Comandos para gestionar claves autorizadas"
-  fr_FR: "Commandes pour gérer les clés autorisées"
-  pl_PL: "Polecenia do zarządzania autoryzowanymi kluczami"
 about.commands-backup:
   en_US: "Commands related to backup creation and backup targets"
   de_DE: "Befehle zur Backup-Erstellung und Backup-Zielen"
@@ -4345,41 +4110,6 @@ about.commands-experimental:
   fr_FR: "Commandes liées à la configuration d'options expérimentales comme zram et le gouverneur CPU"
   pl_PL: "Polecenia konfiguracji opcji eksperymentalnych jak zram i regulator CPU"
-about.commands-host-address-domain:
-  en_US: "Commands for managing host address domains"
-  de_DE: "Befehle zur Verwaltung von Host-Adressdomänen"
-  es_ES: "Comandos para gestionar dominios de direcciones del host"
-  fr_FR: "Commandes pour gérer les domaines d'adresses de l'hôte"
-  pl_PL: "Polecenia do zarządzania domenami adresów hosta"
-about.commands-host-addresses:
-  en_US: "Commands for managing host addresses"
-  de_DE: "Befehle zur Verwaltung von Host-Adressen"
-  es_ES: "Comandos para gestionar direcciones del host"
-  fr_FR: "Commandes pour gérer les adresses de l'hôte"
-  pl_PL: "Polecenia do zarządzania adresami hosta"
-about.commands-host-bindings:
-  en_US: "Commands for managing host bindings"
-  de_DE: "Befehle zur Verwaltung von Host-Bindungen"
-  es_ES: "Comandos para gestionar vínculos del host"
-  fr_FR: "Commandes pour gérer les liaisons de l'hôte"
-  pl_PL: "Polecenia do zarządzania powiązaniami hosta"
-about.commands-host-private-domain:
-  en_US: "Commands for managing private domains for a host"
-  de_DE: "Befehle zur Verwaltung privater Domänen für einen Host"
-  es_ES: "Comandos para gestionar dominios privados de un host"
-  fr_FR: "Commandes pour gérer les domaines privés d'un hôte"
-  pl_PL: "Polecenia do zarządzania prywatnymi domenami hosta"
-about.commands-host-public-domain:
-  en_US: "Commands for managing public domains for a host"
-  de_DE: "Befehle zur Verwaltung öffentlicher Domänen für einen Host"
-  es_ES: "Comandos para gestionar dominios públicos de un host"
-  fr_FR: "Commandes pour gérer les domaines publics d'un hôte"
-  pl_PL: "Polecenia do zarządzania publicznymi domenami hosta"
 about.commands-host-system-ui:
   en_US: "Commands for modifying the host for the system ui"
   de_DE: "Befehle zum Ändern des Hosts für die System-UI"
@@ -4436,13 +4166,6 @@ about.commands-packages:
   fr_FR: "Commandes liées aux paquets"
   pl_PL: "Polecenia związane z pakietami"
-about.commands-port-forward:
-  en_US: "Commands for managing port forwards"
-  de_DE: "Befehle zur Verwaltung von Portweiterleitungen"
-  es_ES: "Comandos para gestionar reenvíos de puertos"
-  fr_FR: "Commandes pour gérer les redirections de ports"
-  pl_PL: "Polecenia do zarządzania przekierowaniami portów"
 about.commands-registry:
   en_US: "Commands related to the registry"
   de_DE: "Befehle zum Registry"
@@ -4457,13 +4180,6 @@ about.commands-registry-db:
   fr_FR: "Commandes pour interagir avec la base de données, comme dump et apply"
   pl_PL: "Polecenia interakcji z bazą danych, np. dump i apply"
-about.commands-registry-info:
-  en_US: "View or edit registry information"
-  de_DE: "Registry-Informationen anzeigen oder bearbeiten"
-  es_ES: "Ver o editar información del registro"
-  fr_FR: "Afficher ou modifier les informations du registre"
-  pl_PL: "Wyświetl lub edytuj informacje rejestru"
 about.commands-restore-backup:
   en_US: "Commands for restoring package(s) from backup"
   de_DE: "Befehle zum Wiederherstellen von Paketen aus dem Backup"
@@ -4506,20 +4222,6 @@ about.commands-tunnel:
fr_FR: "Commandes liées à StartTunnel" fr_FR: "Commandes liées à StartTunnel"
pl_PL: "Polecenia związane z StartTunnel" pl_PL: "Polecenia związane z StartTunnel"
about.commands-tunnel-update:
en_US: "Commands for checking and applying tunnel updates"
de_DE: "Befehle zum Prüfen und Anwenden von Tunnel-Updates"
es_ES: "Comandos para verificar y aplicar actualizaciones del túnel"
fr_FR: "Commandes pour vérifier et appliquer les mises à jour du tunnel"
pl_PL: "Polecenia do sprawdzania i stosowania aktualizacji tunelu"
about.commands-tunnel-web:
en_US: "Commands for managing the tunnel web interface"
de_DE: "Befehle zur Verwaltung der Tunnel-Weboberfläche"
es_ES: "Comandos para gestionar la interfaz web del túnel"
fr_FR: "Commandes pour gérer l'interface web du tunnel"
pl_PL: "Polecenia do zarządzania interfejsem webowym tunelu"
about.commands-wifi:
en_US: "Commands related to wifi networks i.e. add, connect, delete"
de_DE: "Befehle zu WLAN-Netzwerken, z.B. hinzufügen, verbinden, löschen"
@@ -4660,13 +4362,6 @@ about.display-s9pk-manifest:
fr_FR: "Afficher le manifeste s9pk"
pl_PL: "Wyświetl manifest s9pk"
about.display-s9pk-root-sighash-and-maxsize:
en_US: "Display the s9pk root signature hash and max size"
de_DE: "Den s9pk-Root-Signaturhash und die maximale Größe anzeigen"
es_ES: "Mostrar el hash de firma raíz y el tamaño máximo del s9pk"
fr_FR: "Afficher le hachage de signature racine et la taille maximale du s9pk"
pl_PL: "Wyświetl hash podpisu głównego i maksymalny rozmiar s9pk"
about.display-server-metrics:
en_US: "Display server metrics"
de_DE: "Server-Metriken anzeigen"
@@ -4730,20 +4425,6 @@ about.dump-address-resolution-table:
fr_FR: "Exporter la table de résolution d'adresses"
pl_PL: "Zrzuć tabelę rozpoznawania adresów"
about.dump-port-forward-table:
en_US: "Dump port forward table"
de_DE: "Portweiterleitungstabelle ausgeben"
es_ES: "Volcar tabla de reenvío de puertos"
fr_FR: "Exporter la table de redirection de ports"
pl_PL: "Zrzuć tabelę przekierowań portów"
about.dump-vhost-proxy-table:
en_US: "Dump vhost proxy table"
de_DE: "Vhost-Proxy-Tabelle ausgeben"
es_ES: "Volcar tabla de proxy vhost"
fr_FR: "Exporter la table de proxy vhost"
pl_PL: "Zrzuć tabelę proxy vhost"
about.echo-message:
en_US: "Echo a message back"
de_DE: "Eine Nachricht zurückgeben"
@@ -4779,13 +4460,6 @@ about.enable-kiosk-mode:
fr_FR: "Activer le mode kiosque"
pl_PL: "Włącz tryb kiosku"
about.enable-or-disable-port-forward:
en_US: "Enable or disable a port forward"
de_DE: "Portweiterleitung aktivieren oder deaktivieren"
es_ES: "Habilitar o deshabilitar un reenvío de puerto"
fr_FR: "Activer ou désactiver une redirection de port"
pl_PL: "Włącz lub wyłącz przekierowanie portu"
about.enable-webserver:
en_US: "Enable the webserver"
de_DE: "Webserver aktivieren"
@@ -4877,13 +4551,6 @@ about.get-developer-pubkey:
fr_FR: "Obtenir la clé publique du développeur"
pl_PL: "Pobierz klucz publiczny dewelopera"
about.get-device-info:
en_US: "Display device information"
de_DE: "Geräteinformationen anzeigen"
es_ES: "Mostrar información del dispositivo"
fr_FR: "Afficher les informations de l'appareil"
pl_PL: "Wyświetl informacje o urządzeniu"
about.get-initialization-progress:
en_US: "Get initialization progress"
de_DE: "Initialisierungsfortschritt abrufen"
@@ -5073,13 +4740,6 @@ about.list-paths-of-package-ingredients:
fr_FR: "Lister les chemins des composants du package"
pl_PL: "Wyświetl ścieżki składników pakietu"
about.list-registry-categories:
en_US: "List registry categories"
de_DE: "Registry-Kategorien auflisten"
es_ES: "Listar categorías del registro"
fr_FR: "Lister les catégories du registre"
pl_PL: "Wyświetl kategorie rejestru"
about.list-registry-info-packages:
en_US: "List registry info and packages"
de_DE: "Registry-Informationen und Pakete auflisten"
@@ -5108,13 +4768,6 @@ about.list-version-signers:
fr_FR: "Lister les signataires de versions"
pl_PL: "Wyświetl sygnatariuszy wersji"
about.list-vhost-passthrough:
en_US: "List vhost passthroughs"
de_DE: "Vhost-Passthroughs auflisten"
es_ES: "Listar passthroughs de vhost"
fr_FR: "Lister les passthroughs vhost"
pl_PL: "Wyświetl passthrough vhost"
about.list-wifi-info:
en_US: "List wifi information"
de_DE: "WLAN-Informationen auflisten"
@@ -5164,13 +4817,6 @@ about.manage-query-dns:
fr_FR: "Gérer et interroger le DNS"
pl_PL: "Zarządzaj i odpytuj DNS"
about.manage-ssl-certificates:
en_US: "Manage SSL certificates"
de_DE: "SSL-Zertifikate verwalten"
es_ES: "Gestionar certificados SSL"
fr_FR: "Gérer les certificats SSL"
pl_PL: "Zarządzaj certyfikatami SSL"
about.manage-ssl-vhost-proxy:
en_US: "Manage SSL vhost proxy"
de_DE: "SSL-vhost-Proxy verwalten"
@@ -5241,13 +4887,6 @@ about.publish-s9pk:
fr_FR: "Publier s9pk dans le bucket S3 et indexer dans le registre"
pl_PL: "Opublikuj s9pk do bucketu S3 i zindeksuj w rejestrze"
about.select-s9pk-for-device:
en_US: "Select the best compatible s9pk for a target device"
de_DE: "Das beste kompatible s9pk für ein Zielgerät auswählen"
es_ES: "Seleccionar el s9pk más compatible para un dispositivo destino"
fr_FR: "Sélectionner le meilleur s9pk compatible pour un appareil cible"
pl_PL: "Wybierz najlepiej kompatybilny s9pk dla urządzenia docelowego"
about.rebuild-service-container:
en_US: "Rebuild service container"
de_DE: "Dienst-Container neu erstellen"
@@ -5416,13 +5055,6 @@ about.remove-version-signer:
fr_FR: "Supprimer le signataire de version"
pl_PL: "Usuń sygnatariusza wersji"
about.remove-vhost-passthrough:
en_US: "Remove vhost passthrough"
de_DE: "Vhost-Passthrough entfernen"
es_ES: "Eliminar passthrough de vhost"
fr_FR: "Supprimer un passthrough vhost"
pl_PL: "Usuń passthrough vhost"
about.remove-wifi-network:
en_US: "Remove a wifi network"
de_DE: "Ein WLAN-Netzwerk entfernen"
@@ -5465,12 +5097,12 @@ about.reset-user-interface-password:
fr_FR: "Réinitialiser le mot de passe de l'interface utilisateur"
pl_PL: "Zresetuj hasło interfejsu użytkownika"
about.uninitialize-webserver: about.reset-webserver:
en_US: "Uninitialize the webserver" en_US: "Reset the webserver"
de_DE: "Den Webserver deinitialisieren" de_DE: "Den Webserver zurücksetzen"
es_ES: "Desinicializar el servidor web" es_ES: "Restablecer el servidor web"
fr_FR: "Désinitialiser le serveur web" fr_FR: "Réinitialiser le serveur web"
pl_PL: "Zdezinicjalizuj serwer internetowy" pl_PL: "Zresetuj serwer internetowy"
about.restart-server:
en_US: "Restart the server"
@@ -5486,13 +5118,6 @@ about.restart-service:
fr_FR: "Redémarrer un service"
pl_PL: "Uruchom ponownie usługę"
about.restart-tunnel:
en_US: "Reboot the tunnel server"
de_DE: "Den Tunnel-Server neu starten"
es_ES: "Reiniciar el servidor del túnel"
fr_FR: "Redémarrer le serveur tunnel"
pl_PL: "Uruchom ponownie serwer tunelu"
about.restore-packages-from-backup:
en_US: "Restore packages from backup"
de_DE: "Pakete aus Backup wiederherstellen"
@@ -5507,13 +5132,6 @@ about.run-service-action:
fr_FR: "Exécuter une action de service"
pl_PL: "Uruchom akcję usługi"
about.set-address-enabled-for-binding:
en_US: "Set a gateway address enabled for a binding"
de_DE: "Gateway-Adresse für eine Bindung aktivieren"
es_ES: "Establecer una dirección de gateway habilitada para un vínculo"
fr_FR: "Définir une adresse de passerelle activée pour une liaison"
pl_PL: "Ustaw adres bramy jako włączony dla powiązania"
about.set-country:
en_US: "Set the country"
de_DE: "Das Land festlegen"
@@ -5521,27 +5139,6 @@ about.set-country:
fr_FR: "Définir le pays"
pl_PL: "Ustaw kraj"
about.set-default-outbound-gateway:
en_US: "Set the default outbound gateway"
de_DE: "Standard-Ausgangs-Gateway festlegen"
es_ES: "Establecer la puerta de enlace de salida predeterminada"
fr_FR: "Définir la passerelle sortante par défaut"
pl_PL: "Ustaw domyślną bramę wychodzącą"
about.set-echoip-urls:
en_US: "Set the Echo IP service URLs"
de_DE: "Die Echo-IP-Dienst-URLs festlegen"
es_ES: "Establecer las URLs del servicio Echo IP"
fr_FR: "Définir les URLs du service Echo IP"
pl_PL: "Ustaw adresy URL usługi Echo IP"
about.set-hostname:
en_US: "Set the server hostname"
de_DE: "Den Server-Hostnamen festlegen"
es_ES: "Establecer el nombre de host del servidor"
fr_FR: "Définir le nom d'hôte du serveur"
pl_PL: "Ustaw nazwę hosta serwera"
about.set-gateway-enabled-for-binding:
en_US: "Set gateway enabled for binding"
de_DE: "Gateway für Bindung aktivieren"
@@ -5570,13 +5167,6 @@ about.set-listen-address-for-webserver:
fr_FR: "Définir l'adresse d'écoute du serveur web"
pl_PL: "Ustaw adres nasłuchiwania serwera internetowego"
about.set-outbound-gateway-package:
en_US: "Set the outbound gateway for a package"
de_DE: "Ausgangs-Gateway für ein Paket festlegen"
es_ES: "Establecer la puerta de enlace de salida para un paquete"
fr_FR: "Définir la passerelle sortante pour un package"
pl_PL: "Ustaw bramę wychodzącą dla pakietu"
about.set-registry-icon:
en_US: "Set the registry icon"
de_DE: "Das Registry-Symbol festlegen"
@@ -5675,13 +5265,6 @@ about.stop-service:
fr_FR: "Arrêter un service"
pl_PL: "Zatrzymaj usługę"
about.ssl-generate-certificate:
en_US: "Generate an SSL certificate from the system root CA"
de_DE: "SSL-Zertifikat von der System-Root-CA generieren"
es_ES: "Generar un certificado SSL desde la CA raíz del sistema"
fr_FR: "Générer un certificat SSL depuis l'autorité racine du système"
pl_PL: "Wygeneruj certyfikat SSL z głównego CA systemu"
about.teardown-rebuild-containers:
en_US: "Teardown and rebuild containers"
de_DE: "Container abbauen und neu erstellen"
@@ -5752,13 +5335,6 @@ about.update-firmware:
fr_FR: "Mettre à jour le firmware"
pl_PL: "Zaktualizuj oprogramowanie układowe"
about.update-port-forward-label:
en_US: "Update the label of a port forward"
de_DE: "Bezeichnung einer Portweiterleitung aktualisieren"
es_ES: "Actualizar la etiqueta de un reenvío de puerto"
fr_FR: "Mettre à jour le libellé d'une redirection de port"
pl_PL: "Zaktualizuj etykietę przekierowania portu"
about.view-edit-gateway-configs:
en_US: "View and edit gateway configurations"
de_DE: "Gateway-Konfigurationen anzeigen und bearbeiten"

View File

@@ -1,13 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-auth-get-pubkey 1 "get-pubkey "
.SH NAME
start\-cli\-auth\-get\-pubkey \- Get the public key from the server
.SH SYNOPSIS
\fBstart\-cli auth get\-pubkey\fR [\fB\-h\fR|\fB\-\-help\fR]
.SH DESCRIPTION
Get the public key from the server
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help

View File

@@ -1,13 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-auth-login 1 "login "
.SH NAME
start\-cli\-auth\-login \- Login to a new auth session
.SH SYNOPSIS
\fBstart\-cli auth login\fR [\fB\-h\fR|\fB\-\-help\fR]
.SH DESCRIPTION
Login to a new auth session
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help

View File

@@ -1,16 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-auth-logout 1 "logout "
.SH NAME
start\-cli\-auth\-logout \- Logout from current auth session
.SH SYNOPSIS
\fBstart\-cli auth logout\fR [\fB\-h\fR|\fB\-\-help\fR] <\fISESSION\fR>
.SH DESCRIPTION
Logout from current auth session
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.TP
<\fISESSION\fR>

View File

@@ -1,13 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-auth-reset-password 1 "reset-password "
.SH NAME
start\-cli\-auth\-reset\-password \- Reset the password
.SH SYNOPSIS
\fBstart\-cli auth reset\-password\fR [\fB\-h\fR|\fB\-\-help\fR]
.SH DESCRIPTION
Reset the password
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help

View File

@@ -1,16 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-auth-session-kill 1 "kill "
.SH NAME
start\-cli\-auth\-session\-kill \- Terminate auth sessions
.SH SYNOPSIS
\fBstart\-cli auth session kill\fR [\fB\-h\fR|\fB\-\-help\fR] [\fIIDS\fR]
.SH DESCRIPTION
Terminate auth sessions
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.TP
[\fIIDS\fR]
Session identifiers

View File

@@ -1,16 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-auth-session-list 1 "list "
.SH NAME
start\-cli\-auth\-session\-list \- Display all auth sessions
.SH SYNOPSIS
\fBstart\-cli auth session list\fR [\fB\-\-format\fR] [\fB\-h\fR|\fB\-\-help\fR]
.SH DESCRIPTION
Display all auth sessions
.SH OPTIONS
.TP
\fB\-\-format\fR
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help

View File

@@ -1,20 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-auth-session 1 "session "
.SH NAME
start\-cli\-auth\-session \- List or kill auth sessions
.SH SYNOPSIS
\fBstart\-cli auth session\fR [\fB\-h\fR|\fB\-\-help\fR] <\fIsubcommands\fR>
.SH DESCRIPTION
List or kill auth sessions
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.SH SUBCOMMANDS
.TP
start\-cli\-auth\-session\-kill(1)
Terminate auth sessions
.TP
start\-cli\-auth\-session\-list(1)
Display all auth sessions

View File

@@ -1,29 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-auth 1 "auth "
.SH NAME
start\-cli\-auth \- Commands related to Authentication i.e. login, logout
.SH SYNOPSIS
\fBstart\-cli auth\fR [\fB\-h\fR|\fB\-\-help\fR] <\fIsubcommands\fR>
.SH DESCRIPTION
Commands related to Authentication i.e. login, logout
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.SH SUBCOMMANDS
.TP
start\-cli\-auth\-get\-pubkey(1)
Get the public key from the server
.TP
start\-cli\-auth\-login(1)
Login to a new auth session
.TP
start\-cli\-auth\-logout(1)
Logout from current auth session
.TP
start\-cli\-auth\-reset\-password(1)
Reset the password
.TP
start\-cli\-auth\-session(1)
List or kill auth sessions

View File

@@ -1,25 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-backup-create 1 "create "
.SH NAME
start\-cli\-backup\-create \- Create a backup for all packages
.SH SYNOPSIS
\fBstart\-cli backup create\fR [\fB\-\-old\-password\fR] [\fB\-\-package\-ids\fR] [\fB\-h\fR|\fB\-\-help\fR] <\fITARGET_ID\fR> <\fIPASSWORD\fR>
.SH DESCRIPTION
Create a backup for all packages
.SH OPTIONS
.TP
\fB\-\-old\-password\fR \fI<OLD_PASSWORD>\fR
Previous backup password
.TP
\fB\-\-package\-ids\fR \fI<PACKAGE_IDS>\fR
Package IDs to include in backup
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.TP
<\fITARGET_ID\fR>
Backup target identifier
.TP
<\fIPASSWORD\fR>
Password for backup encryption

View File

@@ -1,25 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-backup-target-cifs-add 1 "add "
.SH NAME
start\-cli\-backup\-target\-cifs\-add \- Add a new backup target
.SH SYNOPSIS
\fBstart\-cli backup target cifs add\fR [\fB\-h\fR|\fB\-\-help\fR] <\fIHOSTNAME\fR> <\fIPATH\fR> <\fIUSERNAME\fR> [\fIPASSWORD\fR]
.SH DESCRIPTION
Add a new backup target
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.TP
<\fIHOSTNAME\fR>
CIFS server hostname
.TP
<\fIPATH\fR>
Path on the CIFS share
.TP
<\fIUSERNAME\fR>
CIFS authentication username
.TP
[\fIPASSWORD\fR]
CIFS authentication password

View File

@@ -1,16 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-backup-target-cifs-remove 1 "remove "
.SH NAME
start\-cli\-backup\-target\-cifs\-remove \- Remove existing backup target
.SH SYNOPSIS
\fBstart\-cli backup target cifs remove\fR [\fB\-h\fR|\fB\-\-help\fR] <\fIID\fR>
.SH DESCRIPTION
Remove existing backup target
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.TP
<\fIID\fR>
Backup target identifier

View File

@@ -1,28 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-backup-target-cifs-update 1 "update "
.SH NAME
start\-cli\-backup\-target\-cifs\-update \- Update an existing backup target
.SH SYNOPSIS
\fBstart\-cli backup target cifs update\fR [\fB\-h\fR|\fB\-\-help\fR] <\fIID\fR> <\fIHOSTNAME\fR> <\fIPATH\fR> <\fIUSERNAME\fR> [\fIPASSWORD\fR]
.SH DESCRIPTION
Update an existing backup target
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.TP
<\fIID\fR>
Backup target identifier
.TP
<\fIHOSTNAME\fR>
CIFS server hostname
.TP
<\fIPATH\fR>
Path on the CIFS share
.TP
<\fIUSERNAME\fR>
CIFS authentication username
.TP
[\fIPASSWORD\fR]
CIFS authentication password

View File

@@ -1,23 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-backup-target-cifs 1 "cifs "
.SH NAME
start\-cli\-backup\-target\-cifs \- Add, remove, or update a backup target
.SH SYNOPSIS
\fBstart\-cli backup target cifs\fR [\fB\-h\fR|\fB\-\-help\fR] <\fIsubcommands\fR>
.SH DESCRIPTION
Add, remove, or update a backup target
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.SH SUBCOMMANDS
.TP
start\-cli\-backup\-target\-cifs\-add(1)
Add a new backup target
.TP
start\-cli\-backup\-target\-cifs\-remove(1)
Remove existing backup target
.TP
start\-cli\-backup\-target\-cifs\-update(1)
Update an existing backup target

View File

@@ -1,25 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-backup-target-info 1 "info "
.SH NAME
start\-cli\-backup\-target\-info \- Display backup information for a package
.SH SYNOPSIS
\fBstart\-cli backup target info\fR [\fB\-\-format\fR] [\fB\-h\fR|\fB\-\-help\fR] <\fITARGET_ID\fR> <\fISERVER_ID\fR> <\fIPASSWORD\fR>
.SH DESCRIPTION
Display backup information for a package
.SH OPTIONS
.TP
\fB\-\-format\fR
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.TP
<\fITARGET_ID\fR>
Backup target identifier
.TP
<\fISERVER_ID\fR>
Unique server identifier
.TP
<\fIPASSWORD\fR>
Password for backup encryption

View File

@@ -1,16 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-backup-target-list 1 "list "
.SH NAME
start\-cli\-backup\-target\-list \- List existing backup targets
.SH SYNOPSIS
\fBstart\-cli backup target list\fR [\fB\-\-format\fR] [\fB\-h\fR|\fB\-\-help\fR]
.SH DESCRIPTION
List existing backup targets
.SH OPTIONS
.TP
\fB\-\-format\fR
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help

View File

@@ -1,25 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-backup-target-mount 1 "mount "
.SH NAME
start\-cli\-backup\-target\-mount \- Mount a backup target
.SH SYNOPSIS
\fBstart\-cli backup target mount\fR [\fB\-\-server\-id\fR] [\fB\-\-allow\-partial\fR] [\fB\-h\fR|\fB\-\-help\fR] <\fITARGET_ID\fR> <\fIPASSWORD\fR>
.SH DESCRIPTION
Mount a backup target
.SH OPTIONS
.TP
\fB\-\-server\-id\fR \fI<SERVER_ID>\fR
Unique server identifier
.TP
\fB\-\-allow\-partial\fR
Leave media mounted even if backupfs fails to mount
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.TP
<\fITARGET_ID\fR>
Backup target identifier
.TP
<\fIPASSWORD\fR>
Password for backup encryption

View File

@@ -1,16 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-backup-target-umount 1 "umount "
.SH NAME
start\-cli\-backup\-target\-umount \- Unmount a backup target
.SH SYNOPSIS
\fBstart\-cli backup target umount\fR [\fB\-h\fR|\fB\-\-help\fR] [\fITARGET_ID\fR]
.SH DESCRIPTION
Unmount a backup target
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.TP
[\fITARGET_ID\fR]
Backup target identifier

View File

@@ -1,29 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-backup-target 1 "target "
.SH NAME
start\-cli\-backup\-target \- Commands related to a backup target
.SH SYNOPSIS
\fBstart\-cli backup target\fR [\fB\-h\fR|\fB\-\-help\fR] <\fIsubcommands\fR>
.SH DESCRIPTION
Commands related to a backup target
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.SH SUBCOMMANDS
.TP
start\-cli\-backup\-target\-cifs(1)
Add, remove, or update a backup target
.TP
start\-cli\-backup\-target\-info(1)
Display backup information for a package
.TP
start\-cli\-backup\-target\-list(1)
List existing backup targets
.TP
start\-cli\-backup\-target\-mount(1)
Mount a backup target
.TP
start\-cli\-backup\-target\-umount(1)
Unmount a backup target

View File

@@ -1,20 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-backup 1 "backup "
.SH NAME
start\-cli\-backup \- Commands related to backup creation and backup targets
.SH SYNOPSIS
\fBstart\-cli backup\fR [\fB\-h\fR|\fB\-\-help\fR] <\fIsubcommands\fR>
.SH DESCRIPTION
Commands related to backup creation and backup targets
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.SH SUBCOMMANDS
.TP
start\-cli\-backup\-create(1)
Create a backup for all packages
.TP
start\-cli\-backup\-target(1)
Commands related to a backup target

View File

@@ -1,22 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-db-apply 1 "apply "
.SH NAME
start\-cli\-db\-apply \- Update a database record
.SH SYNOPSIS
\fBstart\-cli db apply\fR [\fB\-\-allow\-model\-mismatch\fR] [\fB\-h\fR|\fB\-\-help\fR] <\fIEXPR\fR> [\fIPATH\fR]
.SH DESCRIPTION
Update a database record
.SH OPTIONS
.TP
\fB\-\-allow\-model\-mismatch\fR
Allow database model mismatch
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.TP
<\fIEXPR\fR>
Database patch expression to apply
.TP
[\fIPATH\fR]
Path to the database

View File

@@ -1,22 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-db-dump 1 "dump "
.SH NAME
start\-cli\-db\-dump \- Filter and query the database
.SH SYNOPSIS
\fBstart\-cli db dump\fR [\fB\-p\fR|\fB\-\-include\-private\fR] [\fB\-\-format\fR] [\fB\-h\fR|\fB\-\-help\fR] [\fIPATH\fR]
.SH DESCRIPTION
Filter and query the database
.SH OPTIONS
.TP
\fB\-p\fR, \fB\-\-include\-private\fR
Include private data in output
.TP
\fB\-\-format\fR
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.TP
[\fIPATH\fR]
Path to the database

View File

@@ -1,22 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-db-put-ui 1 "ui "
.SH NAME
start\-cli\-db\-put\-ui \- Add path and value to db
.SH SYNOPSIS
\fBstart\-cli db put ui\fR [\fB\-\-format\fR] [\fB\-h\fR|\fB\-\-help\fR] <\fIPOINTER\fR> <\fIVALUE\fR>
.SH DESCRIPTION
Add path and value to db
.SH OPTIONS
.TP
\fB\-\-format\fR
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.TP
<\fIPOINTER\fR>
JSON pointer to specific value
.TP
<\fIVALUE\fR>
JSON value to set

View File

@@ -1,17 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-db-put 1 "put "
.SH NAME
start\-cli\-db\-put \- Command for adding UI record to db
.SH SYNOPSIS
\fBstart\-cli db put\fR [\fB\-h\fR|\fB\-\-help\fR] <\fIsubcommands\fR>
.SH DESCRIPTION
Command for adding UI record to db
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.SH SUBCOMMANDS
.TP
start\-cli\-db\-put\-ui(1)
Add path and value to db

View File

@@ -1,23 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-db 1 "db "
.SH NAME
start\-cli\-db \- Commands to interact with the db i.e. dump, put, apply
.SH SYNOPSIS
\fBstart\-cli db\fR [\fB\-h\fR|\fB\-\-help\fR] <\fIsubcommands\fR>
.SH DESCRIPTION
Commands to interact with the db i.e. dump, put, apply
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.SH SUBCOMMANDS
.TP
start\-cli\-db\-apply(1)
Update a database record
.TP
start\-cli\-db\-dump(1)
Filter and query the database
.TP
start\-cli\-db\-put(1)
Command for adding UI record to db

View File

@@ -1,13 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-diagnostic-disk-forget 1 "forget "
.SH NAME
start\-cli\-diagnostic\-disk\-forget \- Remove disk filesystem
.SH SYNOPSIS
\fBstart\-cli diagnostic disk forget\fR [\fB\-h\fR|\fB\-\-help\fR]
.SH DESCRIPTION
Remove disk filesystem
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help

View File

@@ -1,13 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-diagnostic-disk-repair 1 "repair "
.SH NAME
start\-cli\-diagnostic\-disk\-repair \- Repair disk corruption
.SH SYNOPSIS
\fBstart\-cli diagnostic disk repair\fR [\fB\-h\fR|\fB\-\-help\fR]
.SH DESCRIPTION
Repair disk corruption
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help

View File

@@ -1,20 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-diagnostic-disk 1 "disk "
.SH NAME
start\-cli\-diagnostic\-disk \- Command to remove disk from filesystem
.SH SYNOPSIS
\fBstart\-cli diagnostic disk\fR [\fB\-h\fR|\fB\-\-help\fR] <\fIsubcommands\fR>
.SH DESCRIPTION
Command to remove disk from filesystem
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help
.SH SUBCOMMANDS
.TP
start\-cli\-diagnostic\-disk\-forget(1)
Remove disk filesystem
.TP
start\-cli\-diagnostic\-disk\-repair(1)
Repair disk corruption

View File

@@ -1,13 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-diagnostic-error 1 "error "
.SH NAME
start\-cli\-diagnostic\-error \- Display diagnostic error
.SH SYNOPSIS
\fBstart\-cli diagnostic error\fR [\fB\-h\fR|\fB\-\-help\fR]
.SH DESCRIPTION
Display diagnostic error
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help

View File

@@ -1,28 +0,0 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH start-cli-diagnostic-kernel-logs 1 "kernel-logs "
.SH NAME
start\-cli\-diagnostic\-kernel\-logs \- Display kernel logs
.SH SYNOPSIS
\fBstart\-cli diagnostic kernel\-logs\fR [\fB\-l\fR|\fB\-\-limit\fR] [\fB\-c\fR|\fB\-\-cursor\fR] [\fB\-b\fR|\fB\-\-boot\fR] [\fB\-B\fR|\fB\-\-before\fR] [\fB\-f\fR|\fB\-\-follow\fR] [\fB\-h\fR|\fB\-\-help\fR]
.SH DESCRIPTION
Display kernel logs
.SH OPTIONS
.TP
\fB\-l\fR, \fB\-\-limit\fR \fI<LIMIT>\fR
Maximum number of log entries
.TP
\fB\-c\fR, \fB\-\-cursor\fR \fI<CURSOR>\fR
Start from this cursor position
.TP
\fB\-b\fR, \fB\-\-boot\fR \fI<BOOT>\fR
Filter logs by boot ID
.TP
\fB\-B\fR, \fB\-\-before\fR
Show logs before the cursor position
.TP
\fB\-f\fR, \fB\-\-follow\fR
Follow log output in real\-time
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help

Some files were not shown because too many files have changed in this diff.