Compare commits

...

19 Commits

Author SHA1 Message Date
Matt Hill
9519684185 sdk comments 2026-02-05 08:10:53 -07:00
Matt Hill
58e0b166cb move comment to safe place 2026-02-02 21:09:19 -07:00
Matt Hill
2a678bb017 fix warning and skip raspberrypi builds for now 2026-02-02 20:16:41 -07:00
Matt Hill
5664456b77 fix for buildjet 2026-02-02 18:51:11 -07:00
Matt Hill
3685b7e57e fix workflows 2026-02-02 18:37:13 -07:00
Matt Hill
989d5f73b1 fix --arch flag to fall back to emulation when native image unavailab… (#3108)
* fix --arch flag to fall back to emulation when native image unavailable, always infer hardware requirement for arch

* better handling of arch filter

* dont cancel in-progress commit workflows and abstract common setup

* cli improvements

fix group handling

* fix cli publish

* alpha.19

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
2026-02-03 00:56:59 +00:00
Matt Hill
4f84073cb5 actually eliminate duplicate workflows 2026-01-30 11:28:18 -07:00
Matt Hill
c190295c34 cleaner, also eliminate duplicate workflows 2026-01-30 11:23:40 -07:00
Matt Hill
60875644a1 better i18n checks, better action disabled, fix cert download for ios 2026-01-30 10:59:27 -07:00
Matt Hill
113b09ad01 fix cert download issue in index html 2026-01-29 16:57:12 -07:00
Alex Inkin
2605d0e671 chore: make column shorter (#3107) 2026-01-29 09:54:42 -07:00
Aiden McClelland
d232b91d31 update ota script, rbind for dependency mounts, cli list-ingredients fix, and formatting 2026-01-28 16:09:37 -07:00
Aiden McClelland
c65db31fd9 Feature/consolidate setup (#3092)
* start consolidating

* add start-cli flash-os

* combine install and setup and refactor all

* use http

* undo mock

* fix translation

* translations

* use dialogservice wrapper

* better ST messaging on setup

* only warn on update if breakages (#3097)

* finish setup wizard and ui language-keyboard feature

* fix typo

* wip: localization

* remove start-tunnel readme

* switch to posix strings for language internal

* revert mock

* translate backend strings

* fix missing about text

* help text for args

* feat: add "Add new gateway" option (#3098)

* feat: add "Add new gateway" option

* Update web/projects/ui/src/app/routes/portal/components/form/controls/select.component.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* add translation

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Matt Hill <mattnine@protonmail.com>

* fix dns selection

* keyboard keymap also

* ability to shutdown after install

* revert mock

* working setup flow + manifest localization

* (mostly) redundant localization on frontend

* version bump

* omit live medium from disk list and better space management

* ignore missing package archive on 035 migration

* fix device migration

* add i18n helper to sdk

* fix install over 0.3.5.1

* fix grub config

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
Co-authored-by: Alex Inkin <alexander@inkin.ru>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-27 14:44:41 -08:00
Aiden McClelland
99871805bd hardware acceleration and support for NVIDIA cards on nonfree images (#3089)
* add nvidia packages

* add nvidia deps to nonfree

* gpu_acceleration flag & nvidia hacking

* fix gpu_config & /tmp/lxc.log

* implement hardware acceleration more dynamically

* refactor OpenUI

* use mknod

* registry updates for multi-hardware-requirements

* pluralize

* handle new registry types

* remove log

* migrations and driver fixes

* wip

* misc patches

* handle nvidia-container differently

* chore: comments (#3093)

* chore: comments

* revert some sizing

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* Revert "handle nvidia-container differently"

This reverts commit d708ae53df.

* fix debian containers

* cleanup

* feat: add empty array placeholder in forms (#3095)

* fixes from testing, client side device filtering for better fingerprinting resistance

* fix mac builds

---------

Co-authored-by: Sam Sartor <me@samsartor.com>
Co-authored-by: Matt Hill <mattnine@protonmail.com>
Co-authored-by: Alex Inkin <alexander@inkin.ru>
2026-01-15 11:42:17 -08:00
Aiden McClelland
e8ef39adad misc fixes for alpha.16 (#3091)
* port misc fixes from feature/nvidia

* switch back to official tor proxy on 9050

* refactor OpenUI

* fix typo

* fixes, plus getServiceManifest

* fix EffectCreator, bump to beta.47

* fixes
2026-01-10 12:58:17 -07:00
Remco Ros
466b9217b5 fix: allow (multiple) equal signs in env filehelper values (#3090) 2026-01-06 18:32:03 +00:00
Matt Hill
c9a7f519b9 Misc (#3087)
* help ios downlaod .crt and add begin add masked for addresses

* only require and show CA for public domain if addSsl

* fix type and revert i18n const

* feat: add address masking and adjust design (#3088)

* feat: add address masking and adjust design

* update lockfile

* chore: move eye button to actions

* chore: refresh notifications and handle action error

* static width for health check name

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* hide certificate authorities tab

* alpha.17

* add waiting health check status

* remove "on" from waiting message

* reject on abort in `.watch`

* id migration: nostr -> nostr-rs-relay

* health check waiting state

* use interface type for launch button

* better wording for masked

* cleaner

* sdk improvements

* fix type error

* fix notification badge issue

---------

Co-authored-by: Alex Inkin <alexander@inkin.ru>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
2025-12-31 11:30:57 -07:00
Aiden McClelland
96ae532879 Refactor/project structure (#3085)
* refactor project structure

* environment-based default registry

* fix tests

* update build container

* use docker platform for iso build emulation

* simplify compat

* Fix docker platform spec in run-compat.sh

* handle riscv compat

* fix bug with dep error exists attr

* undo removal of sorting

* use qemu for iso stage

---------

Co-authored-by: Mariusz Kogen <k0gen@pm.me>
Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-12-22 13:39:38 -07:00
Alex Inkin
eda08d5b0f chore: update taiga (#3086)
* chore: update taiga

* chore: fix UI menu
2025-12-22 13:33:02 -07:00
657 changed files with 23234 additions and 11922 deletions

.github/actions/setup-build/action.yml (new file, 81 lines)

@@ -0,0 +1,81 @@
name: Setup Build Environment
description: Common build environment setup steps
inputs:
  nodejs-version:
    description: Node.js version
    required: true
  setup-python:
    description: Set up Python
    required: false
    default: "false"
  setup-docker:
    description: Set up Docker QEMU and Buildx
    required: false
    default: "true"
  setup-sccache:
    description: Configure sccache for GitHub Actions
    required: false
    default: "true"
  free-space:
    description: Remove unnecessary packages to free disk space
    required: false
    default: "true"
runs:
  using: composite
  steps:
    - name: Free disk space
      if: inputs.free-space == 'true'
      shell: bash
      run: |
        sudo apt-get remove --purge -y azure-cli || true
        sudo apt-get remove --purge -y firefox || true
        sudo apt-get remove --purge -y ghc-* || true
        sudo apt-get remove --purge -y google-cloud-sdk || true
        sudo apt-get remove --purge -y google-chrome-stable || true
        sudo apt-get remove --purge -y powershell || true
        sudo apt-get remove --purge -y php* || true
        sudo apt-get remove --purge -y ruby* || true
        sudo apt-get remove --purge -y mono-* || true
        sudo apt-get autoremove -y
        sudo apt-get clean
        sudo rm -rf /usr/lib/jvm
        sudo rm -rf /usr/local/.ghcup
        sudo rm -rf /usr/local/lib/android
        sudo rm -rf /usr/share/dotnet
        sudo rm -rf /usr/share/swift
        sudo rm -rf "$AGENT_TOOLSDIRECTORY"
    # BuildJet runners lack /opt/hostedtoolcache, which setup-python and setup-qemu expect
    - name: Ensure hostedtoolcache exists
      shell: bash
      run: sudo mkdir -p /opt/hostedtoolcache && sudo chown $USER:$USER /opt/hostedtoolcache
    - name: Set up Python
      if: inputs.setup-python == 'true'
      uses: actions/setup-python@v5
      with:
        python-version: "3.x"
    - uses: actions/setup-node@v4
      with:
        node-version: ${{ inputs.nodejs-version }}
        cache: npm
        cache-dependency-path: "**/package-lock.json"
    - name: Set up Docker QEMU
      if: inputs.setup-docker == 'true'
      uses: docker/setup-qemu-action@v3
    - name: Set up Docker Buildx
      if: inputs.setup-docker == 'true'
      uses: docker/setup-buildx-action@v3
    - name: Configure sccache
      if: inputs.setup-sccache == 'true'
      uses: actions/github-script@v7
      with:
        script: |
          core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
          core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
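The workflow diffs below replace their inlined setup steps with a single call to this composite action. A minimal sketch of a caller, assembled from the usages shown in those diffs (the job name is illustrative):

```yaml
# Sketch of a workflow job consuming the composite action above.
jobs:
  compile:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive
      # One step replaces the old free-space/node/docker/sccache boilerplate.
      - uses: ./.github/actions/setup-build
        with:
          nodejs-version: "24.11.0"
          setup-python: "true" # only jobs that need Python opt in
```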


@@ -37,6 +37,10 @@ on:
       - master
       - next/*
+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
+  cancel-in-progress: true
 env:
   NODEJS_VERSION: "24.11.0"
   ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
@@ -44,6 +48,7 @@ env:
 jobs:
   compile:
     name: Build Debian Package
+    if: github.event.pull_request.draft != true
     strategy:
       fail-fast: true
       matrix:
@@ -60,50 +65,15 @@ jobs:
         }}
     runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
     steps:
-      - name: Cleaning up unnecessary files
-        run: |
-          sudo apt-get remove --purge -y mono-* \
-            ghc* cabal-install* \
-            dotnet* \
-            php* \
-            ruby* \
-            mysql-* \
-            postgresql-* \
-            azure-cli \
-            powershell \
-            google-cloud-sdk \
-            msodbcsql* mssql-tools* \
-            imagemagick* \
-            libgl1-mesa-dri \
-            google-chrome-stable \
-            firefox
-          sudo apt-get autoremove -y
-          sudo apt-get clean
-      - run: |
-          sudo mount -t tmpfs tmpfs .
+      - name: Mount tmpfs
         if: ${{ github.event.inputs.runner == 'fast' }}
+        run: sudo mount -t tmpfs tmpfs .
       - uses: actions/checkout@v4
         with:
           submodules: recursive
-      - uses: actions/setup-node@v4
+      - uses: ./.github/actions/setup-build
         with:
-          node-version: ${{ env.NODEJS_VERSION }}
+          nodejs-version: ${{ env.NODEJS_VERSION }}
-      - name: Set up docker QEMU
-        uses: docker/setup-qemu-action@v3
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-      - name: Configure sccache
-        uses: actions/github-script@v7
-        with:
-          script: |
-            core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
-            core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
       - name: Make
         run: TARGET=${{ matrix.triple }} make cli


@@ -1,4 +1,4 @@
-name: Start-Registry
+name: start-registry
 on:
   workflow_call:
@@ -35,6 +35,10 @@ on:
       - master
       - next/*
+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
+  cancel-in-progress: true
 env:
   NODEJS_VERSION: "24.11.0"
   ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
@@ -42,6 +46,7 @@ env:
 jobs:
   compile:
     name: Build Debian Package
+    if: github.event.pull_request.draft != true
     strategy:
       fail-fast: true
       matrix:
@@ -56,50 +61,15 @@ jobs:
         }}
     runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
     steps:
-      - name: Cleaning up unnecessary files
-        run: |
-          sudo apt-get remove --purge -y mono-* \
-            ghc* cabal-install* \
-            dotnet* \
-            php* \
-            ruby* \
-            mysql-* \
-            postgresql-* \
-            azure-cli \
-            powershell \
-            google-cloud-sdk \
-            msodbcsql* mssql-tools* \
-            imagemagick* \
-            libgl1-mesa-dri \
-            google-chrome-stable \
-            firefox
-          sudo apt-get autoremove -y
-          sudo apt-get clean
-      - run: |
-          sudo mount -t tmpfs tmpfs .
+      - name: Mount tmpfs
         if: ${{ github.event.inputs.runner == 'fast' }}
+        run: sudo mount -t tmpfs tmpfs .
       - uses: actions/checkout@v4
         with:
           submodules: recursive
-      - uses: actions/setup-node@v4
+      - uses: ./.github/actions/setup-build
         with:
-          node-version: ${{ env.NODEJS_VERSION }}
+          nodejs-version: ${{ env.NODEJS_VERSION }}
-      - name: Set up docker QEMU
-        uses: docker/setup-qemu-action@v3
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-      - name: Configure sccache
-        uses: actions/github-script@v7
-        with:
-          script: |
-            core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
-            core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
       - name: Make
         run: make registry-deb


@@ -1,4 +1,4 @@
-name: Start-Tunnel
+name: start-tunnel
 on:
   workflow_call:
@@ -35,6 +35,10 @@ on:
       - master
       - next/*
+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
+  cancel-in-progress: true
 env:
   NODEJS_VERSION: "24.11.0"
   ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
@@ -42,6 +46,7 @@ env:
 jobs:
   compile:
     name: Build Debian Package
+    if: github.event.pull_request.draft != true
     strategy:
       fail-fast: true
       matrix:
@@ -56,50 +61,15 @@ jobs:
         }}
     runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
     steps:
-      - name: Cleaning up unnecessary files
-        run: |
-          sudo apt-get remove --purge -y mono-* \
-            ghc* cabal-install* \
-            dotnet* \
-            php* \
-            ruby* \
-            mysql-* \
-            postgresql-* \
-            azure-cli \
-            powershell \
-            google-cloud-sdk \
-            msodbcsql* mssql-tools* \
-            imagemagick* \
-            libgl1-mesa-dri \
-            google-chrome-stable \
-            firefox
-          sudo apt-get autoremove -y
-          sudo apt-get clean
-      - run: |
-          sudo mount -t tmpfs tmpfs .
+      - name: Mount tmpfs
         if: ${{ github.event.inputs.runner == 'fast' }}
+        run: sudo mount -t tmpfs tmpfs .
       - uses: actions/checkout@v4
         with:
           submodules: recursive
-      - uses: actions/setup-node@v4
+      - uses: ./.github/actions/setup-build
         with:
-          node-version: ${{ env.NODEJS_VERSION }}
+          nodejs-version: ${{ env.NODEJS_VERSION }}
-      - name: Set up docker QEMU
-        uses: docker/setup-qemu-action@v3
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-      - name: Configure sccache
-        uses: actions/github-script@v7
-        with:
-          script: |
-            core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
-            core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
       - name: Make
         run: make tunnel-deb


@@ -27,7 +27,7 @@ on:
           - x86_64-nonfree
           - aarch64
           - aarch64-nonfree
-          - raspberrypi
+          # - raspberrypi
           - riscv64
       deploy:
         type: choice
@@ -45,6 +45,10 @@ on:
       - master
       - next/*
+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
+  cancel-in-progress: true
 env:
   NODEJS_VERSION: "24.11.0"
   ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
@@ -52,6 +56,7 @@ env:
 jobs:
   compile:
     name: Compile Base Binaries
+    if: github.event.pull_request.draft != true
     strategy:
       fail-fast: true
       matrix:
@@ -86,57 +91,16 @@ jobs:
         )[github.event.inputs.runner == 'fast']
       }}
     steps:
-      - name: Cleaning up unnecessary files
-        run: |
-          sudo apt-get remove --purge -y azure-cli || true
-          sudo apt-get remove --purge -y firefox || true
-          sudo apt-get remove --purge -y ghc-* || true
-          sudo apt-get remove --purge -y google-cloud-sdk || true
-          sudo apt-get remove --purge -y google-chrome-stable || true
-          sudo apt-get remove --purge -y powershell || true
-          sudo apt-get remove --purge -y php* || true
-          sudo apt-get remove --purge -y ruby* || true
-          sudo apt-get remove --purge -y mono-* || true
-          sudo apt-get autoremove -y
-          sudo apt-get clean
-          sudo rm -rf /usr/lib/jvm              # All JDKs
-          sudo rm -rf /usr/local/.ghcup         # Haskell toolchain
-          sudo rm -rf /usr/local/lib/android    # Android SDK/NDK, emulator
-          sudo rm -rf /usr/share/dotnet         # .NET SDKs
-          sudo rm -rf /usr/share/swift          # Swift toolchain (if present)
-          sudo rm -rf "$AGENT_TOOLSDIRECTORY"   # Pre-cached tool cache (Go, Node, etc.)
-      - run: |
-          sudo mount -t tmpfs tmpfs .
+      - name: Mount tmpfs
         if: ${{ github.event.inputs.runner == 'fast' }}
+        run: sudo mount -t tmpfs tmpfs .
       - uses: actions/checkout@v4
         with:
           submodules: recursive
-      - name: Set up Python
-        uses: actions/setup-python@v5
+      - uses: ./.github/actions/setup-build
         with:
-          python-version: "3.x"
-      - uses: actions/setup-node@v4
-        with:
-          node-version: ${{ env.NODEJS_VERSION }}
+          nodejs-version: ${{ env.NODEJS_VERSION }}
+          setup-python: "true"
-      - name: Set up docker QEMU
-        uses: docker/setup-qemu-action@v3
-      - name: Set up system dependencies
-        run: sudo apt-get update && sudo apt-get install -y qemu-user-static systemd-container squashfuse
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-      - name: Configure sccache
-        uses: actions/github-script@v7
-        with:
-          script: |
-            core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
-            core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
       - name: Make
         run: make ARCH=${{ matrix.arch }} compiled-${{ matrix.arch }}.tar
@@ -154,13 +118,14 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
+        # TODO: re-add "raspberrypi" to the platform list below
        platform: >-
          ${{
            fromJson(
              format(
                '[
                  ["{0}"],
-                  ["x86_64", "x86_64-nonfree", "aarch64", "aarch64-nonfree", "riscv64", "raspberrypi"]
+                  ["x86_64", "x86_64-nonfree", "aarch64", "aarch64-nonfree", "riscv64"]
                ]',
                github.event.inputs.platform || 'ALL'
              )
@@ -224,32 +189,17 @@ jobs:
           sudo rm -rf "$AGENT_TOOLSDIRECTORY"   # Pre-cached tool cache (Go, Node, etc.)
         if: ${{ github.event.inputs.runner != 'fast' }}
+      # BuildJet runners lack /opt/hostedtoolcache, which setup-qemu expects
+      - name: Ensure hostedtoolcache exists
+        run: sudo mkdir -p /opt/hostedtoolcache && sudo chown $USER:$USER /opt/hostedtoolcache
+      - name: Set up docker QEMU
+        uses: docker/setup-qemu-action@v3
       - uses: actions/checkout@v4
         with:
           submodules: recursive
-      - name: Set up Python
-        uses: actions/setup-python@v5
-        with:
-          python-version: "3.x"
-      - name: Install dependencies
-        run: |
-          sudo apt-get update
-          sudo apt-get install -y qemu-user-static
-          wget https://deb.debian.org/debian/pool/main/d/debspawn/debspawn_0.6.2-1_all.deb
-          sha256sum ./debspawn_0.6.2-1_all.deb | grep 37ef27458cb1e35e8bce4d4f639b06b4b3866fc0b9191ec6b9bd157afd06a817
-          sudo apt-get install -y ./debspawn_0.6.2-1_all.deb
-      - name: Configure debspawn
-        run: |
-          sudo mkdir -p /etc/debspawn/
-          echo "AllowUnsafePermissions=true" | sudo tee /etc/debspawn/global.toml
-          sudo mkdir -p /var/tmp/debspawn
-      - run: sudo mount -t tmpfs tmpfs /var/tmp/debspawn
-        if: ${{ github.event.inputs.runner == 'fast' && (matrix.platform == 'x86_64' || matrix.platform == 'x86_64-nonfree') }}
       - name: Download compiled artifacts
         uses: actions/download-artifact@v4
         with:
@@ -262,22 +212,19 @@ jobs:
       run: |
         mkdir -p web/node_modules
         mkdir -p web/dist/raw
-        mkdir -p core/startos/bindings
+        mkdir -p core/bindings
         mkdir -p sdk/base/lib/osBindings
         mkdir -p container-runtime/node_modules
         mkdir -p container-runtime/dist
         mkdir -p container-runtime/dist/node_modules
-        mkdir -p core/startos/bindings
         mkdir -p sdk/dist
         mkdir -p sdk/baseDist
         mkdir -p patch-db/client/node_modules
         mkdir -p patch-db/client/dist
         mkdir -p web/.angular
         mkdir -p web/dist/raw/ui
-        mkdir -p web/dist/raw/install-wizard
         mkdir -p web/dist/raw/setup-wizard
         mkdir -p web/dist/static/ui
-        mkdir -p web/dist/static/install-wizard
         mkdir -p web/dist/static/setup-wizard
         PLATFORM=${{ matrix.platform }} make -t compiled-${{ env.ARCH }}.tar
@@ -307,40 +254,3 @@ jobs:
           name: ${{ matrix.platform }}.img
           path: results/*.img
         if: ${{ matrix.platform == 'raspberrypi' }}
-      - name: Upload OTA to registry
-        run: >-
-          PLATFORM=${{ matrix.platform }} make upload-ota TARGET="${{
-            fromJson('{
-              "alpha": "alpha-registry-x.start9.com",
-              "beta": "beta-registry.start9.com",
-            }')[github.event.inputs.deploy]
-          }}" KEY="${{
-            fromJson(
-              format('{{
-                "alpha": "{0}",
-                "beta": "{1}",
-              }}', secrets.ALPHA_INDEX_KEY, secrets.BETA_INDEX_KEY)
-            )[github.event.inputs.deploy]
-          }}"
-        if: ${{ github.event.inputs.deploy != '' && github.event.inputs.deploy != 'NONE' }}
-  index:
-    if: ${{ github.event.inputs.deploy != '' && github.event.inputs.deploy != 'NONE' }}
-    needs: [image]
-    runs-on: ubuntu-latest
-    steps:
-      - run: >-
-          curl "https://${{
-            fromJson('{
-              "alpha": "alpha-registry-x.start9.com",
-              "beta": "beta-registry.start9.com",
-            }')[github.event.inputs.deploy]
-          }}:8443/resync.cgi?key=${{
-            fromJson(
-              format('{{
-                "alpha": "{0}",
-                "beta": "{1}",
-              }}', secrets.ALPHA_INDEX_KEY, secrets.BETA_INDEX_KEY)
-            )[github.event.inputs.deploy]
-          }}"
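These workflows repeatedly use the GitHub expression idiom `fromJson('["a", "b"]')[cond]` (for runner selection, registry targets, and deploy keys): a boolean index coerces to 0 or 1, picking one of two values. A small Python sketch of the same mechanism, with illustrative values taken from the diffs above:

```python
import json

def select(pair_json: str, cond: bool) -> str:
    """Emulate GitHub Actions' fromJson('[a, b]')[cond] selection trick:
    a boolean index coerces to 0 (false) or 1 (true)."""
    return json.loads(pair_json)[int(cond)]

# Mirrors: fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[inputs.runner == 'fast']
print(select('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]', False))  # ubuntu-latest
print(select('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]', True))   # buildjet-32vcpu-ubuntu-2204
```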


@@ -10,6 +10,10 @@ on:
       - master
       - next/*
+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
+  cancel-in-progress: true
 env:
   NODEJS_VERSION: "24.11.0"
   ENVIRONMENT: dev-unstable
@@ -17,15 +21,18 @@ env:
 jobs:
   test:
     name: Run Automated Tests
+    if: github.event.pull_request.draft != true
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
         with:
           submodules: recursive
-      - uses: actions/setup-node@v4
+      - uses: ./.github/actions/setup-build
         with:
-          node-version: ${{ env.NODEJS_VERSION }}
+          nodejs-version: ${{ env.NODEJS_VERSION }}
+          free-space: "false"
+          setup-docker: "false"
+          setup-sccache: "false"
       - name: Build And Run Tests
         run: make test

.gitignore (24 lines changed)

@@ -1,28 +1,22 @@
 .DS_Store
 .idea
-/*.img
-/*.img.gz
-/*.img.xz
-/*-raspios-bullseye-arm64-lite.img
-/*-raspios-bullseye-arm64-lite.zip
+*.img
+*.img.gz
+*.img.xz
+*.zip
 /product_key.txt
 /*_product_key.txt
 .vscode/settings.json
 deploy_web.sh
-deploy_web.sh
 secrets.db
 .vscode/
-/cargo-deps/**/*
-/PLATFORM.txt
-/ENVIRONMENT.txt
-/GIT_HASH.txt
-/VERSION.txt
-/*.deb
+/build/env/*.txt
+*.deb
 /target
-/*.squashfs
+*.squashfs
 /results
 /dpkg-workdir
 /compiled.tar
 /compiled-*.tar
-/firmware
-/tmp
+/build/lib/firmware
+tmp

Makefile (169 lines changed)

@@ -1,45 +1,44 @@
ls-files = $(shell git ls-files --cached --others --exclude-standard $1) ls-files = $(shell git ls-files --cached --others --exclude-standard $1)
PROFILE = release PROFILE = release
PLATFORM_FILE := $(shell ./check-platform.sh) PLATFORM_FILE := $(shell ./build/env/check-platform.sh)
ENVIRONMENT_FILE := $(shell ./check-environment.sh) ENVIRONMENT_FILE := $(shell ./build/env/check-environment.sh)
GIT_HASH_FILE := $(shell ./check-git-hash.sh) GIT_HASH_FILE := $(shell ./build/env/check-git-hash.sh)
VERSION_FILE := $(shell ./check-version.sh) VERSION_FILE := $(shell ./build/env/check-version.sh)
BASENAME := $(shell PROJECT=startos ./basename.sh) BASENAME := $(shell PROJECT=startos ./build/env/basename.sh)
PLATFORM := $(shell if [ -f ./PLATFORM.txt ]; then cat ./PLATFORM.txt; else echo unknown; fi) PLATFORM := $(shell if [ -f $(PLATFORM_FILE) ]; then cat $(PLATFORM_FILE); else echo unknown; fi)
ARCH := $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo aarch64; else echo $(PLATFORM) | sed 's/-nonfree$$//g'; fi) ARCH := $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo aarch64; else echo $(PLATFORM) | sed 's/-nonfree$$//g'; fi)
RUST_ARCH := $(shell if [ "$(ARCH)" = "riscv64" ]; then echo riscv64gc; else echo $(ARCH); fi) RUST_ARCH := $(shell if [ "$(ARCH)" = "riscv64" ]; then echo riscv64gc; else echo $(ARCH); fi)
REGISTRY_BASENAME := $(shell PROJECT=start-registry PLATFORM=$(ARCH) ./basename.sh) REGISTRY_BASENAME := $(shell PROJECT=start-registry PLATFORM=$(ARCH) ./build/env/basename.sh)
TUNNEL_BASENAME := $(shell PROJECT=start-tunnel PLATFORM=$(ARCH) ./basename.sh) TUNNEL_BASENAME := $(shell PROJECT=start-tunnel PLATFORM=$(ARCH) ./build/env/basename.sh)
IMAGE_TYPE=$(shell if [ "$(PLATFORM)" = raspberrypi ]; then echo img; else echo iso; fi) IMAGE_TYPE=$(shell if [ "$(PLATFORM)" = raspberrypi ]; then echo img; else echo iso; fi)
WEB_UIS := web/dist/raw/ui/index.html web/dist/raw/setup-wizard/index.html web/dist/raw/install-wizard/index.html WEB_UIS := web/dist/raw/ui/index.html web/dist/raw/setup-wizard/index.html
COMPRESSED_WEB_UIS := web/dist/static/ui/index.html web/dist/static/setup-wizard/index.html web/dist/static/install-wizard/index.html COMPRESSED_WEB_UIS := web/dist/static/ui/index.html web/dist/static/setup-wizard/index.html
FIRMWARE_ROMS := ./firmware/$(PLATFORM) $(shell jq --raw-output '.[] | select(.platform[] | contains("$(PLATFORM)")) | "./firmware/$(PLATFORM)/" + .id + ".rom.gz"' build/lib/firmware.json) FIRMWARE_ROMS := build/lib/firmware/$(PLATFORM) $(shell jq --raw-output '.[] | select(.platform[] | contains("$(PLATFORM)")) | "./build/lib/firmware/$(PLATFORM)/" + .id + ".rom.gz"' build/lib/firmware.json)
BUILD_SRC := $(call ls-files, build) build/lib/depends build/lib/conflicts $(FIRMWARE_ROMS) BUILD_SRC := $(call ls-files, build/lib) build/lib/depends build/lib/conflicts $(FIRMWARE_ROMS)
IMAGE_RECIPE_SRC := $(call ls-files, image-recipe/) IMAGE_RECIPE_SRC := $(call ls-files, build/image-recipe/)
STARTD_SRC := core/startos/startd.service $(BUILD_SRC) STARTD_SRC := core/startd.service $(BUILD_SRC)
CORE_SRC := $(call ls-files, core) $(shell git ls-files --recurse-submodules patch-db) $(GIT_HASH_FILE) CORE_SRC := $(call ls-files, core) $(shell git ls-files --recurse-submodules patch-db) $(GIT_HASH_FILE)
WEB_SHARED_SRC := $(call ls-files, web/projects/shared) $(call ls-files, web/projects/marketplace) $(shell ls -p web/ | grep -v / | sed 's/^/web\//g') web/node_modules/.package-lock.json web/config.json patch-db/client/dist/index.js sdk/baseDist/package.json web/patchdb-ui-seed.json sdk/dist/package.json WEB_SHARED_SRC := $(call ls-files, web/projects/shared) $(call ls-files, web/projects/marketplace) $(shell ls -p web/ | grep -v / | sed 's/^/web\//g') web/node_modules/.package-lock.json web/config.json patch-db/client/dist/index.js sdk/baseDist/package.json web/patchdb-ui-seed.json sdk/dist/package.json
WEB_UI_SRC := $(call ls-files, web/projects/ui) WEB_UI_SRC := $(call ls-files, web/projects/ui)
WEB_SETUP_WIZARD_SRC := $(call ls-files, web/projects/setup-wizard) WEB_SETUP_WIZARD_SRC := $(call ls-files, web/projects/setup-wizard)
WEB_INSTALL_WIZARD_SRC := $(call ls-files, web/projects/install-wizard)
WEB_START_TUNNEL_SRC := $(call ls-files, web/projects/start-tunnel) WEB_START_TUNNEL_SRC := $(call ls-files, web/projects/start-tunnel)
PATCH_DB_CLIENT_SRC := $(shell git ls-files --recurse-submodules patch-db/client) PATCH_DB_CLIENT_SRC := $(shell git ls-files --recurse-submodules patch-db/client)
GZIP_BIN := $(shell which pigz || which gzip) GZIP_BIN := $(shell which pigz || which gzip)
TAR_BIN := $(shell which gtar || which tar) TAR_BIN := $(shell which gtar || which tar)
COMPILED_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox core/target/$(RUST_ARCH)-unknown-linux-musl/release/start-container container-runtime/rootfs.$(ARCH).squashfs COMPILED_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox core/target/$(RUST_ARCH)-unknown-linux-musl/release/start-container container-runtime/rootfs.$(ARCH).squashfs
STARTOS_TARGETS := $(STARTD_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE) $(COMPILED_TARGETS) cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs $(PLATFORM_FILE) \ STARTOS_TARGETS := $(STARTD_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE) $(COMPILED_TARGETS) target/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs $(PLATFORM_FILE) \
$(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then \ $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then \
echo cargo-deps/aarch64-unknown-linux-musl/release/pi-beep; \ echo target/aarch64-unknown-linux-musl/release/pi-beep; \
fi) \ fi) \
$(shell /bin/bash -c 'if [[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]; then \ $(shell /bin/bash -c 'if [[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]; then \
echo cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/flamegraph; \ echo target/$(RUST_ARCH)-unknown-linux-musl/release/flamegraph; \
fi') \ fi') \
$(shell /bin/bash -c 'if [[ "${ENVIRONMENT}" =~ (^|-)console($$|-) ]]; then \ $(shell /bin/bash -c 'if [[ "${ENVIRONMENT}" =~ (^|-)console($$|-) ]]; then \
echo cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/tokio-console; \ echo target/$(RUST_ARCH)-unknown-linux-musl/release/tokio-console; \
fi') fi')
REGISTRY_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox core/startos/start-registryd.service REGISTRY_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox core/start-registryd.service
TUNNEL_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox core/startos/start-tunneld.service TUNNEL_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox core/start-tunneld.service
ifeq ($(REMOTE),) ifeq ($(REMOTE),)
mkdir = mkdir -p $1 mkdir = mkdir -p $1
@@ -73,7 +72,7 @@ metadata: $(VERSION_FILE) $(PLATFORM_FILE) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE)
clean:
rm -rf core/target
-rm -rf core/startos/bindings
+rm -rf core/bindings
rm -rf web/.angular
rm -f web/config.json
rm -rf web/node_modules
@@ -81,7 +80,7 @@ clean:
rm -rf patch-db/client/node_modules
rm -rf patch-db/client/dist
rm -rf patch-db/target
-rm -rf cargo-deps
+rm -rf target
rm -rf dpkg-workdir
rm -rf image-recipe/deb
rm -rf results
@@ -89,14 +88,8 @@ clean:
rm -rf container-runtime/dist
rm -rf container-runtime/node_modules
rm -f container-runtime/*.squashfs
-if [ -d container-runtime/tmp/combined ] && mountpoint container-runtime/tmp/combined; then sudo umount container-runtime/tmp/combined; fi
-if [ -d container-runtime/tmp/lower ] && mountpoint container-runtime/tmp/lower; then sudo umount container-runtime/tmp/lower; fi
-rm -rf container-runtime/tmp
(cd sdk && make clean)
-rm -f ENVIRONMENT.txt
+rm -f env/*.txt
-rm -f PLATFORM.txt
-rm -f GIT_HASH.txt
-rm -f VERSION.txt
format:
cd core && cargo +nightly fmt
@@ -113,10 +106,10 @@ test-container-runtime: container-runtime/node_modules/.package-lock.json $(call
cd container-runtime && npm test
install-cli: $(GIT_HASH_FILE)
-./core/build-cli.sh --install
+./core/build/build-cli.sh --install
cli: $(GIT_HASH_FILE)
-./core/build-cli.sh
+./core/build/build-cli.sh
registry: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox
@@ -127,49 +120,49 @@ install-registry: $(REGISTRY_TARGETS)
$(call ln,/usr/bin/start-registrybox,$(DESTDIR)/usr/bin/start-registry)
$(call mkdir,$(DESTDIR)/lib/systemd/system)
-$(call cp,core/startos/start-registryd.service,$(DESTDIR)/lib/systemd/system/start-registryd.service)
+$(call cp,core/start-registryd.service,$(DESTDIR)/lib/systemd/system/start-registryd.service)
core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox: $(CORE_SRC) $(ENVIRONMENT_FILE)
-ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build-registrybox.sh
+ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build/build-registrybox.sh
tunnel: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox
-install-tunnel: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox core/startos/start-tunneld.service
+install-tunnel: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox core/start-tunneld.service
$(call mkdir,$(DESTDIR)/usr/bin)
$(call cp,core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox,$(DESTDIR)/usr/bin/start-tunnelbox)
$(call ln,/usr/bin/start-tunnelbox,$(DESTDIR)/usr/bin/start-tunneld)
$(call ln,/usr/bin/start-tunnelbox,$(DESTDIR)/usr/bin/start-tunnel)
$(call mkdir,$(DESTDIR)/lib/systemd/system)
-$(call cp,core/startos/start-tunneld.service,$(DESTDIR)/lib/systemd/system/start-tunneld.service)
+$(call cp,core/start-tunneld.service,$(DESTDIR)/lib/systemd/system/start-tunneld.service)
$(call mkdir,$(DESTDIR)/usr/lib/startos/scripts)
$(call cp,build/lib/scripts/forward-port,$(DESTDIR)/usr/lib/startos/scripts/forward-port)
core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox: $(CORE_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) web/dist/static/start-tunnel/index.html
-ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build-tunnelbox.sh
+ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build/build-tunnelbox.sh
deb: results/$(BASENAME).deb
-results/$(BASENAME).deb: dpkg-build.sh $(call ls-files,debian/startos) $(STARTOS_TARGETS)
+results/$(BASENAME).deb: debian/dpkg-build.sh $(call ls-files,debian/startos) $(STARTOS_TARGETS)
-PLATFORM=$(PLATFORM) REQUIRES=debian ./build/os-compat/run-compat.sh ./dpkg-build.sh
+PLATFORM=$(PLATFORM) REQUIRES=debian ./build/os-compat/run-compat.sh ./debian/dpkg-build.sh
registry-deb: results/$(REGISTRY_BASENAME).deb
-results/$(REGISTRY_BASENAME).deb: dpkg-build.sh $(call ls-files,debian/start-registry) $(REGISTRY_TARGETS)
+results/$(REGISTRY_BASENAME).deb: debian/dpkg-build.sh $(call ls-files,debian/start-registry) $(REGISTRY_TARGETS)
-PROJECT=start-registry PLATFORM=$(ARCH) REQUIRES=debian ./build/os-compat/run-compat.sh ./dpkg-build.sh
+PROJECT=start-registry PLATFORM=$(ARCH) REQUIRES=debian ./build/os-compat/run-compat.sh ./debian/dpkg-build.sh
tunnel-deb: results/$(TUNNEL_BASENAME).deb
-results/$(TUNNEL_BASENAME).deb: dpkg-build.sh $(call ls-files,debian/start-tunnel) $(TUNNEL_TARGETS) build/lib/scripts/forward-port
+results/$(TUNNEL_BASENAME).deb: debian/dpkg-build.sh $(call ls-files,debian/start-tunnel) $(TUNNEL_TARGETS) build/lib/scripts/forward-port
-PROJECT=start-tunnel PLATFORM=$(ARCH) REQUIRES=debian DEPENDS=wireguard-tools,iptables,conntrack ./build/os-compat/run-compat.sh ./dpkg-build.sh
+PROJECT=start-tunnel PLATFORM=$(ARCH) REQUIRES=debian DEPENDS=wireguard-tools,iptables,conntrack ./build/os-compat/run-compat.sh ./debian/dpkg-build.sh
$(IMAGE_TYPE): results/$(BASENAME).$(IMAGE_TYPE)
squashfs: results/$(BASENAME).squashfs
results/$(BASENAME).$(IMAGE_TYPE) results/$(BASENAME).squashfs: $(IMAGE_RECIPE_SRC) results/$(BASENAME).deb
-./image-recipe/run-local-build.sh "results/$(BASENAME).deb"
+ARCH=$(ARCH) ./build/image-recipe/run-local-build.sh "results/$(BASENAME).deb"
# For creating os images. DO NOT USE
install: $(STARTOS_TARGETS)
@@ -178,18 +171,18 @@ install: $(STARTOS_TARGETS)
$(call cp,core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox,$(DESTDIR)/usr/bin/startbox)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/startd)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/start-cli)
-if [ "$(PLATFORM)" = "raspberrypi" ]; then $(call cp,cargo-deps/aarch64-unknown-linux-musl/release/pi-beep,$(DESTDIR)/usr/bin/pi-beep); fi
+if [ "$(PLATFORM)" = "raspberrypi" ]; then $(call cp,target/aarch64-unknown-linux-musl/release/pi-beep,$(DESTDIR)/usr/bin/pi-beep); fi
if /bin/bash -c '[[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]'; then \
-$(call cp,cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/flamegraph,$(DESTDIR)/usr/bin/flamegraph); \
+$(call cp,target/$(RUST_ARCH)-unknown-linux-musl/release/flamegraph,$(DESTDIR)/usr/bin/flamegraph); \
fi
if /bin/bash -c '[[ "${ENVIRONMENT}" =~ (^|-)console($$|-) ]]'; then \
-$(call cp,cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/tokio-console,$(DESTDIR)/usr/bin/tokio-console); \
+$(call cp,target/$(RUST_ARCH)-unknown-linux-musl/release/tokio-console,$(DESTDIR)/usr/bin/tokio-console); \
fi
-$(call cp,cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs,$(DESTDIR)/usr/bin/startos-backup-fs)
+$(call cp,target/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs,$(DESTDIR)/usr/bin/startos-backup-fs)
$(call ln,/usr/bin/startos-backup-fs,$(DESTDIR)/usr/sbin/mount.backup-fs)
$(call mkdir,$(DESTDIR)/lib/systemd/system)
-$(call cp,core/startos/startd.service,$(DESTDIR)/lib/systemd/system/startd.service)
+$(call cp,core/startd.service,$(DESTDIR)/lib/systemd/system/startd.service)
$(call mkdir,$(DESTDIR)/usr/lib)
$(call rm,$(DESTDIR)/usr/lib/startos)
@@ -197,18 +190,16 @@ install: $(STARTOS_TARGETS)
$(call mkdir,$(DESTDIR)/usr/lib/startos/container-runtime)
$(call cp,container-runtime/rootfs.$(ARCH).squashfs,$(DESTDIR)/usr/lib/startos/container-runtime/rootfs.squashfs)
-$(call cp,PLATFORM.txt,$(DESTDIR)/usr/lib/startos/PLATFORM.txt)
+$(call cp,build/env/PLATFORM.txt,$(DESTDIR)/usr/lib/startos/PLATFORM.txt)
-$(call cp,ENVIRONMENT.txt,$(DESTDIR)/usr/lib/startos/ENVIRONMENT.txt)
+$(call cp,build/env/ENVIRONMENT.txt,$(DESTDIR)/usr/lib/startos/ENVIRONMENT.txt)
-$(call cp,GIT_HASH.txt,$(DESTDIR)/usr/lib/startos/GIT_HASH.txt)
+$(call cp,build/env/GIT_HASH.txt,$(DESTDIR)/usr/lib/startos/GIT_HASH.txt)
-$(call cp,VERSION.txt,$(DESTDIR)/usr/lib/startos/VERSION.txt)
+$(call cp,build/env/VERSION.txt,$(DESTDIR)/usr/lib/startos/VERSION.txt)
-$(call cp,firmware/$(PLATFORM),$(DESTDIR)/usr/lib/startos/firmware)
update-overlay: $(STARTOS_TARGETS)
@echo "\033[33m!!! THIS WILL ONLY REFLASH YOUR DEVICE IN MEMORY !!!\033[0m"
@echo "\033[33mALL CHANGES WILL BE REVERTED IF YOU RESTART THE DEVICE\033[0m"
@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
-@if [ "`ssh $(REMOTE) 'cat /usr/lib/startos/VERSION.txt'`" != "`cat ./VERSION.txt`" ]; then >&2 echo "StartOS requires migrations: update-overlay is unavailable." && false; fi
+@if [ "`ssh $(REMOTE) 'cat /usr/lib/startos/VERSION.txt'`" != "`cat $(VERSION_FILE)`" ]; then >&2 echo "StartOS requires migrations: update-overlay is unavailable." && false; fi
$(call ssh,"sudo systemctl stop startd")
$(MAKE) install REMOTE=$(REMOTE) SSHPASS=$(SSHPASS) PLATFORM=$(PLATFORM)
$(call ssh,"sudo systemctl start startd")
@@ -266,7 +257,7 @@ emulate-reflash: $(STARTOS_TARGETS)
$(call ssh,'sudo /media/startos/next/usr/lib/startos/scripts/chroot-and-upgrade --no-sync "apt-get install -y $(shell cat ./build/lib/depends)"')
upload-ota: results/$(BASENAME).squashfs
-TARGET=$(TARGET) KEY=$(KEY) ./upload-ota.sh
+TARGET=$(TARGET) KEY=$(KEY) ./build/upload-ota.sh
container-runtime/debian.$(ARCH).squashfs: ./container-runtime/download-base-image.sh
ARCH=$(ARCH) ./container-runtime/download-base-image.sh
@@ -279,16 +270,16 @@ container-runtime/node_modules/.package-lock.json: container-runtime/package-loc
npm --prefix container-runtime ci
touch container-runtime/node_modules/.package-lock.json
-ts-bindings: core/startos/bindings/index.ts
+ts-bindings: core/bindings/index.ts
mkdir -p sdk/base/lib/osBindings
-rsync -ac --delete core/startos/bindings/ sdk/base/lib/osBindings/
+rsync -ac --delete core/bindings/ sdk/base/lib/osBindings/
-core/startos/bindings/index.ts: $(call ls-files, core) $(ENVIRONMENT_FILE)
+core/bindings/index.ts: $(call ls-files, core) $(ENVIRONMENT_FILE)
-rm -rf core/startos/bindings
+rm -rf core/bindings
-./core/build-ts.sh
+./core/build/build-ts.sh
-ls core/startos/bindings/*.ts | sed 's/core\/startos\/bindings\/\([^.]*\)\.ts/export { \1 } from ".\/\1";/g' | grep -v '"./index"' | tee core/startos/bindings/index.ts
+ls core/bindings/*.ts | sed 's/core\/bindings\/\([^.]*\)\.ts/export { \1 } from ".\/\1";/g' | grep -v '"./index"' | tee core/bindings/index.ts
-npm --prefix sdk exec -- prettier --config ./sdk/base/package.json -w ./core/startos/bindings/*.ts
+npm --prefix sdk exec -- prettier --config ./sdk/base/package.json -w ./core/bindings/*.ts
-touch core/startos/bindings/index.ts
+touch core/bindings/index.ts
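The `sed` one-liner in the `ts-bindings` recipe can be sanity-checked in isolation by feeding it a fake file path (the `Manifest.ts` name here is made up for illustration):

```shell
# Turn a bindings path into a TypeScript re-export line, the same
# transformation the index.ts generation step applies per file.
echo "core/bindings/Manifest.ts" \
  | sed 's/core\/bindings\/\([^.]*\)\.ts/export { \1 } from ".\/\1";/g'
# → export { Manifest } from "./Manifest";
```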
sdk/dist/package.json sdk/baseDist/package.json: $(call ls-files, sdk) sdk/base/lib/osBindings/index.ts
(cd sdk && make bundle)
@@ -303,21 +294,21 @@ container-runtime/dist/node_modules/.package-lock.json container-runtime/dist/pa
./container-runtime/install-dist-deps.sh
touch container-runtime/dist/node_modules/.package-lock.json
-container-runtime/rootfs.$(ARCH).squashfs: container-runtime/debian.$(ARCH).squashfs container-runtime/container-runtime.service container-runtime/update-image.sh container-runtime/deb-install.sh container-runtime/dist/index.js container-runtime/dist/node_modules/.package-lock.json core/target/$(RUST_ARCH)-unknown-linux-musl/release/start-container
+container-runtime/rootfs.$(ARCH).squashfs: container-runtime/debian.$(ARCH).squashfs container-runtime/container-runtime.service container-runtime/update-image.sh container-runtime/update-image-local.sh container-runtime/deb-install.sh container-runtime/dist/index.js container-runtime/dist/node_modules/.package-lock.json core/target/$(RUST_ARCH)-unknown-linux-musl/release/start-container
-ARCH=$(ARCH) REQUIRES=qemu ./build/os-compat/run-compat.sh ./container-runtime/update-image.sh
+ARCH=$(ARCH) ./container-runtime/update-image-local.sh
build/lib/depends build/lib/conflicts: $(ENVIRONMENT_FILE) $(PLATFORM_FILE) $(shell ls build/dpkg-deps/*)
PLATFORM=$(PLATFORM) ARCH=$(ARCH) build/dpkg-deps/generate.sh
-$(FIRMWARE_ROMS): build/lib/firmware.json download-firmware.sh $(PLATFORM_FILE)
+$(FIRMWARE_ROMS): build/lib/firmware.json ./build/download-firmware.sh $(PLATFORM_FILE)
-./download-firmware.sh $(PLATFORM)
+./build/download-firmware.sh $(PLATFORM)
core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox: $(CORE_SRC) $(COMPRESSED_WEB_UIS) web/patchdb-ui-seed.json $(ENVIRONMENT_FILE)
-ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build-startbox.sh
+ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build/build-startbox.sh
touch core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox
core/target/$(RUST_ARCH)-unknown-linux-musl/release/start-container: $(CORE_SRC) $(ENVIRONMENT_FILE)
-ARCH=$(ARCH) ./core/build-start-container.sh
+ARCH=$(ARCH) ./core/build/build-start-container.sh
touch core/target/$(RUST_ARCH)-unknown-linux-musl/release/start-container
web/package-lock.json: web/package.json sdk/baseDist/package.json
@@ -333,27 +324,27 @@ web/.angular/.updated: patch-db/client/dist/index.js sdk/baseDist/package.json w
mkdir -p web/.angular
touch web/.angular/.updated
+web/.i18n-checked: $(WEB_SHARED_SRC) $(WEB_UI_SRC) $(WEB_SETUP_WIZARD_SRC) $(WEB_START_TUNNEL_SRC)
+npm --prefix web run check:i18n
+touch web/.i18n-checked
-web/dist/raw/ui/index.html: $(WEB_UI_SRC) $(WEB_SHARED_SRC) web/.angular/.updated
+web/dist/raw/ui/index.html: $(WEB_UI_SRC) $(WEB_SHARED_SRC) web/.angular/.updated web/.i18n-checked
npm --prefix web run build:ui
touch web/dist/raw/ui/index.html
-web/dist/raw/setup-wizard/index.html: $(WEB_SETUP_WIZARD_SRC) $(WEB_SHARED_SRC) web/.angular/.updated
+web/dist/raw/setup-wizard/index.html: $(WEB_SETUP_WIZARD_SRC) $(WEB_SHARED_SRC) web/.angular/.updated web/.i18n-checked
npm --prefix web run build:setup
touch web/dist/raw/setup-wizard/index.html
-web/dist/raw/install-wizard/index.html: $(WEB_INSTALL_WIZARD_SRC) $(WEB_SHARED_SRC) web/.angular/.updated
-npm --prefix web run build:install
-touch web/dist/raw/install-wizard/index.html
-web/dist/raw/start-tunnel/index.html: $(WEB_START_TUNNEL_SRC) $(WEB_SHARED_SRC) web/.angular/.updated
+web/dist/raw/start-tunnel/index.html: $(WEB_START_TUNNEL_SRC) $(WEB_SHARED_SRC) web/.angular/.updated web/.i18n-checked
npm --prefix web run build:tunnel
touch web/dist/raw/start-tunnel/index.html
web/dist/static/%/index.html: web/dist/raw/%/index.html
-./compress-uis.sh $*
+./web/compress-uis.sh $*
-web/config.json: $(GIT_HASH_FILE) web/config-sample.json
+web/config.json: $(GIT_HASH_FILE) $(ENVIRONMENT_FILE) web/config-sample.json web/update-config.sh
-jq '.useMocks = false' web/config-sample.json | jq '.gitHash = "$(shell cat GIT_HASH.txt)"' > web/config.json
+./web/update-config.sh
patch-db/client/node_modules/.package-lock.json: patch-db/client/package.json
npm --prefix patch-db/client ci
@@ -374,17 +365,17 @@ uis: $(WEB_UIS)
# this is a convenience step to build the UI
ui: web/dist/raw/ui
-cargo-deps/aarch64-unknown-linux-musl/release/pi-beep: ./build-cargo-dep.sh
+target/aarch64-unknown-linux-musl/release/pi-beep: ./build/build-cargo-dep.sh
-ARCH=aarch64 ./build-cargo-dep.sh pi-beep
+ARCH=aarch64 ./build/build-cargo-dep.sh pi-beep
-cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/tokio-console: ./build-cargo-dep.sh
+target/$(RUST_ARCH)-unknown-linux-musl/release/tokio-console: ./build/build-cargo-dep.sh
-ARCH=$(ARCH) ./build-cargo-dep.sh tokio-console
+ARCH=$(ARCH) ./build/build-cargo-dep.sh tokio-console
touch $@
-cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs: ./build-cargo-dep.sh
+target/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs: ./build/build-cargo-dep.sh
-ARCH=$(ARCH) ./build-cargo-dep.sh --git https://github.com/Start9Labs/start-fs.git startos-backup-fs
+ARCH=$(ARCH) ./build/build-cargo-dep.sh --git https://github.com/Start9Labs/start-fs.git startos-backup-fs
touch $@
-cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/flamegraph: ./build-cargo-dep.sh
+target/$(RUST_ARCH)-unknown-linux-musl/release/flamegraph: ./build/build-cargo-dep.sh
-ARCH=$(ARCH) ./build-cargo-dep.sh flamegraph
+ARCH=$(ARCH) ./build/build-cargo-dep.sh flamegraph
touch $@


@@ -1,95 +0,0 @@
# StartTunnel
A self-hosted WireGuard VPN optimized for creating VLANs and reverse tunneling to personal servers.
You can think of StartTunnel as a "virtual router in the cloud".
Use it for private remote access to self-hosted services running on a personal server, or to expose self-hosted services to the public Internet without revealing the host server's IP address.
## Features
- **Create Subnets**: Each subnet creates a private, virtual local area network (VLAN), similar to the LAN created by a home router.
- **Add Devices**: When you add a device (server, phone, laptop) to a subnet, it receives a LAN IP address on that subnet as well as a unique WireGuard config that must be copied, downloaded, or scanned into the device.
- **Forward Ports**: Forwarding a port creates a "reverse tunnel", exposing a specific port on a specific device to the public Internet.
## Installation
1. Rent a low-cost VPS. For most use cases, the cheapest option should be enough.
- It must have a dedicated public IP address.
- For compute (CPU), memory (RAM), and storage (disk), choose the minimum spec.
- For transfer (bandwidth), it depends on (1) your use case and (2) your home Internet's _upload_ speed. Even if you intend to serve large files or stream content from your server, there is no reason to pay for speeds that exceed your home Internet's upload speed.
1. Provision the VPS with the latest version of Debian.
1. Access the VPS via SSH.
1. Run the StartTunnel install script:
   ```sh
   curl -fsSL https://start9labs.github.io/start-tunnel | sh
   ```
1. [Initialize the web interface](#web-interface) (recommended)
## Updating
Simply re-run the install command:
```sh
curl -fsSL https://start9labs.github.io/start-tunnel | sh
```
## CLI
By default, StartTunnel is managed via the `start-tunnel` command line interface, which is self-documented.
```
start-tunnel --help
```
## Web Interface
Enable the web interface (recommended in most cases) to access your StartTunnel from the browser or via API.
1. Initialize the web interface:
   ```sh
   start-tunnel web init
   ```
1. If your VPS has multiple public IP addresses, you will be prompted to select the IP address at which to host the web interface.
1. When prompted, enter the port at which to host the web interface. The default is 8443, and we recommend using it. If you change the default, choose an uncommon port to avoid future conflicts.
1. To access your StartTunnel web interface securely over HTTPS, you need an SSL certificate. When prompted, select whether to autogenerate a certificate or provide your own. _This is only for accessing your StartTunnel web interface_.
1. You will receive a success message with 3 pieces of information:
- **<https://IP:port>**: the URL where you can reach your personal web interface.
- **Password**: an autogenerated password for your interface. If you lose/forget it, you can reset it using the start-tunnel CLI.
- **Root Certificate Authority**: the Root CA of your StartTunnel instance.
1. If you autogenerated your SSL certificate, visiting the `https://IP:port` URL in the browser will warn you that the website is insecure. This is expected. You have two options for getting past this warning:
- Option 1 (recommended): [Trust your StartTunnel Root CA on your connecting device](#trusting-your-starttunnel-root-ca).
- Option 2: Bypass the warning in the browser, creating a one-time security exception.
### Trusting your StartTunnel Root CA
1. Copy the contents of your Root CA (starting with `-----BEGIN CERTIFICATE-----` and ending with `-----END CERTIFICATE-----`).
2. Open a text editor:
- Linux: gedit, nano, or any editor
- Mac: TextEdit
- Windows: Notepad
3. Paste the contents of your Root CA.
4. Save the file with a `.crt` extension (e.g. `start-tunnel.crt`), making sure it saves as plain text, not rich text.
5. Trust the Root CA on your client device(s):
- [Linux](https://staging.docs.start9.com/device-guides/linux/ca.html)
- [Mac](https://staging.docs.start9.com/device-guides/mac/ca.html)
- [Windows](https://staging.docs.start9.com/device-guides/windows/ca.html)
- [Android/Graphene](https://staging.docs.start9.com/device-guides/android/ca.html)
- [iOS](https://staging.docs.start9.com/device-guides/ios/ca.html)


@@ -10,7 +10,7 @@ When bumping from version `X.Y.Z-alpha.N` to `X.Y.Z-alpha.N+1`, you need to upda
### 1. Core Rust Crate Version
-**File: `core/startos/Cargo.toml`**
+**File: `core/Cargo.toml`**
Update the version string (line ~18):
@@ -31,7 +31,7 @@ This will update the version in `Cargo.lock` automatically.
### 2. Create New Version Migration Module
-**File: `core/startos/src/version/vX_Y_Z_alpha_N+1.rs`**
+**File: `core/src/version/vX_Y_Z_alpha_N+1.rs`**
Create a new version file by copying the previous version and updating:
@@ -79,7 +79,7 @@ impl VersionT for Version {
### 3. Update Version Module Registry
-**File: `core/startos/src/version/mod.rs`**
+**File: `core/src/version/mod.rs`**
Make changes in **5 locations**:
@@ -176,9 +176,9 @@ This pattern helps you quickly find all the places that need updating in the nex
## Summary Checklist
-- [ ] Update `core/startos/Cargo.toml` version
+- [ ] Update `core/Cargo.toml` version
-- [ ] Create new `core/startos/src/version/vX_Y_Z_alpha_N+1.rs` file
+- [ ] Create new `core/src/version/vX_Y_Z_alpha_N+1.rs` file
-- [ ] Update `core/startos/src/version/mod.rs` in 5 locations
+- [ ] Update `core/src/version/mod.rs` in 5 locations
- [ ] Run `cargo check` to update `core/Cargo.lock`
- [ ] Update `sdk/package/lib/StartSdk.ts` OSVersion
- [ ] Update `web/package.json` and `web/package-lock.json` version


@@ -1,29 +0,0 @@
#!/bin/bash
set -e
shopt -s expand_aliases
if [ "$0" != "./build-cargo-dep.sh" ]; then
>&2 echo "Must be run from start-os directory"
exit 1
fi
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
RUST_ARCH="$ARCH"
if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc"
fi
mkdir -p cargo-deps
source core/builder-alias.sh
RUSTFLAGS="-C target-feature=+crt-static"
rust-zig-builder cargo-zigbuild install $* --target-dir /workdir/cargo-deps/ --target=$RUST_ARCH-unknown-linux-musl
if [ "$(ls -nd "cargo-deps/$RUST_ARCH-unknown-linux-musl/release/${!#}" | awk '{ print $3 }')" != "$UID" ]; then
rust-zig-builder sh -c "chown -R $UID:$UID cargo-deps && chown -R $UID:$UID /usr/local/cargo"
fi

build/README.md Normal file

build/build-cargo-dep.sh Executable file

@@ -0,0 +1,26 @@
#!/bin/bash
cd "$(dirname "${BASH_SOURCE[0]}")/.."
set -e
shopt -s expand_aliases
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
RUST_ARCH="$ARCH"
if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc"
fi
mkdir -p target
source core/build/builder-alias.sh
RUSTFLAGS="-C target-feature=+crt-static"
rust-zig-builder cargo-zigbuild install $* --target-dir /workdir/target/ --target=$RUST_ARCH-unknown-linux-musl
if [ "$(ls -nd "target/$RUST_ARCH-unknown-linux-musl/release/${!#}" | awk '{ print $3 }')" != "$UID" ]; then
rust-zig-builder sh -c "chown -R $UID:$UID target && chown -R $UID:$UID /usr/local/cargo"
fi


@@ -11,13 +11,13 @@ if [ -z "$PLATFORM" ]; then
exit 1
fi
-rm -rf ./firmware/$PLATFORM
+rm -rf ./lib/firmware/$PLATFORM
-mkdir -p ./firmware/$PLATFORM
+mkdir -p ./lib/firmware/$PLATFORM
-cd ./firmware/$PLATFORM
+cd ./lib/firmware/$PLATFORM
firmwares=()
-while IFS= read -r line; do firmwares+=("$line"); done < <(jq -c ".[] | select(.platform[] | contains(\"$PLATFORM\"))" ../../build/lib/firmware.json)
+while IFS= read -r line; do firmwares+=("$line"); done < <(jq -c ".[] | select(.platform[] | contains(\"$PLATFORM\"))" ../../firmware.json)
for firmware in "${firmwares[@]}"; do
if [ -n "$firmware" ]; then
id=$(echo "$firmware" | jq --raw-output '.id')


@@ -3,6 +3,7 @@ avahi-utils
b3sum
bash-completion
beep
+binfmt-support
bmon
btrfs-progs
ca-certificates
@@ -15,6 +16,7 @@ dnsutils
dosfstools
e2fsprogs
ecryptfs-utils
+equivs
exfatprogs
flashrom
fuse3
@@ -44,6 +46,7 @@ openssh-server
podman
psmisc
qemu-guest-agent
+qemu-user-static
rfkill
rsync
samba-common-bin


@@ -9,6 +9,9 @@ FEATURES+=("${ARCH}")
if [ "$ARCH" != "$PLATFORM" ]; then
FEATURES+=("${PLATFORM}")
fi
+if [[ "$PLATFORM" =~ -nonfree$ ]]; then
+FEATURES+=("nonfree")
+fi
feature_file_checker='
/^#/ { next }


@@ -0,0 +1,10 @@
+ firmware-amd-graphics
+ firmware-atheros
+ firmware-brcm80211
+ firmware-iwlwifi
+ firmware-libertas
+ firmware-misc-nonfree
+ firmware-realtek
+ nvidia-container-toolkit
# + nvidia-driver
# + nvidia-kernel-dkms


@@ -1,8 +1,10 @@
#!/bin/bash
+cd "$(dirname "${BASH_SOURCE[0]}")"
if ! [ -f ./ENVIRONMENT.txt ] || [ "$(cat ./ENVIRONMENT.txt)" != "$ENVIRONMENT" ]; then
>&2 echo "Updating ENVIRONMENT.txt to \"$ENVIRONMENT\""
echo -n "$ENVIRONMENT" > ./ENVIRONMENT.txt
fi
-echo -n ./ENVIRONMENT.txt
+echo -n ./build/env/ENVIRONMENT.txt


@@ -1,5 +1,7 @@
#!/bin/bash
+cd "$(dirname "${BASH_SOURCE[0]}")"
if [ "$GIT_BRANCH_AS_HASH" != 1 ]; then
GIT_HASH="$(git rev-parse HEAD)$(if ! git diff-index --quiet HEAD --; then echo '-modified'; fi)"
else
@@ -11,4 +13,4 @@ if ! [ -f ./GIT_HASH.txt ] || [ "$(cat ./GIT_HASH.txt)" != "$GIT_HASH" ]; then
echo -n "$GIT_HASH" > ./GIT_HASH.txt
fi
-echo -n ./GIT_HASH.txt
+echo -n ./build/env/GIT_HASH.txt
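The `GIT_HASH` expression above appends `-modified` whenever the tree differs from `HEAD`. A throwaway-repository sketch of just that expression (assumes `git` is installed; the repo and file names are invented for the demo):

```shell
#!/bin/bash
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init

hash_of() { # same expression the script uses
  echo "$(git rev-parse HEAD)$(if ! git diff-index --quiet HEAD --; then echo '-modified'; fi)"
}

clean=$(hash_of)
echo change > file.txt && git add file.txt   # stage a change
dirty=$(hash_of)
echo "clean: $clean"
echo "dirty: $dirty"   # same commit hash, now with the -modified suffix
```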


@@ -1,8 +1,10 @@
#!/bin/bash
+cd "$(dirname "${BASH_SOURCE[0]}")"
if ! [ -f ./PLATFORM.txt ] || [ "$(cat ./PLATFORM.txt)" != "$PLATFORM" ] && [ -n "$PLATFORM" ]; then
>&2 echo "Updating PLATFORM.txt to \"$PLATFORM\""
echo -n "$PLATFORM" > ./PLATFORM.txt
fi
-echo -n ./PLATFORM.txt
+echo -n ./build/env/PLATFORM.txt


@@ -1,6 +1,8 @@
#!/bin/bash
-FE_VERSION="$(cat web/package.json | grep '"version"' | sed 's/[ \t]*"version":[ \t]*"\([^"]*\)",/\1/')"
+cd "$(dirname "${BASH_SOURCE[0]}")"
+FE_VERSION="$(cat ../../web/package.json | grep '"version"' | sed 's/[ \t]*"version":[ \t]*"\([^"]*\)",/\1/')"
# TODO: Validate other version sources - backend/Cargo.toml, backend/src/version/mod.rs
@@ -10,4 +12,4 @@ if ! [ -f ./VERSION.txt ] || [ "$(cat ./VERSION.txt)" != "$VERSION" ]; then
echo -n "$VERSION" > ./VERSION.txt
fi
-echo -n ./VERSION.txt
+echo -n ./build/env/VERSION.txt
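The `FE_VERSION` pipeline above is just a grep-and-capture over `package.json`; it can be exercised against a fabricated line (the version number here is an example, not the project's real one):

```shell
# Extract the value of the "version" key the same way the VERSION script does.
printf '%s\n' '  "version": "0.4.0-alpha.19",' \
  | grep '"version"' \
  | sed 's/[ \t]*"version":[ \t]*"\([^"]*\)",/\1/'
# → 0.4.0-alpha.19
```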


@@ -73,7 +73,7 @@ if [ "$NON_FREE" = 1 ]; then
 if [ "$IB_SUITE" = "bullseye" ]; then
 	ARCHIVE_AREAS="$ARCHIVE_AREAS non-free"
 else
-	ARCHIVE_AREAS="$ARCHIVE_AREAS non-free-firmware"
+	ARCHIVE_AREAS="$ARCHIVE_AREAS non-free non-free-firmware"
 fi
 fi
@@ -154,9 +154,12 @@ prompt 0
 timeout 50
 EOF
-cp $SOURCE_DIR/splash.png config/bootloaders/syslinux_common/splash.png
-cp $SOURCE_DIR/splash.png config/bootloaders/isolinux/splash.png
-cp $SOURCE_DIR/splash.png config/bootloaders/grub-pc/splash.png
+# Extract splash.png from the deb package
+dpkg-deb --fsys-tarfile $DEB_PATH | tar --to-stdout -xf - ./usr/lib/startos/splash.png > /tmp/splash.png
+cp /tmp/splash.png config/bootloaders/syslinux_common/splash.png
+cp /tmp/splash.png config/bootloaders/isolinux/splash.png
+cp /tmp/splash.png config/bootloaders/grub-pc/splash.png
+rm /tmp/splash.png
 sed -i -e '2i set timeout=5' config/bootloaders/grub-pc/config.cfg
@@ -174,40 +177,123 @@ if [ "${IB_TARGET_PLATFORM}" = "rockchip64" ]; then
 	echo "deb https://apt.armbian.com/ ${IB_SUITE} main" > config/archives/armbian.list
 fi
-cat > config/archives/backports.pref <<- EOF
+if [ "$NON_FREE" = 1 ]; then
+	curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o config/archives/nvidia-container-toolkit.key
+	curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
+		| sed 's#deb https://#deb [signed-by=/etc/apt/trusted.gpg.d/nvidia-container-toolkit.key.gpg] https://#g' \
+		> config/archives/nvidia-container-toolkit.list
+fi
+cat > config/archives/backports.pref <<-EOF
 Package: linux-image-*
 Pin: release n=${IB_SUITE}-backports
 Pin-Priority: 500
+
+Package: linux-headers-*
+Pin: release n=${IB_SUITE}-backports
+Pin-Priority: 500
+
+Package: *nvidia*
+Pin: release n=${IB_SUITE}-backports
+Pin-Priority: 500
 EOF
-# Dependencies
+# Hooks
+## Firmware
+if [ "$NON_FREE" = 1 ]; then
+	echo 'firmware-iwlwifi firmware-misc-nonfree firmware-brcm80211 firmware-realtek firmware-atheros firmware-libertas firmware-amd-graphics' > config/package-lists/nonfree.list.chroot
+fi
 cat > config/hooks/normal/9000-install-startos.hook.chroot << EOF
 #!/bin/bash
 set -e
if [ "${NON_FREE}" = "1" ] && [ "${IB_TARGET_PLATFORM}" != "raspberrypi" ]; then
# install a specific NVIDIA driver version
# ---------------- configuration ----------------
NVIDIA_DRIVER_VERSION="\${NVIDIA_DRIVER_VERSION:-580.119.02}"
BASE_URL="https://download.nvidia.com/XFree86/Linux-${QEMU_ARCH}"
echo "[nvidia-hook] Using NVIDIA driver: \${NVIDIA_DRIVER_VERSION}" >&2
# ---------------- kernel version ----------------
# Determine target kernel version from newest /boot/vmlinuz-* in the chroot.
KVER="\$(
ls -1t /boot/vmlinuz-* 2>/dev/null \
| head -n1 \
| sed 's|.*/vmlinuz-||'
)"
if [ -z "\${KVER}" ]; then
echo "[nvidia-hook] ERROR: no /boot/vmlinuz-* found; cannot determine kernel version" >&2
exit 1
fi
echo "[nvidia-hook] Target kernel version: \${KVER}" >&2
# Ensure kernel headers are present
TEMP_APT_DEPS=(build-essential)
if [ ! -e "/lib/modules/\${KVER}/build" ]; then
TEMP_APT_DEPS+=(linux-headers-\${KVER})
fi
echo "[nvidia-hook] Installing build dependencies" >&2
/usr/lib/startos/scripts/install-equivs <<-EOF
Package: nvidia-depends
Version: \${NVIDIA_DRIVER_VERSION}
Section: unknown
Priority: optional
Depends: \${dep_list="\$(IFS=', '; echo "\${TEMP_APT_DEPS[*]}")"}
EOF
# ---------------- download and run installer ----------------
RUN_NAME="NVIDIA-Linux-${QEMU_ARCH}-\${NVIDIA_DRIVER_VERSION}.run"
RUN_PATH="/root/\${RUN_NAME}"
RUN_URL="\${BASE_URL}/\${NVIDIA_DRIVER_VERSION}/\${RUN_NAME}"
echo "[nvidia-hook] Downloading \${RUN_URL}" >&2
wget -O "\${RUN_PATH}" "\${RUN_URL}"
chmod +x "\${RUN_PATH}"
echo "[nvidia-hook] Running NVIDIA installer for kernel \${KVER}" >&2
sh "\${RUN_PATH}" \
--silent \
--kernel-name="\${KVER}" \
--no-x-check \
--no-nouveau-check \
--no-runlevel-check
# Rebuild module metadata
echo "[nvidia-hook] Running depmod for \${KVER}" >&2
depmod -a "\${KVER}"
echo "[nvidia-hook] NVIDIA \${NVIDIA_DRIVER_VERSION} installation complete for kernel \${KVER}" >&2
echo "[nvidia-hook] Removing build dependencies..." >&2
apt-get purge -y nvidia-depends
apt-get autoremove -y
echo "[nvidia-hook] Removed build dependencies." >&2
fi
 cp /etc/resolv.conf /etc/resolv.conf.bak
 if [ "${IB_SUITE}" = trixie ] && [ "${IB_TARGET_ARCH}" != riscv64 ]; then
 	echo 'deb https://deb.debian.org/debian/ bookworm main' > /etc/apt/sources.list.d/bookworm.list
 	apt-get update
 	apt-get install -y postgresql-15
 	rm /etc/apt/sources.list.d/bookworm.list
 	apt-get update
 	systemctl mask postgresql
 fi
 if [ "${IB_TARGET_PLATFORM}" = "raspberrypi" ]; then
 	ln -sf /usr/bin/pi-beep /usr/local/bin/beep
 	KERNEL_VERSION=${RPI_KERNEL_VERSION} sh /boot/config.sh > /boot/config.txt
-	mkinitramfs -c gzip -o initrd.img-${RPI_KERNEL_VERSION}-rpi-v8 ${RPI_KERNEL_VERSION}-rpi-v8
-	mkinitramfs -c gzip -o initrd.img-${RPI_KERNEL_VERSION}-rpi-2712 ${RPI_KERNEL_VERSION}-rpi-2712
+	mkinitramfs -c gzip -o /boot/initrd.img-${RPI_KERNEL_VERSION}-rpi-v8 ${RPI_KERNEL_VERSION}-rpi-v8
+	mkinitramfs -c gzip -o /boot/initrd.img-${RPI_KERNEL_VERSION}-rpi-2712 ${RPI_KERNEL_VERSION}-rpi-2712
 fi
 useradd --shell /bin/bash -G startos -m start9
@@ -218,11 +304,11 @@ usermod -aG systemd-journal start9
 echo "start9 ALL=(ALL:ALL) NOPASSWD: ALL" | sudo tee "/etc/sudoers.d/010_start9-nopasswd"
 if [ "${IB_TARGET_PLATFORM}" != "raspberrypi" ]; then
 	/usr/lib/startos/scripts/enable-kiosk
 fi
 if ! [[ "${IB_OS_ENV}" =~ (^|-)dev($|-) ]]; then
 	passwd -l start9
 fi
 EOF
@@ -360,4 +446,4 @@ elif [ "${IMAGE_TYPE}" = img ]; then
 fi
 chown $IB_UID:$IB_UID $RESULTS_DIR/$IMAGE_BASENAME.*


@@ -1 +1 @@
-usb-storage.quirks=152d:0562:u,14cd:121c:u,0781:cfcb:u console=serial0,115200 console=tty1 root=PARTUUID=cb15ae4d-02 rootfstype=ext4 fsck.repair=yes rootwait cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory quiet boot=startos
+usb-storage.quirks=152d:0562:u,14cd:121c:u,0781:cfcb:u console=serial0,115200 console=tty1 root=PARTUUID=cb15ae4d-02 rootfstype=ext4 fsck.repair=yes rootwait cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory boot=startos


@@ -0,0 +1,35 @@
#!/bin/bash
set -e
cd "$(dirname "${BASH_SOURCE[0]}")/../.."
BASEDIR="$(pwd -P)"
SUITE=trixie
USE_TTY=
if tty -s; then
USE_TTY="-it"
fi
dockerfile_hash=$(sha256sum ${BASEDIR}/build/image-recipe/Dockerfile | head -c 7)
docker_img_name="start9/build-iso:${SUITE}-${dockerfile_hash}"
platform=linux/${ARCH}
case $ARCH in
x86_64)
platform=linux/amd64;;
aarch64)
platform=linux/arm64;;
esac
if ! docker run --rm --platform=$platform "${docker_img_name}" true 2> /dev/null; then
docker buildx build --load --platform=$platform --build-arg=SUITE=${SUITE} -t "${docker_img_name}" ./build/image-recipe
fi
docker run $USE_TTY --rm --platform=$platform --privileged -v "$(pwd)/build/image-recipe:/root/image-recipe" -v "$(pwd)/results:/root/results" \
-e IB_SUITE="$SUITE" \
-e IB_UID="$UID" \
-e IB_INCLUDE \
"${docker_img_name}" /root/image-recipe/build.sh $@
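The `case` block above maps a uname-style architecture onto Docker's platform naming. A standalone sketch of that mapping (the `docker_platform` helper name is hypothetical, not part of the repo):

```shell
# Map uname -m arch names to Docker --platform strings; anything unlisted
# falls through as linux/<arch> (e.g. riscv64 -> linux/riscv64)
docker_platform() {
  local platform=linux/$1
  case $1 in
    x86_64)
      platform=linux/amd64;;
    aarch64)
      platform=linux/arm64;;
  esac
  echo "$platform"
}
docker_platform x86_64
docker_platform aarch64
docker_platform riscv64
```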


@@ -0,0 +1,51 @@
desktop-image: "../splash.png"
title-color: "#ffffff"
title-font: "Unifont Regular 16"
title-text: "StartOS Boot Menu with GRUB"
message-font: "Unifont Regular 16"
terminal-font: "Unifont Regular 16"
#help bar at the bottom
+ label {
top = 100%-50
left = 0
width = 100%
height = 20
text = "@KEYMAP_SHORT@"
align = "center"
color = "#ffffff"
font = "Unifont Regular 16"
}
#boot menu
+ boot_menu {
left = 10%
width = 80%
top = 52%
height = 48%-80
item_color = "#a8a8a8"
item_font = "Unifont Regular 16"
selected_item_color= "#ffffff"
selected_item_font = "Unifont Regular 16"
item_height = 16
item_padding = 0
item_spacing = 4
icon_width = 0
icon_heigh = 0
item_icon_space = 0
}
#progress bar
+ progress_bar {
id = "__timeout__"
left = 15%
top = 100%-80
height = 16
width = 70%
font = "Unifont Regular 16"
text_color = "#000000"
fg_color = "#ffffff"
bg_color = "#a8a8a8"
border_color = "#ffffff"
text = "@TIMEOUT_NOTIFICATION_LONG@"
}


@@ -4,7 +4,7 @@ parse_essential_db_info() {
 	DB_DUMP="/tmp/startos_db.json"
 	if command -v start-cli >/dev/null 2>&1; then
-		start-cli db dump > "$DB_DUMP" 2>/dev/null || return 1
+		timeout 30 start-cli db dump > "$DB_DUMP" 2>/dev/null || return 1
 	else
 		return 1
 	fi


@@ -63,7 +63,7 @@ mount --bind /proc /media/startos/next/proc
 mount --bind /boot /media/startos/next/boot
 mount --bind /media/startos/root /media/startos/next/media/startos/root
-if mountpoint /sys/firmware/efi/efivars 2> /dev/null; then
+if mountpoint /sys/firmware/efi/efivars 2>&1 > /dev/null; then
 	mount --bind /sys/firmware/efi/efivars /media/startos/next/sys/firmware/efi/efivars
 fi
@@ -75,7 +75,7 @@ else
 	CHROOT_RES=$?
 fi
-if mountpoint /media/startos/next/sys/firmware/efi/efivars 2> /dev/null; then
+if mountpoint /media/startos/next/sys/firmware/efi/efivars 2>&1 > /dev/null; then
 	umount /media/startos/next/sys/firmware/efi/efivars
 fi


@@ -35,16 +35,20 @@ if [ "$UNDO" = 1 ]; then
 	exit $err
 fi
+# DNAT: rewrite destination for incoming packets (external traffic)
 iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
 iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
+# DNAT: rewrite destination for locally-originated packets (hairpin from host itself)
 iptables -t nat -A ${NAME}_OUTPUT -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
 iptables -t nat -A ${NAME}_OUTPUT -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
-iptables -t nat -A ${NAME}_PREROUTING -s "$dip/$dprefix" -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
-iptables -t nat -A ${NAME}_PREROUTING -s "$dip/$dprefix" -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
-iptables -t nat -A ${NAME}_POSTROUTING -s "$dip/$dprefix" -d "$dip" -p tcp --dport "$dport" -j MASQUERADE
-iptables -t nat -A ${NAME}_POSTROUTING -s "$dip/$dprefix" -d "$dip" -p udp --dport "$dport" -j MASQUERADE
+# MASQUERADE: rewrite source for all forwarded traffic to the destination
+# This ensures responses are routed back through the host regardless of source IP
+iptables -t nat -A ${NAME}_POSTROUTING -d "$dip" -p tcp --dport "$dport" -j MASQUERADE
+iptables -t nat -A ${NAME}_POSTROUTING -d "$dip" -p udp --dport "$dport" -j MASQUERADE
+# Allow new connections to be forwarded to the destination
 iptables -A ${NAME}_FORWARD -d $dip -p tcp --dport $dport -m state --state NEW -j ACCEPT
 iptables -A ${NAME}_FORWARD -d $dip -p udp --dport $dport -m state --state NEW -j ACCEPT
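A sketch that *prints* (rather than applies) the paired tcp/udp DNAT rules above, so the hairpin pattern can be inspected without root. All addresses, ports, and the chain prefix here are made-up examples:

```shell
# Emit the DNAT rules for one chain suffix (PREROUTING = external traffic,
# OUTPUT = locally-originated hairpin traffic), once per protocol
NAME=fwd; sip=10.0.0.2; sport=80; dip=172.18.0.5; dport=8080
emit_dnat() {
  for proto in tcp udp; do
    echo "iptables -t nat -A ${NAME}_$1 -d $sip -p $proto --dport $sport -j DNAT --to-destination $dip:$dport"
  done
}
emit_dnat PREROUTING
emit_dnat OUTPUT
```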


@@ -0,0 +1,20 @@
#!/bin/bash
export DEBIAN_FRONTEND=noninteractive
export DEBCONF_NONINTERACTIVE_SEEN=true
TMP_DIR=$(mktemp -d)
(
set -e
cd $TMP_DIR
cat > control.equivs
equivs-build control.equivs
apt-get install -y ./*.deb < /dev/null
)
rm -rf $TMP_DIR
echo Install complete. >&2
exit 0
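The helper above reads an equivs control stanza on stdin, builds a metapackage from it, and installs the result so that apt pulls in the declared dependencies. A hypothetical `control.equivs` payload (package name and dependency list are examples, not from the repo):

```shell
# Write a sample equivs control stanza: a metapackage that exists only
# to declare Depends, so "apt-get purge" later removes the whole set
ctrl=$(mktemp)
cat > "$ctrl" <<'EOF'
Package: nvidia-depends
Version: 1.0
Section: unknown
Priority: optional
Depends: build-essential, linux-headers-generic
EOF
grep '^Depends:' "$ctrl"
```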


@@ -29,10 +29,13 @@ if [ -z "$needed" ]; then
 	exit 1
 fi
+MARGIN=${MARGIN:-1073741824}
+target=$((needed + MARGIN))
 if [ -h /media/startos/config/current.rootfs ] && [ -e /media/startos/config/current.rootfs ]; then
 	echo 'Pruning...'
 	current="$(readlink -f /media/startos/config/current.rootfs)"
-	while [[ "$(df -B1 --output=avail --sync /media/startos/images | tail -n1)" -lt "$needed" ]]; do
+	while [[ "$(df -B1 --output=avail --sync /media/startos/images | tail -n1)" -lt "$target" ]]; do
 		to_prune="$(ls -t1 /media/startos/images/*.rootfs /media/startos/images/*.squashfs 2> /dev/null | grep -v "$current" | tail -n1)"
 		if [ -e "$to_prune" ]; then
 			echo "  Pruning $to_prune"
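The change above prunes until free space exceeds the required bytes *plus* a configurable margin (defaulting to 1 GiB), rather than stopping exactly at the requirement. The arithmetic, with illustrative byte counts:

```shell
# target = needed + MARGIN; MARGIN defaults to 1 GiB (1073741824 bytes)
needed=5000000000
MARGIN=${MARGIN:-1073741824}
target=$((needed + MARGIN))
echo "$target"
```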


@@ -50,12 +50,12 @@ mount --bind /proc /media/startos/next/proc
 mount --bind /boot /media/startos/next/boot
 mount --bind /media/startos/root /media/startos/next/media/startos/root
-if mountpoint /boot/efi 2> /dev/null; then
+if mountpoint /boot/efi 2>&1 > /dev/null; then
 	mkdir -p /media/startos/next/boot/efi
 	mount --bind /boot/efi /media/startos/next/boot/efi
 fi
-if mountpoint /sys/firmware/efi/efivars 2> /dev/null; then
+if mountpoint /sys/firmware/efi/efivars 2>&1 > /dev/null; then
 	mount --bind /sys/firmware/efi/efivars /media/startos/next/sys/firmware/efi/efivars
 fi


(binary image file changed; size 9.6 KiB before and after)

@@ -1,4 +1,4 @@
-FROM debian:forky
+FROM debian:trixie
 RUN apt-get update && \
 	apt-get install -y \
@@ -12,35 +12,14 @@ RUN apt-get update && \
 		jq \
 		gzip \
 		brotli \
-		qemu-user-static \
-		binfmt-support \
 		squashfs-tools \
 		git \
-		debspawn \
 		rsync \
 		b3sum \
-		fuse-overlayfs \
 		sudo \
-		systemd \
-		systemd-container \
-		systemd-sysv \
-		dbus \
-		dbus-user-session \
 		nodejs
-RUN systemctl mask \
-	systemd-firstboot.service \
-	systemd-udevd.service \
-	getty@tty1.service \
-	console-getty.service
 RUN git config --global --add safe.directory /root/start-os
-RUN mkdir -p /etc/debspawn && \
-	echo "AllowUnsafePermissions=true" > /etc/debspawn/global.toml
 RUN mkdir -p /root/start-os
 WORKDIR /root/start-os
-COPY docker-entrypoint.sh /docker-entrypoint.sh
-ENTRYPOINT [ "/docker-entrypoint.sh" ]


@@ -1,3 +0,0 @@
#!/bin/bash
exec /lib/systemd/systemd --unit=multi-user.target --show-status=false --log-target=journal


@@ -1,27 +1,30 @@
 #!/bin/bash
-project_pwd="$(cd "$(dirname "${BASH_SOURCE[0]}")"/../.. && pwd)/"
-pwd="$(pwd)/"
-if ! [[ "$pwd" = "$project_pwd"* ]]; then
-	>&2 echo "Must be run from start-os project dir"
-	exit 1
-fi
-rel_pwd="${pwd#"$project_pwd"}"
-SYSTEMD_TTY="-P"
-USE_TTY=
-if [ "$FORCE_COMPAT" = 1 ] || ( [ "$REQUIRES" = "linux" ] && [ "$(uname -s)" != "Linux" ] ) || ( [ "$REQUIRES" = "debian" ] && ! which dpkg > /dev/null ) || ( [ "$REQUIRES" = "qemu" ] && ! which qemu-$ARCH > /dev/null ); then
+pwd=$(pwd)
+cd "$(dirname "${BASH_SOURCE[0]}")/../.."
+set -e
+rel_pwd="${pwd#"$(pwd)"}"
+COMPAT_ARCH=$(uname -m)
+platform=linux/$COMPAT_ARCH
+case $COMPAT_ARCH in
+	x86_64)
+		platform=linux/amd64;;
+	aarch64)
+		platform=linux/arm64;;
+esac
+USE_TTY=
+if [ "$FORCE_COMPAT" = 1 ] || ( [ "$REQUIRES" = "linux" ] && [ "$(uname -s)" != "Linux" ] ) || ( [ "$REQUIRES" = "debian" ] && ! which dpkg > /dev/null ); then
 	if tty -s; then
 		USE_TTY="-it"
-		SYSTEMD_TTY="-t"
 	fi
-	docker run -d --rm --name os-compat --privileged --security-opt apparmor=unconfined -v "${project_pwd}:/root/start-os" -v /lib/modules:/lib/modules:ro start9/build-env
-	while ! docker exec os-compat systemctl is-active --quiet multi-user.target 2> /dev/null; do sleep .5; done
-	docker exec -eARCH -eENVIRONMENT -ePLATFORM -eGIT_BRANCH_AS_HASH -ePROJECT -eDEPENDS -eCONFLICTS $USE_TTY -w "/root/start-os${rel_pwd}" os-compat $@
-	code=$?
-	docker stop os-compat > /dev/null
-	exit $code
+	docker run $USE_TTY --platform=$platform -eARCH -eENVIRONMENT -ePLATFORM -eGIT_BRANCH_AS_HASH -ePROJECT -eDEPENDS -eCONFLICTS -w "/root/start-os${rel_pwd}" --rm -v "$(pwd):/root/start-os" start9/build-env $@
 else
 	exec $@
 fi


@@ -1,87 +0,0 @@
#!/bin/bash
set -e
function partition_for () {
if [[ "$1" =~ [0-9]+$ ]]; then
echo "$1p$2"
else
echo "$1$2"
fi
}
VERSION=$(cat VERSION.txt)
ENVIRONMENT=$(cat ENVIRONMENT.txt)
GIT_HASH=$(cat GIT_HASH.txt | head -c 7)
DATE=$(date +%Y%m%d)
ROOT_PART_END=7217792
VERSION_FULL="$VERSION-$GIT_HASH"
if [ -n "$ENVIRONMENT" ]; then
VERSION_FULL="$VERSION_FULL~$ENVIRONMENT"
fi
TARGET_NAME=startos-${VERSION_FULL}-${DATE}_raspberrypi.img
TARGET_SIZE=$[($ROOT_PART_END+1)*512]
rm -f $TARGET_NAME
truncate -s $TARGET_SIZE $TARGET_NAME
(
echo o
echo x
echo i
echo "0xcb15ae4d"
echo r
echo n
echo p
echo 1
echo 2048
echo 526335
echo t
echo c
echo n
echo p
echo 2
echo 526336
echo $ROOT_PART_END
echo a
echo 1
echo w
) | fdisk $TARGET_NAME
OUTPUT_DEVICE=$(sudo losetup --show -fP $TARGET_NAME)
sudo mkfs.ext4 `partition_for ${OUTPUT_DEVICE} 2`
sudo mkfs.vfat `partition_for ${OUTPUT_DEVICE} 1`
TMPDIR=$(mktemp -d)
sudo mount `partition_for ${OUTPUT_DEVICE} 2` $TMPDIR
sudo mkdir $TMPDIR/boot
sudo mount `partition_for ${OUTPUT_DEVICE} 1` $TMPDIR/boot
sudo unsquashfs -f -d $TMPDIR startos.raspberrypi.squashfs
REAL_GIT_HASH=$(cat $TMPDIR/usr/lib/startos/GIT_HASH.txt)
REAL_VERSION=$(cat $TMPDIR/usr/lib/startos/VERSION.txt)
REAL_ENVIRONMENT=$(cat $TMPDIR/usr/lib/startos/ENVIRONMENT.txt)
sudo sed -i 's| boot=startos| init=/usr/lib/startos/scripts/init_resize\.sh|' $TMPDIR/boot/cmdline.txt
sudo cp ./build/raspberrypi/fstab $TMPDIR/etc/
sudo cp ./build/raspberrypi/init_resize.sh $TMPDIR/usr/lib/startos/scripts/init_resize.sh
sudo umount $TMPDIR/boot
sudo umount $TMPDIR
sudo losetup -d $OUTPUT_DEVICE
if [ "$ALLOW_VERSION_MISMATCH" != 1 ]; then
if [ "$(cat GIT_HASH.txt)" != "$REAL_GIT_HASH" ]; then
>&2 echo "startos.raspberrypi.squashfs GIT_HASH.txt mismatch"
>&2 echo "expected $REAL_GIT_HASH (dpkg) found $(cat GIT_HASH.txt) (repo)"
exit 1
fi
if [ "$(cat VERSION.txt)" != "$REAL_VERSION" ]; then
>&2 echo "startos.raspberrypi.squashfs VERSION.txt mismatch"
exit 1
fi
if [ "$(cat ENVIRONMENT.txt)" != "$REAL_ENVIRONMENT" ]; then
>&2 echo "startos.raspberrypi.squashfs ENVIRONMENT.txt mismatch"
exit 1
fi
fi

build/upload-ota.sh Executable file

@@ -0,0 +1,142 @@
#!/bin/bash
if [ -z "$VERSION" ]; then
>&2 echo '$VERSION required'
exit 2
fi
set -e
if [ "$SKIP_DL" != "1" ]; then
if [ "$SKIP_CLEAN" != "1" ]; then
rm -rf ~/Downloads/v$VERSION
mkdir ~/Downloads/v$VERSION
cd ~/Downloads/v$VERSION
fi
if [ -n "$RUN_ID" ]; then
for arch in aarch64 aarch64-nonfree riscv64 x86_64 x86_64-nonfree; do
while ! gh run download -R Start9Labs/start-os $RUN_ID -n $arch.squashfs -D $(pwd); do sleep 1; done
done
for arch in aarch64 aarch64-nonfree riscv64 x86_64 x86_64-nonfree; do
while ! gh run download -R Start9Labs/start-os $RUN_ID -n $arch.iso -D $(pwd); do sleep 1; done
done
fi
if [ -n "$ST_RUN_ID" ]; then
for arch in aarch64 riscv64 x86_64; do
while ! gh run download -R Start9Labs/start-os $ST_RUN_ID -n start-tunnel_$arch.deb -D $(pwd); do sleep 1; done
done
fi
if [ -n "$CLI_RUN_ID" ]; then
for arch in aarch64 riscv64 x86_64; do
for os in linux macos; do
pair=${arch}-${os}
if [ "${pair}" = "riscv64-linux" ]; then
target=riscv64gc-unknown-linux-musl
elif [ "${pair}" = "riscv64-macos" ]; then
continue
elif [ "${os}" = "linux" ]; then
target="${arch}-unknown-linux-musl"
elif [ "${os}" = "macos" ]; then
target="${arch}-apple-darwin"
fi
while ! gh run download -R Start9Labs/start-os $CLI_RUN_ID -n start-cli_$target -D $(pwd); do sleep 1; done
mv start-cli "start-cli_${pair}"
done
done
fi
else
cd ~/Downloads/v$VERSION
fi
start-cli --registry=https://alpha-registry-x.start9.com registry os version add $VERSION "v$VERSION" '' ">=0.3.5 <=$VERSION"
if [ "$SKIP_UL" = "2" ]; then
exit 2
elif [ "$SKIP_UL" != "1" ]; then
for file in *.deb start-cli_*; do
gh release upload -R Start9Labs/start-os v$VERSION $file
done
for file in *.iso *.squashfs; do
s3cmd put -P $file s3://startos-images/v$VERSION/$file
done
fi
if [ "$SKIP_INDEX" != "1" ]; then
for arch in aarch64 aarch64-nonfree riscv64 x86_64 x86_64-nonfree; do
for file in *_$arch.squashfs *_$arch.iso; do
start-cli --registry=https://alpha-registry-x.start9.com registry os asset add --platform=$arch --version=$VERSION $file https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$file
done
done
fi
for file in *.iso *.squashfs *.deb start-cli_*; do
gpg -u 7CFFDA41CA66056A --detach-sign --armor -o "${file}.asc" "$file"
done
gpg --export -a 7CFFDA41CA66056A > dr-bonez.key.asc
tar -czvf signatures.tar.gz *.asc
gh release upload -R Start9Labs/start-os v$VERSION signatures.tar.gz
cat << EOF
# ISO Downloads
- [x86_64/AMD64](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_x86_64-nonfree.iso))
- [x86_64/AMD64-slim (FOSS-only)](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_x86_64.iso) "Without proprietary software or drivers")
- [aarch64/ARM64](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_aarch64-nonfree.iso))
- [aarch64/ARM64-slim (FOSS-Only)](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_aarch64.iso) "Without proprietary software or drivers")
- [RISCV64 (RVA23)](https://startos-images.nyc3.cdn.digitaloceanspaces.com/v$VERSION/$(ls *_riscv64.iso))
EOF
cat << 'EOF'
# StartOS Checksums
## SHA-256
```
EOF
sha256sum *.iso *.squashfs
cat << 'EOF'
```
## BLAKE-3
```
EOF
b3sum *.iso *.squashfs
cat << 'EOF'
```
# Start-Tunnel Checksums
## SHA-256
```
EOF
sha256sum start-tunnel*.deb
cat << 'EOF'
```
## BLAKE-3
```
EOF
b3sum start-tunnel*.deb
cat << 'EOF'
```
# start-cli Checksums
## SHA-256
```
EOF
sha256sum start-cli_*
cat << 'EOF'
```
## BLAKE-3
```
EOF
b3sum start-cli_*
cat << 'EOF'
```
EOF
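The script above closes by printing SHA-256 and BLAKE-3 sums for each release artifact. A minimal sketch of producing one such checksum line against a throwaway file (the file name and contents are made up; the real script hashes the downloaded release artifacts):

```shell
# Hash a demo artifact the same way the release notes do
tmp=$(mktemp -d)
cd "$tmp"
echo 'hello' > demo.iso
sum=$(sha256sum demo.iso | awk '{print $1}')
echo "$sum  demo.iso"
```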


@@ -2,9 +2,6 @@
 set -e
-mkdir -p /run/systemd/resolve
-echo "nameserver 8.8.8.8" > /run/systemd/resolve/stub-resolv.conf
 apt-get update
 apt-get install -y curl rsync qemu-user-static nodejs
@@ -16,7 +13,4 @@ sed -i '/\(^\|#\)ForwardToSyslog=/c\ForwardToSyslog=no' /etc/systemd/journald.co
 systemctl enable container-runtime.service
-rm -rf /run/systemd
-rm -f /etc/resolv.conf
 echo "nameserver 10.0.3.1" > /etc/resolv.conf


@@ -1,28 +0,0 @@
#!/bin/bash
set -e
IMAGE=$1
if [ -z "$IMAGE" ]; then
>&2 echo "usage: $0 <image id>"
exit 1
fi
if ! [ -d "/media/images/$IMAGE" ]; then
>&2 echo "image does not exist"
exit 1
fi
container=$(mktemp -d)
mkdir -p $container/rootfs $container/upper $container/work
mount -t overlay -olowerdir=/media/images/$IMAGE,upperdir=$container/upper,workdir=$container/work overlay $container/rootfs
rootfs=$container/rootfs
for special in dev sys proc run; do
mkdir -p $rootfs/$special
mount --bind /$special $rootfs/$special
done
echo $rootfs


@@ -38,7 +38,7 @@
 	},
 	"../sdk/dist": {
 		"name": "@start9labs/start-sdk",
-		"version": "0.4.0-beta.45",
+		"version": "0.4.0-beta.48",
 		"license": "MIT",
 		"dependencies": {
 			"@iarna/toml": "^3.0.0",


@@ -1,12 +0,0 @@
#!/bin/bash
set -e
rootfs=$1
if [ -z "$rootfs" ]; then
>&2 echo "usage: $0 <container rootfs path>"
exit 1
fi
umount --recursive $rootfs
rm -rf $rootfs/..


@@ -178,6 +178,13 @@ export function makeEffects(context: EffectContext): Effects {
 				T.Effects["getInstalledPackages"]
 			>
 		},
+		getServiceManifest(
+			...[options]: Parameters<T.Effects["getServiceManifest"]>
+		) {
+			return rpcRound("get-service-manifest", options) as ReturnType<
+				T.Effects["getServiceManifest"]
+			>
+		},
 		subcontainer: {
 			createFs(options: { imageId: string; name: string }) {
 				return rpcRound("subcontainer.create-fs", options) as ReturnType<
@@ -312,6 +319,7 @@ export function makeEffects(context: EffectContext): Effects {
 		}
 		if (context.callbacks?.onLeaveContext)
 			self.onLeaveContext(() => {
+				self.constRetry = undefined
 				self.isInContext = false
 				self.onLeaveContext = () => {
 					console.warn(


@@ -10,7 +10,6 @@ import { SDKManifest } from "@start9labs/start-sdk/base/lib/types"
 import { SubContainerRc } from "@start9labs/start-sdk/package/lib/util/SubContainer"
 const EMBASSY_HEALTH_INTERVAL = 15 * 1000
-const EMBASSY_PROPERTIES_LOOP = 30 * 1000
 /**
  * We wanted something to represent what the main loop is doing, and
  * in this case it used to run the properties, health, and the docker/ js main.


@@ -50,6 +50,7 @@ import {
 	transformOldConfigToNew,
 } from "./transformConfigSpec"
 import { partialDiff } from "@start9labs/start-sdk/base/lib/util"
+import { Volume } from "@start9labs/start-sdk/package/lib/util/Volume"
 type Optional<A> = A | undefined | null
 function todo(): never {
@@ -61,14 +62,14 @@ export const EMBASSY_JS_LOCATION = "/usr/lib/startos/package/embassy.js"
 const configFile = FileHelper.json(
 	{
-		volumeId: "embassy",
+		base: new Volume("embassy"),
 		subpath: "config.json",
 	},
 	matches.any,
 )
 const dependsOnFile = FileHelper.json(
 	{
-		volumeId: "embassy",
+		base: new Volume("embassy"),
 		subpath: "dependsOn.json",
 	},
 	dictionary([string, array(string)]),
@@ -287,7 +288,6 @@ function convertProperties(
 	}
 }
-const DEFAULT_REGISTRY = "https://registry.start9.com"
 export class SystemForEmbassy implements System {
 	private version: ExtendedVersion
 	currentRunning: MainLoop | undefined
@@ -331,6 +331,10 @@ export class SystemForEmbassy implements System {
 		) {
 			this.version.upstream.prerelease = ["alpha"]
 		}
+		if (this.manifest.id === "nostr") {
+			this.manifest.id = "nostr-rs-relay"
+		}
 	}
 	async init(
async init( async init(


@@ -0,0 +1,21 @@
#!/bin/bash
cd "$(dirname "${BASH_SOURCE[0]}")/.."
USE_TTY=
if tty -s; then
USE_TTY="-it"
fi
DOCKER_PLATFORM=linux/${ARCH}
case $ARCH in
x86_64)
DOCKER_PLATFORM=linux/amd64;;
aarch64)
DOCKER_PLATFORM=linux/arm64;;
esac
docker run --rm $USE_TTY --platform=$DOCKER_PLATFORM -eARCH --privileged -v "$(pwd):/root/start-os" start9/build-env /root/start-os/container-runtime/update-image.sh
if [ "$(ls -nd "rootfs.${ARCH}.squashfs" | awk '{ print $3 }')" != "$UID" ]; then
docker run --rm $USE_TTY -v "$(pwd):/root/start-os" start9/build-env chown -R $UID:$UID /root/start-os/container-runtime
fi


@@ -9,56 +9,34 @@ if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc" RUST_ARCH="riscv64gc"
fi fi
if mountpoint -q tmp/combined; then sudo umount -l tmp/combined; fi mount -t tmpfs tmpfs /tmp
if mountpoint -q tmp/lower; then sudo umount tmp/lower; fi mkdir -p /tmp/lower /tmp/upper /tmp/work /tmp/combined
sudo rm -rf tmp mount -o loop debian.${ARCH}.squashfs /tmp/lower
mkdir -p tmp/lower tmp/upper tmp/work tmp/combined mount -t overlay -olowerdir=/tmp/lower,upperdir=/tmp/upper,workdir=/tmp/work overlay /tmp/combined
if which squashfuse > /dev/null; then
sudo squashfuse debian.${ARCH}.squashfs tmp/lower
else
sudo mount debian.${ARCH}.squashfs tmp/lower
fi
if which fuse-overlayfs > /dev/null; then
sudo fuse-overlayfs -olowerdir=tmp/lower,upperdir=tmp/upper,workdir=tmp/work overlay tmp/combined
else
sudo mount -t overlay -olowerdir=tmp/lower,upperdir=tmp/upper,workdir=tmp/work overlay tmp/combined
fi
-QEMU=
-if [ "$ARCH" != "$(uname -m)" ]; then
-QEMU=/usr/bin/qemu-${ARCH}
-if ! which qemu-$ARCH > /dev/null; then
->&2 echo qemu-user is required for cross-platform builds
-sudo umount tmp/combined
-sudo umount tmp/lower
-sudo rm -rf tmp
-exit 1
-fi
-sudo cp $(which qemu-$ARCH) tmp/combined${QEMU}
-fi
-sudo mkdir -p tmp/combined/usr/lib/startos/
-sudo rsync -a --copy-unsafe-links dist/ tmp/combined/usr/lib/startos/init/
-sudo chown -R 0:0 tmp/combined/usr/lib/startos/
-sudo cp container-runtime.service tmp/combined/lib/systemd/system/container-runtime.service
-sudo chown 0:0 tmp/combined/lib/systemd/system/container-runtime.service
-sudo cp container-runtime-failure.service tmp/combined/lib/systemd/system/container-runtime-failure.service
-sudo chown 0:0 tmp/combined/lib/systemd/system/container-runtime-failure.service
-sudo cp ../core/target/${RUST_ARCH}-unknown-linux-musl/release/start-container tmp/combined/usr/bin/start-container
-echo -e '#!/bin/bash\nexec start-container "$@"' | sudo tee tmp/combined/usr/bin/start-cli # TODO: remove
-sudo chmod +x tmp/combined/usr/bin/start-cli
-sudo chown 0:0 tmp/combined/usr/bin/start-container
-echo container-runtime | sha256sum | head -c 32 | cat - <(echo) | sudo tee tmp/combined/etc/machine-id
-cat deb-install.sh | sudo systemd-nspawn --console=pipe -D tmp/combined $QEMU /bin/bash
-sudo truncate -s 0 tmp/combined/etc/machine-id
-if [ -n "$QEMU" ]; then
-sudo rm tmp/combined${QEMU}
-fi
+mkdir -p /tmp/combined/usr/lib/startos/
+rsync -a --copy-unsafe-links --info=progress2 dist/ /tmp/combined/usr/lib/startos/init/
+chown -R 0:0 /tmp/combined/usr/lib/startos/
+cp container-runtime.service /tmp/combined/lib/systemd/system/container-runtime.service
+chown 0:0 /tmp/combined/lib/systemd/system/container-runtime.service
+cp container-runtime-failure.service /tmp/combined/lib/systemd/system/container-runtime-failure.service
+chown 0:0 /tmp/combined/lib/systemd/system/container-runtime-failure.service
+cp ../core/target/${RUST_ARCH}-unknown-linux-musl/release/start-container /tmp/combined/usr/bin/start-container
+echo -e '#!/bin/bash\nexec start-container "$@"' > /tmp/combined/usr/bin/start-cli # TODO: remove
+chmod +x /tmp/combined/usr/bin/start-cli
+chown 0:0 /tmp/combined/usr/bin/start-container
+echo container-runtime | sha256sum | head -c 32 | cat - <(echo) > /tmp/combined/etc/machine-id
+rm -f /tmp/combined/etc/resolv.conf
+cp /etc/resolv.conf /tmp/combined/etc/resolv.conf
+for fs in proc sys dev; do
+mount --bind /$fs /tmp/combined/$fs
+done
+cat deb-install.sh | chroot /tmp/combined /bin/bash
+for fs in proc sys dev; do
+umount /tmp/combined/$fs
+done
+truncate -s 0 /tmp/combined/etc/machine-id
 rm -f rootfs.${ARCH}.squashfs
 mkdir -p ../build/lib/container-runtime
-sudo mksquashfs tmp/combined rootfs.${ARCH}.squashfs
-sudo umount tmp/combined
-sudo umount tmp/lower
-sudo rm -rf tmp
+mksquashfs /tmp/combined rootfs.${ARCH}.squashfs
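The machine-id step in the script above is worth a note: hashing a fixed seed string and truncating to 32 hex characters yields a deterministic, systemd-compatible machine-id, so every build of the image gets the same id. A standalone sketch of just the hash step (no files are written here):

```shell
# Derive a deterministic 32-hex-character machine-id from a fixed seed,
# the same way the build script does before writing /etc/machine-id.
id="$(echo container-runtime | sha256sum | head -c 32)"
echo "$id"
echo "${#id}"  # always 32: sha256 hex is 64 chars, truncated to machine-id length
```

The `cat - <(echo)` in the script simply appends the trailing newline that a machine-id file expects.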

core/.gitignore (vendored, 2 lines changed)

@@ -8,4 +8,4 @@ secrets.db
.env .env
.editorconfig .editorconfig
proptest-regressions/**/* proptest-regressions/**/*
/startos/bindings/* /bindings/*

core/Cargo.lock (generated, 915 lines changed): file diff suppressed because it is too large


@@ -1,3 +1,292 @@
[workspace] [package]
authors = ["Aiden McClelland <me@drbonez.dev>"]
description = "The core of StartOS"
documentation = "https://docs.rs/start-os"
edition = "2024"
keywords = [
"bitcoin",
"full-node",
"lightning",
"privacy",
"raspberry-pi",
"self-hosted",
]
license = "MIT"
name = "start-os"
readme = "README.md"
repository = "https://github.com/Start9Labs/start-os"
version = "0.4.0-alpha.19" # VERSION_BUMP
members = ["startos"] [lib]
name = "startos"
path = "src/lib.rs"
[[bin]]
name = "startbox"
path = "src/main/startbox.rs"
[[bin]]
name = "start-cli"
path = "src/main/start-cli.rs"
[[bin]]
name = "start-container"
path = "src/main/start-container.rs"
[[bin]]
name = "registrybox"
path = "src/main/registrybox.rs"
[[bin]]
name = "tunnelbox"
path = "src/main/tunnelbox.rs"
[features]
arti = [
"arti-client",
"safelog",
"tor-cell",
"tor-hscrypto",
"tor-hsservice",
"tor-keymgr",
"tor-llcrypto",
"tor-proto",
"tor-rtcompat",
]
beta = []
console = ["console-subscriber", "tokio/tracing"]
default = []
dev = []
test = []
unstable = ["backtrace-on-stack-overflow"]
[dependencies]
aes = { version = "0.7.5", features = ["ctr"] }
arti-client = { version = "0.33", features = [
"compression",
"ephemeral-keystore",
"experimental-api",
"onion-service-client",
"onion-service-service",
"rustls",
"static",
"tokio",
], default-features = false, git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
async-acme = { version = "0.6.0", git = "https://github.com/dr-bonez/async-acme.git", features = [
"use_rustls",
"use_tokio",
] }
async-compression = { version = "0.4.32", features = [
"brotli",
"gzip",
"tokio",
"zstd",
] }
async-stream = "0.3.5"
async-trait = "0.1.74"
axum = { version = "0.8.4", features = ["ws", "http2"] }
backtrace-on-stack-overflow = { version = "0.3.0", optional = true }
base32 = "0.5.0"
base64 = "0.22.1"
base64ct = "1.6.0"
basic-cookies = "0.1.4"
blake3 = { version = "1.5.0", features = ["mmap", "rayon"] }
bytes = "1"
chrono = { version = "0.4.31", features = ["serde"] }
clap = { version = "4.4.12", features = ["string"] }
color-eyre = "0.6.2"
console = "0.16.2"
console-subscriber = { version = "0.5.0", optional = true }
const_format = "0.2.34"
cookie = "0.18.0"
cookie_store = "0.22.0"
curve25519-dalek = "4.1.3"
der = { version = "0.7.9", features = ["derive", "pem"] }
digest = "0.10.7"
divrem = "1.0.0"
dns-lookup = "3.0.1"
ed25519 = { version = "2.2.3", features = ["alloc", "pem", "pkcs8"] }
ed25519-dalek = { version = "2.2.0", features = [
"digest",
"hazmat",
"pkcs8",
"rand_core",
"serde",
"zeroize",
] }
ed25519-dalek-v1 = { package = "ed25519-dalek", version = "1" }
exver = { version = "0.2.0", git = "https://github.com/Start9Labs/exver-rs.git", features = [
"serde",
] }
fd-lock-rs = "0.1.4"
form_urlencoded = "1.2.1"
futures = "0.3.28"
gpt = "4.1.0"
hex = "0.4.3"
hickory-server = { version = "0.25.2", features = ["resolver"] }
hmac = "0.12.1"
http = "1.0.0"
http-body-util = "0.1"
hyper = { version = "1.5", features = ["http1", "http2", "server"] }
hyper-util = { version = "0.1.10", features = [
"http1",
"http2",
"server",
"server-auto",
"server-graceful",
"service",
"tokio",
] }
id-pool = { version = "0.2.2", default-features = false, features = [
"serde",
"u16",
] }
iddqd = "0.3.14"
imbl = { version = "6", features = ["serde", "small-chunks"] }
imbl-value = { version = "0.4.3", features = ["ts-rs"] }
include_dir = { version = "0.7.3", features = ["metadata"] }
indexmap = { version = "2.0.2", features = ["serde"] }
indicatif = { version = "0.18.3", features = ["tokio"] }
inotify = "0.11.0"
integer-encoding = { version = "4.0.0", features = ["tokio_async"] }
ipnet = { version = "2.8.0", features = ["serde"] }
isocountry = "0.3.2"
itertools = "0.14.0"
jaq-core = "0.10.1"
jaq-std = "0.10.0"
josekit = "0.10.3"
jsonpath_lib = { git = "https://github.com/Start9Labs/jsonpath.git" }
lazy_async_pool = "0.3.3"
lazy_format = "2.0"
lazy_static = "1.4.0"
lettre = { version = "0.11.18", default-features = false, features = [
"aws-lc-rs",
"builder",
"hostname",
"pool",
"rustls-platform-verifier",
"smtp-transport",
"tokio1-rustls",
] }
libc = "0.2.149"
log = "0.4.20"
mbrman = "0.6.0"
miette = { version = "7.6.0", features = ["fancy"] }
mio = "1"
new_mime_guess = "4"
nix = { version = "0.30.1", features = [
"fs",
"hostname",
"mount",
"net",
"process",
"sched",
"signal",
"user",
] }
nom = "8.0.0"
num = "0.4.1"
num_cpus = "1.16.0"
num_enum = "0.7.0"
once_cell = "1.19.0"
openssh-keys = "0.6.2"
openssl = { version = "0.10.57", features = ["vendored"] }
p256 = { version = "0.13.2", features = ["pem"] }
patch-db = { version = "*", path = "../patch-db/patch-db", features = [
"trace",
] }
pbkdf2 = "0.12.2"
pin-project = "1.1.3"
pkcs8 = { version = "0.10.2", features = ["std"] }
prettytable-rs = "0.10.0"
proptest = "1.3.1"
proptest-derive = "0.7.0"
qrcode = "0.14.1"
r3bl_tui = "0.7.6"
rand = "0.9.2"
regex = "1.10.2"
reqwest = { version = "0.12.25", features = [
"json",
"socks",
"stream",
"http2",
] }
reqwest_cookie_store = "0.9.0"
rpassword = "7.2.0"
rust-argon2 = "3.0.0"
rust-i18n = "3.1.5"
rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git" }
safelog = { version = "0.4.8", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
semver = { version = "1.0.20", features = ["serde"] }
serde = { version = "1.0", features = ["derive", "rc"] }
serde_cbor = { package = "ciborium", version = "0.2.1" }
serde_json = "1.0"
serde_toml = { package = "toml", version = "0.9.9+spec-1.0.0" }
serde_yaml = { package = "serde_yml", version = "0.0.12" }
sha-crypt = "0.5.0"
sha2 = "0.10.2"
signal-hook = "0.3.17"
socket2 = { version = "0.6.0", features = ["all"] }
socks5-impl = { version = "0.7.2", features = ["client", "server"] }
sqlx = { version = "0.8.6", features = [
"postgres",
"runtime-tokio-rustls",
], default-features = false }
sscanf = "0.4.1"
ssh-key = { version = "0.6.2", features = ["ed25519"] }
tar = "0.4.40"
termion = "4.0.5"
textwrap = "0.16.1"
thiserror = "2.0.12"
tokio = { version = "1.38.1", features = ["full"] }
tokio-rustls = "0.26.4"
tokio-stream = { version = "0.1.14", features = ["io-util", "net", "sync"] }
tokio-tar = { git = "https://github.com/dr-bonez/tokio-tar.git" }
tokio-tungstenite = { version = "0.26.2", features = ["native-tls", "url"] }
tokio-util = { version = "0.7.9", features = ["io"] }
tor-cell = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-hscrypto = { version = "0.33", features = [
"full",
], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-hsservice = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-keymgr = { version = "0.33", features = [
"ephemeral-keystore",
], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-llcrypto = { version = "0.33", features = [
"full",
], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-proto = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-rtcompat = { version = "0.33", features = [
"rustls",
"tokio",
], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
torut = "0.2.1"
tower-service = "0.3.3"
tracing = "0.1.39"
tracing-error = "0.2.0"
tracing-journald = "0.3.0"
tracing-subscriber = { version = "=0.3.19", features = ["env-filter"] }
ts-rs = "9.0.1"
typed-builder = "0.23.2"
url = { version = "2.4.1", features = ["serde"] }
uuid = { version = "1.4.1", features = ["v4"] }
visit-rs = "0.1.1"
x25519-dalek = { version = "2.0.1", features = ["static_secrets"] }
zbus = "5.1.1"
hashing-serializer = "0.1.1"
[target.'cfg(target_os = "linux")'.dependencies]
procfs = "0.18.0"
pty-process = "0.5.1"
[profile.test]
opt-level = 3
[profile.dev]
opt-level = 3
[profile.dev.package.backtrace]
opt-level = 3
[profile.dev.package.sqlx-macros]
opt-level = 3


@@ -26,7 +26,7 @@ PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release" BUILD_FLAGS="--release"
else else
if [ "$PROFILE" != "debug"]; then if [ "$PROFILE" != "debug" ]; then
>&2 echo "Unknown profile $PROFILE: falling back to debug..." >&2 echo "Unknown profile $PROFILE: falling back to debug..."
PROFILE=debug PROFILE=debug
fi fi
@@ -60,7 +60,7 @@ if [ -z "${TARGET:-}" ]; then
fi fi
fi fi
cd .. cd ../..
FEATURES="$(echo "${ENVIRONMENT:-}" | sed 's/-/,/g')" FEATURES="$(echo "${ENVIRONMENT:-}" | sed 's/-/,/g')"
RUSTFLAGS="" RUSTFLAGS=""
if [[ "${ENVIRONMENT:-}" =~ (^|-)console($|-) ]]; then if [[ "${ENVIRONMENT:-}" =~ (^|-)console($|-) ]]; then
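The FEATURES line in this hunk converts the dash-separated ENVIRONMENT string into cargo's comma-separated feature syntax. A minimal sketch with a hypothetical value (`dev-console` is illustrative, not taken from the source):

```shell
# Turn a dash-separated environment name into a comma-separated feature list,
# as the build scripts above do.
ENVIRONMENT="dev-console"
FEATURES="$(echo "${ENVIRONMENT:-}" | sed 's/-/,/g')"
echo "$FEATURES"  # prints: dev,console
```

The companion regex `(^|-)console($|-)` then matches the `console` token only at dash boundaries, so a name like `consoles` would not accidentally enable the console feature.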


@@ -30,7 +30,7 @@ if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc" RUST_ARCH="riscv64gc"
fi fi
cd .. cd ../..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')" FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS="" RUSTFLAGS=""


@@ -30,7 +30,7 @@ if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc" RUST_ARCH="riscv64gc"
fi fi
cd .. cd ../..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')" FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS="" RUSTFLAGS=""


@@ -30,7 +30,7 @@ if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc" RUST_ARCH="riscv64gc"
fi fi
cd .. cd ../..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')" FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS="" RUSTFLAGS=""


@@ -30,7 +30,7 @@ if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc" RUST_ARCH="riscv64gc"
fi fi
cd .. cd ../..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')" FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS="" RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
@@ -39,6 +39,6 @@ fi
echo "FEATURES=\"$FEATURES\"" echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\"" echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-zig-builder cargo test --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features test,$FEATURES --locked 'export_bindings_' rust-zig-builder cargo test --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features test,$FEATURES --locked 'export_bindings_'
if [ "$(ls -nd "core/startos/bindings" | awk '{ print $3 }')" != "$UID" ]; then if [ "$(ls -nd "core/bindings" | awk '{ print $3 }')" != "$UID" ]; then
rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID core/startos/bindings && chown -R $UID:$UID /usr/local/cargo" rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID core/bindings && chown -R $UID:$UID /usr/local/cargo"
fi fi
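The ownership check in this hunk parses `ls -nd` output to get the numeric owner UID of the bindings directory, then chowns it back to the invoking user after the containerized build (which typically runs as root). A standalone sketch of the parsing step, using a throwaway `/tmp` directory in place of `core/bindings`:

```shell
# Read a path's numeric owner UID the way the build script does:
# ls -n prints numeric uid/gid, and awk picks the owner field.
demo=/tmp/bindings-owner-demo
mkdir -p "$demo"
owner="$(ls -nd "$demo" | awk '{ print $3 }')"
if [ "$owner" != "$(id -u)" ]; then
  echo "would chown back to $(id -u)"
else
  echo "ownership ok"
fi
```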


@@ -30,7 +30,7 @@ if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc" RUST_ARCH="riscv64gc"
fi fi
cd .. cd ../..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')" FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS="" RUSTFLAGS=""

core/locales/i18n.yaml (normal file, 5343 lines): file diff suppressed because it is too large


@@ -2,7 +2,7 @@
cd "$(dirname "${BASH_SOURCE[0]}")" cd "$(dirname "${BASH_SOURCE[0]}")"
source ./builder-alias.sh source ./build/builder-alias.sh
set -ea set -ea
shopt -s expand_aliases shopt -s expand_aliases


@@ -23,7 +23,7 @@ pub fn action_api<C: Context>() -> ParentHandler<C> {
"get-input", "get-input",
from_fn_async(get_action_input) from_fn_async(get_action_input)
.with_display_serializable() .with_display_serializable()
.with_about("Get action input spec") .with_about("about.get-action-input-spec")
.with_call_remote::<CliContext>(), .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand(
@@ -36,14 +36,14 @@ pub fn action_api<C: Context>() -> ParentHandler<C> {
} }
Ok(()) Ok(())
}) })
.with_about("Run service action") .with_about("about.run-service-action")
.with_call_remote::<CliContext>(), .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand(
"clear-task", "clear-task",
from_fn_async(clear_task) from_fn_async(clear_task)
.no_display() .no_display()
.with_about("Clear a service task") .with_about("about.clear-service-task")
.with_call_remote::<CliContext>(), .with_call_remote::<CliContext>(),
) )
} }
@@ -63,7 +63,9 @@ pub struct ActionInput {
#[derive(Deserialize, Serialize, TS, Parser)] #[derive(Deserialize, Serialize, TS, Parser)]
#[serde(rename_all = "camelCase")] #[serde(rename_all = "camelCase")]
pub struct GetActionInputParams { pub struct GetActionInputParams {
#[arg(help = "help.arg.package-id")]
pub package_id: PackageId, pub package_id: PackageId,
#[arg(help = "help.arg.action-id")]
pub action_id: ActionId, pub action_id: ActionId,
} }
@@ -280,8 +282,11 @@ pub struct RunActionParams {
#[derive(Parser)] #[derive(Parser)]
struct CliRunActionParams { struct CliRunActionParams {
#[arg(help = "help.arg.package-id")]
pub package_id: PackageId, pub package_id: PackageId,
#[arg(help = "help.arg.event-id")]
pub event_id: Option<Guid>, pub event_id: Option<Guid>,
#[arg(help = "help.arg.action-id")]
pub action_id: ActionId, pub action_id: ActionId,
#[command(flatten)] #[command(flatten)]
pub input: StdinDeserializable<Option<Value>>, pub input: StdinDeserializable<Option<Value>>,
@@ -360,9 +365,11 @@ pub async fn run_action(
#[serde(rename_all = "camelCase")] #[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")] #[command(rename_all = "kebab-case")]
pub struct ClearTaskParams { pub struct ClearTaskParams {
#[arg(help = "help.arg.package-id")]
pub package_id: PackageId, pub package_id: PackageId,
#[arg(help = "help.arg.replay-id")]
pub replay_id: ReplayId, pub replay_id: ReplayId,
#[arg(long)] #[arg(long, help = "help.arg.force-clear-task")]
#[serde(default)] #[serde(default)]
pub force: bool, pub force: bool,
} }


@@ -51,7 +51,10 @@ pub async fn write_shadow(password: &str) -> Result<(), Error> {
match line.split_once(":") { match line.split_once(":") {
Some((user, rest)) if user == "start9" || user == "kiosk" => { Some((user, rest)) if user == "start9" || user == "kiosk" => {
let (_, rest) = rest.split_once(":").ok_or_else(|| { let (_, rest) = rest.split_once(":").ok_or_else(|| {
Error::new(eyre!("malformed /etc/shadow"), ErrorKind::ParseSysInfo) Error::new(
eyre!("{}", t!("auth.malformed-etc-shadow")),
ErrorKind::ParseSysInfo,
)
})?; })?;
shadow_file shadow_file
.write_all(format!("{user}:{hash}:{rest}\n").as_bytes()) .write_all(format!("{user}:{hash}:{rest}\n").as_bytes())
@@ -81,7 +84,7 @@ impl PasswordType {
PasswordType::String(x) => Ok(x), PasswordType::String(x) => Ok(x),
PasswordType::EncryptedWire(x) => x.decrypt(current_secret).ok_or_else(|| { PasswordType::EncryptedWire(x) => x.decrypt(current_secret).ok_or_else(|| {
Error::new( Error::new(
color_eyre::eyre::eyre!("Couldn't decode password"), color_eyre::eyre::eyre!("{}", t!("auth.couldnt-decode-password")),
crate::ErrorKind::Unknown, crate::ErrorKind::Unknown,
) )
}), }),
@@ -125,19 +128,19 @@ where
"login", "login",
from_fn_async(cli_login::<AC>) from_fn_async(cli_login::<AC>)
.no_display() .no_display()
.with_about("Log in a new auth session"), .with_about("about.login-new-auth-session"),
) )
.subcommand( .subcommand(
"logout", "logout",
from_fn_async(logout::<AC>) from_fn_async(logout::<AC>)
.with_metadata("get_session", Value::Bool(true)) .with_metadata("get_session", Value::Bool(true))
.no_display() .no_display()
.with_about("Log out of current auth session") .with_about("about.logout-current-auth-session")
.with_call_remote::<CliContext>(), .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand(
"session", "session",
session::<C, AC>().with_about("List or kill auth sessions"), session::<C, AC>().with_about("about.list-or-kill-auth-sessions"),
) )
.subcommand( .subcommand(
"reset-password", "reset-password",
@@ -147,14 +150,14 @@ where
"reset-password", "reset-password",
from_fn_async(cli_reset_password) from_fn_async(cli_reset_password)
.no_display() .no_display()
.with_about("Reset password"), .with_about("about.reset-password"),
) )
.subcommand( .subcommand(
"get-pubkey", "get-pubkey",
from_fn_async(get_pubkey) from_fn_async(get_pubkey)
.with_metadata("authenticated", Value::Bool(false)) .with_metadata("authenticated", Value::Bool(false))
.no_display() .no_display()
.with_about("Get public key derived from server private key") .with_about("about.get-pubkey-from-server")
.with_call_remote::<CliContext>(), .with_call_remote::<CliContext>(),
) )
} }
@@ -184,7 +187,11 @@ async fn cli_login<C: SessionAuthContext>(
where where
CliContext: CallRemote<C>, CliContext: CallRemote<C>,
{ {
let password = rpassword::prompt_password("Password: ")?; let password = if let Ok(password) = std::env::var("PASSWORD") {
password
} else {
rpassword::prompt_password("Password: ")?
};
ctx.call_remote::<C>( ctx.call_remote::<C>(
&parent_method.into_iter().chain(method).join("."), &parent_method.into_iter().chain(method).join("."),
@@ -204,12 +211,12 @@ pub fn check_password(hash: &str, password: &str) -> Result<(), Error> {
ensure_code!( ensure_code!(
argon2::verify_encoded(&hash, password.as_bytes()).map_err(|_| { argon2::verify_encoded(&hash, password.as_bytes()).map_err(|_| {
Error::new( Error::new(
eyre!("Password Incorrect"), eyre!("{}", t!("auth.password-incorrect")),
crate::ErrorKind::IncorrectPassword, crate::ErrorKind::IncorrectPassword,
) )
})?, })?,
crate::ErrorKind::IncorrectPassword, crate::ErrorKind::IncorrectPassword,
"Password Incorrect" t!("auth.password-incorrect")
); );
Ok(()) Ok(())
} }
@@ -323,14 +330,14 @@ where
.with_metadata("get_session", Value::Bool(true)) .with_metadata("get_session", Value::Bool(true))
.with_display_serializable() .with_display_serializable()
.with_custom_display_fn(|handle, result| display_sessions(handle.params, result)) .with_custom_display_fn(|handle, result| display_sessions(handle.params, result))
.with_about("Display all auth sessions") .with_about("about.display-all-auth-sessions")
.with_call_remote::<CliContext>(), .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand(
"kill", "kill",
from_fn_async(kill::<AC>) from_fn_async(kill::<AC>)
.no_display() .no_display()
.with_about("Terminate existing auth session(s)") .with_about("about.terminate-auth-sessions")
.with_call_remote::<CliContext>(), .with_call_remote::<CliContext>(),
) )
} }
@@ -414,6 +421,7 @@ impl AsLogoutSessionId for KillSessionId {
#[serde(rename_all = "camelCase")] #[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")] #[command(rename_all = "kebab-case")]
pub struct KillParams { pub struct KillParams {
#[arg(help = "help.arg.session-ids")]
ids: Vec<String>, ids: Vec<String>,
} }
@@ -430,7 +438,9 @@ pub async fn kill<C: SessionAuthContext>(
#[serde(rename_all = "camelCase")] #[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")] #[command(rename_all = "kebab-case")]
pub struct ResetPasswordParams { pub struct ResetPasswordParams {
#[arg(help = "help.arg.old-password")]
old_password: Option<PasswordType>, old_password: Option<PasswordType>,
#[arg(help = "help.arg.new-password")]
new_password: Option<PasswordType>, new_password: Option<PasswordType>,
} }
@@ -443,13 +453,13 @@ async fn cli_reset_password(
.. ..
}: HandlerArgs<CliContext>, }: HandlerArgs<CliContext>,
) -> Result<(), RpcError> { ) -> Result<(), RpcError> {
let old_password = rpassword::prompt_password("Current Password: ")?; let old_password = rpassword::prompt_password(&t!("auth.prompt-current-password"))?;
let new_password = { let new_password = {
let new_password = rpassword::prompt_password("New Password: ")?; let new_password = rpassword::prompt_password(&t!("auth.prompt-new-password"))?;
if new_password != rpassword::prompt_password("Confirm: ")? { if new_password != rpassword::prompt_password(&t!("auth.prompt-confirm"))? {
return Err(Error::new( return Err(Error::new(
eyre!("Passwords do not match"), eyre!("{}", t!("auth.passwords-do-not-match")),
crate::ErrorKind::IncorrectPassword, crate::ErrorKind::IncorrectPassword,
) )
.into()); .into());
@@ -482,7 +492,7 @@ pub async fn reset_password_impl(
.with_kind(crate::ErrorKind::IncorrectPassword)? .with_kind(crate::ErrorKind::IncorrectPassword)?
{ {
return Err(Error::new( return Err(Error::new(
eyre!("Incorrect Password"), eyre!("{}", t!("auth.password-incorrect")),
crate::ErrorKind::IncorrectPassword, crate::ErrorKind::IncorrectPassword,
)); ));
} }


@@ -33,11 +33,13 @@ use crate::version::VersionT;
#[serde(rename_all = "camelCase")] #[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")] #[command(rename_all = "kebab-case")]
pub struct BackupParams { pub struct BackupParams {
#[arg(help = "help.arg.backup-target-id")]
target_id: BackupTargetId, target_id: BackupTargetId,
#[arg(long = "old-password")] #[arg(long = "old-password", help = "help.arg.old-backup-password")]
old_password: Option<crate::auth::PasswordType>, old_password: Option<crate::auth::PasswordType>,
#[arg(long = "package-ids")] #[arg(long = "package-ids", help = "help.arg.package-ids-to-backup")]
package_ids: Option<Vec<PackageId>>, package_ids: Option<Vec<PackageId>>,
#[arg(help = "help.arg.backup-password")]
password: crate::auth::PasswordType, password: crate::auth::PasswordType,
} }
@@ -69,8 +71,8 @@ impl BackupStatusGuard {
db, db,
None, None,
NotificationLevel::Success, NotificationLevel::Success,
"Backup Complete".to_owned(), t!("backup.bulk.complete-title").to_string(),
"Your backup has completed".to_owned(), t!("backup.bulk.complete-message").to_string(),
BackupReport { BackupReport {
server: ServerBackupReport { server: ServerBackupReport {
attempted: true, attempted: true,
@@ -88,9 +90,8 @@ impl BackupStatusGuard {
db, db,
None, None,
NotificationLevel::Warning, NotificationLevel::Warning,
"Backup Complete".to_owned(), t!("backup.bulk.complete-title").to_string(),
"Your backup has completed, but some package(s) failed to backup" t!("backup.bulk.complete-with-failures").to_string(),
.to_owned(),
BackupReport { BackupReport {
server: ServerBackupReport { server: ServerBackupReport {
attempted: true, attempted: true,
@@ -103,7 +104,7 @@ impl BackupStatusGuard {
.await .await
} }
Err(e) => { Err(e) => {
tracing::error!("Backup Failed: {}", e); tracing::error!("{}", t!("backup.bulk.failed-error", error = e));
tracing::debug!("{:?}", e); tracing::debug!("{:?}", e);
let err_string = e.to_string(); let err_string = e.to_string();
db.mutate(|db| { db.mutate(|db| {
@@ -111,8 +112,8 @@ impl BackupStatusGuard {
db, db,
None, None,
NotificationLevel::Error, NotificationLevel::Error,
"Backup Failed".to_owned(), t!("backup.bulk.failed-title").to_string(),
"Your backup failed to complete.".to_owned(), t!("backup.bulk.failed-message").to_string(),
BackupReport { BackupReport {
server: ServerBackupReport { server: ServerBackupReport {
attempted: true, attempted: true,
@@ -224,7 +225,7 @@ fn assure_backing_up<'a>(
.as_backup_progress_mut(); .as_backup_progress_mut();
if backing_up.transpose_ref().is_some() { if backing_up.transpose_ref().is_some() {
return Err(Error::new( return Err(Error::new(
eyre!("Server is already backing up!"), eyre!("{}", t!("backup.bulk.already-backing-up")),
ErrorKind::InvalidRequest, ErrorKind::InvalidRequest,
)); ));
} }
@@ -303,7 +304,7 @@ async fn perform_backup(
let mut backup_guard = Arc::try_unwrap(backup_guard).map_err(|_| { let mut backup_guard = Arc::try_unwrap(backup_guard).map_err(|_| {
Error::new( Error::new(
eyre!("leaked reference to BackupMountGuard"), eyre!("{}", t!("backup.bulk.leaked-reference")),
ErrorKind::Incoherent, ErrorKind::Incoherent,
) )
})?; })?;


@@ -37,12 +37,12 @@ pub fn backup<C: Context>() -> ParentHandler<C> {
"create", "create",
from_fn_async(backup_bulk::backup_all) from_fn_async(backup_bulk::backup_all)
.no_display() .no_display()
.with_about("Create backup for all packages") .with_about("about.create-backup-all-packages")
.with_call_remote::<CliContext>(), .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand(
"target", "target",
target::target::<C>().with_about("Commands related to a backup target"), target::target::<C>().with_about("about.commands-backup-target"),
) )
} }
@@ -51,7 +51,7 @@ pub fn package_backup<C: Context>() -> ParentHandler<C> {
"restore", "restore",
from_fn_async(restore::restore_packages_rpc) from_fn_async(restore::restore_packages_rpc)
.no_display() .no_display()
.with_about("Restore package(s) from backup") .with_about("about.restore-packages-from-backup")
.with_call_remote::<CliContext>(), .with_call_remote::<CliContext>(),
) )
} }


@@ -23,16 +23,19 @@ use crate::progress::ProgressUnits;
use crate::s9pk::S9pk; use crate::s9pk::S9pk;
use crate::service::service_map::DownloadInstallFuture; use crate::service::service_map::DownloadInstallFuture;
use crate::setup::SetupExecuteProgress; use crate::setup::SetupExecuteProgress;
use crate::system::sync_kiosk; use crate::system::{save_language, sync_kiosk};
use crate::util::serde::IoFormat; use crate::util::serde::{IoFormat, Pem};
use crate::{PLATFORM, PackageId}; use crate::{PLATFORM, PackageId};
#[derive(Deserialize, Serialize, Parser, TS)] #[derive(Deserialize, Serialize, Parser, TS)]
#[serde(rename_all = "camelCase")] #[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")] #[command(rename_all = "kebab-case")]
pub struct RestorePackageParams { pub struct RestorePackageParams {
#[arg(help = "help.arg.package-ids")]
pub ids: Vec<PackageId>, pub ids: Vec<PackageId>,
#[arg(help = "help.arg.backup-target-id")]
pub target_id: BackupTargetId, pub target_id: BackupTargetId,
#[arg(help = "help.arg.backup-password")]
pub password: String, pub password: String,
} }
@@ -63,7 +66,10 @@ pub async fn restore_packages_rpc(
match async { res.await?.await }.await { match async { res.await?.await }.await {
Ok(_) => (), Ok(_) => (),
Err(err) => { Err(err) => {
tracing::error!("Error restoring package {}: {}", id, err); tracing::error!(
"{}",
t!("backup.restore.package-error", id = id, error = err)
);
tracing::debug!("{:?}", err); tracing::debug!("{:?}", err);
} }
} }
@@ -75,10 +81,10 @@ pub async fn restore_packages_rpc(
} }
#[instrument(skip_all)] #[instrument(skip_all)]
pub async fn recover_full_embassy( pub async fn recover_full_server(
ctx: &SetupContext, ctx: &SetupContext,
disk_guid: Arc<String>, disk_guid: InternedString,
start_os_password: String, password: String,
recovery_source: TmpMountGuard, recovery_source: TmpMountGuard,
server_id: &str, server_id: &str,
recovery_password: &str, recovery_password: &str,
@@ -102,7 +108,7 @@ pub async fn recover_full_embassy(
)?; )?;
os_backup.account.password = argon2::hash_encoded( os_backup.account.password = argon2::hash_encoded(
start_os_password.as_bytes(), password.as_bytes(),
&rand::random::<[u8; 16]>()[..], &rand::random::<[u8; 16]>()[..],
&argon2::Config::rfc9106_low_mem(), &argon2::Config::rfc9106_low_mem(),
) )
@@ -111,16 +117,32 @@ pub async fn recover_full_embassy(
let kiosk = Some(kiosk.unwrap_or(true)).filter(|_| &*PLATFORM != "raspberrypi"); let kiosk = Some(kiosk.unwrap_or(true)).filter(|_| &*PLATFORM != "raspberrypi");
sync_kiosk(kiosk).await?; sync_kiosk(kiosk).await?;
let language = ctx.language.peek(|a| a.clone());
let keyboard = ctx.keyboard.peek(|a| a.clone());
if let Some(language) = &language {
save_language(&**language).await?;
}
if let Some(keyboard) = &keyboard {
keyboard.save().await?;
}
let db = ctx.db().await?; let db = ctx.db().await?;
db.put(&ROOT, &Database::init(&os_backup.account, kiosk)?) db.put(
.await?; &ROOT,
&Database::init(&os_backup.account, kiosk, language, keyboard)?,
)
.await?;
drop(db); drop(db);
let init_result = init(&ctx.webserver, &ctx.config, init_phases).await?; let config = ctx.config.peek(|c| c.clone());
let init_result = init(&ctx.webserver, &config, init_phases).await?;
let rpc_ctx = RpcContext::init( let rpc_ctx = RpcContext::init(
&ctx.webserver, &ctx.webserver,
&ctx.config, &config,
disk_guid.clone(), disk_guid.clone(),
Some(init_result), Some(init_result),
rpc_ctx_phases, rpc_ctx_phases,
@@ -145,7 +167,10 @@ pub async fn recover_full_embassy(
match async { res.await?.await }.await { match async { res.await?.await }.await {
Ok(_) => (), Ok(_) => (),
Err(err) => { Err(err) => {
tracing::error!("Error restoring package {}: {}", id, err); tracing::error!(
"{}",
t!("backup.restore.package-error", id = id, error = err)
);
tracing::debug!("{:?}", err); tracing::debug!("{:?}", err);
} }
} }
@@ -155,7 +180,14 @@ pub async fn recover_full_embassy(
.await; .await;
restore_phase.lock().await.complete(); restore_phase.lock().await.complete();
Ok(((&os_backup.account).try_into()?, rpc_ctx)) Ok((
SetupResult {
hostname: os_backup.account.hostname,
root_ca: Pem(os_backup.account.root_ca_cert),
needs_restart: ctx.install_rootfs.peek(|a| a.is_some()),
},
rpc_ctx,
))
} }
#[instrument(skip(ctx, backup_guard))] #[instrument(skip(ctx, backup_guard))]


@@ -52,21 +52,21 @@ pub fn cifs<C: Context>() -> ParentHandler<C> {
"add", "add",
from_fn_async(add) from_fn_async(add)
.no_display() .no_display()
.with_about("Add a new backup target") .with_about("about.add-new-backup-target")
.with_call_remote::<CliContext>(), .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand(
"update", "update",
from_fn_async(update) from_fn_async(update)
.no_display() .no_display()
.with_about("Update an existing backup target") .with_about("about.update-existing-backup-target")
.with_call_remote::<CliContext>(), .with_call_remote::<CliContext>(),
) )
.subcommand( .subcommand(
"remove", "remove",
from_fn_async(remove) from_fn_async(remove)
.no_display() .no_display()
.with_about("Remove an existing backup target") .with_about("about.remove-existing-backup-target")
.with_call_remote::<CliContext>(), .with_call_remote::<CliContext>(),
) )
} }
@@ -75,9 +75,13 @@ pub fn cifs<C: Context>() -> ParentHandler<C> {
#[serde(rename_all = "camelCase")] #[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")] #[command(rename_all = "kebab-case")]
pub struct AddParams { pub struct AddParams {
#[arg(help = "help.arg.cifs-hostname")]
pub hostname: String, pub hostname: String,
#[arg(help = "help.arg.cifs-path")]
pub path: PathBuf, pub path: PathBuf,
#[arg(help = "help.arg.cifs-username")]
pub username: String, pub username: String,
#[arg(help = "help.arg.cifs-password")]
pub password: Option<String>, pub password: Option<String>,
} }
@@ -130,10 +134,15 @@ pub async fn add(
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct UpdateParams {
+    #[arg(help = "help.arg.backup-target-id")]
     pub id: BackupTargetId,
+    #[arg(help = "help.arg.cifs-hostname")]
     pub hostname: String,
+    #[arg(help = "help.arg.cifs-path")]
     pub path: PathBuf,
+    #[arg(help = "help.arg.cifs-username")]
     pub username: String,
+    #[arg(help = "help.arg.cifs-password")]
     pub password: Option<String>,
 }
@@ -151,7 +160,7 @@ pub async fn update(
         id
     } else {
         return Err(Error::new(
-            eyre!("Backup Target ID {} Not Found", id),
+            eyre!("{}", t!("backup.target.cifs.target-not-found", id = id)),
             ErrorKind::NotFound,
         ));
     };
@@ -171,7 +180,13 @@ pub async fn update(
     .as_idx_mut(&id)
     .ok_or_else(|| {
         Error::new(
-            eyre!("Backup Target ID {} Not Found", BackupTargetId::Cifs { id }),
+            eyre!(
+                "{}",
+                t!(
+                    "backup.target.cifs.target-not-found",
+                    id = BackupTargetId::Cifs { id }
+                )
+            ),
             ErrorKind::NotFound,
         )
     })?
@@ -195,6 +210,7 @@ pub async fn update(
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct RemoveParams {
+    #[arg(help = "help.arg.backup-target-id")]
     pub id: BackupTargetId,
 }
@@ -203,7 +219,7 @@ pub async fn remove(ctx: RpcContext, RemoveParams { id }: RemoveParams) -> Resul
         id
     } else {
         return Err(Error::new(
-            eyre!("Backup Target ID {} Not Found", id),
+            eyre!("{}", t!("backup.target.cifs.target-not-found", id = id)),
             ErrorKind::NotFound,
         ));
     };
@@ -220,7 +236,7 @@ pub fn load(db: &DatabaseModel, id: u32) -> Result<Cifs, Error> {
     .as_idx(&id)
     .ok_or_else(|| {
         Error::new(
-            eyre!("Backup Target ID {} Not Found", id),
+            eyre!("{}", t!("backup.target.cifs.target-not-found-id", id = id)),
             ErrorKind::NotFound,
         )
     })?


@@ -143,13 +143,13 @@ pub fn target<C: Context>() -> ParentHandler<C> {
 ParentHandler::new()
     .subcommand(
         "cifs",
-        cifs::cifs::<C>().with_about("Add, remove, or update a backup target"),
+        cifs::cifs::<C>().with_about("about.add-remove-update-backup-target"),
     )
     .subcommand(
         "list",
         from_fn_async(list)
             .with_display_serializable()
-            .with_about("List existing backup targets")
+            .with_about("about.list-existing-backup-targets")
             .with_call_remote::<CliContext>(),
     )
     .subcommand(
@@ -159,20 +159,20 @@ pub fn target<C: Context>() -> ParentHandler<C> {
         .with_custom_display_fn::<CliContext, _>(|params, info| {
             display_backup_info(params.params, info)
         })
-        .with_about("Display package backup information")
+        .with_about("about.display-package-backup-information")
         .with_call_remote::<CliContext>(),
 )
 .subcommand(
     "mount",
     from_fn_async(mount)
-        .with_about("Mount backup target")
+        .with_about("about.mount-backup-target")
         .with_call_remote::<CliContext>(),
 )
 .subcommand(
     "umount",
     from_fn_async(umount)
         .no_display()
-        .with_about("Unmount backup target")
+        .with_about("about.unmount-backup-target")
         .with_call_remote::<CliContext>(),
 )
@@ -268,8 +268,11 @@ fn display_backup_info(params: WithIoFormat<InfoParams>, info: BackupInfo) -> Re
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct InfoParams {
+    #[arg(help = "help.arg.backup-target-id")]
     target_id: BackupTargetId,
+    #[arg(help = "help.arg.server-id")]
     server_id: String,
+    #[arg(help = "help.arg.backup-password")]
     password: String,
 }
@@ -305,11 +308,13 @@ lazy_static::lazy_static! {
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct MountParams {
+    #[arg(help = "help.arg.backup-target-id")]
     target_id: BackupTargetId,
-    #[arg(long)]
+    #[arg(long, help = "help.arg.server-id")]
     server_id: Option<String>,
+    #[arg(help = "help.arg.backup-password")]
     password: String, // TODO: rpassword
-    #[arg(long)]
+    #[arg(long, help = "help.arg.allow-partial-backup")]
     allow_partial: bool,
 }
@@ -385,6 +390,7 @@ pub async fn mount(
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct UmountParams {
+    #[arg(help = "help.arg.backup-target-id")]
     target_id: Option<BackupTargetId>,
 }


@@ -17,6 +17,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
     |cfg: ContainerClientConfig| Ok(ContainerCliContext::init(cfg)),
     crate::service::effects::handler(),
 )
+.mutate_command(super::translate_cli)
 .run(args)
 {
     match e.data {


@@ -0,0 +1,11 @@
use rust_i18n::t;
pub fn renamed(old: &str, new: &str) -> ! {
eprintln!("{}", t!("bins.deprecated.renamed", old = old, new = new));
std::process::exit(1)
}
pub fn removed(name: &str) -> ! {
eprintln!("{}", t!("bins.deprecated.removed", name = name));
std::process::exit(1)
}

core/src/bins/mod.rs (new file)

@@ -0,0 +1,194 @@
use std::collections::{BTreeMap, VecDeque};
use std::ffi::OsString;
use std::path::Path;
use rust_i18n::t;
pub mod container_cli;
pub mod deprecated;
pub mod registry;
pub mod start_cli;
pub mod start_init;
pub mod startd;
pub mod tunnel;
pub fn set_locale_from_env() {
let lang = std::env::var("LANG").ok();
let lang = lang
.as_deref()
.map_or("C", |l| l.strip_suffix(".UTF-8").unwrap_or(l));
set_locale(lang)
}
pub fn set_locale(lang: &str) {
let mut best = None;
let prefix = lang.split_inclusive("_").next().unwrap();
for l in rust_i18n::available_locales!() {
if l == lang {
best = Some(l);
break;
}
if best.is_none() && l.starts_with(prefix) {
best = Some(l);
}
}
rust_i18n::set_locale(best.unwrap_or(lang));
}
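The locale selection above prefers an exact match and otherwise falls back to the first available locale sharing the language prefix. A minimal stdlib-only sketch of that same fallback logic (the locale list below is a stand-in for `rust_i18n::available_locales!()`, and the names are illustrative):

```rust
// Pick the best locale: an exact match wins; otherwise the first locale
// sharing the language prefix (e.g. "en_" for "en_US"); else the input.
fn pick_locale<'a>(lang: &'a str, available: &[&'a str]) -> &'a str {
    let prefix = lang.split_inclusive('_').next().unwrap_or(lang);
    let mut best = None;
    for l in available {
        if *l == lang {
            return *l; // exact match always wins
        }
        if best.is_none() && l.starts_with(prefix) {
            best = Some(*l); // remember the first prefix match
        }
    }
    best.unwrap_or(lang)
}

fn main() {
    let available = ["de_DE", "en_GB", "en_US"];
    println!("{}", pick_locale("en_US", &available)); // exact: en_US
    println!("{}", pick_locale("en_AU", &available)); // prefix: en_GB
    println!("{}", pick_locale("fr_FR", &available)); // no match: fr_FR
}
```

Note that returning the raw input when nothing matches mirrors the `best.unwrap_or(lang)` call above, which leaves the final fallback to the i18n library's own default-locale handling.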
pub fn translate_cli(mut cmd: clap::Command) -> clap::Command {
fn translate(s: impl std::fmt::Display) -> String {
t!(s.to_string()).into_owned()
}
if let Some(s) = cmd.get_about() {
let s = translate(s);
cmd = cmd.about(s);
}
if let Some(s) = cmd.get_long_about() {
let s = translate(s);
cmd = cmd.long_about(s);
}
if let Some(s) = cmd.get_before_help() {
let s = translate(s);
cmd = cmd.before_help(s);
}
if let Some(s) = cmd.get_before_long_help() {
let s = translate(s);
cmd = cmd.before_long_help(s);
}
if let Some(s) = cmd.get_after_help() {
let s = translate(s);
cmd = cmd.after_help(s);
}
if let Some(s) = cmd.get_after_long_help() {
let s = translate(s);
cmd = cmd.after_long_help(s);
}
let arg_ids = cmd
.get_arguments()
.map(|a| a.get_id().clone())
.collect::<Vec<_>>();
for id in arg_ids {
cmd = cmd.mut_arg(id, |arg| {
let arg = if let Some(s) = arg.get_help() {
let s = translate(s);
arg.help(s)
} else {
arg
};
if let Some(s) = arg.get_long_help() {
let s = translate(s);
arg.long_help(s)
} else {
arg
}
});
}
for cmd in cmd.get_subcommands_mut() {
*cmd = translate_cli(cmd.clone());
}
cmd
}
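`translate_cli` walks the clap command tree and maps every about/help string through `t!`, so the strings baked into the CLI (like `"about.mount-backup-target"`) act as translation keys. The key-lookup-with-fallback idea can be sketched without clap; the table contents here are hypothetical, and real `rust_i18n` missing-key behavior may differ in detail:

```rust
use std::collections::HashMap;

// Resolve a help key to localized text, falling back to the key itself
// when no entry exists - roughly how an untranslated key surfaces as-is.
fn translate(key: &str, table: &HashMap<&str, &str>) -> String {
    table
        .get(key)
        .map(|s| s.to_string())
        .unwrap_or_else(|| key.to_string())
}

fn main() {
    let mut table = HashMap::new();
    table.insert("about.mount-backup-target", "Mount backup target");
    println!("{}", translate("about.mount-backup-target", &table));
    println!("{}", translate("about.unknown-key", &table));
}
```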
#[derive(Default)]
pub struct MultiExecutable {
default: Option<&'static str>,
bins: BTreeMap<&'static str, fn(VecDeque<OsString>)>,
}
impl MultiExecutable {
pub fn enable_startd(&mut self) -> &mut Self {
self.bins.insert("startd", startd::main);
self.bins
.insert("embassyd", |_| deprecated::renamed("embassyd", "startd"));
self.bins
.insert("embassy-init", |_| deprecated::removed("embassy-init"));
self
}
pub fn enable_start_cli(&mut self) -> &mut Self {
self.bins.insert("start-cli", start_cli::main);
self.bins.insert("embassy-cli", |_| {
deprecated::renamed("embassy-cli", "start-cli")
});
self.bins
.insert("embassy-sdk", |_| deprecated::removed("embassy-sdk"));
self
}
pub fn enable_start_container(&mut self) -> &mut Self {
self.bins.insert("start-container", container_cli::main);
self
}
pub fn enable_start_registryd(&mut self) -> &mut Self {
self.bins.insert("start-registryd", registry::main);
self
}
pub fn enable_start_registry(&mut self) -> &mut Self {
self.bins.insert("start-registry", registry::cli);
self
}
pub fn enable_start_tunneld(&mut self) -> &mut Self {
self.bins.insert("start-tunneld", tunnel::main);
self
}
pub fn enable_start_tunnel(&mut self) -> &mut Self {
self.bins.insert("start-tunnel", tunnel::cli);
self
}
pub fn set_default(&mut self, name: &str) -> &mut Self {
if let Some((name, _)) = self.bins.get_key_value(name) {
self.default = Some(*name);
} else {
panic!("{}", t!("bins.mod.does-not-exist", name = name));
}
self
}
fn select_executable(&self, name: &str) -> Option<fn(VecDeque<OsString>)> {
self.bins.get(&name).copied()
}
pub fn execute(&self) {
set_locale_from_env();
let mut popped = Vec::with_capacity(2);
let mut args = std::env::args_os().collect::<VecDeque<_>>();
for _ in 0..2 {
if let Some(s) = args.pop_front() {
if let Some(name) = Path::new(&*s).file_name().and_then(|s| s.to_str()) {
if name == "--contents" {
for name in self.bins.keys() {
println!("{name}");
}
return;
}
if let Some(x) = self.select_executable(&name) {
args.push_front(s);
return x(args);
}
}
popped.push(s);
}
}
if let Some(default) = self.default {
while let Some(arg) = popped.pop() {
args.push_front(arg);
}
return self.bins[default](args);
}
let args = std::env::args().collect::<VecDeque<_>>();
eprintln!(
"{}",
t!(
"bins.mod.unknown-executable",
name = args
.get(1)
.or_else(|| args.get(0))
.map(|s| s.as_str())
.unwrap_or("N/A")
)
);
std::process::exit(1);
}
}
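`MultiExecutable` implements the familiar busybox pattern: a single binary dispatches on the name it was invoked as (argv[0]'s file name), falling back to the next argument. A stripped-down stdlib sketch of just that selection step, with illustrative names (the real type dispatches to `fn(VecDeque<OsString>)` entry points rather than returning a label):

```rust
use std::collections::BTreeMap;
use std::path::Path;

// Check argv[0] (the invoked path's file name), then argv[1], against the
// registered executable names - as MultiExecutable::execute does.
fn dispatch(args: &[&str], bins: &BTreeMap<&str, &str>) -> Option<String> {
    for arg in args.iter().take(2) {
        if let Some(name) = Path::new(arg).file_name().and_then(|s| s.to_str()) {
            if let Some(label) = bins.get(name) {
                return Some(label.to_string());
            }
        }
    }
    None
}

fn main() {
    let mut bins = BTreeMap::new();
    bins.insert("startd", "daemon");
    bins.insert("start-cli", "cli");
    // Invoked via a symlink named after the tool:
    println!("{:?}", dispatch(&["/usr/bin/start-cli"], &bins));
    // Or with the tool name as the first argument:
    println!("{:?}", dispatch(&["/usr/bin/multibin", "startd"], &bins));
}
```

Checking two leading arguments is what lets both `start-cli ...` (symlink) and `<binary> start-cli ...` (explicit) invocations resolve to the same entry point.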


@@ -3,6 +3,7 @@ use std::ffi::OsString;
 use clap::Parser;
 use futures::FutureExt;
 use rpc_toolkit::CliApp;
+use rust_i18n::t;
 use tokio::signal::unix::signal;
 use tracing::instrument;
@@ -77,7 +78,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
 let rt = tokio::runtime::Builder::new_multi_thread()
     .enable_all()
     .build()
-    .expect("failed to initialize runtime");
+    .expect(&t!("bins.registry.failed-to-initialize-runtime"));
 rt.block_on(inner_main(&config))
 };
@@ -99,6 +100,7 @@ pub fn cli(args: impl IntoIterator<Item = OsString>) {
     |cfg: ClientConfig| Ok(CliContext::init(cfg.load()?)?),
     crate::registry::registry_api(),
 )
+.mutate_command(super::translate_cli)
 .run(args)
 {
     match e.data {


@@ -19,6 +19,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
     |cfg: ClientConfig| Ok(CliContext::init(cfg.load()?)?),
     crate::main_api(),
 )
+.mutate_command(super::translate_cli)
 .run(args)
 {
     match e.data {


@@ -1,11 +1,9 @@
-use std::sync::Arc;
 use tokio::process::Command;
 use tracing::instrument;
 use crate::context::config::ServerConfig;
 use crate::context::rpc::InitRpcContextPhases;
-use crate::context::{DiagnosticContext, InitContext, InstallContext, RpcContext, SetupContext};
+use crate::context::{DiagnosticContext, InitContext, RpcContext, SetupContext};
 use crate::disk::REPAIR_DISK_PATH;
 use crate::disk::fsck::RepairStrategy;
 use crate::disk::main::DEFAULT_PASSWORD;
@@ -27,7 +25,13 @@ async fn setup_or_init(
 if let Some(firmware) = check_for_firmware_update()
     .await
     .map_err(|e| {
-        tracing::warn!("Error checking for firmware update: {e}");
+        tracing::warn!(
+            "{}",
+            t!(
+                "bins.start-init.error-checking-firmware",
+                error = e.to_string()
+            )
+        );
         tracing::debug!("{e:?}");
     })
     .ok()
@@ -35,14 +39,21 @@ async fn setup_or_init(
 {
     let init_ctx = InitContext::init(config).await?;
     let handle = &init_ctx.progress;
-    let mut update_phase = handle.add_phase("Updating Firmware".into(), Some(10));
-    let mut reboot_phase = handle.add_phase("Rebooting".into(), Some(1));
+    let mut update_phase =
+        handle.add_phase(t!("bins.start-init.updating-firmware").into(), Some(10));
+    let mut reboot_phase = handle.add_phase(t!("bins.start-init.rebooting").into(), Some(1));
     server.serve_ui_for(init_ctx);
     update_phase.start();
     if let Err(e) = update_firmware(firmware).await {
-        tracing::warn!("Error performing firmware update: {e}");
+        tracing::warn!(
+            "{}",
+            t!(
+                "bins.start-init.error-firmware-update",
+                error = e.to_string()
+            )
+        );
         tracing::debug!("{e:?}");
     } else {
         update_phase.complete();
@@ -79,40 +90,11 @@ async fn setup_or_init(
     .invoke(crate::ErrorKind::OpenSsl)
     .await?;
-if tokio::fs::metadata("/run/live/medium").await.is_ok() {
-    Command::new("sed")
-        .arg("-i")
-        .arg("s/PasswordAuthentication no/PasswordAuthentication yes/g")
-        .arg("/etc/ssh/sshd_config")
-        .invoke(crate::ErrorKind::Filesystem)
-        .await?;
-    Command::new("systemctl")
-        .arg("reload")
-        .arg("ssh")
-        .invoke(crate::ErrorKind::OpenSsh)
-        .await?;
-    let ctx = InstallContext::init().await?;
-    server.serve_ui_for(ctx.clone());
-    ctx.shutdown
-        .subscribe()
-        .recv()
-        .await
-        .expect("context dropped");
-    return Ok(Err(Shutdown {
-        disk_guid: None,
-        restart: true,
-    }));
-}
 if tokio::fs::metadata("/media/startos/config/disk.guid")
     .await
     .is_err()
 {
-    let ctx = SetupContext::init(server, config)?;
+    let ctx = SetupContext::init(server, config.clone())?;
     server.serve_ui_for(ctx.clone());
@@ -127,7 +109,13 @@ async fn setup_or_init(
     .invoke(ErrorKind::NotFound)
     .await
 {
-    tracing::error!("Failed to kill kiosk: {}", e);
+    tracing::error!(
+        "{}",
+        t!(
+            "bins.start-init.failed-to-kill-kiosk",
+            error = e.to_string()
+        )
+    );
     tracing::debug!("{:?}", e);
 }
@@ -136,7 +124,7 @@ async fn setup_or_init(
 Some(Err(e)) => return Err(e.clone_output()),
 None => {
     return Err(Error::new(
-        eyre!("Setup mode exited before setup completed"),
+        eyre!("{}", t!("bins.start-init.setup-mode-exited")),
         ErrorKind::Unknown,
     ));
 }
@@ -146,7 +134,8 @@ async fn setup_or_init(
 let handle = init_ctx.progress.clone();
 let err_channel = init_ctx.error.clone();
-let mut disk_phase = handle.add_phase("Opening data drive".into(), Some(10));
+let mut disk_phase =
+    handle.add_phase(t!("bins.start-init.opening-data-drive").into(), Some(10));
 let init_phases = InitPhases::new(&handle);
 let rpc_ctx_phases = InitRpcContextPhases::new(&handle);
@@ -156,9 +145,9 @@ async fn setup_or_init(
 disk_phase.start();
 let guid_string = tokio::fs::read_to_string("/media/startos/config/disk.guid") // unique identifier for volume group - keeps track of the disk that goes with your embassy
     .await?;
-let disk_guid = Arc::new(String::from(guid_string.trim()));
+let disk_guid = InternedString::intern(guid_string.trim());
 let requires_reboot = crate::disk::main::import(
-    &**disk_guid,
+    &*disk_guid,
     DATA_DIR,
     if tokio::fs::metadata(REPAIR_DISK_PATH).await.is_ok() {
         RepairStrategy::Aggressive
@@ -178,11 +167,12 @@ async fn setup_or_init(
     .with_ctx(|_| (crate::ErrorKind::Filesystem, REPAIR_DISK_PATH))?;
 }
 disk_phase.complete();
-tracing::info!("Loaded Disk");
+tracing::info!("{}", t!("bins.start-init.loaded-disk"));
 if requires_reboot.0 {
-    tracing::info!("Rebooting...");
-    let mut reboot_phase = handle.add_phase("Rebooting".into(), Some(1));
+    tracing::info!("{}", t!("bins.start-init.rebooting"));
+    let mut reboot_phase =
+        handle.add_phase(t!("bins.start-init.rebooting").into(), Some(1));
     reboot_phase.start();
     return Ok(Err(Shutdown {
         disk_guid: Some(disk_guid),
@@ -236,11 +226,10 @@ pub async fn main(
     .await
     .is_ok()
 {
-    Some(Arc::new(
+    Some(InternedString::intern(
         tokio::fs::read_to_string("/media/startos/config/disk.guid") // unique identifier for volume group - keeps track of the disk that goes with your embassy
             .await?
-            .trim()
-            .to_owned(),
+            .trim(),
     ))
 } else {
     None
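The `Arc<String>` → `InternedString::intern` changes in this file deduplicate identical strings behind one shared allocation, so equality checks and clones become pointer operations. A toy interner sketch of that idea, using only the standard library (the real `imbl_value::InternedString` is a different, thread-safe implementation):

```rust
use std::collections::HashSet;
use std::rc::Rc;

// Identical strings share one allocation; interning the same text twice
// returns two handles to the same memory.
struct Interner(HashSet<Rc<str>>);

impl Interner {
    fn new() -> Self {
        Interner(HashSet::new())
    }
    fn intern(&mut self, s: &str) -> Rc<str> {
        if let Some(existing) = self.0.get(s) {
            return existing.clone(); // reuse the stored allocation
        }
        let rc: Rc<str> = Rc::from(s);
        self.0.insert(rc.clone());
        rc
    }
}

fn main() {
    let mut interner = Interner::new();
    let a = interner.intern("disk-guid-1234");
    let b = interner.intern("disk-guid-1234");
    println!("{}", Rc::ptr_eq(&a, &b)); // true: same allocation
}
```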


@@ -1,11 +1,11 @@
 use std::cmp::max;
 use std::ffi::OsString;
-use std::sync::Arc;
 use std::time::Duration;
 use clap::Parser;
 use color_eyre::eyre::eyre;
 use futures::{FutureExt, TryFutureExt};
+use rust_i18n::t;
 use tokio::signal::unix::signal;
 use tracing::instrument;
@@ -15,11 +15,11 @@ use crate::context::{DiagnosticContext, InitContext, RpcContext};
 use crate::net::gateway::{BindTcp, SelfContainedNetworkInterfaceListener, UpgradableListener};
 use crate::net::static_server::refresher;
 use crate::net::web_server::{Acceptor, WebServer};
+use crate::prelude::*;
 use crate::shutdown::Shutdown;
 use crate::system::launch_metrics_task;
 use crate::util::io::append_file;
 use crate::util::logger::LOGGER;
-use crate::{Error, ErrorKind, ResultExt};
 #[instrument(skip_all)]
 async fn inner_main(
@@ -53,11 +53,10 @@ async fn inner_main(
 let ctx = RpcContext::init(
     &server.acceptor_setter(),
     config,
-    Arc::new(
+    InternedString::intern(
         tokio::fs::read_to_string("/media/startos/config/disk.guid") // unique identifier for volume group - keeps track of the disk that goes with your embassy
             .await?
-            .trim()
-            .to_owned(),
+            .trim(),
     ),
     None,
     rpc_ctx_phases,
@@ -114,11 +113,11 @@ async fn inner_main(
 metrics_task
     .map_err(|e| {
         Error::new(
-            eyre!("{}", e).wrap_err("Metrics daemon panicked!"),
+            eyre!("{}", e).wrap_err(t!("bins.startd.metrics-daemon-panicked").to_string()),
             ErrorKind::Unknown,
         )
     })
-    .map_ok(|_| tracing::debug!("Metrics daemon Shutdown"))
+    .map_ok(|_| tracing::debug!("{}", t!("bins.startd.metrics-daemon-shutdown")))
     .await?;
 let shutdown = shutdown_recv
@@ -146,7 +145,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
     .worker_threads(max(1, num_cpus::get()))
     .enable_all()
     .build()
-    .expect("failed to initialize runtime");
+    .expect(&t!("bins.startd.failed-to-initialize-runtime"));
 let res = rt.block_on(async {
     let mut server = WebServer::new(
         Acceptor::bind_upgradable(SelfContainedNetworkInterfaceListener::bind(BindTcp, 80)),
@@ -167,11 +166,10 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
     .await
     .is_ok()
 {
-    Some(Arc::new(
+    Some(InternedString::intern(
         tokio::fs::read_to_string("/media/startos/config/disk.guid") // unique identifier for volume group - keeps track of the disk that goes with your embassy
             .await?
-            .trim()
-            .to_owned(),
+            .trim(),
     ))
 } else {
     None


@@ -6,6 +6,7 @@ use std::time::Duration;
 use clap::Parser;
 use futures::FutureExt;
 use rpc_toolkit::CliApp;
+use rust_i18n::t;
 use tokio::signal::unix::signal;
 use tracing::instrument;
 use visit_rs::Visit;
@@ -70,7 +71,7 @@ async fn inner_main(config: &TunnelConfig) -> Result<(), Error> {
     true
 }
 Err(e) => {
-    tracing::error!("error adding ssl listener: {e}");
+    tracing::error!("{}", t!("bins.tunnel.error-adding-ssl-listener", error = e.to_string()));
     tracing::debug!("{e:?}");
     false
@@ -92,7 +93,7 @@ async fn inner_main(config: &TunnelConfig) -> Result<(), Error> {
 }
 .await
 {
-    tracing::error!("error updating webserver bind: {e}");
+    tracing::error!("{}", t!("bins.tunnel.error-updating-webserver-bind", error = e.to_string()));
     tracing::debug!("{e:?}");
     tokio::time::sleep(Duration::from_secs(5)).await;
 }
@@ -157,7 +158,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
 let rt = tokio::runtime::Builder::new_multi_thread()
     .enable_all()
     .build()
-    .expect("failed to initialize runtime");
+    .expect(&t!("bins.tunnel.failed-to-initialize-runtime"));
 rt.block_on(inner_main(&config))
 };
@@ -179,6 +180,7 @@ pub fn cli(args: impl IntoIterator<Item = OsString>) {
     |cfg: ClientConfig| Ok(CliContext::init(cfg.load()?)?),
     crate::tunnel::api::tunnel_api(),
 )
+.mutate_command(super::translate_cli)
 .run(args)
 {
     match e.data {


@@ -7,6 +7,7 @@ use std::sync::Arc;
 use cookie::{Cookie, Expiration, SameSite};
 use cookie_store::CookieStore;
 use http::HeaderMap;
+use imbl::OrdMap;
 use imbl_value::InternedString;
 use josekit::jwk::Jwk;
 use once_cell::sync::OnceCell;
@@ -22,7 +23,7 @@ use tracing::instrument;
 use super::setup::CURRENT_SECRET;
 use crate::context::config::{ClientConfig, local_config_path};
-use crate::context::{DiagnosticContext, InitContext, InstallContext, RpcContext, SetupContext};
+use crate::context::{DiagnosticContext, InitContext, RpcContext, SetupContext};
 use crate::developer::{OS_DEVELOPER_KEY_PATH, default_developer_key_path};
 use crate::middleware::auth::local::LocalAuthContext;
 use crate::prelude::*;
@@ -37,6 +38,8 @@ pub struct CliContextSeed {
 pub registry_url: Option<Url>,
 pub registry_hostname: Vec<InternedString>,
 pub registry_listen: Option<SocketAddr>,
+pub s9pk_s3base: Option<Url>,
+pub s9pk_s3bucket: Option<InternedString>,
 pub tunnel_addr: Option<SocketAddr>,
 pub tunnel_listen: Option<SocketAddr>,
 pub client: Client,
@@ -128,6 +131,8 @@ impl CliContext {
     .transpose()?,
 registry_hostname: config.registry_hostname.unwrap_or_default(),
 registry_listen: config.registry_listen,
+s9pk_s3base: config.s9pk_s3base,
+s9pk_s3bucket: config.s9pk_s3bucket,
 tunnel_addr: config.tunnel,
 tunnel_listen: config.tunnel_listen,
 client: {
@@ -159,21 +164,23 @@ impl CliContext {
 if !path.exists() {
     continue;
 }
-let pair = <ed25519::KeypairBytes as ed25519::pkcs8::DecodePrivateKey>::from_pkcs8_pem(
-    &std::fs::read_to_string(path)?,
-)
-.with_kind(crate::ErrorKind::Pem)?;
-let secret = ed25519_dalek::SecretKey::try_from(&pair.secret_key[..]).map_err(|_| {
-    Error::new(
-        eyre!("pkcs8 key is of incorrect length"),
-        ErrorKind::OpenSsl,
-    )
-})?;
-return Ok(secret.into())
+let pair =
+    <ed25519::KeypairBytes as ed25519::pkcs8::DecodePrivateKey>::from_pkcs8_pem(
+        &std::fs::read_to_string(path)?,
+    )
+    .with_kind(crate::ErrorKind::Pem)?;
+let secret =
+    ed25519_dalek::SecretKey::try_from(&pair.secret_key[..]).map_err(|_| {
+        Error::new(
+            eyre!("{}", t!("context.cli.pkcs8-key-incorrect-length")),
+            ErrorKind::OpenSsl,
+        )
+    })?;
+return Ok(secret.into());
 }
 Err(Error::new(
-    eyre!("Developer Key does not exist! Please run `start-cli init-key` before running this command."),
-    crate::ErrorKind::Uninitialized
+    eyre!("{}", t!("context.cli.developer-key-does-not-exist")),
+    crate::ErrorKind::Uninitialized,
 ))
 })
 }
@@ -188,14 +195,18 @@ impl CliContext {
 "http" => "ws",
 _ => {
     return Err(Error::new(
-        eyre!("Cannot parse scheme from base URL"),
+        eyre!("{}", t!("context.cli.cannot-parse-scheme-from-base-url")),
         crate::ErrorKind::ParseUrl,
     )
     .into());
 }
 };
-url.set_scheme(ws_scheme)
-    .map_err(|_| Error::new(eyre!("Cannot set URL scheme"), crate::ErrorKind::ParseUrl))?;
+url.set_scheme(ws_scheme).map_err(|_| {
+    Error::new(
+        eyre!("{}", t!("context.cli.cannot-set-url-scheme")),
+        crate::ErrorKind::ParseUrl,
+    )
+})?;
 url.path_segments_mut()
     .map_err(|_| eyre!("Url cannot be base"))
     .with_kind(crate::ErrorKind::ParseUrl)?
@@ -238,10 +249,16 @@ impl CliContext {
 where
     Self: CallRemote<RemoteContext>,
 {
-    <Self as CallRemote<RemoteContext, Empty>>::call_remote(&self, method, params, Empty {})
-        .await
-        .map_err(Error::from)
-        .with_ctx(|e| (e.kind, method))
+    <Self as CallRemote<RemoteContext, Empty>>::call_remote(
+        &self,
+        method,
+        OrdMap::new(),
+        params,
+        Empty {},
+    )
+    .await
+    .map_err(Error::from)
+    .with_ctx(|e| (e.kind, method))
 }
 pub async fn call_remote_with<RemoteContext, T>(
     &self,
@@ -252,10 +269,16 @@ impl CliContext {
 where
     Self: CallRemote<RemoteContext, T>,
 {
-    <Self as CallRemote<RemoteContext, T>>::call_remote(&self, method, params, extra)
-        .await
-        .map_err(Error::from)
-        .with_ctx(|e| (e.kind, method))
+    <Self as CallRemote<RemoteContext, T>>::call_remote(
+        &self,
+        method,
+        OrdMap::new(),
+        params,
+        extra,
+    )
+    .await
+    .map_err(Error::from)
+    .with_ctx(|e| (e.kind, method))
 }
 }
 impl AsRef<Jwk> for CliContext {
@@ -292,7 +315,13 @@ impl AsRef<Client> for CliContext {
 }
 impl CallRemote<RpcContext> for CliContext {
-    async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
+    async fn call_remote(
+        &self,
+        method: &str,
+        _: OrdMap<&'static str, Value>,
+        params: Value,
+        _: Empty,
+    ) -> Result<Value, RpcError> {
         if let Ok(local) = read_file_to_string(RpcContext::LOCAL_AUTH_COOKIE_PATH).await {
             self.cookie_store
                 .lock()
@@ -319,7 +348,13 @@ impl CallRemote<RpcContext> for CliContext {
 }
 }
 impl CallRemote<DiagnosticContext> for CliContext {
-    async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
+    async fn call_remote(
+        &self,
+        method: &str,
+        _: OrdMap<&'static str, Value>,
+        params: Value,
+        _: Empty,
+    ) -> Result<Value, RpcError> {
         crate::middleware::auth::signature::call_remote(
             self,
             self.rpc_url.clone(),
@@ -332,7 +367,13 @@ impl CallRemote<DiagnosticContext> for CliContext {
 }
 }
 impl CallRemote<InitContext> for CliContext {
-    async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
+    async fn call_remote(
+        &self,
+        method: &str,
+        _: OrdMap<&'static str, Value>,
+        params: Value,
+        _: Empty,
+    ) -> Result<Value, RpcError> {
         crate::middleware::auth::signature::call_remote(
             self,
             self.rpc_url.clone(),
@@ -345,20 +386,13 @@ impl CallRemote<InitContext> for CliContext {
 }
 }
 impl CallRemote<SetupContext> for CliContext {
-    async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
-        crate::middleware::auth::signature::call_remote(
-            self,
-            self.rpc_url.clone(),
-            HeaderMap::new(),
-            self.rpc_url.host_str(),
-            method,
-            params,
-        )
-        .await
-    }
-}
-impl CallRemote<InstallContext> for CliContext {
-    async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
+    async fn call_remote(
+        &self,
+        method: &str,
+        _: OrdMap<&'static str, Value>,
+        params: Value,
+        _: Empty,
+    ) -> Result<Value, RpcError> {
         crate::middleware::auth::signature::call_remote(
             self,
             self.rpc_url.clone(),


@@ -58,27 +58,31 @@ pub trait ContextConfig: DeserializeOwned + Default {
 #[command(rename_all = "kebab-case")]
 #[command(version = crate::version::Current::default().semver().to_string())]
 pub struct ClientConfig {
-    #[arg(short = 'c', long)]
+    #[arg(short = 'c', long, help = "help.arg.config-file-path")]
     pub config: Option<PathBuf>,
-    #[arg(short = 'H', long)]
+    #[arg(short = 'H', long, help = "help.arg.host-url")]
     pub host: Option<Url>,
-    #[arg(short = 'r', long)]
+    #[arg(short = 'r', long, help = "help.arg.registry-url")]
     pub registry: Option<Url>,
-    #[arg(long)]
+    #[arg(long, help = "help.arg.registry-hostname")]
     pub registry_hostname: Option<Vec<InternedString>>,
     #[arg(skip)]
     pub registry_listen: Option<SocketAddr>,
-    #[arg(short = 't', long)]
+    #[arg(long, help = "help.s9pk-s3base")]
+    pub s9pk_s3base: Option<Url>,
+    #[arg(long, help = "help.s9pk-s3bucket")]
+    pub s9pk_s3bucket: Option<InternedString>,
+    #[arg(short = 't', long, help = "help.arg.tunnel-address")]
     pub tunnel: Option<SocketAddr>,
     #[arg(skip)]
     pub tunnel_listen: Option<SocketAddr>,
-    #[arg(short = 'p', long)]
+    #[arg(short = 'p', long, help = "help.arg.proxy-url")]
     pub proxy: Option<Url>,
     #[arg(skip)]
     pub socks_listen: Option<SocketAddr>,
-    #[arg(long)]
+    #[arg(long, help = "help.arg.cookie-path")]
     pub cookie_path: Option<PathBuf>,
-    #[arg(long)]
+    #[arg(long, help = "help.arg.developer-key-path")]
     pub developer_key_path: Option<PathBuf>,
 }
 impl ContextConfig for ClientConfig {
@@ -89,8 +93,13 @@ impl ContextConfig for ClientConfig {
self.host = self.host.take().or(other.host); self.host = self.host.take().or(other.host);
self.registry = self.registry.take().or(other.registry); self.registry = self.registry.take().or(other.registry);
self.registry_hostname = self.registry_hostname.take().or(other.registry_hostname); self.registry_hostname = self.registry_hostname.take().or(other.registry_hostname);
self.registry_listen = self.registry_listen.take().or(other.registry_listen);
self.s9pk_s3base = self.s9pk_s3base.take().or(other.s9pk_s3base);
self.s9pk_s3bucket = self.s9pk_s3bucket.take().or(other.s9pk_s3bucket);
self.tunnel = self.tunnel.take().or(other.tunnel); self.tunnel = self.tunnel.take().or(other.tunnel);
self.tunnel_listen = self.tunnel_listen.take().or(other.tunnel_listen);
self.proxy = self.proxy.take().or(other.proxy); self.proxy = self.proxy.take().or(other.proxy);
self.socks_listen = self.socks_listen.take().or(other.socks_listen);
self.cookie_path = self.cookie_path.take().or(other.cookie_path); self.cookie_path = self.cookie_path.take().or(other.cookie_path);
self.developer_key_path = self.developer_key_path.take().or(other.developer_key_path); self.developer_key_path = self.developer_key_path.take().or(other.developer_key_path);
} }
@@ -109,21 +118,19 @@ impl ClientConfig {
#[serde(rename_all = "kebab-case")] #[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")] #[command(rename_all = "kebab-case")]
pub struct ServerConfig { pub struct ServerConfig {
#[arg(short, long)] #[arg(short, long, help = "help.arg.config-file-path")]
pub config: Option<PathBuf>, pub config: Option<PathBuf>,
#[arg(long)]
pub ethernet_interface: Option<String>,
#[arg(skip)] #[arg(skip)]
pub os_partitions: Option<OsPartitionInfo>, pub os_partitions: Option<OsPartitionInfo>,
#[arg(long)] #[arg(long, help = "help.arg.socks-listen-address")]
pub socks_listen: Option<SocketAddr>, pub socks_listen: Option<SocketAddr>,
#[arg(long)] #[arg(long, help = "help.arg.revision-cache-size")]
pub revision_cache_size: Option<usize>, pub revision_cache_size: Option<usize>,
#[arg(long)] #[arg(long, help = "help.arg.disable-encryption")]
pub disable_encryption: Option<bool>, pub disable_encryption: Option<bool>,
#[arg(long)] #[arg(long, help = "help.arg.multi-arch-s9pks")]
pub multi_arch_s9pks: Option<bool>, pub multi_arch_s9pks: Option<bool>,
#[arg(long)] #[arg(long, help = "help.arg.developer-key-path")]
pub developer_key_path: Option<PathBuf>, pub developer_key_path: Option<PathBuf>,
} }
impl ContextConfig for ServerConfig { impl ContextConfig for ServerConfig {
@@ -131,7 +138,6 @@ impl ContextConfig for ServerConfig {
self.config.take() self.config.take()
} }
fn merge_with(&mut self, other: Self) { fn merge_with(&mut self, other: Self) {
self.ethernet_interface = self.ethernet_interface.take().or(other.ethernet_interface);
self.os_partitions = self.os_partitions.take().or(other.os_partitions); self.os_partitions = self.os_partitions.take().or(other.os_partitions);
self.socks_listen = self.socks_listen.take().or(other.socks_listen); self.socks_listen = self.socks_listen.take().or(other.socks_listen);
self.revision_cache_size = self self.revision_cache_size = self
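The `merge_with` methods above layer configuration sources with a `take().or(...)` chain. A minimal sketch of that precedence rule (the two-field `Cfg` struct here is hypothetical):

```rust
// Each field keeps the first Some it sees: `self` (higher priority, e.g. CLI
// flags) wins over `other` (lower priority, e.g. a config file).
#[derive(Default, Debug)]
struct Cfg {
    host: Option<String>,
    proxy: Option<String>,
}

impl Cfg {
    fn merge_with(&mut self, other: Cfg) {
        // take() moves the value out of self; or() falls back to other's value
        // only when self held None.
        self.host = self.host.take().or(other.host);
        self.proxy = self.proxy.take().or(other.proxy);
    }
}
```

Because each field resolves independently, adding a new option (as the diff does for `s9pk_s3base`, `tunnel_listen`, etc.) only costs one extra line in `merge_with`.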

View File

@@ -6,15 +6,15 @@ use rpc_toolkit::yajrc::RpcError;
 use tokio::sync::broadcast::Sender;
 use tracing::instrument;
-use crate::Error;
 use crate::context::config::ServerConfig;
+use crate::prelude::*;
 use crate::rpc_continuations::RpcContinuations;
 use crate::shutdown::Shutdown;
 pub struct DiagnosticContextSeed {
     pub shutdown: Sender<Shutdown>,
     pub error: Arc<RpcError>,
-    pub disk_guid: Option<Arc<String>>,
+    pub disk_guid: Option<InternedString>,
     pub rpc_continuations: RpcContinuations,
 }
@@ -24,10 +24,13 @@ impl DiagnosticContext {
     #[instrument(skip_all)]
     pub fn init(
         _config: &ServerConfig,
-        disk_guid: Option<Arc<String>>,
+        disk_guid: Option<InternedString>,
         error: Error,
     ) -> Result<Self, Error> {
-        tracing::error!("Error: {}: Starting diagnostic UI", error);
+        tracing::error!(
+            "{}",
+            t!("context.diagnostic.starting-diagnostic-ui", error = error)
+        );
         tracing::debug!("{:?}", error);
         let (shutdown, _) = tokio::sync::broadcast::channel(1);
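The logging changes in this hunk swap hard-coded English strings for `t!` translation-key lookups with named interpolation. A self-contained sketch of that idea — the `translate` helper and its tiny catalog are hypothetical stand-ins; the real `t!` macro resolves keys from per-locale translation files:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for a rust-i18n-style t! lookup: find the key in a
// catalog, then substitute %{name} placeholders with the supplied arguments.
fn translate(key: &str, args: &[(&str, &str)]) -> String {
    let mut catalog = HashMap::new();
    catalog.insert(
        "context.diagnostic.starting-diagnostic-ui",
        "Error: %{error}: Starting diagnostic UI",
    );
    // Unknown keys fall back to the key itself so a missing translation
    // degrades gracefully instead of panicking.
    let mut msg = catalog.get(key).copied().unwrap_or(key).to_string();
    for (name, value) in args {
        msg = msg.replace(&format!("%{{{name}}}"), value);
    }
    msg
}
```

Keying log lines this way lets translators change the message text without touching the Rust source.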

View File

@@ -2,13 +2,11 @@ pub mod cli;
 pub mod config;
 pub mod diagnostic;
 pub mod init;
-pub mod install;
 pub mod rpc;
 pub mod setup;
 pub use cli::CliContext;
 pub use diagnostic::DiagnosticContext;
 pub use init::InitContext;
-pub use install::InstallContext;
 pub use rpc::RpcContext;
 pub use setup::SetupContext;

View File

@@ -15,6 +15,7 @@ use josekit::jwk::Jwk;
 use reqwest::{Client, Proxy};
 use rpc_toolkit::yajrc::RpcError;
 use rpc_toolkit::{CallRemote, Context, Empty};
+use tokio::process::Command;
 use tokio::sync::{RwLock, broadcast, oneshot, watch};
 use tokio::time::Instant;
 use tracing::instrument;
@@ -26,6 +27,10 @@ use crate::context::config::ServerConfig;
 use crate::db::model::Database;
 use crate::db::model::package::TaskSeverity;
 use crate::disk::OsPartitionInfo;
+use crate::disk::mount::filesystem::bind::Bind;
+use crate::disk::mount::filesystem::block_dev::BlockDev;
+use crate::disk::mount::filesystem::{FileSystem, ReadOnly};
+use crate::disk::mount::guard::MountGuard;
 use crate::init::{InitResult, check_time_is_synchronized};
 use crate::install::PKG_ARCHIVE_DIR;
 use crate::lxc::LxcManager;
@@ -41,19 +46,21 @@ use crate::rpc_continuations::{Guid, OpenAuthedContinuations, RpcContinuations};
 use crate::service::ServiceMap;
 use crate::service::action::update_tasks;
 use crate::service::effects::callbacks::ServiceCallbacks;
+use crate::service::effects::subcontainer::NVIDIA_OVERLAY_PATH;
 use crate::shutdown::Shutdown;
+use crate::util::Invoke;
 use crate::util::future::NonDetachingJoinHandle;
-use crate::util::io::delete_file;
+use crate::util::io::{TmpDir, delete_file};
 use crate::util::lshw::LshwDevice;
 use crate::util::sync::{SyncMutex, SyncRwLock, Watch};
-use crate::{ActionId, DATA_DIR, PackageId};
+use crate::{ActionId, DATA_DIR, PLATFORM, PackageId};
 pub struct RpcContextSeed {
     is_closed: AtomicBool,
     pub os_partitions: OsPartitionInfo,
     pub wifi_interface: Option<String>,
     pub ethernet_interface: String,
-    pub disk_guid: Arc<String>,
+    pub disk_guid: InternedString,
     pub ephemeral_sessions: SyncMutex<Sessions>,
     pub db: TypedPatchDb<Database>,
     pub sync_db: watch::Sender<u64>,
@@ -77,7 +84,7 @@
 }
 impl Drop for RpcContextSeed {
     fn drop(&mut self) {
-        tracing::info!("RpcContext is dropped");
+        tracing::info!("{}", t!("context.rpc.rpc-context-dropped"));
     }
 }
@@ -127,7 +134,7 @@ impl RpcContext {
     pub async fn init(
         webserver: &WebServerAcceptorSetter<UpgradableListener>,
         config: &ServerConfig,
-        disk_guid: Arc<String>,
+        disk_guid: InternedString,
         init_result: Option<InitResult>,
         InitRpcContextPhases {
             mut load_db,
@@ -148,7 +155,7 @@
         let peek = db.peek().await;
         let account = AccountInfo::load(&peek)?;
         load_db.complete();
-        tracing::info!("Opened PatchDB");
+        tracing::info!("{}", t!("context.rpc.opened-patchdb"));
         init_net_ctrl.start();
         let (net_controller, os_net_service) = if let Some(InitResult {
@@ -165,7 +172,125 @@
             (net_ctrl, os_net_service)
         };
         init_net_ctrl.complete();
-        tracing::info!("Initialized Net Controller");
+        tracing::info!("{}", t!("context.rpc.initialized-net-controller"));
+        if PLATFORM.ends_with("-nonfree") {
+            if let Err(e) = Command::new("nvidia-smi")
+                .invoke(ErrorKind::ParseSysInfo)
+                .await
+            {
+                tracing::warn!("{}", t!("context.rpc.nvidia-smi-error", error = e));
+                tracing::info!("{}", t!("context.rpc.nvidia-warning-can-be-ignored"));
+            } else {
+                async {
+                    let version: InternedString = String::from_utf8(
+                        Command::new("modinfo")
+                            .arg("-F")
+                            .arg("version")
+                            .arg("nvidia")
+                            .invoke(ErrorKind::ParseSysInfo)
+                            .await?,
+                    )?
+                    .trim()
+                    .into();
+                    let nvidia_dir =
+                        Path::new("/media/startos/data/package-data/nvidia").join(&*version);
+                    // Generate single squashfs with both debian and generic overlays
+                    let sqfs = nvidia_dir.join("container-overlay.squashfs");
+                    if tokio::fs::metadata(&sqfs).await.is_err() {
+                        let tmp = TmpDir::new().await?;
+                        // Generate debian overlay (libs in /usr/lib/aarch64-linux-gnu/)
+                        let debian_dir = tmp.join("debian");
+                        tokio::fs::create_dir_all(&debian_dir).await?;
+                        // Create /etc/debian_version to trigger debian path detection
+                        tokio::fs::create_dir_all(debian_dir.join("etc")).await?;
+                        tokio::fs::write(debian_dir.join("etc/debian_version"), "").await?;
+                        let procfs = MountGuard::mount(
+                            &Bind::new("/proc"),
+                            debian_dir.join("proc"),
+                            ReadOnly,
+                        )
+                        .await?;
+                        Command::new("nvidia-container-cli")
+                            .arg("configure")
+                            .arg("--no-devbind")
+                            .arg("--no-cgroups")
+                            .arg("--utility")
+                            .arg("--compute")
+                            .arg("--graphics")
+                            .arg("--video")
+                            .arg(&debian_dir)
+                            .invoke(ErrorKind::Unknown)
+                            .await?;
+                        procfs.unmount(true).await?;
+                        // Run ldconfig to create proper symlinks for all NVIDIA libraries
+                        Command::new("ldconfig")
+                            .arg("-r")
+                            .arg(&debian_dir)
+                            .invoke(ErrorKind::Unknown)
+                            .await?;
+                        // Remove /etc/debian_version - it was only needed for nvidia-container-cli detection
+                        tokio::fs::remove_file(debian_dir.join("etc/debian_version")).await?;
+                        // Generate generic overlay (libs in /usr/lib64/)
+                        let generic_dir = tmp.join("generic");
+                        tokio::fs::create_dir_all(&generic_dir).await?;
+                        // No /etc/debian_version - will use generic /usr/lib64 paths
+                        let procfs = MountGuard::mount(
+                            &Bind::new("/proc"),
+                            generic_dir.join("proc"),
+                            ReadOnly,
+                        )
+                        .await?;
+                        Command::new("nvidia-container-cli")
+                            .arg("configure")
+                            .arg("--no-devbind")
+                            .arg("--no-cgroups")
+                            .arg("--utility")
+                            .arg("--compute")
+                            .arg("--graphics")
+                            .arg("--video")
+                            .arg(&generic_dir)
+                            .invoke(ErrorKind::Unknown)
+                            .await?;
+                        procfs.unmount(true).await?;
+                        // Run ldconfig to create proper symlinks for all NVIDIA libraries
+                        Command::new("ldconfig")
+                            .arg("-r")
+                            .arg(&generic_dir)
+                            .invoke(ErrorKind::Unknown)
+                            .await?;
+                        // Create squashfs with UID/GID mapping (avoids chown on readonly mounts)
+                        if let Some(p) = sqfs.parent() {
+                            tokio::fs::create_dir_all(p)
+                                .await
+                                .with_ctx(|_| (ErrorKind::Filesystem, format!("mkdir -p {p:?}")))?;
+                        }
+                        Command::new("mksquashfs")
+                            .arg(&*tmp)
+                            .arg(&sqfs)
+                            .arg("-force-uid")
+                            .arg("100000")
+                            .arg("-force-gid")
+                            .arg("100000")
+                            .invoke(ErrorKind::Filesystem)
+                            .await?;
+                        // tmp.unmount_and_delete().await?;
+                    }
+                    BlockDev::new(&sqfs)
+                        .mount(NVIDIA_OVERLAY_PATH, ReadOnly)
+                        .await?;
+                    Ok::<_, Error>(())
+                }
+                .await
+                .log_err();
+            }
+        }
         let services = ServiceMap::default();
         let metrics_cache = Watch::<Option<crate::system::Metrics>>::new(None);
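The NVIDIA block above only rebuilds the container overlay when the squashfs for the currently loaded driver version is missing. A small sketch of that cache-key check — paths mirror the diff, and the synchronous `exists()` stands in for the async `tokio::fs::metadata(...).is_err()` probe:

```rust
use std::path::{Path, PathBuf};

// The overlay image is cached per driver version under the nvidia data dir.
fn overlay_path(base: &Path, driver_version: &str) -> PathBuf {
    base.join(driver_version).join("container-overlay.squashfs")
}

// A missing file means "not generated yet": only then does the expensive
// nvidia-container-cli + ldconfig + mksquashfs pipeline run.
fn needs_rebuild(base: &Path, driver_version: &str) -> bool {
    !overlay_path(base, driver_version).exists()
}
```

Keying the cache on the driver version (from `modinfo -F version nvidia`) means a driver upgrade automatically invalidates the old overlay without any explicit cleanup step.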
@@ -210,16 +335,12 @@
             is_closed: AtomicBool::new(false),
             os_partitions: config.os_partitions.clone().ok_or_else(|| {
                 Error::new(
-                    eyre!("OS Partition Information Missing"),
+                    eyre!("{}", t!("context.rpc.os-partition-info-missing")),
                     ErrorKind::Filesystem,
                 )
             })?,
             wifi_interface: wifi_interface.clone(),
-            ethernet_interface: if let Some(eth) = config.ethernet_interface.clone() {
-                eth
-            } else {
-                find_eth_iface().await?
-            },
+            ethernet_interface: find_eth_iface().await?,
             disk_guid,
             ephemeral_sessions: SyncMutex::new(Sessions::new()),
             sync_db: watch::Sender::new(db.sequence().await),
@@ -244,9 +365,9 @@
             current_secret: Arc::new(
                 Jwk::generate_ec_key(josekit::jwk::alg::ec::EcCurve::P256).map_err(|e| {
                     tracing::debug!("{:?}", e);
-                    tracing::error!("Couldn't generate ec key");
+                    tracing::error!("{}", t!("context.rpc.couldnt-generate-ec-key"));
                     Error::new(
-                        color_eyre::eyre::eyre!("Couldn't generate ec key"),
+                        color_eyre::eyre::eyre!("{}", t!("context.rpc.couldnt-generate-ec-key")),
                         crate::ErrorKind::Unknown,
                     )
                 })?,
@@ -261,10 +382,10 @@
         let res = Self(seed.clone());
         res.cleanup_and_initialize(cleanup_init).await?;
-        tracing::info!("Cleaned up transient states");
+        tracing::info!("{}", t!("context.rpc.cleaned-up-transient-states"));
         crate::version::post_init(&res, run_migrations).await?;
-        tracing::info!("Completed migrations");
+        tracing::info!("{}", t!("context.rpc.completed-migrations"));
         Ok(res)
     }
@@ -273,7 +394,7 @@
         self.crons.mutate(|c| std::mem::take(c));
         self.services.shutdown_all().await?;
         self.is_closed.store(true, Ordering::SeqCst);
-        tracing::info!("RpcContext is shutdown");
+        tracing::info!("{}", t!("context.rpc.rpc-context-shutdown"));
         Ok(())
     }
@@ -342,7 +463,10 @@
             .await
             .result
         {
-            tracing::error!("Error in session cleanup cron: {e}");
+            tracing::error!(
+                "{}",
+                t!("context.rpc.error-in-session-cleanup-cron", error = e)
+            );
             tracing::debug!("{e:?}");
         }
     }
@@ -455,24 +579,33 @@
     pub async fn call_remote<RemoteContext>(
         &self,
         method: &str,
+        metadata: OrdMap<&'static str, Value>,
         params: Value,
     ) -> Result<Value, RpcError>
     where
         Self: CallRemote<RemoteContext>,
     {
-        <Self as CallRemote<RemoteContext, Empty>>::call_remote(&self, method, params, Empty {})
-            .await
+        <Self as CallRemote<RemoteContext, Empty>>::call_remote(
+            &self,
+            method,
+            metadata,
+            params,
+            Empty {},
+        )
+        .await
     }
     pub async fn call_remote_with<RemoteContext, T>(
         &self,
         method: &str,
+        metadata: OrdMap<&'static str, Value>,
         params: Value,
         extra: T,
     ) -> Result<Value, RpcError>
     where
         Self: CallRemote<RemoteContext, T>,
     {
-        <Self as CallRemote<RemoteContext, T>>::call_remote(&self, method, params, extra).await
+        <Self as CallRemote<RemoteContext, T>>::call_remote(&self, method, metadata, params, extra)
+            .await
     }
 }
 impl AsRef<Client> for RpcContext {

Some files were not shown because too many files have changed in this diff.