Compare commits

...

47 Commits

Author SHA1 Message Date
Aiden McClelland
f41710c892 dynamic subnet in port forward 2025-12-18 20:19:08 -07:00
Aiden McClelland
df3f79f282 fix ws timeouts 2025-12-18 14:54:19 -07:00
Aiden McClelland
f8df692865 Merge pull request #3079 from Start9Labs/hotfix/alpha.16
hotfixes for alpha.16
2025-12-18 11:32:47 -07:00
Aiden McClelland
0c6d3b188d Merge branch 'hotfix/alpha.16' of github.com:Start9Labs/start-os into hotfix/alpha.16 2025-12-18 11:31:51 -07:00
Aiden McClelland
e7a38863ab fix registry auth 2025-12-18 11:31:30 -07:00
Alex Inkin
720e0fcdab fix: keep uptime width constant and service table DOM cached (#3078)
* fix: keep uptime width constant and service table DOM cached

* show error status and fix columns spacing

* revert const

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-12-18 07:51:34 -07:00
Aiden McClelland
bf8ff84522 inconsequential ssl changes 2025-12-18 05:56:51 -07:00
Aiden McClelland
5a9510238e add map & eq to getServiceInterface 2025-12-18 04:30:08 -07:00
Aiden McClelland
7b3c74179b add local auth to registry 2025-12-18 04:29:34 -07:00
Aiden McClelland
cd70fa4c32 hotfixes for alpha.16 2025-12-18 04:22:56 -07:00
Aiden McClelland
83133ced6a consolidate crates 2025-12-17 21:15:24 -07:00
Aiden McClelland
6c5179a179 handle flavor atom version range 2025-12-17 14:18:43 -07:00
Aiden McClelland
e33ab39b85 hotfix 2025-12-17 12:17:22 -07:00
Aiden McClelland
9567bcec1b randomize default start-tunnel subnet 2025-12-16 17:34:23 -07:00
Aiden McClelland
550b16dc0b fix build for cargo deps 2025-12-16 17:33:55 -07:00
Matt Hill
5d8331b7f7 Feature/tor logs (#3077)
* add tor logs, rework services page, other small things

* feat: sortable service table and mobile view

---------

Co-authored-by: waterplea <alexander@inkin.ru>
2025-12-16 12:47:43 -07:00
Aiden McClelland
e35b643e51 use arm runner for riscv 2025-12-15 16:19:06 -07:00
Aiden McClelland
bc6a92677b readd riscv target 2025-12-15 16:18:44 -07:00
Aiden McClelland
f52072e6ec sdk beta.45 2025-12-15 15:23:05 -07:00
Remco Ros
9c43c43a46 fix: shutdown order (#3073)
* fix: race condition in Daemon.stop()

* fix: do not stop Daemon on context leave

* fix: remove duplicate Daemons.term calls

* feat: honor dependency order when shutting terminating Daemons

* fixes, and remove started

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
2025-12-15 15:21:23 -07:00
Aiden McClelland
0430e0f930 alpha.16 (#3068)
* add support for idmapped mounts to start-sdk

* misc fixes

* misc fixes

* add default to textarea

* fix iptables masquerade rule

* fix textarea types

* more fixes

* better logging for rsync

* fix tty size

* fix wg conf generation for android

* disable file mounts on dependencies

* mostly there, some styling issues (#3069)

* mostly there, some styling issues

* fix: address comments (#3070)

* fix: address comments

* fix: fix

* show SSL for any address with secure protocol and ssl added

* better sorting and messaging

---------

Co-authored-by: Alex Inkin <alexander@inkin.ru>

* fixes for nextcloud

* allow sidebar navigation during service state traansitions

* wip: x-forwarded headers

* implement x-forwarded-for proxy

* lowercase domain names and fix warning popover bug

* fix http2 websockets

* fix websocket retry behavior

* add arch filters to s9pk pack

* use docker for start-cli install

* add version range to package signer on registry

* fix rcs < 0

* fix user information parsing

* refactor service interface getters

* disable idmaps

* build fixes

* update docker login action

* streamline build

* add start-cli workflow

* rename

* riscv64gc

* fix ui packing

* no default features on cli

* make cli depend on GIT_HASH

* more build fixes

* more build fixes

* interpolate arch within dockerfile

* fix tests

* add launch ui to service page plus other small improvements (#3075)

* add launch ui to service page plus other small improvements

* revert translation disable

* add spinner to service list if service is health and loading

* chore: some visual tune up

* chore: update Taiga UI

---------

Co-authored-by: waterplea <alexander@inkin.ru>

* fix backups

* feat: use arm hosted runners and don't fail when apt package does not exist (#3076)

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
Co-authored-by: Shadowy Super Coder <musashidisciple@proton.me>
Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
Co-authored-by: Alex Inkin <alexander@inkin.ru>
Co-authored-by: Remco Ros <remcoros@live.nl>
2025-12-15 13:30:50 -07:00
Mariusz Kogen
b945243d1a refactor(tor-check): improve proxy support, error handling ... (#3072)
refactor(tor-check): improve proxy support, error handling, and output formatting
2025-12-15 18:14:46 +01:00
Aiden McClelland
d8484a8b26 add docker build for start-registry (#3067)
* add docker build for start-registry

* login to docker

* add workflow dependency

* fix path

* fix add

* fix gh actions permissions

* use apt-get
2025-12-05 18:12:09 -07:00
Matt Hill
3c27499795 Refactor/status info (#3066)
* refactor status info

* wip fe

* frontend changes and version bump

* fix tests and motd

* add registry workflow

* better starttunnel instructions

* placeholders for starttunnel tables

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
2025-12-02 23:31:02 +00:00
Remco Ros
7c772e873d fix: pass --allow-discards to luksOpen for trim support (#3064) 2025-12-02 22:00:10 +00:00
Matt Hill
db2fab245e better language and show wg config on device save (#3065)
* better language and show wg config on device save

* chore: fix

---------

Co-authored-by: waterplea <alexander@inkin.ru>
2025-12-02 21:42:14 +00:00
Matt Hill
a9c9917f1a better ST instructions 2025-12-01 18:03:07 -07:00
Matt Hill
23e2e9e9cc Update START-TUNNEL.md 2025-11-30 16:32:40 -07:00
Aiden McClelland
2369e92460 Update download link for StartTunnel installation 2025-11-28 14:28:54 -07:00
Aiden McClelland
a53b15f2a3 improve StartTunnel validation and GC (#3062)
* improve StartTunnel validation and GC

* update sdk formatting
2025-11-28 13:14:52 -07:00
Aiden McClelland
72eb8b1eb6 Update START-TUNNEL.md 2025-11-27 08:57:38 -07:00
Aiden McClelland
4db54f3b83 Update START-TUNNEL.md 2025-11-27 08:56:05 -07:00
Aiden McClelland
24eb27f005 minor bugfixes for alpha.14 (#3058)
* overwrite AllowedIPs in wg config
mute UnknownCA errors

* fix upgrade issues

* allow start9 user to access journal

* alpha.15

* sort actions lexicographically and show desc in marketplace details

* add registry package download cli command

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-11-26 16:23:08 -07:00
Matt Hill
009d76ea35 More FE fixes (#3056)
* tell user to restart server after kiosk chnage

* remove unused import

* dont show tor address on server setup

* chore: address comments

* revert mock

* chore: remove uptime block on mobile

* utiliser le futur proche

* chore: comments

* don't show loading on authorities tab

* chore: fix mobile unions

---------

Co-authored-by: waterplea <alexander@inkin.ru>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2025-11-25 23:43:19 +00:00
Aiden McClelland
6e8a425eb1 overwrite AllowedIPs in wg config (#3055)
mute UnknownCA errors
2025-11-21 11:30:21 -07:00
Aiden McClelland
66188d791b fix start-tunnel artifact upload 2025-11-20 10:53:23 -07:00
Aiden McClelland
015ff02d71 fix build 2025-11-20 01:05:50 -07:00
Aiden McClelland
10bfaf5415 fix start-tunnel build 2025-11-20 00:30:33 -07:00
Aiden McClelland
e3e0b85e0c Bugfix/alpha.13 (#3053)
* bugfixes for alpha.13

* minor fixes

* version bump

* start-tunnel workflow

* sdk beta 44

* defaultFilter

* fix reset-password on tunnel auth

* explicitly rebuild types

* fix typo

* ubuntu-latest runner

* add cleanup steps

* fix env on attach
2025-11-19 22:48:49 -07:00
Matt Hill
ad0632892e Various (#3051)
* tell user to restart server after kiosk chnage

* remove unused import

* dont show tor address on server setup

* chore: address comments

* revert mock

* chore: remove uptime block on mobile

* utiliser le futur proche

---------

Co-authored-by: waterplea <alexander@inkin.ru>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2025-11-19 10:35:07 -07:00
Aiden McClelland
f26791ba39 fix raspi fsck 2025-11-17 12:17:58 -07:00
Aiden McClelland
2fbaaebf44 Bugfixes for alpha.12 (#3049)
* squashfs-wip

* sdk fixes

* misc fixes

* bump sdk

* Include StartTunnel installation command

Added installation instructions for StartTunnel.

* CA instead of leaf for StartTunnel (#3046)

* updated docs for CA instead of cert

* generate ca instead of self-signed in start-tunnel

* Fix formatting in START-TUNNEL.md installation instructions

* Fix formatting in START-TUNNEL.md

* fix infinite loop

* add success message to install

* hide loopback and bridge gateways

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>

* prevent gateways from getting stuck empty

* fix set-password

* misc networking fixes

* build and efi fixes

* efi fixes

* alpha.13

* remove cross

* fix tests

* provide path to upgrade

* fix networkmanager issues

* remove squashfs before creating

---------

Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
2025-11-15 22:33:03 -07:00
StuPleb
edb916338c minor typos and grammar (#3047)
* minor typos and grammar

* added missing word - compute
2025-11-14 12:48:41 -07:00
Aiden McClelland
f7e947d37d Fix installation command for StartTunnel (#3048) 2025-11-14 12:42:05 -07:00
Aiden McClelland
a9e3d1ed75 Revise StartTunnel installation and update commands
Updated installation and update instructions for StartTunnel.
2025-11-14 12:39:53 -07:00
Matt Hill
ce97827c42 CA instead of leaf for StartTunnel (#3046)
* updated docs for CA instead of cert

* generate ca instead of self-signed in start-tunnel

* Fix formatting in START-TUNNEL.md installation instructions

* Fix formatting in START-TUNNEL.md

* fix infinite loop

* add success message to install

* hide loopback and bridge gateways

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2025-11-10 14:15:58 -07:00
Aiden McClelland
3efec07338 Include StartTunnel installation command
Added installation instructions for StartTunnel.
2025-11-07 20:00:31 +00:00
355 changed files with 8734 additions and 7037 deletions

.github/workflows/start-cli.yaml (new file, 118 additions)

@@ -0,0 +1,118 @@
name: start-cli
on:
  workflow_call:
  workflow_dispatch:
    inputs:
      environment:
        type: choice
        description: Environment
        options:
          - NONE
          - dev
          - unstable
          - dev-unstable
      runner:
        type: choice
        description: Runner
        options:
          - standard
          - fast
      arch:
        type: choice
        description: Architecture
        options:
          - ALL
          - x86_64
          - x86_64-apple
          - aarch64
          - aarch64-apple
          - riscv64
  push:
    branches:
      - master
      - next/*
  pull_request:
    branches:
      - master
      - next/*
env:
  NODEJS_VERSION: "24.11.0"
  ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
jobs:
  compile:
    name: Build Debian Package
    strategy:
      fail-fast: true
      matrix:
        triple: >-
          ${{
            fromJson('{
              "x86_64": ["x86_64-unknown-linux-musl"],
              "x86_64-apple": ["x86_64-apple-darwin"],
              "aarch64": ["aarch64-unknown-linux-musl"],
              "aarch64-apple": ["aarch64-apple-darwin"],
              "riscv64": ["riscv64gc-unknown-linux-musl"],
              "ALL": ["x86_64-unknown-linux-musl", "x86_64-apple-darwin", "aarch64-unknown-linux-musl", "aarch64-apple-darwin", "riscv64gc-unknown-linux-musl"]
            }')[github.event.inputs.platform || 'ALL']
          }}
    runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
    steps:
      - name: Cleaning up unnecessary files
        run: |
          sudo apt-get remove --purge -y mono-* \
            ghc* cabal-install* \
            dotnet* \
            php* \
            ruby* \
            mysql-* \
            postgresql-* \
            azure-cli \
            powershell \
            google-cloud-sdk \
            msodbcsql* mssql-tools* \
            imagemagick* \
            libgl1-mesa-dri \
            google-chrome-stable \
            firefox
          sudo apt-get autoremove -y
          sudo apt-get clean
      - run: |
          sudo mount -t tmpfs tmpfs .
        if: ${{ github.event.inputs.runner == 'fast' }}
      - uses: actions/checkout@v4
        with:
          submodules: recursive
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODEJS_VERSION }}
      - name: Set up docker QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Configure sccache
        uses: actions/github-script@v7
        with:
          script: |
            core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
            core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
      - name: Make
        run: TARGET=${{ matrix.triple }} make cli
        env:
          PLATFORM: ${{ matrix.arch }}
          SCCACHE_GHA_ENABLED: on
          SCCACHE_GHA_VERSION: 0
      - uses: actions/upload-artifact@v4
        with:
          name: start-cli_${{ matrix.triple }}
          path: core/target/${{ matrix.triple }}/release/start-cli
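The `ENVIRONMENT` and `runs-on` expressions in the workflows above lean on the same idiom: GitHub Actions expressions have no ternary operator, so a two-element JSON array is indexed with a boolean, which coerces to 0 or 1. A minimal Python sketch of that selection logic (illustrative only, not code from the repository; the `pick` helper is hypothetical):

```python
import json

def pick(options_json: str, condition: bool) -> str:
    """Emulate the fromJson('["a", "b"]')[condition] idiom: the boolean
    condition coerces to an index, False -> 0 (first element), True -> 1."""
    options = json.loads(options_json)
    return options[int(condition)]

# ENVIRONMENT: '["dev", ""]' indexed by (environment == 'NONE')
assert pick('["dev", ""]', False) == "dev"  # a real environment was chosen
assert pick('["dev", ""]', True) == ""      # 'NONE' selects the empty string

# runs-on: standard vs. fast runner
assert pick('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]', True) == "buildjet-32vcpu-ubuntu-2204"
```

The nested `fromJson(format(...))` variants later in the diff extend the same trick: `format` first substitutes per-arch runner names into the two-element array, and the boolean index then picks between them.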

.github/workflows/start-registry.yaml (new file, 203 additions)

@@ -0,0 +1,203 @@
name: Start-Registry
on:
  workflow_call:
  workflow_dispatch:
    inputs:
      environment:
        type: choice
        description: Environment
        options:
          - NONE
          - dev
          - unstable
          - dev-unstable
      runner:
        type: choice
        description: Runner
        options:
          - standard
          - fast
      arch:
        type: choice
        description: Architecture
        options:
          - ALL
          - x86_64
          - aarch64
          - riscv64
  push:
    branches:
      - master
      - next/*
  pull_request:
    branches:
      - master
      - next/*
env:
  NODEJS_VERSION: "24.11.0"
  ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
jobs:
  compile:
    name: Build Debian Package
    strategy:
      fail-fast: true
      matrix:
        arch: >-
          ${{
            fromJson('{
              "x86_64": ["x86_64"],
              "aarch64": ["aarch64"],
              "riscv64": ["riscv64"],
              "ALL": ["x86_64", "aarch64", "riscv64"]
            }')[github.event.inputs.platform || 'ALL']
          }}
    runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
    steps:
      - name: Cleaning up unnecessary files
        run: |
          sudo apt-get remove --purge -y mono-* \
            ghc* cabal-install* \
            dotnet* \
            php* \
            ruby* \
            mysql-* \
            postgresql-* \
            azure-cli \
            powershell \
            google-cloud-sdk \
            msodbcsql* mssql-tools* \
            imagemagick* \
            libgl1-mesa-dri \
            google-chrome-stable \
            firefox
          sudo apt-get autoremove -y
          sudo apt-get clean
      - run: |
          sudo mount -t tmpfs tmpfs .
        if: ${{ github.event.inputs.runner == 'fast' }}
      - uses: actions/checkout@v4
        with:
          submodules: recursive
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODEJS_VERSION }}
      - name: Set up docker QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Configure sccache
        uses: actions/github-script@v7
        with:
          script: |
            core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
            core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
      - name: Make
        run: make registry-deb
        env:
          PLATFORM: ${{ matrix.arch }}
          SCCACHE_GHA_ENABLED: on
          SCCACHE_GHA_VERSION: 0
      - uses: actions/upload-artifact@v4
        with:
          name: start-registry_${{ matrix.arch }}.deb
          path: results/start-registry-*_${{ matrix.arch }}.deb
  create-image:
    name: Create Docker Image
    needs: [compile]
    permissions:
      contents: read
      packages: write
    runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
    steps:
      - name: Cleaning up unnecessary files
        run: |
          sudo apt-get remove --purge -y google-chrome-stable firefox mono-devel
          sudo apt-get autoremove -y
          sudo apt-get clean
      - run: |
          sudo mount -t tmpfs tmpfs .
        if: ${{ github.event.inputs.runner == 'fast' }}
      - name: Set up docker QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: "Login to GitHub Container Registry"
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Docker meta
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ghcr.io/Start9Labs/startos-registry
          tags: |
            type=raw,value=${{ github.ref_name }}
      - name: Download debian package
        uses: actions/download-artifact@v4
        with:
          pattern: start-registry_*.deb
      - name: Map matrix.arch to docker platform
        run: |
          platforms=""
          for deb in *.deb; do
            filename=$(basename "$deb" .deb)
            arch="${filename#*_}"
            case "$arch" in
              x86_64)
                platform="linux/amd64"
                ;;
              aarch64)
                platform="linux/arm64"
                ;;
              riscv64)
                platform="linux/riscv64"
                ;;
              *)
                echo "Unknown architecture: $arch" >&2
                exit 1
                ;;
            esac
            if [ -z "$platforms" ]; then
              platforms="$platform"
            else
              platforms="$platforms,$platform"
            fi
          done
          echo "DOCKER_PLATFORM=$platforms" >> "$GITHUB_ENV"
      - run: |
          cat | docker buildx build --platform "$DOCKER_PLATFORM" --push -t ${{ steps.meta.outputs.tags }} -f - . << 'EOF'
          FROM debian:trixie
          ADD *.deb .
          RUN apt-get install -y ./*_$(uname -m).deb && rm *.deb
          VOLUME /var/lib/startos
          ENV RUST_LOG=startos=debug
          ENTRYPOINT ["start-registryd"]
          EOF

.github/workflows/start-tunnel.yaml (new file, 114 additions)

@@ -0,0 +1,114 @@
name: Start-Tunnel
on:
  workflow_call:
  workflow_dispatch:
    inputs:
      environment:
        type: choice
        description: Environment
        options:
          - NONE
          - dev
          - unstable
          - dev-unstable
      runner:
        type: choice
        description: Runner
        options:
          - standard
          - fast
      arch:
        type: choice
        description: Architecture
        options:
          - ALL
          - x86_64
          - aarch64
          - riscv64
  push:
    branches:
      - master
      - next/*
  pull_request:
    branches:
      - master
      - next/*
env:
  NODEJS_VERSION: "24.11.0"
  ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
jobs:
  compile:
    name: Build Debian Package
    strategy:
      fail-fast: true
      matrix:
        arch: >-
          ${{
            fromJson('{
              "x86_64": ["x86_64"],
              "aarch64": ["aarch64"],
              "riscv64": ["riscv64"],
              "ALL": ["x86_64", "aarch64", "riscv64"]
            }')[github.event.inputs.platform || 'ALL']
          }}
    runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
    steps:
      - name: Cleaning up unnecessary files
        run: |
          sudo apt-get remove --purge -y mono-* \
            ghc* cabal-install* \
            dotnet* \
            php* \
            ruby* \
            mysql-* \
            postgresql-* \
            azure-cli \
            powershell \
            google-cloud-sdk \
            msodbcsql* mssql-tools* \
            imagemagick* \
            libgl1-mesa-dri \
            google-chrome-stable \
            firefox
          sudo apt-get autoremove -y
          sudo apt-get clean
      - run: |
          sudo mount -t tmpfs tmpfs .
        if: ${{ github.event.inputs.runner == 'fast' }}
      - uses: actions/checkout@v4
        with:
          submodules: recursive
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODEJS_VERSION }}
      - name: Set up docker QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Configure sccache
        uses: actions/github-script@v7
        with:
          script: |
            core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
            core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
      - name: Make
        run: make tunnel-deb
        env:
          PLATFORM: ${{ matrix.arch }}
          SCCACHE_GHA_ENABLED: on
          SCCACHE_GHA_VERSION: 0
      - uses: actions/upload-artifact@v4
        with:
          name: start-tunnel_${{ matrix.arch }}.deb
          path: results/start-tunnel-*_${{ matrix.arch }}.deb


@@ -64,11 +64,47 @@ jobs:
             "aarch64-nonfree": ["aarch64"],
             "raspberrypi": ["aarch64"],
             "riscv64": ["riscv64"],
-            "ALL": ["x86_64", "aarch64"]
+            "ALL": ["x86_64", "aarch64", "riscv64"]
           }')[github.event.inputs.platform || 'ALL']
         }}
-    runs-on: ${{ fromJson('["ubuntu-22.04", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
+    runs-on: >-
+      ${{
+        fromJson(
+          format(
+            '["{0}", "{1}"]',
+            fromJson('{
+              "x86_64": "ubuntu-latest",
+              "aarch64": "ubuntu-24.04-arm",
+              "riscv64": "ubuntu-latest"
+            }')[matrix.arch],
+            fromJson('{
+              "x86_64": "buildjet-32vcpu-ubuntu-2204",
+              "aarch64": "buildjet-32vcpu-ubuntu-2204-arm",
+              "riscv64": "buildjet-32vcpu-ubuntu-2204"
+            }')[matrix.arch]
+          )
+        )[github.event.inputs.runner == 'fast']
+      }}
     steps:
+      - name: Cleaning up unnecessary files
+        run: |
+          sudo apt-get remove --purge -y azure-cli || true
+          sudo apt-get remove --purge -y firefox || true
+          sudo apt-get remove --purge -y ghc-* || true
+          sudo apt-get remove --purge -y google-cloud-sdk || true
+          sudo apt-get remove --purge -y google-chrome-stable || true
+          sudo apt-get remove --purge -y powershell || true
+          sudo apt-get remove --purge -y php* || true
+          sudo apt-get remove --purge -y ruby* || true
+          sudo apt-get remove --purge -y mono-* || true
+          sudo apt-get autoremove -y
+          sudo apt-get clean
+          sudo rm -rf /usr/lib/jvm            # All JDKs
+          sudo rm -rf /usr/local/.ghcup       # Haskell toolchain
+          sudo rm -rf /usr/local/lib/android  # Android SDK/NDK, emulator
+          sudo rm -rf /usr/share/dotnet       # .NET SDKs
+          sudo rm -rf /usr/share/swift        # Swift toolchain (if present)
+          sudo rm -rf "$AGENT_TOOLSDIRECTORY" # Pre-cached tool cache (Go, Node, etc.)
       - run: |
           sudo mount -t tmpfs tmpfs .
         if: ${{ github.event.inputs.runner == 'fast' }}
@@ -102,12 +138,6 @@ jobs:
             core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
             core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
-      - name: Use Beta Toolchain
-        run: rustup default beta
-      - name: Setup Cross
-        run: cargo install cross --git https://github.com/cross-rs/cross
       - name: Make
         run: make ARCH=${{ matrix.arch }} compiled-${{ matrix.arch }}.tar
         env:
@@ -130,7 +160,7 @@ jobs:
            format(
              '[
                ["{0}"],
-               ["x86_64", "x86_64-nonfree", "aarch64", "aarch64-nonfree", "raspberrypi"]
+               ["x86_64", "x86_64-nonfree", "aarch64", "aarch64-nonfree", "riscv64", "raspberrypi"]
              ]',
              github.event.inputs.platform || 'ALL'
            )
@@ -140,7 +170,15 @@ jobs:
      ${{
        fromJson(
          format(
-            '["ubuntu-22.04", "{0}"]',
+            '["{0}", "{1}"]',
+            fromJson('{
+              "x86_64": "ubuntu-latest",
+              "x86_64-nonfree": "ubuntu-latest",
+              "aarch64": "ubuntu-24.04-arm",
+              "aarch64-nonfree": "ubuntu-24.04-arm",
+              "raspberrypi": "ubuntu-24.04-arm",
+              "riscv64": "ubuntu-24.04-arm",
+            }')[matrix.platform],
            fromJson('{
              "x86_64": "buildjet-8vcpu-ubuntu-2204",
              "x86_64-nonfree": "buildjet-8vcpu-ubuntu-2204",
@@ -166,7 +204,24 @@ jobs:
      }}
     steps:
       - name: Free space
-        run: rm -rf /opt/hostedtoolcache*
+        run: |
+          sudo apt-get remove --purge -y azure-cli || true
+          sudo apt-get remove --purge -y firefox || true
+          sudo apt-get remove --purge -y ghc-* || true
+          sudo apt-get remove --purge -y google-cloud-sdk || true
+          sudo apt-get remove --purge -y google-chrome-stable || true
+          sudo apt-get remove --purge -y powershell || true
+          sudo apt-get remove --purge -y php* || true
+          sudo apt-get remove --purge -y ruby* || true
+          sudo apt-get remove --purge -y mono-* || true
+          sudo apt-get autoremove -y
+          sudo apt-get clean
+          sudo rm -rf /usr/lib/jvm            # All JDKs
+          sudo rm -rf /usr/local/.ghcup       # Haskell toolchain
+          sudo rm -rf /usr/local/lib/android  # Android SDK/NDK, emulator
+          sudo rm -rf /usr/share/dotnet       # .NET SDKs
+          sudo rm -rf /usr/share/swift        # Swift toolchain (if present)
+          sudo rm -rf "$AGENT_TOOLSDIRECTORY" # Pre-cached tool cache (Go, Node, etc.)
         if: ${{ github.event.inputs.runner != 'fast' }}
       - uses: actions/checkout@v4
@@ -273,7 +328,7 @@ jobs:
   index:
     if: ${{ github.event.inputs.deploy != '' && github.event.inputs.deploy != 'NONE' }}
     needs: [image]
-    runs-on: ubuntu-22.04
+    runs-on: ubuntu-latest
     steps:
       - run: >-
           curl "https://${{


@@ -17,7 +17,7 @@ env:
 jobs:
   test:
     name: Run Automated Tests
-    runs-on: ubuntu-22.04
+    runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
         with:
@@ -27,11 +27,5 @@ jobs:
         with:
           node-version: ${{ env.NODEJS_VERSION }}
-      - name: Use Beta Toolchain
-        run: rustup default beta
-      - name: Setup Cross
-        run: cargo install cross --git https://github.com/cross-rs/cross
       - name: Build And Run Tests
         run: make test


@@ -27,7 +27,7 @@ WEB_START_TUNNEL_SRC := $(call ls-files, web/projects/start-tunnel)
 PATCH_DB_CLIENT_SRC := $(shell git ls-files --recurse-submodules patch-db/client)
 GZIP_BIN := $(shell which pigz || which gzip)
 TAR_BIN := $(shell which gtar || which tar)
-COMPILED_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox core/target/$(RUST_ARCH)-unknown-linux-musl/release/containerbox container-runtime/rootfs.$(ARCH).squashfs
+COMPILED_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox core/target/$(RUST_ARCH)-unknown-linux-musl/release/start-container container-runtime/rootfs.$(ARCH).squashfs
 STARTOS_TARGETS := $(STARTD_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE) $(COMPILED_TARGETS) cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs $(PLATFORM_FILE) \
 	$(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then \
 		echo cargo-deps/aarch64-unknown-linux-musl/release/pi-beep; \
@@ -40,7 +40,6 @@ STARTOS_TARGETS := $(STARTD_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_
 	fi')
 REGISTRY_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox core/startos/start-registryd.service
 TUNNEL_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox core/startos/start-tunneld.service
-REBUILD_TYPES = 1
 ifeq ($(REMOTE),)
 mkdir = mkdir -p $1
@@ -63,7 +62,7 @@ endif
 .DELETE_ON_ERROR:
-.PHONY: all metadata install clean format cli uis ui reflash deb $(IMAGE_TYPE) squashfs wormhole wormhole-deb test test-core test-sdk test-container-runtime registry install-registry tunnel install-tunnel
+.PHONY: all metadata install clean format install-cli cli uis ui reflash deb $(IMAGE_TYPE) squashfs wormhole wormhole-deb test test-core test-sdk test-container-runtime registry install-registry tunnel install-tunnel ts-bindings
 all: $(STARTOS_TARGETS)
@@ -113,8 +112,11 @@ test-sdk: $(call ls-files, sdk) sdk/base/lib/osBindings/index.ts
 test-container-runtime: container-runtime/node_modules/.package-lock.json $(call ls-files, container-runtime/src) container-runtime/package.json container-runtime/tsconfig.json
 	cd container-runtime && npm test
-cli:
-	./core/install-cli.sh
+install-cli: $(GIT_HASH_FILE)
+	./core/build-cli.sh --install
+cli: $(GIT_HASH_FILE)
+	./core/build-cli.sh
 registry: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox
@@ -159,8 +161,8 @@ results/$(REGISTRY_BASENAME).deb: dpkg-build.sh $(call ls-files,debian/start-reg
 tunnel-deb: results/$(TUNNEL_BASENAME).deb
-results/$(TUNNEL_BASENAME).deb: dpkg-build.sh $(call ls-files,debian/start-tunnel) $(TUNNEL_TARGETS)
-	PROJECT=start-tunnel PLATFORM=$(ARCH) REQUIRES=debian DEPENDS=wireguard-tools,iptables ./build/os-compat/run-compat.sh ./dpkg-build.sh
+results/$(TUNNEL_BASENAME).deb: dpkg-build.sh $(call ls-files,debian/start-tunnel) $(TUNNEL_TARGETS) build/lib/scripts/forward-port
+	PROJECT=start-tunnel PLATFORM=$(ARCH) REQUIRES=debian DEPENDS=wireguard-tools,iptables,conntrack ./build/os-compat/run-compat.sh ./dpkg-build.sh
 $(IMAGE_TYPE): results/$(BASENAME).$(IMAGE_TYPE)
@@ -226,7 +228,7 @@ wormhole-squashfs: results/$(BASENAME).squashfs
 	$(eval SQFS_SIZE := $(shell du -s --bytes results/$(BASENAME).squashfs | awk '{print $$1}'))
 	@echo "Paste the following command into the shell of your StartOS server:"
 	@echo
-	@wormhole send results/$(BASENAME).squashfs 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo sh -c '"'"'/usr/lib/startos/scripts/prune-images $(SQFS_SIZE) && /usr/lib/startos/scripts/prune-boot && cd /media/startos/images && wormhole receive --accept-file %s && CHECKSUM=$(SQFS_SUM) /usr/lib/startos/scripts/use-img ./$(BASENAME).squashfs'"'"'\n", $$3 }'
+	@wormhole send results/$(BASENAME).squashfs 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo sh -c '"'"'/usr/lib/startos/scripts/prune-images $(SQFS_SIZE) && /usr/lib/startos/scripts/prune-boot && cd /media/startos/images && wormhole receive --accept-file %s && CHECKSUM=$(SQFS_SUM) /usr/lib/startos/scripts/upgrade ./$(BASENAME).squashfs'"'"'\n", $$3 }'
 update: $(STARTOS_TARGETS)
 	@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
@@ -254,7 +256,7 @@ update-squashfs: results/$(BASENAME).squashfs
 	$(call ssh,'/usr/lib/startos/scripts/prune-images $(SQFS_SIZE)')
 	$(call ssh,'/usr/lib/startos/scripts/prune-boot')
 	$(call cp,results/$(BASENAME).squashfs,/media/startos/images/next.rootfs)
-	$(call ssh,'sudo CHECKSUM=$(SQFS_SUM) /usr/lib/startos/scripts/use-img /media/startos/images/next.rootfs')
+	$(call ssh,'sudo CHECKSUM=$(SQFS_SUM) /usr/lib/startos/scripts/upgrade /media/startos/images/next.rootfs')
 emulate-reflash: $(STARTOS_TARGETS)
 	@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
@@ -277,10 +279,9 @@ container-runtime/node_modules/.package-lock.json: container-runtime/package-loc
 	npm --prefix container-runtime ci
 	touch container-runtime/node_modules/.package-lock.json
-sdk/base/lib/osBindings/index.ts: $(shell if [ "$(REBUILD_TYPES)" -ne 0 ]; then echo core/startos/bindings/index.ts; fi)
+ts-bindings: core/startos/bindings/index.ts
 	mkdir -p sdk/base/lib/osBindings
 	rsync -ac --delete core/startos/bindings/ sdk/base/lib/osBindings/
-	touch sdk/base/lib/osBindings/index.ts
 core/startos/bindings/index.ts: $(call ls-files, core) $(ENVIRONMENT_FILE)
 	rm -rf core/startos/bindings
@@ -302,8 +303,8 @@ container-runtime/dist/node_modules/.package-lock.json container-runtime/dist/pa
 	./container-runtime/install-dist-deps.sh
 	touch container-runtime/dist/node_modules/.package-lock.json
-container-runtime/rootfs.$(ARCH).squashfs: container-runtime/debian.$(ARCH).squashfs container-runtime/container-runtime.service container-runtime/update-image.sh container-runtime/deb-install.sh container-runtime/dist/index.js container-runtime/dist/node_modules/.package-lock.json core/target/$(RUST_ARCH)-unknown-linux-musl/release/containerbox
+container-runtime/rootfs.$(ARCH).squashfs: container-runtime/debian.$(ARCH).squashfs container-runtime/container-runtime.service container-runtime/update-image.sh container-runtime/deb-install.sh container-runtime/dist/index.js container-runtime/dist/node_modules/.package-lock.json core/target/$(RUST_ARCH)-unknown-linux-musl/release/start-container
-	ARCH=$(ARCH) REQUIRES=linux ./build/os-compat/run-compat.sh ./container-runtime/update-image.sh
+	ARCH=$(ARCH) REQUIRES=qemu ./build/os-compat/run-compat.sh ./container-runtime/update-image.sh
 build/lib/depends build/lib/conflicts: $(ENVIRONMENT_FILE) $(PLATFORM_FILE) $(shell ls build/dpkg-deps/*)
 	PLATFORM=$(PLATFORM) ARCH=$(ARCH) build/dpkg-deps/generate.sh
@@ -315,9 +316,9 @@ core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox: $(CORE_SRC) $(C
 	ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build-startbox.sh
 	touch core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox
-core/target/$(RUST_ARCH)-unknown-linux-musl/release/containerbox: $(CORE_SRC) $(ENVIRONMENT_FILE)
+core/target/$(RUST_ARCH)-unknown-linux-musl/release/start-container: $(CORE_SRC) $(ENVIRONMENT_FILE)
ARCH=$(ARCH) ./core/build-containerbox.sh ARCH=$(ARCH) ./core/build-start-container.sh
touch core/target/$(RUST_ARCH)-unknown-linux-musl/release/containerbox touch core/target/$(RUST_ARCH)-unknown-linux-musl/release/start-container
web/package-lock.json: web/package.json sdk/baseDist/package.json web/package-lock.json: web/package.json sdk/baseDist/package.json
npm --prefix web i npm --prefix web i
@@ -373,14 +374,17 @@ uis: $(WEB_UIS)
# this is a convenience step to build the UI # this is a convenience step to build the UI
ui: web/dist/raw/ui ui: web/dist/raw/ui
cargo-deps/aarch64-unknown-linux-musl/release/pi-beep: cargo-deps/aarch64-unknown-linux-musl/release/pi-beep: ./build-cargo-dep.sh
ARCH=aarch64 ./build-cargo-dep.sh pi-beep ARCH=aarch64 ./build-cargo-dep.sh pi-beep
cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/tokio-console: cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/tokio-console: ./build-cargo-dep.sh
ARCH=$(ARCH) PREINSTALL="apk add musl-dev pkgconfig" ./build-cargo-dep.sh tokio-console ARCH=$(ARCH) ./build-cargo-dep.sh tokio-console
touch $@
cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs: cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs: ./build-cargo-dep.sh
ARCH=$(ARCH) PREINSTALL="apk add fuse3 fuse3-dev fuse3-static musl-dev pkgconfig" ./build-cargo-dep.sh --git https://github.com/Start9Labs/start-fs.git startos-backup-fs ARCH=$(ARCH) ./build-cargo-dep.sh --git https://github.com/Start9Labs/start-fs.git startos-backup-fs
touch $@
cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/flamegraph: cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/flamegraph: ./build-cargo-dep.sh
ARCH=$(ARCH) PREINSTALL="apk add musl-dev pkgconfig" ./build-cargo-dep.sh flamegraph ARCH=$(ARCH) ./build-cargo-dep.sh flamegraph
touch $@


@@ -1,34 +1,44 @@
 # StartTunnel
-A self-hosted Wiregaurd VPN optimized for creating VLANs and reverse tunneling to personal servers.
+A self-hosted WireGuard VPN optimized for creating VLANs and reverse tunneling to personal servers.
-You can think of StartTunnel as "virtual router in the cloud"
+You can think of StartTunnel as "virtual router in the cloud".
-Use it for private, remote access, to self-hosted services running on a personal server, or to expose self-hosted services to the public Internet without revealing the host server's IP address.
+Use it for private remote access to self-hosted services running on a personal server, or to expose self-hosted services to the public Internet without revealing the host server's IP address.
+## Features
+- **Create Subnets**: Each subnet creates a private, virtual local area network (VLAN), similar to the LAN created by a home router.
+- **Add Devices**: When you add a device (server, phone, laptop) to a subnet, it receives a LAN IP address on that subnet as well as a unique WireGuard config that must be copied, downloaded, or scanned into the device.
+- **Forward Ports**: Forwarding a port creates a "reverse tunnel", exposing a specific port on a specific device to the public Internet.
 ## Installation
 1. Rent a low cost VPS. For most use cases, the cheapest option should be enough.
 - It must have a dedicated public IP address.
-- For (CPU), memory (RAM), and storage (disk), choose the minimum spec.
+- For compute (CPU), memory (RAM), and storage (disk), choose the minimum spec.
 - For transfer (bandwidth), it depends on (1) your use case and (2) your home Internet's _upload_ speed. Even if you intend to serve large files or stream content from your server, there is no reason to pay for speeds that exceed your home Internet's upload speed.
 1. Provision the VPS with the latest version of Debian.
 1. Access the VPS via SSH.
-1. Install StartTunnel:
-@TODO
-## Features
-- **Create Subnets**: Each subnet creates a private, virtual local area network (VLAN), similar to the LAN created by a home router.
-- **Add Devices**: When you add a device (server, phone, laptop) to a subnet, it receives a LAN IP address on that subnet as well as a unique Wireguard config that must be copied, downloaded, or scanned into the device.
-- **Forward Ports**: Forwarding a port creates a "reverse tunnel", exposing a specific port on a specific device to the public Internet.
+1. Run the StartTunnel install script:
+curl -fsSL https://start9labs.github.io/start-tunnel | sh
+1. [Initialize the web interface](#web-interface) (recommended)
+## Updating
+Simply re-run the install command:
+```sh
+curl -fsSL https://start9labs.github.io/start-tunnel | sh
+```
 ## CLI
@@ -40,20 +50,46 @@ start-tunnel --help
 ## Web Interface
-If you choose to enable the web interface (recommended in most cases), StartTunnel can be accessed as a website from the browser, or programmatically via API.
+Enable the web interface (recommended in most cases) to access your StartTunnel from the browser or via API.
 1. Initialize the web interface.
 start-tunnel web init
-1. When prompted, select the IP address at which to host the web interface. In many cases, there will be only one IP address.
+1. If your VPS has multiple public IP addresses, you will be prompted to select the IP address at which to host the web interface.
-1. When prompted, enter the port at which to host the web interface. The default is 8443, and we recommend using it. If you change the default, choose an uncommon port to avoid conflicts.
+1. When prompted, enter the port at which to host the web interface. The default is 8443, and we recommend using it. If you change the default, choose an uncommon port to avoid future conflicts.
-1. Select whether to autogenerate a self-signed certificate or provide your own certificate and key. If you choose to autogenerate, you will be asked to list all IP addresses and domains for which to sign the certificate. For example, if you intend to access your StartTunnel web UI at a domain, include the domain in the list.
+1. To access your StartTunnel web interface securely over HTTPS, you need an SSL certificate. When prompted, select whether to autogenerate a certificate or provide your own. _This is only for accessing your StartTunnel web interface_.
-1. You will receive a success message that the webserver is running at the chosen IP:port, as well as your SSL certificate and an autogenerated UI password.
-1. If not already, trust the certificate in your system keychain and/or browser.
-1. If you lose/forget your password, you can reset it using the CLI.
+1. You will receive a success message with 3 pieces of information:
+- **<https://IP:port>**: the URL where you can reach your personal web interface.
+- **Password**: an autogenerated password for your interface. If you lose/forget it, you can reset it using the start-tunnel CLI.
+- **Root Certificate Authority**: the Root CA of your StartTunnel instance.
+1. If you autogenerated your SSL certificate, visiting the `https://IP:port` URL in the browser will warn you that the website is insecure. This is expected. You have two options for getting past this warning:
+- Option 1 (recommended): [Trust your StartTunnel Root CA on your connecting device](#trusting-your-starttunnel-root-ca).
+- Option 2: bypass the warning in the browser, creating a one-time security exception.
+### Trusting your StartTunnel Root CA
+1. Copy the contents of your Root CA (starting with -----BEGIN CERTIFICATE----- and ending with -----END CERTIFICATE-----).
+2. Open a text editor:
+- Linux: gedit, nano, or any editor
+- Mac: TextEdit
+- Windows: Notepad
+3. Paste the contents of your Root CA.
+4. Save the file with a `.crt` extension (e.g. `start-tunnel.crt`) (make sure it saves as plain text, not rich text).
+5. Trust the Root CA on your client device(s):
+- [Linux](https://staging.docs.start9.com/device-guides/linux/ca.html)
+- [Mac](https://staging.docs.start9.com/device-guides/mac/ca.html)
+- [Windows](https://staging.docs.start9.com/device-guides/windows/ca.html)
+- [Android/Graphene](https://staging.docs.start9.com/device-guides/android/ca.html)
+- [iOS](https://staging.docs.start9.com/device-guides/ios/ca.html)
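The "save it as a `.crt` file" steps above can also be done from a terminal. A minimal sketch, using a hypothetical path and placeholder PEM body (paste your real Root CA between the markers):

```sh
# Save the copied Root CA to a plain-text .crt file, then sanity-check that
# both PEM markers made it in before importing it into your keychain.
cat > /tmp/start-tunnel.crt <<'EOF'
-----BEGIN CERTIFICATE-----
...paste the base64 body from your success message here...
-----END CERTIFICATE-----
EOF
grep -c 'CERTIFICATE-----' /tmp/start-tunnel.crt   # 2 lines: BEGIN and END
```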

agents/VERSION_BUMP.md Normal file

@@ -0,0 +1,201 @@
# StartOS Version Bump Guide
This document explains how to bump the StartOS version across the entire codebase.
## Overview
When bumping from version `X.Y.Z-alpha.N` to `X.Y.Z-alpha.N+1`, you need to update files in multiple locations across the repository. The `// VERSION_BUMP` comment markers indicate where changes are needed.
## Files to Update
### 1. Core Rust Crate Version
**File: `core/startos/Cargo.toml`**
Update the version string (line ~18):
```toml
version = "0.4.0-alpha.15" # VERSION_BUMP
```
**File: `core/Cargo.lock`**
This file is auto-generated. After updating `Cargo.toml`, run:
```bash
cd core
cargo check
```
This will update the version in `Cargo.lock` automatically.
### 2. Create New Version Migration Module
**File: `core/startos/src/version/vX_Y_Z_alpha_N+1.rs`**
Create a new version file by copying the previous version and updating:
```rust
use exver::{PreReleaseSegment, VersionRange};

use super::v0_3_5::V0_3_0_COMPAT;
use super::{VersionT, v0_4_0_alpha_14}; // Update to previous version
use crate::prelude::*;

lazy_static::lazy_static! {
    static ref V0_4_0_alpha_15: exver::Version = exver::Version::new(
        [0, 4, 0],
        [PreReleaseSegment::String("alpha".into()), 15.into()] // Update number
    );
}

#[derive(Clone, Copy, Debug, Default)]
pub struct Version;

impl VersionT for Version {
    type Previous = v0_4_0_alpha_14::Version; // Update to previous version
    type PreUpRes = ();

    async fn pre_up(self) -> Result<Self::PreUpRes, Error> {
        Ok(())
    }
    fn semver(self) -> exver::Version {
        V0_4_0_alpha_15.clone() // Update version name
    }
    fn compat(self) -> &'static VersionRange {
        &V0_3_0_COMPAT
    }
    #[instrument(skip_all)]
    fn up(self, _db: &mut Value, _: Self::PreUpRes) -> Result<Value, Error> {
        // Add migration logic here if needed
        Ok(Value::Null)
    }
    fn down(self, _db: &mut Value) -> Result<(), Error> {
        // Add rollback logic here if needed
        Ok(())
    }
}
```
### 3. Update Version Module Registry
**File: `core/startos/src/version/mod.rs`**
Make changes in **5 locations**:
#### Location 1: Module Declaration (~line 57)
Add the new module after the previous version:
```rust
mod v0_4_0_alpha_14;
mod v0_4_0_alpha_15; // Add this
```
#### Location 2: Current Type Alias (~line 59)
Update the `Current` type and move the `// VERSION_BUMP` comment:
```rust
pub type Current = v0_4_0_alpha_15::Version; // VERSION_BUMP
```
#### Location 3: Version Enum (~line 175)
Remove `// VERSION_BUMP` from the previous version, add new variant, add comment:
```rust
V0_4_0_alpha_14(Wrapper<v0_4_0_alpha_14::Version>),
V0_4_0_alpha_15(Wrapper<v0_4_0_alpha_15::Version>), // VERSION_BUMP
Other(exver::Version),
```
#### Location 4: as_version_t() Match (~line 233)
Remove `// VERSION_BUMP`, add new match arm, add comment:
```rust
Self::V0_4_0_alpha_14(v) => DynVersion(Box::new(v.0)),
Self::V0_4_0_alpha_15(v) => DynVersion(Box::new(v.0)), // VERSION_BUMP
Self::Other(v) => {
```
#### Location 5: as_exver() Match (~line 284, inside #[cfg(test)])
Remove `// VERSION_BUMP`, add new match arm, add comment:
```rust
Version::V0_4_0_alpha_14(Wrapper(x)) => x.semver(),
Version::V0_4_0_alpha_15(Wrapper(x)) => x.semver(), // VERSION_BUMP
Version::Other(x) => x.clone(),
```
### 4. SDK TypeScript Version
**File: `sdk/package/lib/StartSdk.ts`**
Update the OSVersion constant (~line 64):
```typescript
export const OSVersion = testTypeVersion("0.4.0-alpha.15");
```
### 5. Web UI Package Version
**File: `web/package.json`**
Update the version field:
```json
{
  "name": "startos-ui",
  "version": "0.4.0-alpha.15",
  ...
}
```
**File: `web/package-lock.json`**
This file is auto-generated, but it is often faster to update it by hand: find every `"startos-ui"` entry and update its `version` field. Alternatively, `npm --prefix web i` will regenerate it.
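The manual edit is a single substitution; a sketch demonstrated on a throwaway copy (the real file is `web/package.json`, and the versions here are illustrative):

```sh
# Create a minimal stand-in for web/package.json, then bump its version field.
printf '{\n  "name": "startos-ui",\n  "version": "0.4.0-alpha.15"\n}\n' > /tmp/package.json
sed -i 's/"version": "0.4.0-alpha.15"/"version": "0.4.0-alpha.16"/' /tmp/package.json
grep '"version"' /tmp/package.json
```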
## Verification Step
Run a full build to confirm everything still compiles:
```bash
make
```
## VERSION_BUMP Comment Pattern
The `// VERSION_BUMP` comment serves as a marker for where to make changes next time:
- Always **remove** it from the old location
- **Add** the new version entry
- **Move** the comment to mark the new location
This pattern helps you quickly find all the places that need updating in the next version bump.
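Before starting a bump, a recursive grep for the marker lists every location at once. In a real checkout you would run it from the repo root (e.g. `grep -rn 'VERSION_BUMP' core sdk web`); the demo below builds a throwaway tree so the command is self-contained:

```sh
# Build a one-file stand-in for the repo, then search it for the marker.
mkdir -p /tmp/vb-demo/core/startos
echo 'version = "0.4.0-alpha.15" # VERSION_BUMP' > /tmp/vb-demo/core/startos/Cargo.toml
grep -rn 'VERSION_BUMP' /tmp/vb-demo   # prints file:line:content for each marker
```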
## Summary Checklist
- [ ] Update `core/startos/Cargo.toml` version
- [ ] Create new `core/startos/src/version/vX_Y_Z_alpha_N+1.rs` file
- [ ] Update `core/startos/src/version/mod.rs` in 5 locations
- [ ] Run `cargo check` to update `core/Cargo.lock`
- [ ] Update `sdk/package/lib/StartSdk.ts` OSVersion
- [ ] Update `web/package.json` and `web/package-lock.json` version
- [ ] Verify all changes compile/build successfully
## Migration Logic
The `up()` and `down()` methods in the version file handle database migrations:
- **up()**: Migrates the database from the previous version to this version
- **down()**: Rolls back from this version to the previous version
- **pre_up()**: Runs before migration, useful for pre-migration checks or data gathering
If no migration is needed, return `Ok(Value::Null)` for `up()` and `Ok(())` for `down()`.
For complex migrations, you may need to:
1. Update `type PreUpRes` to pass data between `pre_up()` and `up()`
2. Implement database transformations in the `up()` method
3. Implement reverse transformations in `down()` for rollback support


@@ -8,27 +8,22 @@ if [ "$0" != "./build-cargo-dep.sh" ]; then
 exit 1
 fi
-USE_TTY=
-if tty -s; then
-USE_TTY="-it"
-fi
 if [ -z "$ARCH" ]; then
 ARCH=$(uname -m)
 fi
-DOCKER_PLATFORM="linux/${ARCH}"
-if [ "$ARCH" = aarch64 ] || [ "$ARCH" = arm64 ]; then
-DOCKER_PLATFORM="linux/arm64"
-elif [ "$ARCH" = x86_64 ]; then
-DOCKER_PLATFORM="linux/amd64"
-fi
+RUST_ARCH="$ARCH"
+if [ "$ARCH" = "riscv64" ]; then
+RUST_ARCH="riscv64gc"
+fi
 mkdir -p cargo-deps
-alias 'rust-musl-builder'='docker run $USE_TTY --platform=${DOCKER_PLATFORM} --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)"/cargo-deps:/home/rust/src -w /home/rust/src -P rust:alpine'
-PREINSTALL=${PREINSTALL:-true}
-rust-musl-builder sh -c "$PREINSTALL && cargo install $* --target-dir /home/rust/src --target=$ARCH-unknown-linux-musl"
-sudo chown -R $USER cargo-deps
-sudo chown -R $USER ~/.cargo
+source core/builder-alias.sh
+RUSTFLAGS="-C target-feature=+crt-static"
+rust-zig-builder cargo-zigbuild install $* --target-dir /workdir/cargo-deps/ --target=$RUST_ARCH-unknown-linux-musl
+if [ "$(ls -nd "cargo-deps/$RUST_ARCH-unknown-linux-musl/release/${!#}" | awk '{ print $3 }')" != "$UID" ]; then
+rust-zig-builder sh -c "chown -R $UID:$UID cargo-deps && chown -R $UID:$UID /usr/local/cargo"
+fi


@@ -7,6 +7,7 @@ bmon
 btrfs-progs
 ca-certificates
 cifs-utils
+conntrack
 cryptsetup
 curl
 dmidecode
@@ -19,7 +20,6 @@ flashrom
 fuse3
 grub-common
 grub-efi
-grub2-common
 htop
 httpdirfs
 iotop


@@ -1,6 +1,4 @@
-- grub-common
 - grub-efi
-- grub2-common
 + parted
 + raspberrypi-net-mods
 + raspberrypi-sys-mods


@@ -22,7 +22,7 @@ parse_essential_db_info() {
 RAM_GB="unknown"
 fi
-RUNNING_SERVICES=$(jq -r '[.value.packageData[] | select(.status.main == "running")] | length' "$DB_DUMP" 2>/dev/null)
+RUNNING_SERVICES=$(jq -r '[.value.packageData[] | select(.statusInfo.started != null)] | length' "$DB_DUMP" 2>/dev/null)
 TOTAL_SERVICES=$(jq -r '.value.packageData | length' "$DB_DUMP" 2>/dev/null)
 rm -f "$DB_DUMP"


@@ -10,24 +10,24 @@ fi
 POSITIONAL_ARGS=()
 while [[ $# -gt 0 ]]; do
 case $1 in
 --no-sync)
 NO_SYNC=1
 shift
 ;;
 --create)
 ONLY_CREATE=1
 shift
 ;;
 -*|--*)
 echo "Unknown option $1"
 exit 1
 ;;
 *)
 POSITIONAL_ARGS+=("$1") # save positional arg
 shift # past argument
 ;;
 esac
 done
 set -- "${POSITIONAL_ARGS[@]}" # restore positional parameters
@@ -35,7 +35,7 @@ set -- "${POSITIONAL_ARGS[@]}" # restore positional parameters
 if [ -z "$NO_SYNC" ]; then
 echo 'Syncing...'
 umount -R /media/startos/next 2> /dev/null
-umount -R /media/startos/upper 2> /dev/null
+umount /media/startos/upper 2> /dev/null
 rm -rf /media/startos/upper /media/startos/next
 mkdir /media/startos/upper
 mount -t tmpfs tmpfs /media/startos/upper
@@ -43,8 +43,6 @@ if [ -z "$NO_SYNC" ]; then
 mount -t overlay \
 -olowerdir=/media/startos/current,upperdir=/media/startos/upper/data,workdir=/media/startos/upper/work \
 overlay /media/startos/next
-mkdir -p /media/startos/next/media/startos/root
-mount --bind /media/startos/root /media/startos/next/media/startos/root
 fi
 if [ -n "$ONLY_CREATE" ]; then
@@ -56,12 +54,18 @@ mkdir -p /media/startos/next/dev
 mkdir -p /media/startos/next/sys
 mkdir -p /media/startos/next/proc
 mkdir -p /media/startos/next/boot
+mkdir -p /media/startos/next/media/startos/root
 mount --bind /run /media/startos/next/run
 mount --bind /tmp /media/startos/next/tmp
 mount --bind /dev /media/startos/next/dev
 mount --bind /sys /media/startos/next/sys
 mount --bind /proc /media/startos/next/proc
 mount --bind /boot /media/startos/next/boot
+mount --bind /media/startos/root /media/startos/next/media/startos/root
+if mountpoint /sys/firmware/efi/efivars 2> /dev/null; then
+mount --bind /sys/firmware/efi/efivars /media/startos/next/sys/firmware/efi/efivars
+fi
 if [ -z "$*" ]; then
 chroot /media/startos/next
@@ -71,6 +75,10 @@ else
 CHROOT_RES=$?
 fi
+if mountpoint /media/startos/next/sys/firmware/efi/efivars 2> /dev/null; then
+umount /media/startos/next/sys/firmware/efi/efivars
+fi
 umount /media/startos/next/run
 umount /media/startos/next/tmp
 umount /media/startos/next/dev
@@ -87,11 +95,12 @@ if [ "$CHROOT_RES" -eq 0 ]; then
 echo 'Upgrading...'
+rm -f /media/startos/images/next.squashfs
 if ! time mksquashfs /media/startos/next /media/startos/images/next.squashfs -b 4096 -comp gzip; then
-umount -R /media/startos/next
-umount -R /media/startos/upper
+umount -l /media/startos/next
+umount -l /media/startos/upper
 rm -rf /media/startos/upper /media/startos/next
 exit 1
 fi
 hash=$(b3sum /media/startos/images/next.squashfs | head -c 32)
 mv /media/startos/images/next.squashfs /media/startos/images/${hash}.rootfs
@@ -103,5 +112,5 @@ if [ "$CHROOT_RES" -eq 0 ]; then
 fi
 umount -R /media/startos/next
-umount -R /media/startos/upper
+umount /media/startos/upper
 rm -rf /media/startos/upper /media/startos/next
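The option loop at the top of this script is the standard collect-positionals-while-consuming-flags pattern. A POSIX-sh rendition with hypothetical sample arguments:

```sh
# Simulate an invocation like: ./script --no-sync file1 --create file2
set -- --no-sync file1 --create file2
POSITIONAL=""
while [ $# -gt 0 ]; do
    case $1 in
        --no-sync) NO_SYNC=1; shift ;;        # flag: record and consume
        --create) ONLY_CREATE=1; shift ;;     # flag: record and consume
        -*) echo "Unknown option $1"; exit 1 ;;
        *) POSITIONAL="$POSITIONAL $1"; shift ;;  # save positional arg
    esac
done
echo "NO_SYNC=$NO_SYNC ONLY_CREATE=$ONLY_CREATE positional:$POSITIONAL"
```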


@@ -1,38 +1,51 @@
 #!/bin/bash
-if [ -z "$sip" ] || [ -z "$dip" ] || [ -z "$sport" ] || [ -z "$dport" ]; then
+if [ -z "$sip" ] || [ -z "$dip" ] || [ -z "$dprefix" ] || [ -z "$sport" ] || [ -z "$dport" ]; then
 >&2 echo 'missing required env var'
 exit 1
 fi
-# Helper function to check if a rule exists
-nat_rule_exists() {
-iptables -t nat -C "$@" 2>/dev/null
-}
-# Helper function to add or delete a rule idempotently
-# Usage: apply_rule [add|del] <iptables args...>
-apply_nat_rule() {
-local action="$1"
-shift
-if [ "$action" = "add" ]; then
-# Only add if rule doesn't exist
-if ! rule_exists "$@"; then
-iptables -t nat -A "$@"
-fi
-elif [ "$action" = "del" ]; then
-if rule_exists "$@"; then
-iptables -t nat -D "$@"
-fi
-fi
-}
+NAME="F$(echo "$sip:$sport -> $dip/$dprefix:$dport" | sha256sum | head -c 15)"
+for kind in INPUT FORWARD ACCEPT; do
+if ! iptables -C $kind -j "${NAME}_${kind}" 2> /dev/null; then
+iptables -N "${NAME}_${kind}" 2> /dev/null
+iptables -A $kind -j "${NAME}_${kind}"
+fi
+done
+for kind in PREROUTING INPUT OUTPUT POSTROUTING; do
+if ! iptables -t nat -C $kind -j "${NAME}_${kind}" 2> /dev/null; then
+iptables -t nat -N "${NAME}_${kind}" 2> /dev/null
+iptables -t nat -A $kind -j "${NAME}_${kind}"
+fi
+done
+err=0
+trap 'err=1' ERR
+for kind in INPUT FORWARD ACCEPT; do
+iptables -F "${NAME}_${kind}" 2> /dev/null
+done
+for kind in PREROUTING INPUT OUTPUT POSTROUTING; do
+iptables -t nat -F "${NAME}_${kind}" 2> /dev/null
+done
 if [ "$UNDO" = 1 ]; then
-action="del"
-else
-action="add"
+conntrack -D -p tcp -d $sip --dport $sport || true # conntrack returns exit 1 if no connections are active
+conntrack -D -p udp -d $sip --dport $sport || true # conntrack returns exit 1 if no connections are active
+exit $err
 fi
-apply_nat_rule "$action" PREROUTING -p tcp -d $sip --dport $sport -j DNAT --to-destination $dip:$dport
-apply_nat_rule "$action" OUTPUT -p tcp -d $sip --dport $sport -j DNAT --to-destination $dip:$dport
+iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
+iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
+iptables -t nat -A ${NAME}_OUTPUT -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
+iptables -t nat -A ${NAME}_OUTPUT -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
+iptables -t nat -A ${NAME}_PREROUTING -s "$dip/$dprefix" -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
+iptables -t nat -A ${NAME}_PREROUTING -s "$dip/$dprefix" -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
+iptables -t nat -A ${NAME}_POSTROUTING -s "$dip/$dprefix" -d "$dip" -p tcp --dport "$dport" -j MASQUERADE
+iptables -t nat -A ${NAME}_POSTROUTING -s "$dip/$dprefix" -d "$dip" -p udp --dport "$dport" -j MASQUERADE
+iptables -A ${NAME}_FORWARD -d $dip -p tcp --dport $dport -m state --state NEW -j ACCEPT
+iptables -A ${NAME}_FORWARD -d $dip -p udp --dport $dport -m state --state NEW -j ACCEPT
+exit $err
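The new script keys its per-forward iptables chains on a hash of the forward tuple, so the same forward always maps to the same chain names. A standalone sketch of that derivation (the sample values are hypothetical; no iptables involved):

```sh
# Derive the chain-name prefix: sha256 of "sip:sport -> dip/dprefix:dport",
# truncated to 15 hex chars and prefixed with "F" (14 chars of room remain
# for the "_PREROUTING" etc. suffix within iptables' 28-char chain limit).
sip=203.0.113.7 sport=443 dip=10.0.0.2 dprefix=24 dport=8443
NAME="F$(echo "$sip:$sport -> $dip/$dprefix:$dport" | sha256sum | head -c 15)"
echo "$NAME"   # "F" followed by 15 hex characters; stable for the same tuple
```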


@@ -1,36 +1,64 @@
 #!/bin/bash
-fail=$(printf " [\033[31m fail \033[0m]")
-pass=$(printf " [\033[32m pass \033[0m]")
+# --- Config ---
+# Colors (using printf to ensure compatibility)
+GRAY=$(printf '\033[90m')
+GREEN=$(printf '\033[32m')
+RED=$(printf '\033[31m')
+NC=$(printf '\033[0m') # No Color
+# Proxies to test
+proxies=(
+"Host Tor|127.0.1.1:9050"
+"Startd Tor|10.0.3.1:9050"
+)
+# Default URLs
 onion_list=(
+"The Tor Project|http://2gzyxa5ihm7nsggfxnu52rck2vv4rvmdlkiu3zzui5du4xyclen53wid.onion"
 "Start9|http://privacy34kn4ez3y3nijweec6w4g54i3g54sdv7r5mr6soma3w4begyd.onion"
 "Mempool|http://mempoolhqx4isw62xs7abwphsq7ldayuidyx2v2oethdhhj6mlo2r6ad.onion"
 "DuckDuckGo|https://duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion"
 "Brave Search|https://search.brave4u7jddbv7cyviptqjc7jusxh72uik7zt6adtckl5f4nwy2v72qd.onion"
 )
-# Check if ~/.startos/tor-check.list exists and read its contents if available
-if [ -f ~/.startos/tor-check.list ]; then
-while IFS= read -r line; do
-# Check if the line starts with a #
-if [[ ! "$line" =~ ^# ]]; then
-onion_list+=("$line")
-fi
-done < ~/.startos/tor-check.list
-fi
+# Load custom list
+[ -f ~/.startos/tor-check.list ] && readarray -t custom_list < <(grep -v '^#' ~/.startos/tor-check.list) && onion_list+=("${custom_list[@]}")
+# --- Functions ---
+print_line() { printf "${GRAY}────────────────────────────────────────${NC}\n"; }
-echo "Testing connection to Onion Pages ..."
-for data in "${onion_list[@]}"; do
-name="${data%%|*}"
-url="${data#*|}"
-if curl --socks5-hostname localhost:9050 "$url" > /dev/null 2>&1; then
-echo " ${pass}: $name ($url) "
-else
-echo " ${fail}: $name ($url) "
-fi
-done
-echo
-echo "Done."
+# --- Main ---
+echo "Testing Onion Connections..."
+for proxy_info in "${proxies[@]}"; do
+proxy_name="${proxy_info%%|*}"
+proxy_addr="${proxy_info#*|}"
+print_line
+printf "${GRAY}Proxy: %s (%s)${NC}\n" "$proxy_name" "$proxy_addr"
+for data in "${onion_list[@]}"; do
+name="${data%%|*}"
+url="${data#*|}"
+# Capture verbose output + http code.
+# --no-progress-meter: Suppresses the "0 0 0" stats but keeps -v output
+output=$(curl -v --no-progress-meter --max-time 15 --socks5-hostname "$proxy_addr" "$url" 2>&1)
+exit_code=$?
+if [ $exit_code -eq 0 ]; then
+printf " ${GREEN}[pass]${NC} %s (%s)\n" "$name" "$url"
+else
+printf " ${RED}[fail]${NC} %s (%s)\n" "$name" "$url"
+printf " ${RED}↳ Curl Error %s${NC}\n" "$exit_code"
+# Print the last 4 lines of verbose log to show the specific handshake error
+# We look for lines starting with '*' or '>' or '<' to filter out junk if any remains
+echo "$output" | tail -n 4 | sed "s/^/ ${GRAY}/"
+fi
+done
+done
+print_line
+# Reset color just in case
+printf "${NC}"
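Both versions of the tor-check script split their `name|url` records with shell parameter expansion rather than `cut` or `awk`. A standalone demonstration of the two expansions:

```sh
# "%%|*" strips the longest suffix starting at '|' (keeps the name);
# "#*|" strips the shortest prefix through '|' (keeps the URL).
data="Start9|http://privacy34kn4ez3y3nijweec6w4g54i3g54sdv7r5mr6soma3w4begyd.onion"
name="${data%%|*}"
url="${data#*|}"
printf '%s\n%s\n' "$name" "$url"
```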

build/lib/scripts/upgrade Executable file

@@ -0,0 +1,82 @@
#!/bin/bash
set -e
SOURCE_DIR="$(dirname $(realpath "${BASH_SOURCE[0]}"))"
if [ "$UID" -ne 0 ]; then
>&2 echo 'Must be run as root'
exit 1
fi
if ! [ -f "$1" ]; then
>&2 echo "usage: $0 <SQUASHFS>"
exit 1
fi
echo 'Upgrading...'
hash=$(b3sum $1 | head -c 32)
if [ -n "$CHECKSUM" ] && [ "$hash" != "$CHECKSUM" ]; then
>&2 echo 'Checksum mismatch'
exit 2
fi
unsquashfs -f -d / $1 boot
umount -R /media/startos/next 2> /dev/null || true
umount /media/startos/upper 2> /dev/null || true
umount /media/startos/lower 2> /dev/null || true
mkdir -p /media/startos/upper
mount -t tmpfs tmpfs /media/startos/upper
mkdir -p /media/startos/lower /media/startos/upper/data /media/startos/upper/work /media/startos/next
mount $1 /media/startos/lower
mount -t overlay \
-olowerdir=/media/startos/lower,upperdir=/media/startos/upper/data,workdir=/media/startos/upper/work \
overlay /media/startos/next
mkdir -p /media/startos/next/run
mkdir -p /media/startos/next/dev
mkdir -p /media/startos/next/sys
mkdir -p /media/startos/next/proc
mkdir -p /media/startos/next/boot
mkdir -p /media/startos/next/media/startos/root
mount --bind /run /media/startos/next/run
mount --bind /tmp /media/startos/next/tmp
mount --bind /dev /media/startos/next/dev
mount --bind /sys /media/startos/next/sys
mount --bind /proc /media/startos/next/proc
mount --bind /boot /media/startos/next/boot
mount --bind /media/startos/root /media/startos/next/media/startos/root
if mountpoint /boot/efi 2> /dev/null; then
mkdir -p /media/startos/next/boot/efi
mount --bind /boot/efi /media/startos/next/boot/efi
fi
if mountpoint /sys/firmware/efi/efivars 2> /dev/null; then
mount --bind /sys/firmware/efi/efivars /media/startos/next/sys/firmware/efi/efivars
fi
chroot /media/startos/next bash -e << "EOF"
if [ -f /boot/grub/grub.cfg ]; then
grub-install /dev/$(eval $(lsblk -o MOUNTPOINT,PKNAME -P | grep 'MOUNTPOINT="/media/startos/root"') && echo $PKNAME)
update-grub
fi
EOF
sync
umount -Rl /media/startos/next
umount /media/startos/upper
umount /media/startos/lower
mv $1 /media/startos/images/${hash}.rootfs
ln -rsf /media/startos/images/${hash}.rootfs /media/startos/config/current.rootfs
sync
echo 'System upgrade complete. Reboot to apply changes...'
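The checksum gate near the top of this script can be sketched in isolation. The demo below substitutes `sha256sum` for `b3sum` (which may not be installed locally) and uses a hypothetical temp file; in the real script, `CHECKSUM` is exported by the Makefile's update and wormhole targets:

```sh
# Refuse to install an image whose hash disagrees with the expected checksum.
f=/tmp/demo.squashfs
echo 'image bytes' > "$f"
CHECKSUM=$(sha256sum "$f" | head -c 32)   # what the sender claims
hash=$(sha256sum "$f" | head -c 32)       # what we actually received
if [ -n "$CHECKSUM" ] && [ "$hash" != "$CHECKSUM" ]; then
    echo 'Checksum mismatch' >&2
    exit 2
fi
echo 'checksum ok'
```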


@@ -1,61 +0,0 @@
#!/bin/bash
set -e
if [ "$UID" -ne 0 ]; then
>&2 echo 'Must be run as root'
exit 1
fi
if [ -z "$1" ]; then
>&2 echo "usage: $0 <SQUASHFS>"
exit 1
fi
VERSION=$(unsquashfs -cat $1 /usr/lib/startos/VERSION.txt)
GIT_HASH=$(unsquashfs -cat $1 /usr/lib/startos/GIT_HASH.txt)
B3SUM=$(b3sum $1 | head -c 32)
if [ -n "$CHECKSUM" ] && [ "$CHECKSUM" != "$B3SUM" ]; then
>&2 echo "CHECKSUM MISMATCH"
exit 2
fi
mv $1 /media/startos/images/${B3SUM}.rootfs
ln -rsf /media/startos/images/${B3SUM}.rootfs /media/startos/config/current.rootfs
unsquashfs -n -f -d / /media/startos/images/${B3SUM}.rootfs boot
umount -R /media/startos/next 2> /dev/null || true
umount -R /media/startos/lower 2> /dev/null || true
umount -R /media/startos/upper 2> /dev/null || true
rm -rf /media/startos/lower /media/startos/upper /media/startos/next
mkdir /media/startos/upper
mount -t tmpfs tmpfs /media/startos/upper
mkdir -p /media/startos/lower /media/startos/upper/data /media/startos/upper/work /media/startos/next
mount /media/startos/images/${B3SUM}.rootfs /media/startos/lower
mount -t overlay \
-olowerdir=/media/startos/lower,upperdir=/media/startos/upper/data,workdir=/media/startos/upper/work \
overlay /media/startos/next
mkdir -p /media/startos/next/media/startos/root
mount --bind /media/startos/root /media/startos/next/media/startos/root
mkdir -p /media/startos/next/dev
mkdir -p /media/startos/next/sys
mkdir -p /media/startos/next/proc
mkdir -p /media/startos/next/boot
mount --bind /dev /media/startos/next/dev
mount --bind /sys /media/startos/next/sys
mount --bind /proc /media/startos/next/proc
mount --bind /boot /media/startos/next/boot
chroot /media/startos/next update-grub2
umount -R /media/startos/next
umount -R /media/startos/upper
umount -R /media/startos/lower
rm -rf /media/startos/lower /media/startos/upper /media/startos/next
sync
reboot


@@ -1,4 +1,4 @@
-FROM debian:bookworm
+FROM debian:forky
 RUN apt-get update && \
 apt-get install -y \
@@ -25,7 +25,8 @@ RUN apt-get update && \
 systemd-container \
 systemd-sysv \
 dbus \
-dbus-user-session
+dbus-user-session \
+nodejs
 RUN systemctl mask \
 systemd-firstboot.service \
@@ -38,17 +39,6 @@ RUN git config --global --add safe.directory /root/start-os
 RUN mkdir -p /etc/debspawn && \
 echo "AllowUnsafePermissions=true" > /etc/debspawn/global.toml
-ENV NVM_DIR=~/.nvm
-ENV NODE_VERSION=22
-RUN mkdir -p $NVM_DIR && \
-curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/master/install.sh | bash && \
-. $NVM_DIR/nvm.sh \
-nvm install $NODE_VERSION && \
-nvm use $NODE_VERSION && \
-nvm alias default $NODE_VERSION && \
-ln -s $(which node) /usr/bin/node && \
-ln -s $(which npm) /usr/bin/npm
 RUN mkdir -p /root/start-os
 WORKDIR /root/start-os


@@ -1,6 +1,6 @@
 #!/bin/bash
-if [ "$FORCE_COMPAT" = 1 ] || ( [ "$REQUIRES" = "linux" ] && [ "$(uname -s)" != "Linux" ] ) || ( [ "$REQUIRES" = "debian" ] && ! which dpkg > /dev/null ); then
+if [ "$FORCE_COMPAT" = 1 ] || ( [ "$REQUIRES" = "linux" ] && [ "$(uname -s)" != "Linux" ] ) || ( [ "$REQUIRES" = "debian" ] && ! which dpkg > /dev/null ) || ( [ "$REQUIRES" = "qemu" ] && ! which qemu-$ARCH > /dev/null ); then
 project_pwd="$(cd "$(dirname "${BASH_SOURCE[0]}")"/../.. && pwd)/"
 pwd="$(pwd)/"
 if ! [[ "$pwd" = "$project_pwd"* ]]; then
@@ -20,7 +20,7 @@ if [ "$FORCE_COMPAT" = 1 ] || ( [ "$REQUIRES" = "linux" ] && [ "$(uname -s)" !=
 while ! docker exec os-compat systemctl is-active --quiet multi-user.target 2> /dev/null; do sleep .5; done
 docker exec -eARCH -eENVIRONMENT -ePLATFORM -eGIT_BRANCH_AS_HASH -ePROJECT -eDEPENDS -eCONFLICTS $USE_TTY -w "/root/start-os${rel_pwd}" os-compat $@
 code=$?
-docker stop os-compat
+docker stop os-compat > /dev/null
 exit $code
 else
 exec $@


@@ -4,6 +4,7 @@ OnFailure=container-runtime-failure.service
 [Service]
 Type=simple
+Environment=RUST_LOG=startos=debug
 ExecStart=/usr/bin/node --experimental-detect-module --trace-warnings --unhandled-rejections=warn /usr/lib/startos/init/index.js
 Restart=no


@@ -38,7 +38,7 @@
 },
 "../sdk/dist": {
 "name": "@start9labs/start-sdk",
-"version": "0.4.0-beta.42",
+"version": "0.4.0-beta.45",
 "license": "MIT",
 "dependencies": {
 "@iarna/toml": "^3.0.0",


@@ -289,6 +289,7 @@ export function makeEffects(context: EffectContext): Effects {
 getStatus(...[o]: Parameters<T.Effects["getStatus"]>) {
 return rpcRound("get-status", o) as ReturnType<T.Effects["getStatus"]>
 },
+/// DEPRECATED
 setMainStatus(o: { status: "running" | "stopped" }): Promise<null> {
 return rpcRound("set-main-status", o) as ReturnType<
 T.Effects["setHealth"]


@@ -146,6 +146,7 @@ const handleRpc = (id: IdType, result: Promise<RpcResult>) =>
 const hasId = object({ id: idType }).test
 export class RpcListener {
+  shouldExit = false
   unixSocketServer = net.createServer(async (server) => {})
   private _system: System | undefined
   private callbacks: CallbackHolder | undefined
@@ -158,6 +159,8 @@
     this.unixSocketServer.listen(SOCKET_PATH)
+    console.log("Listening on %s", SOCKET_PATH)
     this.unixSocketServer.on("connection", (s) => {
       let id: IdType = null
       const captureId = <X>(x: X) => {
@@ -208,6 +211,11 @@
         .catch(mapError)
         .then(logData("response"))
         .then(writeDataToSocket)
+        .then((_) => {
+          if (this.shouldExit) {
+            process.exit(0)
+          }
+        })
         .catch((e) => {
           console.error(`Major error in socket handling: ${e}`)
           console.debug(`Data in: ${a.toString()}`)
@@ -284,10 +292,13 @@
       )
     })
     .when(stopType, async ({ id }) => {
-      this.callbacks?.removeChild("main")
       return handleRpc(
         id,
-        this.system.stop().then((result) => ({ result })),
+        this.system.stop().then((result) => {
+          this.callbacks?.removeChild("main")
+          return { result }
+        }),
       )
     })
     .when(exitType, async ({ id, params }) => {
@@ -308,6 +319,7 @@
         }),
         target,
       )
+      this.shouldExit = true
     }
   })().then((result) => ({ result })),
 )


@@ -118,7 +118,7 @@ export class DockerProcedureContainer extends Drop {
 subpath: volumeMount.path,
 readonly: volumeMount.readonly,
 volumeId: volumeMount["volume-id"],
-filetype: "directory",
+idmap: [],
 },
 })
 } else if (volumeMount.type === "backup") {


@@ -120,6 +120,7 @@ export class MainLoop {
   ? {
       preferredExternalPort: lanConf.external,
       alpn: { specified: ["http/1.1"] },
+      addXForwardedHeaders: false,
     }
   : null,
 })
@@ -133,7 +134,7 @@
 delete this.mainEvent
 delete this.healthLoops
 await main?.daemon
-  .stop()
+  .term()
   .catch((e: unknown) => console.error(`Main loop error`, utils.asError(e)))
 this.effects.setMainStatus({ status: "stopped" })
 if (healthLoops) healthLoops.forEach((x) => clearInterval(x.interval))


@@ -456,6 +456,7 @@ export class SystemForEmbassy implements System {
 addSsl = {
   preferredExternalPort: lanPortNum,
   alpn: { specified: [] },
+  addXForwardedHeaders: false,
 }
 }
 return [
@@ -888,7 +889,6 @@
 effects: Effects,
 timeoutMs: number | null,
 ): Promise<PropertiesReturn> {
-// TODO BLU-J set the properties ever so often
 const setConfigValue = this.manifest.properties
 if (!setConfigValue) throw new Error("There is no properties")
 if (setConfigValue.type === "docker") {
@@ -1043,7 +1043,7 @@
 volumeId: "embassy",
 subpath: null,
 readonly: true,
-filetype: "directory",
+idmap: [],
 },
 })
 configFile
@@ -1191,7 +1191,7 @@ async function updateConfig(
 volumeId: "embassy",
 subpath: null,
 readonly: true,
-filetype: "directory",
+idmap: [],
 },
 })
 const remoteConfig = configFile
@@ -1241,11 +1241,11 @@
 : catchFn(
     () =>
       (specValue.target === "lan-address"
-        ? filled.addressInfo!.localHostnames[0] ||
-          filled.addressInfo!.onionHostnames[0]
-        : filled.addressInfo!.onionHostnames[0] ||
-          filled.addressInfo!.localHostnames[0]
-      ).hostname.value,
+        ? filled.addressInfo!.filter({ kind: "mdns" }) ||
+          filled.addressInfo!.onion
+        : filled.addressInfo!.onion ||
+          filled.addressInfo!.filter({ kind: "mdns" })
+      ).hostnames[0].hostname.value,
   ) || ""
 mutConfigValue[key] = url
 }


@@ -68,22 +68,18 @@ export class SystemForStartOs implements System {
 try {
   if (this.runningMain || this.starting) return
   this.starting = true
-  effects.constRetry = utils.once(() => effects.restart())
+  effects.constRetry = utils.once(() => {
+    console.debug(".const() triggered")
+    effects.restart()
+  })
   let mainOnTerm: () => Promise<void> | undefined
-  const started = async (onTerm: () => Promise<void>) => {
-    await effects.setMainStatus({ status: "running" })
-    mainOnTerm = onTerm
-    return null
-  }
   const daemons = await (
     await this.abi.main({
       effects,
-      started,
     })
   ).build()
   this.runningMain = {
     stop: async () => {
-      if (mainOnTerm) await mainOnTerm()
       await daemons.term()
     },
   }


@@ -7,6 +7,12 @@ const getDependencies: AllGetDependencies = {
 system: getSystem,
 }
+for (let s of ["SIGTERM", "SIGINT", "SIGHUP"]) {
+  process.on(s, (s) => {
+    console.log(`Caught ${s}`)
+  })
+}
 new RpcListener(getDependencies)
 /**


@@ -9,7 +9,7 @@ if [ "$ARCH" = "riscv64" ]; then
 RUST_ARCH="riscv64gc"
 fi
-if mountpoint -q tmp/combined; then sudo umount -R tmp/combined; fi
+if mountpoint -q tmp/combined; then sudo umount -l tmp/combined; fi
 if mountpoint -q tmp/lower; then sudo umount tmp/lower; fi
 sudo rm -rf tmp
 mkdir -p tmp/lower tmp/upper tmp/work tmp/combined
@@ -26,15 +26,15 @@ fi
 QEMU=
 if [ "$ARCH" != "$(uname -m)" ]; then
-QEMU=/usr/bin/qemu-${ARCH}-static
-if ! which qemu-$ARCH-static > /dev/null; then
->&2 echo qemu-user-static is required for cross-platform builds
+QEMU=/usr/bin/qemu-${ARCH}
+if ! which qemu-$ARCH > /dev/null; then
+>&2 echo qemu-user is required for cross-platform builds
 sudo umount tmp/combined
 sudo umount tmp/lower
 sudo rm -rf tmp
 exit 1
 fi
-sudo cp $(which qemu-$ARCH-static) tmp/combined${QEMU}
+sudo cp $(which qemu-$ARCH) tmp/combined${QEMU}
 fi
 sudo mkdir -p tmp/combined/usr/lib/startos/
@@ -44,7 +44,7 @@ sudo cp container-runtime.service tmp/combined/lib/systemd/system/container-runt
 sudo chown 0:0 tmp/combined/lib/systemd/system/container-runtime.service
 sudo cp container-runtime-failure.service tmp/combined/lib/systemd/system/container-runtime-failure.service
 sudo chown 0:0 tmp/combined/lib/systemd/system/container-runtime-failure.service
-sudo cp ../core/target/${RUST_ARCH}-unknown-linux-musl/release/containerbox tmp/combined/usr/bin/start-container
+sudo cp ../core/target/${RUST_ARCH}-unknown-linux-musl/release/start-container tmp/combined/usr/bin/start-container
 echo -e '#!/bin/bash\nexec start-container "$@"' | sudo tee tmp/combined/usr/bin/start-cli # TODO: remove
 sudo chmod +x tmp/combined/usr/bin/start-cli
 sudo chown 0:0 tmp/combined/usr/bin/start-container
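The script above only stages a qemu-user binary in the container filesystem when building for a foreign architecture; that decision reduces to a string comparison between the target arch and the host arch. A minimal sketch, with `needs_qemu` as a hypothetical helper introduced only for illustration:

```shell
#!/bin/bash
# The build script only copies a qemu-user binary into the target filesystem
# when cross-building; the decision reduces to comparing the target arch with
# the host arch. `needs_qemu` is a hypothetical helper for illustration.
needs_qemu() {
  # $1: target architecture, $2: host architecture (normally "$(uname -m)")
  [ "$1" != "$2" ]
}

if needs_qemu "riscv64" "$(uname -m)"; then
  echo "cross build: would stage /usr/bin/qemu-riscv64 in the chroot"
fi
if ! needs_qemu "$(uname -m)" "$(uname -m)"; then
  echo "native build: no emulator needed"
fi
```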

core/Cargo.lock (generated): 1197 lines changed; diff suppressed because it is too large.


@@ -1,3 +1,3 @@
 [workspace]
-members = ["helpers", "models", "startos"]
+members = ["startos"]


@@ -2,12 +2,34 @@
 cd "$(dirname "${BASH_SOURCE[0]}")"
+source ./builder-alias.sh
 set -ea
+INSTALL=false
+while [[ $# -gt 0 ]]; do
+  case $1 in
+    --install)
+      INSTALL=true
+      shift
+      ;;
+    *)
+      >&2 echo "Unknown option: $1"
+      exit 1
+      ;;
+  esac
+done
 shopt -s expand_aliases
 PROFILE=${PROFILE:-release}
 if [ "${PROFILE}" = "release" ]; then
   BUILD_FLAGS="--release"
+else
+  if [ "$PROFILE" != "debug" ]; then
+    >&2 echo "Unknown profile $PROFILE: falling back to debug..."
+    PROFILE=debug
+  fi
 fi
 if [ -z "${ARCH:-}" ]; then
@@ -18,15 +40,20 @@ if [ "$ARCH" = "arm64" ]; then
 ARCH="aarch64"
 fi
+RUST_ARCH="$ARCH"
+if [ "$ARCH" = "riscv64" ]; then
+  RUST_ARCH="riscv64gc"
+fi
 if [ -z "${KERNEL_NAME:-}" ]; then
 KERNEL_NAME=$(uname -s)
 fi
 if [ -z "${TARGET:-}" ]; then
 if [ "$KERNEL_NAME" = "Linux" ]; then
-TARGET="$ARCH-unknown-linux-musl"
+TARGET="$RUST_ARCH-unknown-linux-musl"
 elif [ "$KERNEL_NAME" = "Darwin" ]; then
-TARGET="$ARCH-apple-darwin"
+TARGET="$RUST_ARCH-apple-darwin"
 else
 >&2 echo "unknown kernel $KERNEL_NAME"
 exit 1
@@ -34,18 +61,7 @@ if [ -z "${TARGET:-}" ]; then
 fi
 fi
 cd ..
-# Ensure GIT_HASH.txt exists if not created by higher-level build steps
-if [ ! -f GIT_HASH.txt ] && command -v git >/dev/null 2>&1; then
-  git rev-parse HEAD > GIT_HASH.txt || true
-fi
 FEATURES="$(echo "${ENVIRONMENT:-}" | sed 's/-/,/g')"
-FEATURE_ARGS="cli"
-if [ -n "$FEATURES" ]; then
-  FEATURE_ARGS="$FEATURE_ARGS,$FEATURES"
-fi
 RUSTFLAGS=""
 if [[ "${ENVIRONMENT:-}" =~ (^|-)console($|-) ]]; then
 RUSTFLAGS="--cfg tokio_unstable"
@@ -53,4 +69,11 @@ fi
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
-cross build --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features $FEATURE_ARGS --locked --bin start-cli --target=$TARGET
+rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin start-cli --target=$TARGET
+if [ "$(ls -nd "core/target/$TARGET/$PROFILE/start-cli" | awk '{ print $3 }')" != "$UID" ]; then
+  rust-zig-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /usr/local/cargo"
+fi
+if [ "$INSTALL" = "true" ]; then
+  cp "core/target/$TARGET/$PROFILE/start-cli" ~/.cargo/bin/start-cli
+fi
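The build scripts reclaim ownership of `core/target` after the Docker builder has written into it as root, and the check is just reading the numeric owner with `ls -nd`. A runnable sketch of that check, using a scratch temp file as a stand-in for the real build artifact:

```shell
#!/bin/bash
set -e
# Scratch file standing in for a build artifact such as core/target/.../start-cli.
tmpfile=$(mktemp)

# `ls -nd` prints the numeric owner UID in field 3; the build scripts compare
# it against $UID to decide whether a chown fix-up pass is needed.
owner=$(ls -nd "$tmpfile" | awk '{ print $3 }')

if [ "$owner" != "$(id -u)" ]; then
  echo "ownership mismatch: a chown fix-up would run here"
else
  echo "ownership ok"
fi
rm -f "$tmpfile"
```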


@@ -2,12 +2,19 @@
 cd "$(dirname "${BASH_SOURCE[0]}")"
+source ./builder-alias.sh
 set -ea
 shopt -s expand_aliases
 PROFILE=${PROFILE:-release}
 if [ "${PROFILE}" = "release" ]; then
   BUILD_FLAGS="--release"
+else
+  if [ "$PROFILE" != "debug" ]; then
+    >&2 echo "Unknown profile $PROFILE: falling back to debug..."
+    PROFILE=debug
+  fi
 fi
 if [ -z "$ARCH" ]; then
@@ -33,4 +40,7 @@ fi
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
-cross build --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features cli-registry,registry,$FEATURES --locked --bin registrybox --target=$RUST_ARCH-unknown-linux-musl
+rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin registrybox --target=$RUST_ARCH-unknown-linux-musl
+if [ "$(ls -nd "core/target/$RUST_ARCH-unknown-linux-musl/$PROFILE/registrybox" | awk '{ print $3 }')" != "$UID" ]; then
+  rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID /usr/local/cargo"
+fi


@@ -2,12 +2,19 @@
 cd "$(dirname "${BASH_SOURCE[0]}")"
+source ./builder-alias.sh
 set -ea
 shopt -s expand_aliases
 PROFILE=${PROFILE:-release}
 if [ "${PROFILE}" = "release" ]; then
   BUILD_FLAGS="--release"
+else
+  if [ "$PROFILE" != "debug" ]; then
+    >&2 echo "Unknown profile $PROFILE: falling back to debug..."
+    PROFILE=debug
+  fi
 fi
 if [ -z "$ARCH" ]; then
@@ -33,4 +40,7 @@ fi
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
-cross build --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features cli-container,$FEATURES --locked --bin containerbox --target=$RUST_ARCH-unknown-linux-musl
+rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin start-container --target=$RUST_ARCH-unknown-linux-musl
+if [ "$(ls -nd "core/target/$RUST_ARCH-unknown-linux-musl/$PROFILE/start-container" | awk '{ print $3 }')" != "$UID" ]; then
+  rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID /usr/local/cargo"
+fi


@@ -2,12 +2,19 @@
 cd "$(dirname "${BASH_SOURCE[0]}")"
+source ./builder-alias.sh
 set -ea
 shopt -s expand_aliases
 PROFILE=${PROFILE:-release}
 if [ "${PROFILE}" = "release" ]; then
   BUILD_FLAGS="--release"
+else
+  if [ "$PROFILE" != "debug" ]; then
+    >&2 echo "Unknown profile $PROFILE: falling back to debug..."
+    PROFILE=debug
+  fi
 fi
 if [ -z "$ARCH" ]; then
@@ -33,4 +40,7 @@ fi
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
-cross build --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features cli,startd,$FEATURES --locked --bin startbox --target=$RUST_ARCH-unknown-linux-musl
+rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin startbox --target=$RUST_ARCH-unknown-linux-musl
+if [ "$(ls -nd "core/target/$RUST_ARCH-unknown-linux-musl/$PROFILE/startbox" | awk '{ print $3 }')" != "$UID" ]; then
+  rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID /usr/local/cargo"
+fi


@@ -2,12 +2,19 @@
 cd "$(dirname "${BASH_SOURCE[0]}")"
+source ./builder-alias.sh
 set -ea
 shopt -s expand_aliases
 PROFILE=${PROFILE:-release}
 if [ "${PROFILE}" = "release" ]; then
   BUILD_FLAGS="--release"
+else
+  if [ "$PROFILE" != "debug" ]; then
+    >&2 echo "Unknown profile $PROFILE: falling back to debug..."
+    PROFILE=debug
+  fi
 fi
 if [ -z "$ARCH" ]; then
@@ -31,4 +38,7 @@ if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
 fi
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
-cross test --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features test,$FEATURES --locked 'export_bindings_'
+rust-zig-builder cargo test --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features test,$FEATURES --locked 'export_bindings_'
+if [ "$(ls -nd "core/startos/bindings" | awk '{ print $3 }')" != "$UID" ]; then
+  rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID core/startos/bindings && chown -R $UID:$UID /usr/local/cargo"
+fi


@@ -2,12 +2,19 @@
 cd "$(dirname "${BASH_SOURCE[0]}")"
+source ./builder-alias.sh
 set -ea
 shopt -s expand_aliases
 PROFILE=${PROFILE:-release}
 if [ "${PROFILE}" = "release" ]; then
   BUILD_FLAGS="--release"
+else
+  if [ "$PROFILE" != "debug" ]; then
+    >&2 echo "Unknown profile $PROFILE: falling back to debug..."
+    PROFILE=debug
+  fi
 fi
 if [ -z "$ARCH" ]; then
@@ -33,4 +40,7 @@ fi
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
-cross build --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features cli-tunnel,tunnel,$FEATURES --locked --bin tunnelbox --target=$RUST_ARCH-unknown-linux-musl
+rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin tunnelbox --target=$RUST_ARCH-unknown-linux-musl
+if [ "$(ls -nd "core/target/$RUST_ARCH-unknown-linux-musl/$PROFILE/tunnelbox" | awk '{ print $3 }')" != "$UID" ]; then
+  rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID /usr/local/cargo"
+fi


@@ -1,3 +1,8 @@
 #!/bin/bash
-alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -e SCCACHE_GHA_ENABLED -e SCCACHE_GHA_VERSION -e ACTIONS_RESULTS_URL -e ACTIONS_RUNTIME_TOKEN -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$HOME/.cache/sccache":/root/.cache/sccache -v "$(pwd)":/home/rust/src -w /home/rust/src -P start9/rust-musl-cross:$ARCH-musl'
+USE_TTY=
+if tty -s; then
+  USE_TTY="-it"
+fi
+alias 'rust-zig-builder'='docker run '"$USE_TTY"' --rm -e "RUSTFLAGS=$RUSTFLAGS" -e "PKG_CONFIG_SYSROOT_DIR=/opt/sysroot/$ARCH" -e PKG_CONFIG_PATH="" -e PKG_CONFIG_LIBDIR="/opt/sysroot/$ARCH/usr/lib/pkgconfig" -e "AWS_LC_SYS_CMAKE_TOOLCHAIN_FILE_riscv64gc_unknown_linux_musl=/root/cmake-overrides/toolchain-riscv64-musl-clang.cmake" -e SCCACHE_GHA_ENABLED -e SCCACHE_GHA_VERSION -e ACTIONS_RESULTS_URL -e ACTIONS_RUNTIME_TOKEN -v "$HOME/.cargo/registry":/usr/local/cargo/registry -v "$HOME/.cargo/git":/usr/local/cargo/git -v "$HOME/.cache/sccache":/root/.cache/sccache -v "$HOME/.cache/cargo-zigbuild:/root/.cache/cargo-zigbuild" -v "$(pwd)":/workdir -w /workdir start9/cargo-zigbuild'
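The builder alias passes `-it` to `docker run` only when stdin is a TTY, so the same scripts work both interactively and under CI. The detection logic can be factored into a small function (a hypothetical helper, shown here so both branches can be exercised without a real terminal):

```shell
#!/bin/bash
# `tty_flags` emits the docker flags appropriate for the current session.
tty_flags() {
  # $1...: a command that succeeds when a TTY is present (normally `tty -s`)
  if "$@"; then
    echo "-it"
  fi
}

interactive=$(tty_flags true)  # pretend stdin is a TTY
batch=$(tty_flags false)       # pretend it is not (e.g. a CI runner)
```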


@@ -1,19 +0,0 @@
[package]
name = "helpers"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
color-eyre = "0.6.2"
futures = "0.3.28"
lazy_async_pool = "0.3.3"
models = { path = "../models" }
pin-project = "1.1.3"
rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git", branch = "master" }
serde = { version = "1.0", features = ["derive", "rc"] }
serde_json = "1.0"
tokio = { version = "1", features = ["full"] }
tokio-stream = { version = "0.1.14", features = ["io-util", "sync"] }
tracing = "0.1.39"


@@ -1,31 +0,0 @@
use std::task::Poll;
use tokio::io::{AsyncRead, ReadBuf};
#[pin_project::pin_project]
pub struct ByteReplacementReader<R> {
pub replace: u8,
pub with: u8,
#[pin]
pub inner: R,
}
impl<R: AsyncRead> AsyncRead for ByteReplacementReader<R> {
fn poll_read(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
buf: &mut ReadBuf<'_>,
) -> std::task::Poll<std::io::Result<()>> {
let this = self.project();
match this.inner.poll_read(cx, buf) {
Poll::Ready(Ok(())) => {
for idx in 0..buf.filled().len() {
if buf.filled()[idx] == *this.replace {
buf.filled_mut()[idx] = *this.with;
}
}
Poll::Ready(Ok(()))
}
a => a,
}
}
}


@@ -1,262 +0,0 @@
use std::future::Future;
use std::ops::{Deref, DerefMut};
use std::path::{Path, PathBuf};
use std::time::Duration;
use color_eyre::eyre::{eyre, Context, Error};
use futures::future::BoxFuture;
use futures::FutureExt;
use models::ResultExt;
use tokio::fs::File;
use tokio::sync::oneshot;
use tokio::task::{JoinError, JoinHandle, LocalSet};
mod byte_replacement_reader;
mod rsync;
mod script_dir;
pub use byte_replacement_reader::*;
pub use rsync::*;
pub use script_dir::*;
pub fn const_true() -> bool {
true
}
pub fn to_tmp_path(path: impl AsRef<Path>) -> Result<PathBuf, Error> {
let path = path.as_ref();
if let (Some(parent), Some(file_name)) =
(path.parent(), path.file_name().and_then(|f| f.to_str()))
{
Ok(parent.join(format!(".{}.tmp", file_name)))
} else {
Err(eyre!("invalid path: {}", path.display()))
}
}
pub async fn canonicalize(
path: impl AsRef<Path> + Send + Sync,
create_parent: bool,
) -> Result<PathBuf, Error> {
fn create_canonical_folder<'a>(
path: impl AsRef<Path> + Send + Sync + 'a,
) -> BoxFuture<'a, Result<PathBuf, Error>> {
async move {
let path = canonicalize(path, true).await?;
tokio::fs::create_dir(&path)
.await
.with_context(|| path.display().to_string())?;
Ok(path)
}
.boxed()
}
let path = path.as_ref();
if tokio::fs::metadata(path).await.is_err() {
let parent = path.parent().unwrap_or(Path::new("."));
if let Some(file_name) = path.file_name() {
if create_parent && tokio::fs::metadata(parent).await.is_err() {
return Ok(create_canonical_folder(parent).await?.join(file_name));
} else {
return Ok(tokio::fs::canonicalize(parent)
.await
.with_context(|| parent.display().to_string())?
.join(file_name));
}
}
}
tokio::fs::canonicalize(&path)
.await
.with_context(|| path.display().to_string())
}
#[pin_project::pin_project(PinnedDrop)]
pub struct NonDetachingJoinHandle<T>(#[pin] JoinHandle<T>);
impl<T> NonDetachingJoinHandle<T> {
pub async fn wait_for_abort(self) -> Result<T, JoinError> {
self.abort();
self.await
}
}
impl<T> From<JoinHandle<T>> for NonDetachingJoinHandle<T> {
fn from(t: JoinHandle<T>) -> Self {
NonDetachingJoinHandle(t)
}
}
impl<T> Deref for NonDetachingJoinHandle<T> {
type Target = JoinHandle<T>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl<T> DerefMut for NonDetachingJoinHandle<T> {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
#[pin_project::pinned_drop]
impl<T> PinnedDrop for NonDetachingJoinHandle<T> {
fn drop(self: std::pin::Pin<&mut Self>) {
let this = self.project();
this.0.into_ref().get_ref().abort()
}
}
impl<T> Future for NonDetachingJoinHandle<T> {
type Output = Result<T, JoinError>;
fn poll(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Self::Output> {
let this = self.project();
this.0.poll(cx)
}
}
pub struct AtomicFile {
tmp_path: PathBuf,
path: PathBuf,
file: Option<File>,
}
impl AtomicFile {
pub async fn new(
path: impl AsRef<Path> + Send + Sync,
tmp_path: Option<impl AsRef<Path> + Send + Sync>,
) -> Result<Self, Error> {
let path = canonicalize(&path, true).await?;
let tmp_path = if let Some(tmp_path) = tmp_path {
canonicalize(&tmp_path, true).await?
} else {
to_tmp_path(&path)?
};
let file = File::create(&tmp_path)
.await
.with_context(|| tmp_path.display().to_string())?;
Ok(Self {
tmp_path,
path,
file: Some(file),
})
}
pub async fn rollback(mut self) -> Result<(), Error> {
drop(self.file.take());
tokio::fs::remove_file(&self.tmp_path)
.await
.with_context(|| format!("rm {}", self.tmp_path.display()))?;
Ok(())
}
pub async fn save(mut self) -> Result<(), Error> {
use tokio::io::AsyncWriteExt;
if let Some(file) = self.file.as_mut() {
file.flush().await?;
file.shutdown().await?;
file.sync_all().await?;
}
drop(self.file.take());
tokio::fs::rename(&self.tmp_path, &self.path)
.await
.with_context(|| {
format!("mv {} -> {}", self.tmp_path.display(), self.path.display())
})?;
Ok(())
}
}
impl std::ops::Deref for AtomicFile {
type Target = File;
fn deref(&self) -> &Self::Target {
self.file.as_ref().unwrap()
}
}
impl std::ops::DerefMut for AtomicFile {
fn deref_mut(&mut self) -> &mut Self::Target {
self.file.as_mut().unwrap()
}
}
impl Drop for AtomicFile {
fn drop(&mut self) {
if let Some(file) = self.file.take() {
drop(file);
let path = std::mem::take(&mut self.tmp_path);
tokio::spawn(async move { tokio::fs::remove_file(path).await.log_err() });
}
}
}
pub struct TimedResource<T: 'static + Send> {
handle: NonDetachingJoinHandle<Option<T>>,
ready: oneshot::Sender<()>,
}
impl<T: 'static + Send> TimedResource<T> {
pub fn new(resource: T, timer: Duration) -> Self {
let (send, recv) = oneshot::channel();
let handle = tokio::spawn(async move {
tokio::select! {
_ = tokio::time::sleep(timer) => {
drop(resource);
None
},
_ = recv => Some(resource),
}
});
Self {
handle: handle.into(),
ready: send,
}
}
pub fn new_with_destructor<
Fn: FnOnce(T) -> Fut + Send + 'static,
Fut: Future<Output = ()> + Send,
>(
resource: T,
timer: Duration,
destructor: Fn,
) -> Self {
let (send, recv) = oneshot::channel();
let handle = tokio::spawn(async move {
tokio::select! {
_ = tokio::time::sleep(timer) => {
destructor(resource).await;
None
},
_ = recv => Some(resource),
}
});
Self {
handle: handle.into(),
ready: send,
}
}
pub async fn get(self) -> Option<T> {
let _ = self.ready.send(());
self.handle.await.unwrap()
}
pub fn is_timed_out(&self) -> bool {
self.ready.is_closed()
}
}
pub async fn spawn_local<
T: 'static + Send,
F: FnOnce() -> Fut + Send + 'static,
Fut: Future<Output = T> + 'static,
>(
fut: F,
) -> NonDetachingJoinHandle<T> {
let (send, recv) = tokio::sync::oneshot::channel();
std::thread::spawn(move || {
tokio::runtime::Builder::new_current_thread()
.enable_all()
.build()
.unwrap()
.block_on(async move {
let set = LocalSet::new();
send.send(set.spawn_local(fut()).into())
.unwrap_or_else(|_| unreachable!());
set.await
})
});
recv.await.unwrap()
}


@@ -1,82 +0,0 @@
use std::sync::Arc;
use color_eyre::Report;
use models::InterfaceId;
use models::PackageId;
use serde_json::Value;
use tokio::sync::mpsc;
pub struct RuntimeDropped;
pub struct Callback {
id: Arc<String>,
sender: mpsc::UnboundedSender<(Arc<String>, Vec<Value>)>,
}
impl Callback {
pub fn new(id: String, sender: mpsc::UnboundedSender<(Arc<String>, Vec<Value>)>) -> Self {
Self {
id: Arc::new(id),
sender,
}
}
pub fn is_listening(&self) -> bool {
self.sender.is_closed()
}
pub fn call(&self, args: Vec<Value>) -> Result<(), RuntimeDropped> {
self.sender
.send((self.id.clone(), args))
.map_err(|_| RuntimeDropped)
}
}
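The dispatch pattern above can be exercised with a std channel: each call forwards an `(id, args)` tuple, and a failed send surfaces as the runtime having gone away. `Value` is stubbed as `String` here in place of `serde_json::Value`; this is a sketch, not the crate's code.

```rust
use std::sync::mpsc;
use std::sync::Arc;

// Stand-in for serde_json::Value in this sketch.
type Value = String;

// Returned when the receiving runtime has dropped its end of the channel.
#[derive(Debug)]
struct RuntimeDropped;

struct Callback {
    id: Arc<String>,
    sender: mpsc::Sender<(Arc<String>, Vec<Value>)>,
}

impl Callback {
    fn new(id: String, sender: mpsc::Sender<(Arc<String>, Vec<Value>)>) -> Self {
        Self { id: Arc::new(id), sender }
    }

    fn call(&self, args: Vec<Value>) -> Result<(), RuntimeDropped> {
        // The id is an Arc so repeated calls share one allocation.
        self.sender
            .send((self.id.clone(), args))
            .map_err(|_| RuntimeDropped)
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let cb = Callback::new("on-config".into(), tx);
    cb.call(vec!["hello".into()]).unwrap();

    let (id, args) = rx.recv().unwrap();
    assert_eq!(&*id, "on-config");
    assert_eq!(args, vec![String::from("hello")]);

    drop(rx); // runtime gone: further calls report RuntimeDropped
    assert!(cb.call(vec![]).is_err());
    println!("ok");
}
```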
#[derive(serde::Deserialize, serde::Serialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct AddressSchemaOnion {
pub id: InterfaceId,
pub external_port: u16,
}
#[derive(serde::Deserialize, serde::Serialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct AddressSchemaLocal {
pub id: InterfaceId,
pub external_port: u16,
}
#[derive(serde::Deserialize, serde::Serialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct Address(pub String);
#[derive(serde::Deserialize, serde::Serialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct Domain;
#[derive(serde::Deserialize, serde::Serialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct Name;
#[async_trait::async_trait]
#[allow(unused_variables)]
pub trait OsApi: Send + Sync + 'static {
async fn get_service_config(
&self,
id: PackageId,
path: &str,
callback: Option<Callback>,
) -> Result<Vec<Value>, Report>;
async fn bind_local(
&self,
internal_port: u16,
address_schema: AddressSchemaLocal,
) -> Result<Address, Report>;
async fn bind_onion(
&self,
internal_port: u16,
address_schema: AddressSchemaOnion,
) -> Result<Address, Report>;
async fn unbind_local(&self, id: InterfaceId, external: u16) -> Result<(), Report>;
async fn unbind_onion(&self, id: InterfaceId, external: u16) -> Result<(), Report>;
fn set_started(&self) -> Result<(), Report>;
async fn restart(&self) -> Result<(), Report>;
async fn start(&self) -> Result<(), Report>;
async fn stop(&self) -> Result<(), Report>;
}


@@ -1,17 +0,0 @@
use std::path::{Path, PathBuf};
use models::{PackageId, VersionString};
pub const PKG_SCRIPT_DIR: &str = "package-data/scripts";
pub fn script_dir<P: AsRef<Path>>(
datadir: P,
pkg_id: &PackageId,
version: &VersionString,
) -> PathBuf {
datadir
.as_ref()
.join(PKG_SCRIPT_DIR)
.join(pkg_id)
.join(version.as_str())
}


@@ -1,19 +0,0 @@
#!/bin/bash
cd "$(dirname "${BASH_SOURCE[0]}")"
set -ea
shopt -s expand_aliases
web="../web/dist/static"
[ -d "$web" ] || mkdir -p "$web"
if [ -z "$PLATFORM" ]; then
PLATFORM=$(uname -m)
fi
if [ "$PLATFORM" = "arm64" ]; then
PLATFORM="aarch64"
fi
cargo install --path=./startos --no-default-features --features=cli,docker --bin start-cli --locked


@@ -1,46 +0,0 @@
[package]
edition = "2021"
name = "models"
version = "0.1.0"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[features]
arti = ["arti-client"]
[dependencies]
arti-client = { version = "0.33", default-features = false, git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
axum = "0.8.4"
base64 = "0.22.1"
color-eyre = "0.6.2"
ed25519-dalek = { version = "2.0.0", features = ["serde"] }
exver = { version = "0.2.0", git = "https://github.com/Start9Labs/exver-rs.git", features = [
"serde",
] }
gpt = "4.1.0"
ipnet = "2.8.0"
lazy_static = "1.4"
lettre = { version = "0.11", default-features = false }
mbrman = "0.6.0"
miette = "7.6.0"
num_enum = "0.7.1"
openssl = { version = "0.10.57", features = ["vendored"] }
patch-db = { version = "*", path = "../../patch-db/patch-db", features = [
"trace",
] }
rand = "0.9.1"
regex = "1.10.2"
reqwest = "0.12"
rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git", branch = "master" }
rustls = "0.23"
serde = { version = "1.0", features = ["derive", "rc"] }
serde_json = "1.0"
ssh-key = "0.6.2"
thiserror = "2.0"
tokio = { version = "1", features = ["full"] }
torut = "0.2.1"
tracing = "0.1.39"
ts-rs = "9"
typeid = "1"
yasi = { version = "0.1.6", features = ["serde", "ts-rs"] }
zbus = "5"


@@ -1,669 +0,0 @@
use std::fmt::{Debug, Display};
use axum::http::uri::InvalidUri;
use axum::http::StatusCode;
use color_eyre::eyre::eyre;
use num_enum::TryFromPrimitive;
use patch_db::Revision;
use rpc_toolkit::reqwest;
use rpc_toolkit::yajrc::{
RpcError, INVALID_PARAMS_ERROR, INVALID_REQUEST_ERROR, METHOD_NOT_FOUND_ERROR, PARSE_ERROR,
};
use serde::{Deserialize, Serialize};
use tokio::task::JoinHandle;
use crate::InvalidId;
#[derive(Debug, Clone, Copy, PartialEq, Eq, TryFromPrimitive)]
#[repr(i32)]
pub enum ErrorKind {
Unknown = 1,
Filesystem = 2,
Docker = 3,
ConfigSpecViolation = 4,
ConfigRulesViolation = 5,
NotFound = 6,
IncorrectPassword = 7,
VersionIncompatible = 8,
Network = 9,
Registry = 10,
Serialization = 11,
Deserialization = 12,
Utf8 = 13,
ParseVersion = 14,
IncorrectDisk = 15,
// Nginx = 16,
Dependency = 17,
ParseS9pk = 18,
ParseUrl = 19,
DiskNotAvailable = 20,
BlockDevice = 21,
InvalidOnionAddress = 22,
Pack = 23,
ValidateS9pk = 24,
DiskCorrupted = 25, // Remove
Tor = 26,
ConfigGen = 27,
ParseNumber = 28,
Database = 29,
InvalidId = 30,
InvalidSignature = 31,
Backup = 32,
Restore = 33,
Authorization = 34,
AutoConfigure = 35,
Action = 36,
RateLimited = 37,
InvalidRequest = 38,
MigrationFailed = 39,
Uninitialized = 40,
ParseNetAddress = 41,
ParseSshKey = 42,
SoundError = 43,
ParseTimestamp = 44,
ParseSysInfo = 45,
Wifi = 46,
Journald = 47,
DiskManagement = 48,
OpenSsl = 49,
PasswordHashGeneration = 50,
DiagnosticMode = 51,
ParseDbField = 52,
Duplicate = 53,
MultipleErrors = 54,
Incoherent = 55,
InvalidBackupTargetId = 56,
ProductKeyMismatch = 57,
LanPortConflict = 58,
Javascript = 59,
Pem = 60,
TLSInit = 61,
Ascii = 62,
MissingHeader = 63,
Grub = 64,
Systemd = 65,
OpenSsh = 66,
Zram = 67,
Lshw = 68,
CpuSettings = 69,
Firmware = 70,
Timeout = 71,
Lxc = 72,
Cancelled = 73,
Git = 74,
DBus = 75,
InstallFailed = 76,
UpdateFailed = 77,
Smtp = 78,
}
impl ErrorKind {
pub fn as_str(&self) -> &'static str {
use ErrorKind::*;
match self {
Unknown => "Unknown Error",
Filesystem => "Filesystem I/O Error",
Docker => "Docker Error",
ConfigSpecViolation => "Config Spec Violation",
ConfigRulesViolation => "Config Rules Violation",
NotFound => "Not Found",
IncorrectPassword => "Incorrect Password",
VersionIncompatible => "Version Incompatible",
Network => "Network Error",
Registry => "Registry Error",
Serialization => "Serialization Error",
Deserialization => "Deserialization Error",
Utf8 => "UTF-8 Parse Error",
ParseVersion => "Version Parsing Error",
IncorrectDisk => "Incorrect Disk",
// Nginx => "Nginx Error",
Dependency => "Dependency Error",
ParseS9pk => "S9PK Parsing Error",
ParseUrl => "URL Parsing Error",
DiskNotAvailable => "Disk Not Available",
BlockDevice => "Block Device Error",
InvalidOnionAddress => "Invalid Onion Address",
Pack => "Pack Error",
ValidateS9pk => "S9PK Validation Error",
DiskCorrupted => "Disk Corrupted", // Remove
Tor => "Tor Daemon Error",
ConfigGen => "Config Generation Error",
ParseNumber => "Number Parsing Error",
Database => "Database Error",
InvalidId => "Invalid ID",
InvalidSignature => "Invalid Signature",
Backup => "Backup Error",
Restore => "Restore Error",
Authorization => "Unauthorized",
AutoConfigure => "Auto-Configure Error",
Action => "Action Failed",
RateLimited => "Rate Limited",
InvalidRequest => "Invalid Request",
MigrationFailed => "Migration Failed",
Uninitialized => "Uninitialized",
ParseNetAddress => "Net Address Parsing Error",
ParseSshKey => "SSH Key Parsing Error",
SoundError => "Sound Interface Error",
ParseTimestamp => "Timestamp Parsing Error",
ParseSysInfo => "System Info Parsing Error",
Wifi => "WiFi Internal Error",
Journald => "Journald Error",
DiskManagement => "Disk Management Error",
OpenSsl => "OpenSSL Internal Error",
PasswordHashGeneration => "Password Hash Generation Error",
DiagnosticMode => "Server is in Diagnostic Mode",
ParseDbField => "Database Field Parse Error",
Duplicate => "Duplication Error",
MultipleErrors => "Multiple Errors",
Incoherent => "Incoherent",
InvalidBackupTargetId => "Invalid Backup Target ID",
ProductKeyMismatch => "Incompatible Product Keys",
LanPortConflict => "Incompatible LAN Port Configuration",
Javascript => "Javascript Engine Error",
Pem => "PEM Encoding Error",
TLSInit => "TLS Backend Initialization Error",
Ascii => "ASCII Parse Error",
MissingHeader => "Missing Header",
Grub => "Grub Error",
Systemd => "Systemd Error",
OpenSsh => "OpenSSH Error",
Zram => "Zram Error",
Lshw => "LSHW Error",
CpuSettings => "CPU Settings Error",
Firmware => "Firmware Error",
Timeout => "Timeout Error",
Lxc => "LXC Error",
Cancelled => "Cancelled",
Git => "Git Error",
DBus => "DBus Error",
InstallFailed => "Install Failed",
UpdateFailed => "Update Failed",
Smtp => "SMTP Error",
}
}
}
impl Display for ErrorKind {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.as_str())
}
}
pub struct Error {
pub source: color_eyre::eyre::Error,
pub debug: Option<color_eyre::eyre::Error>,
pub kind: ErrorKind,
pub revision: Option<Revision>,
pub task: Option<JoinHandle<()>>,
}
impl Display for Error {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}: {:#}", self.kind.as_str(), self.source)
}
}
impl Debug for Error {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{}: {:?}",
self.kind.as_str(),
self.debug.as_ref().unwrap_or(&self.source)
)
}
}
impl Error {
pub fn new<E: Into<color_eyre::eyre::Error> + std::fmt::Debug + 'static>(
source: E,
kind: ErrorKind,
) -> Self {
let debug = (typeid::of::<E>() == typeid::of::<color_eyre::eyre::Error>())
.then(|| eyre!("{source:?}"));
Error {
source: source.into(),
debug,
kind,
revision: None,
task: None,
}
}
pub fn clone_output(&self) -> Self {
Error {
source: eyre!("{}", self.source),
debug: self.debug.as_ref().map(|e| eyre!("{e}")),
kind: self.kind,
revision: self.revision.clone(),
task: None,
}
}
pub fn with_task(mut self, task: JoinHandle<()>) -> Self {
self.task = Some(task);
self
}
pub async fn wait(mut self) -> Self {
if let Some(task) = &mut self.task {
task.await.log_err();
}
self.task.take();
self
}
}
impl axum::response::IntoResponse for Error {
fn into_response(self) -> axum::response::Response {
let mut res = axum::Json(RpcError::from(self)).into_response();
*res.status_mut() = StatusCode::INTERNAL_SERVER_ERROR;
res
}
}
impl From<std::convert::Infallible> for Error {
fn from(value: std::convert::Infallible) -> Self {
match value {}
}
}
impl From<InvalidId> for Error {
fn from(err: InvalidId) -> Self {
Error::new(err, ErrorKind::InvalidId)
}
}
impl From<std::io::Error> for Error {
fn from(e: std::io::Error) -> Self {
Error::new(e, ErrorKind::Filesystem)
}
}
impl From<std::str::Utf8Error> for Error {
fn from(e: std::str::Utf8Error) -> Self {
Error::new(e, ErrorKind::Utf8)
}
}
impl From<std::string::FromUtf8Error> for Error {
fn from(e: std::string::FromUtf8Error) -> Self {
Error::new(e, ErrorKind::Utf8)
}
}
impl From<exver::ParseError> for Error {
fn from(e: exver::ParseError) -> Self {
Error::new(e, ErrorKind::ParseVersion)
}
}
impl From<rpc_toolkit::url::ParseError> for Error {
fn from(e: rpc_toolkit::url::ParseError) -> Self {
Error::new(e, ErrorKind::ParseUrl)
}
}
impl From<std::num::ParseIntError> for Error {
fn from(e: std::num::ParseIntError) -> Self {
Error::new(e, ErrorKind::ParseNumber)
}
}
impl From<std::num::ParseFloatError> for Error {
fn from(e: std::num::ParseFloatError) -> Self {
Error::new(e, ErrorKind::ParseNumber)
}
}
impl From<patch_db::Error> for Error {
fn from(e: patch_db::Error) -> Self {
Error::new(e, ErrorKind::Database)
}
}
impl From<ed25519_dalek::SignatureError> for Error {
fn from(e: ed25519_dalek::SignatureError) -> Self {
Error::new(e, ErrorKind::InvalidSignature)
}
}
impl From<std::net::AddrParseError> for Error {
fn from(e: std::net::AddrParseError) -> Self {
Error::new(e, ErrorKind::ParseNetAddress)
}
}
impl From<ipnet::AddrParseError> for Error {
fn from(e: ipnet::AddrParseError) -> Self {
Error::new(e, ErrorKind::ParseNetAddress)
}
}
impl From<openssl::error::ErrorStack> for Error {
fn from(e: openssl::error::ErrorStack) -> Self {
Error::new(eyre!("{}", e), ErrorKind::OpenSsl)
}
}
impl From<mbrman::Error> for Error {
fn from(e: mbrman::Error) -> Self {
Error::new(e, ErrorKind::DiskManagement)
}
}
impl From<gpt::GptError> for Error {
fn from(e: gpt::GptError) -> Self {
Error::new(e, ErrorKind::DiskManagement)
}
}
impl From<gpt::mbr::MBRError> for Error {
fn from(e: gpt::mbr::MBRError) -> Self {
Error::new(e, ErrorKind::DiskManagement)
}
}
impl From<InvalidUri> for Error {
fn from(e: InvalidUri) -> Self {
Error::new(eyre!("{}", e), ErrorKind::ParseUrl)
}
}
impl From<ssh_key::Error> for Error {
fn from(e: ssh_key::Error) -> Self {
Error::new(e, ErrorKind::OpenSsh)
}
}
impl From<reqwest::Error> for Error {
fn from(e: reqwest::Error) -> Self {
let kind = match e {
_ if e.is_builder() => ErrorKind::ParseUrl,
_ if e.is_decode() => ErrorKind::Deserialization,
_ => ErrorKind::Network,
};
Error::new(e, kind)
}
}
#[cfg(feature = "arti")]
impl From<arti_client::Error> for Error {
fn from(e: arti_client::Error) -> Self {
Error::new(e, ErrorKind::Tor)
}
}
impl From<torut::control::ConnError> for Error {
fn from(e: torut::control::ConnError) -> Self {
Error::new(e, ErrorKind::Tor)
}
}
impl From<zbus::Error> for Error {
fn from(e: zbus::Error) -> Self {
Error::new(e, ErrorKind::DBus)
}
}
impl From<rustls::Error> for Error {
fn from(e: rustls::Error) -> Self {
Error::new(e, ErrorKind::OpenSsl)
}
}
impl From<lettre::error::Error> for Error {
fn from(e: lettre::error::Error) -> Self {
Error::new(e, ErrorKind::Smtp)
}
}
impl From<lettre::transport::smtp::Error> for Error {
fn from(e: lettre::transport::smtp::Error) -> Self {
Error::new(e, ErrorKind::Smtp)
}
}
impl From<lettre::address::AddressError> for Error {
fn from(e: lettre::address::AddressError) -> Self {
Error::new(e, ErrorKind::Smtp)
}
}
impl From<patch_db::value::Error> for Error {
fn from(value: patch_db::value::Error) -> Self {
match value.kind {
patch_db::value::ErrorKind::Serialization => {
Error::new(value.source, ErrorKind::Serialization)
}
patch_db::value::ErrorKind::Deserialization => {
Error::new(value.source, ErrorKind::Deserialization)
}
}
}
}
#[derive(Clone, Deserialize, Serialize)]
pub struct ErrorData {
pub details: String,
pub debug: String,
}
impl Display for ErrorData {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Display::fmt(&self.details, f)
}
}
impl Debug for ErrorData {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Display::fmt(&self.debug, f)
}
}
impl std::error::Error for ErrorData {}
impl From<Error> for ErrorData {
fn from(value: Error) -> Self {
Self {
details: value.to_string(),
debug: format!("{:?}", value),
}
}
}
impl From<&RpcError> for ErrorData {
fn from(value: &RpcError) -> Self {
Self {
details: value
.data
.as_ref()
.and_then(|d| {
d.as_object()
.and_then(|d| {
d.get("details")
.and_then(|d| d.as_str().map(|s| s.to_owned()))
})
.or_else(|| d.as_str().map(|s| s.to_owned()))
})
.unwrap_or_else(|| value.message.clone().into_owned()),
debug: value
.data
.as_ref()
.and_then(|d| {
d.as_object()
.and_then(|d| {
d.get("debug")
.and_then(|d| d.as_str().map(|s| s.to_owned()))
})
.or_else(|| d.as_str().map(|s| s.to_owned()))
})
.unwrap_or_else(|| value.message.clone().into_owned()),
}
}
}
impl From<Error> for RpcError {
fn from(e: Error) -> Self {
let mut data_object = serde_json::Map::with_capacity(3);
data_object.insert("details".to_owned(), format!("{}", e.source).into());
data_object.insert("debug".to_owned(), format!("{:?}", e.source).into());
data_object.insert(
"revision".to_owned(),
match serde_json::to_value(&e.revision) {
Ok(a) => a,
Err(e) => {
tracing::warn!("Error serializing revision for Error object: {}", e);
serde_json::Value::Null
}
},
);
RpcError {
code: e.kind as i32,
message: e.kind.as_str().into(),
data: Some(
match serde_json::to_value(&ErrorData {
details: format!("{}", e.source),
debug: format!("{:?}", e.source),
}) {
Ok(a) => a,
Err(e) => {
tracing::warn!("Error serializing error data for Error object: {}", e);
serde_json::Value::Null
}
},
),
}
}
}
impl From<RpcError> for Error {
fn from(e: RpcError) -> Self {
Error::new(
ErrorData::from(&e),
if let Ok(kind) = e.code.try_into() {
kind
} else if e.code == METHOD_NOT_FOUND_ERROR.code {
ErrorKind::NotFound
} else if e.code == PARSE_ERROR.code
|| e.code == INVALID_PARAMS_ERROR.code
|| e.code == INVALID_REQUEST_ERROR.code
{
ErrorKind::Deserialization
} else {
ErrorKind::Unknown
},
)
}
}
#[derive(Debug, Default)]
pub struct ErrorCollection(Vec<Error>);
impl ErrorCollection {
pub fn new() -> Self {
Self::default()
}
pub fn handle<T, E: Into<Error>>(&mut self, result: Result<T, E>) -> Option<T> {
match result {
Ok(a) => Some(a),
Err(e) => {
self.0.push(e.into());
None
}
}
}
pub fn into_result(self) -> Result<(), Error> {
if self.0.is_empty() {
Ok(())
} else {
Err(Error::new(eyre!("{}", self), ErrorKind::MultipleErrors))
}
}
}
impl From<ErrorCollection> for Result<(), Error> {
fn from(e: ErrorCollection) -> Self {
e.into_result()
}
}
impl<T, E: Into<Error>> Extend<Result<T, E>> for ErrorCollection {
fn extend<I: IntoIterator<Item = Result<T, E>>>(&mut self, iter: I) {
for item in iter {
self.handle(item);
}
}
}
impl std::fmt::Display for ErrorCollection {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
for (idx, e) in self.0.iter().enumerate() {
if idx > 0 {
write!(f, "; ")?;
}
write!(f, "{}", e)?;
}
Ok(())
}
}
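`ErrorCollection` lets a multi-step operation (cleanup, batch processing) keep going after individual failures and report them all at once as a single `MultipleErrors`. A minimal usage sketch with plain `String` errors standing in for the crate's `Error` type:

```rust
// Std-only analogue of ErrorCollection: accumulate failures, then
// collapse into a single "; "-joined error at the end.
#[derive(Debug, Default)]
struct ErrorCollection(Vec<String>);

impl ErrorCollection {
    fn handle<T, E: std::fmt::Display>(&mut self, result: Result<T, E>) -> Option<T> {
        match result {
            Ok(a) => Some(a),
            Err(e) => {
                self.0.push(e.to_string());
                None
            }
        }
    }

    fn into_result(self) -> Result<(), String> {
        if self.0.is_empty() {
            Ok(())
        } else {
            Err(self.0.join("; "))
        }
    }
}

fn main() {
    let mut errs = ErrorCollection::default();
    // Keep processing every entry even when some fail.
    let parsed: Vec<u32> = ["1", "x", "3", "y"]
        .iter()
        .filter_map(|s| errs.handle(s.parse::<u32>()))
        .collect();
    assert_eq!(parsed, vec![1, 3]);
    assert!(errs.into_result().is_err()); // both parse failures reported together
    println!("ok");
}
```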
pub trait ResultExt<T, E>
where
Self: Sized,
{
fn with_kind(self, kind: ErrorKind) -> Result<T, Error>;
fn with_ctx<F: FnOnce(&E) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error>;
fn log_err(self) -> Option<T>;
}
impl<T, E> ResultExt<T, E> for Result<T, E>
where
color_eyre::eyre::Error: From<E>,
E: std::fmt::Debug + 'static,
{
fn with_kind(self, kind: ErrorKind) -> Result<T, Error> {
self.map_err(|e| Error::new(e, kind))
}
fn with_ctx<F: FnOnce(&E) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> {
self.map_err(|e| {
let (kind, ctx) = f(&e);
let debug = (typeid::of::<E>() == typeid::of::<color_eyre::eyre::Error>())
.then(|| eyre!("{ctx}: {e:?}"));
let source = color_eyre::eyre::Error::from(e);
let with_ctx = format!("{ctx}: {source}");
let source = source.wrap_err(with_ctx);
Error {
kind,
source,
debug,
revision: None,
task: None,
}
})
}
fn log_err(self) -> Option<T> {
match self {
Ok(a) => Some(a),
Err(e) => {
let e: color_eyre::eyre::Error = e.into();
tracing::error!("{e}");
tracing::debug!("{e:?}");
None
}
}
}
}
impl<T> ResultExt<T, Error> for Result<T, Error> {
fn with_kind(self, kind: ErrorKind) -> Result<T, Error> {
self.map_err(|e| Error { kind, ..e })
}
fn with_ctx<F: FnOnce(&Error) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> {
self.map_err(|e| {
let (kind, ctx) = f(&e);
let source = e.source;
let with_ctx = format!("{ctx}: {source}");
let source = source.wrap_err(with_ctx);
let debug = e.debug.map(|e| {
let with_ctx = format!("{ctx}: {e}");
e.wrap_err(with_ctx)
});
Error {
kind,
source,
debug,
..e
}
})
}
fn log_err(self) -> Option<T> {
match self {
Ok(a) => Some(a),
Err(e) => {
tracing::error!("{e}");
tracing::debug!("{e:?}");
None
}
}
}
}
pub trait OptionExt<T>
where
Self: Sized,
{
fn or_not_found(self, message: impl std::fmt::Display) -> Result<T, Error>;
}
impl<T> OptionExt<T> for Option<T> {
fn or_not_found(self, message: impl std::fmt::Display) -> Result<T, Error> {
self.ok_or_else(|| Error::new(eyre!("{}", message), ErrorKind::NotFound))
}
}
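`or_not_found` turns a missing lookup into a typed `NotFound` error carrying a human-readable message, and `ok_or_else` defers formatting until the value is actually absent. A sketch of the same extension trait with a plain `String` error in place of the crate's `Error`:

```rust
use std::collections::HashMap;

// Std-only sketch of the OptionExt::or_not_found extension.
trait OptionExt<T>: Sized {
    fn or_not_found(self, message: impl std::fmt::Display) -> Result<T, String>;
}

impl<T> OptionExt<T> for Option<T> {
    fn or_not_found(self, message: impl std::fmt::Display) -> Result<T, String> {
        // Formatting only happens on the None path.
        self.ok_or_else(|| format!("Not Found: {}", message))
    }
}

fn main() {
    let pkgs: HashMap<&str, u16> = HashMap::from([("bitcoind", 8332)]);
    assert_eq!(pkgs.get("bitcoind").copied().or_not_found("bitcoind"), Ok(8332));
    assert_eq!(
        pkgs.get("lnd").copied().or_not_found("lnd"),
        Err(String::from("Not Found: lnd"))
    );
    println!("ok");
}
```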
#[macro_export]
macro_rules! ensure_code {
($x:expr, $c:expr, $fmt:expr $(, $arg:expr)*) => {
if !($x) {
return Err(Error::new(color_eyre::eyre::eyre!($fmt, $($arg, )*), $c));
}
};
}


@@ -1,15 +0,0 @@
mod clap;
mod data_url;
mod errors;
mod id;
mod mime;
mod procedure_name;
mod version;
pub use clap::*;
pub use data_url::*;
pub use errors::*;
pub use id::*;
pub use mime::*;
pub use procedure_name::*;
pub use version::*;


@@ -2,12 +2,19 @@
 cd "$(dirname "${BASH_SOURCE[0]}")"
-source ./builder-alias.sh
 set -ea
 shopt -s expand_aliases
 PROFILE=${PROFILE:-release}
 if [ "${PROFILE}" = "release" ]; then
   BUILD_FLAGS="--release"
+else
+  if [ "$PROFILE" != "debug" ]; then
+    >&2 echo "Unknown profile $PROFILE: falling back to debug..."
+    PROFILE=debug
+  fi
 fi
 if [ -z "$ARCH" ]; then
@@ -31,8 +38,8 @@ if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
   RUSTFLAGS="--cfg tokio_unstable"
 fi
+source ./core/builder-alias.sh
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
-cross test --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=test,$FEATURES --workspace --locked --target=$ARCH-unknown-linux-musl -- --skip export_bindings_
+rust-zig-builder cargo test --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=test,$FEATURES --workspace --locked --lib -- --skip export_bindings_
+rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID /usr/local/cargo"


@@ -15,7 +15,7 @@ license = "MIT"
 name = "start-os"
 readme = "README.md"
 repository = "https://github.com/Start9Labs/start-os"
-version = "0.4.0-alpha.12" # VERSION_BUMP
+version = "0.4.0-alpha.16" # VERSION_BUMP
 [lib]
 name = "startos"
@@ -23,28 +23,27 @@ path = "src/lib.rs"
 [[bin]]
 name = "startbox"
-path = "src/main.rs"
+path = "src/main/startbox.rs"
 [[bin]]
 name = "start-cli"
-path = "src/main.rs"
+path = "src/main/start-cli.rs"
 [[bin]]
-name = "containerbox"
-path = "src/main.rs"
+name = "start-container"
+path = "src/main/start-container.rs"
 [[bin]]
 name = "registrybox"
-path = "src/main.rs"
+path = "src/main/registrybox.rs"
 [[bin]]
 name = "tunnelbox"
-path = "src/main.rs"
+path = "src/main/tunnelbox.rs"
 [features]
 arti = [
     "arti-client",
-    "models/arti",
     "safelog",
     "tor-cell",
     "tor-hscrypto",
@@ -54,19 +53,10 @@ arti = [
     "tor-proto",
     "tor-rtcompat",
 ]
-cli = ["cli-registry", "cli-startd", "cli-tunnel"]
-cli-container = ["procfs", "pty-process"]
-cli-registry = []
-cli-startd = []
-cli-tunnel = []
 console = ["console-subscriber", "tokio/tracing"]
-default = ["cli", "cli-container", "registry", "startd", "tunnel"]
-dev = ["backtrace-on-stack-overflow"]
-docker = []
-registry = []
-startd = []
+default = []
+dev = []
 test = []
-tunnel = []
 unstable = ["backtrace-on-stack-overflow"]
 [dependencies]
@@ -93,10 +83,8 @@ async-compression = { version = "0.4.32", features = [
 ] }
 async-stream = "0.3.5"
 async-trait = "0.1.74"
-aws-lc-sys = { version = "0.32", features = ["bindgen"] }
-axum = { version = "0.8.4", features = ["ws"] }
+axum = { version = "0.8.4", features = ["ws", "http2"] }
 backtrace-on-stack-overflow = { version = "0.3.0", optional = true }
-barrage = "0.2.3"
 base32 = "0.5.0"
 base64 = "0.22.1"
 base64ct = "1.6.0"
@@ -106,16 +94,16 @@ bytes = "1"
 chrono = { version = "0.4.31", features = ["serde"] }
 clap = { version = "4.4.12", features = ["string"] }
 color-eyre = "0.6.2"
-console = "0.15.7"
-console-subscriber = { version = "0.4.1", optional = true }
+console = "0.16.2"
+console-subscriber = { version = "0.5.0", optional = true }
 const_format = "0.2.34"
 cookie = "0.18.0"
-cookie_store = "0.21.0"
+cookie_store = "0.22.0"
 curve25519-dalek = "4.1.3"
 der = { version = "0.7.9", features = ["derive", "pem"] }
 digest = "0.10.7"
 divrem = "1.0.0"
-dns-lookup = "2.1.0"
+dns-lookup = "3.0.1"
 ed25519 = { version = "2.2.3", features = ["alloc", "pem", "pkcs8"] }
 ed25519-dalek = { version = "2.2.0", features = [
     "digest",
@@ -133,7 +121,6 @@ fd-lock-rs = "0.1.4"
 form_urlencoded = "1.2.1"
 futures = "0.3.28"
 gpt = "4.1.0"
-helpers = { path = "../helpers" }
 hex = "0.4.3"
 hickory-client = "0.25.2"
 hickory-server = "0.25.2"
@@ -159,7 +146,7 @@ imbl = { version = "6", features = ["serde", "small-chunks"] }
 imbl-value = { version = "0.4.3", features = ["ts-rs"] }
 include_dir = { version = "0.7.3", features = ["metadata"] }
 indexmap = { version = "2.0.2", features = ["serde"] }
-indicatif = { version = "0.17.7", features = ["tokio"] }
+indicatif = { version = "0.18.3", features = ["tokio"] }
 inotify = "0.11.0"
 integer-encoding = { version = "4.0.0", features = ["tokio_async"] }
 ipnet = { version = "2.8.0", features = ["serde"] }
@@ -186,7 +173,6 @@ log = "0.4.20"
 mbrman = "0.6.0"
 miette = { version = "7.6.0", features = ["fancy"] }
 mio = "1"
-models = { version = "*", path = "../models" }
 new_mime_guess = "4"
 nix = { version = "0.30.1", features = [
     "fs",
@@ -212,33 +198,32 @@ pbkdf2 = "0.12.2"
 pin-project = "1.1.3"
 pkcs8 = { version = "0.10.2", features = ["std"] }
 prettytable-rs = "0.10.0"
-procfs = { version = "0.17.0", optional = true }
 proptest = "1.3.1"
-proptest-derive = "0.5.0"
+proptest-derive = "0.7.0"
-pty-process = { version = "0.5.1", optional = true }
 qrcode = "0.14.1"
 r3bl_tui = "0.7.6"
 rand = "0.9.2"
 regex = "1.10.2"
-reqwest = { version = "0.12.4", features = ["json", "socks", "stream"] }
-reqwest_cookie_store = "0.8.0"
+reqwest = { version = "0.12.25", features = [
+    "json",
+    "socks",
+    "stream",
+    "http2",
+] }
+reqwest_cookie_store = "0.9.0"
 rpassword = "7.2.0"
-rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git", branch = "master" }
-rust-argon2 = "2.0.0"
+rust-argon2 = "3.0.0"
+rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git" }
 safelog = { version = "0.4.8", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
 semver = { version = "1.0.20", features = ["serde"] }
 serde = { version = "1.0", features = ["derive", "rc"] }
 serde_cbor = { package = "ciborium", version = "0.2.1" }
 serde_json = "1.0"
-serde_toml = { package = "toml", version = "0.8.2" }
+serde_toml = { package = "toml", version = "0.9.9+spec-1.0.0" }
-serde_urlencoded = "0.7"
-serde_with = { version = "3.4.0", features = ["json", "macros"] }
 serde_yaml = { package = "serde_yml", version = "0.0.12" }
 sha-crypt = "0.5.0"
 sha2 = "0.10.2"
-shell-words = "1"
 signal-hook = "0.3.17"
-simple-logging = "2.0.2"
 socket2 = { version = "0.6.0", features = ["all"] }
 socks5-impl = { version = "0.7.2", features = ["client", "server"] }
 sqlx = { version = "0.8.6", features = [
@@ -252,7 +237,7 @@ termion = "4.0.5"
 textwrap = "0.16.1"
 thiserror = "2.0.12"
 tokio = { version = "1.38.1", features = ["full"] }
-tokio-rustls = "0.26.0"
+tokio-rustls = "0.26.4"
 tokio-stream = { version = "0.1.14", features = ["io-util", "net", "sync"] }
 tokio-tar = { git = "https://github.com/dr-bonez/tokio-tar.git" }
 tokio-tungstenite = { version = "0.26.2", features = ["native-tls", "url"] }
@@ -277,19 +262,19 @@ torut = "0.2.1"
 tower-service = "0.3.3"
 tracing = "0.1.39"
 tracing-error = "0.2.0"
-tracing-futures = "0.2.5"
 tracing-journald = "0.3.0"
 tracing-subscriber = { version = "0.3.17", features = ["env-filter"] }
 ts-rs = "9.0.1"
-typed-builder = "0.21.0"
+typed-builder = "0.23.2"
-unix-named-pipe = "0.2.0"
 url = { version = "2.4.1", features = ["serde"] }
-urlencoding = "2.1.3"
 uuid = { version = "1.4.1", features = ["v4"] }
 visit-rs = "0.1.1"
 x25519-dalek = { version = "2.0.1", features = ["static_secrets"] }
 zbus = "5.1.1"
-zeroize = "1.6.0"
+
+[target.'cfg(target_os = "linux")'.dependencies]
+procfs = "0.18.0"
+pty-process = "0.5.1"
 [profile.test]
 opt-level = 3


@@ -7,7 +7,7 @@ use openssl::x509::X509;
 use crate::db::model::DatabaseModel;
 use crate::hostname::{Hostname, generate_hostname, generate_id};
-use crate::net::ssl::{generate_key, make_root_cert};
+use crate::net::ssl::{gen_nistp256, make_root_cert};
 use crate::net::tor::TorSecretKey;
 use crate::prelude::*;
 use crate::util::serde::Pem;
@@ -37,7 +37,7 @@ impl AccountInfo {
 let server_id = generate_id();
 let hostname = generate_hostname();
 let tor_key = vec![TorSecretKey::generate()];
-let root_ca_key = generate_key()?;
+let root_ca_key = gen_nistp256()?;
 let root_ca_cert = make_root_cert(&root_ca_key, &hostname, start_time)?;
 let ssh_key = ssh_key::PrivateKey::from(ssh_key::private::Ed25519Keypair::random(
 &mut ssh_key::rand_core::OsRng::default(),
@@ -128,7 +128,7 @@ impl AccountInfo {
 cert_store
 .as_root_cert_mut()
 .ser(Pem::new_ref(&self.root_ca_cert))?;
-let int_key = crate::net::ssl::generate_key()?;
+let int_key = crate::net::ssl::gen_nistp256()?;
 let int_cert =
 crate::net::ssl::make_int_cert((&self.root_ca_key, &self.root_ca_cert), &int_key)?;
 cert_store.as_int_key_mut().ser(&Pem(int_key))?;


@@ -1,14 +1,13 @@
 use std::fmt;
 use clap::{CommandFactory, FromArgMatches, Parser};
-pub use models::ActionId;
-use models::{PackageId, ReplayId};
 use qrcode::QrCode;
 use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
 use serde::{Deserialize, Serialize};
 use tracing::instrument;
 use ts_rs::TS;
+pub use crate::ActionId;
 use crate::context::{CliContext, RpcContext};
 use crate::db::model::package::TaskSeverity;
 use crate::prelude::*;
@@ -16,6 +15,7 @@ use crate::rpc_continuations::Guid;
 use crate::util::serde::{
     HandlerExtSerde, StdinDeserializable, WithIoFormat, display_serializable,
 };
+use crate::{PackageId, ReplayId};
 pub fn action_api<C: Context>() -> ParentHandler<C> {
     ParentHandler::new()


@@ -14,8 +14,8 @@ use tracing::instrument;
 use ts_rs::TS;
 use crate::context::{CliContext, RpcContext};
-use crate::middleware::auth::{
-    AsLogoutSessionId, AuthContext, HasLoggedOutSessions, HashSessionToken, LoginRes,
+use crate::middleware::auth::session::{
+    AsLogoutSessionId, HasLoggedOutSessions, HashSessionToken, LoginRes, SessionAuthContext,
 };
 use crate::prelude::*;
 use crate::util::crypto::EncryptedWire;
@@ -110,7 +110,7 @@ impl std::str::FromStr for PasswordType {
         })
     }
 }
-pub fn auth<C: Context, AC: AuthContext>() -> ParentHandler<C>
+pub fn auth<C: Context, AC: SessionAuthContext>() -> ParentHandler<C>
 where
     CliContext: CallRemote<AC>,
 {
@@ -173,7 +173,7 @@ fn gen_pwd() {
 }
 #[instrument(skip_all)]
-async fn cli_login<C: AuthContext>(
+async fn cli_login<C: SessionAuthContext>(
     HandlerArgs {
         context: ctx,
         parent_method,
@@ -227,7 +227,7 @@ pub struct LoginParams {
 }
 #[instrument(skip_all)]
-pub async fn login_impl<C: AuthContext>(
+pub async fn login_impl<C: SessionAuthContext>(
     ctx: C,
     LoginParams {
         password,
@@ -283,7 +283,7 @@ pub struct LogoutParams {
     session: InternedString,
 }
-pub async fn logout<C: AuthContext>(
+pub async fn logout<C: SessionAuthContext>(
     ctx: C,
     LogoutParams { session }: LogoutParams,
 ) -> Result<Option<HasLoggedOutSessions>, Error> {
@@ -312,7 +312,7 @@ pub struct SessionList {
     sessions: Sessions,
 }
-pub fn session<C: Context, AC: AuthContext>() -> ParentHandler<C>
+pub fn session<C: Context, AC: SessionAuthContext>() -> ParentHandler<C>
 where
     CliContext: CallRemote<AC>,
 {
@@ -379,7 +379,7 @@ pub struct ListParams {
 // #[command(display(display_sessions))]
 #[instrument(skip_all)]
-pub async fn list<C: AuthContext>(
+pub async fn list<C: SessionAuthContext>(
     ctx: C,
     ListParams { session, .. }: ListParams,
 ) -> Result<SessionList, Error> {
@@ -418,7 +418,10 @@ pub struct KillParams {
 }
 #[instrument(skip_all)]
-pub async fn kill<C: AuthContext>(ctx: C, KillParams { ids }: KillParams) -> Result<(), Error> {
+pub async fn kill<C: SessionAuthContext>(
+    ctx: C,
+    KillParams { ids }: KillParams,
+) -> Result<(), Error> {
     HasLoggedOutSessions::new(ids.into_iter().map(KillSessionId::new), &ctx).await?;
     Ok(())
 }


@@ -5,9 +5,7 @@ use std::sync::Arc;
 use chrono::Utc;
 use clap::Parser;
 use color_eyre::eyre::eyre;
-use helpers::AtomicFile;
 use imbl::OrdSet;
-use models::PackageId;
 use serde::{Deserialize, Serialize};
 use tokio::io::AsyncWriteExt;
 use tracing::instrument;
@@ -15,6 +13,7 @@ use ts_rs::TS;
 use super::PackageBackupReport;
 use super::target::{BackupTargetId, PackageBackupInfo};
+use crate::PackageId;
 use crate::backup::os::OsBackup;
 use crate::backup::{BackupReport, ServerBackupReport};
 use crate::context::RpcContext;
@@ -23,10 +22,10 @@ use crate::db::model::{Database, DatabaseModel};
 use crate::disk::mount::backup::BackupMountGuard;
 use crate::disk::mount::filesystem::ReadWrite;
 use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
-use crate::middleware::auth::AuthContext;
+use crate::middleware::auth::session::SessionAuthContext;
 use crate::notifications::{NotificationLevel, notify};
 use crate::prelude::*;
-use crate::util::io::dir_copy;
+use crate::util::io::{AtomicFile, dir_copy};
 use crate::util::serde::IoFormat;
 use crate::version::VersionT;
@@ -312,19 +311,14 @@ async fn perform_backup(
     let ui = ctx.db.peek().await.into_public().into_ui().de()?;
     let mut os_backup_file =
-        AtomicFile::new(backup_guard.path().join("os-backup.json"), None::<PathBuf>)
-            .await
-            .with_kind(ErrorKind::Filesystem)?;
+        AtomicFile::new(backup_guard.path().join("os-backup.json"), None::<PathBuf>).await?;
     os_backup_file
         .write_all(&IoFormat::Json.to_vec(&OsBackup {
             account: ctx.account.peek(|a| a.clone()),
             ui,
         })?)
         .await?;
-    os_backup_file
-        .save()
-        .await
-        .with_kind(ErrorKind::Filesystem)?;
+    os_backup_file.save().await?;
     let luks_folder_old = backup_guard.path().join("luks.old");
     if tokio::fs::metadata(&luks_folder_old).await.is_ok() {


@@ -1,15 +1,12 @@
 use std::collections::BTreeMap;
-use chrono::{DateTime, Utc};
-use models::{HostId, PackageId};
-use reqwest::Url;
 use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
 use serde::{Deserialize, Serialize};
+use crate::PackageId;
 use crate::context::CliContext;
 #[allow(unused_imports)]
 use crate::prelude::*;
-use crate::util::serde::{Base32, Base64};
 pub mod backup_bulk;
 pub mod os;
@@ -58,13 +55,3 @@ pub fn package_backup<C: Context>() -> ParentHandler<C> {
             .with_call_remote::<CliContext>(),
     )
 }
-#[derive(Deserialize, Serialize)]
-struct BackupMetadata {
-    pub timestamp: DateTime<Utc>,
-    #[serde(default)]
-    pub network_keys: BTreeMap<HostId, Base64<[u8; 32]>>,
-    #[serde(default)]
-    pub tor_keys: BTreeMap<HostId, Base32<[u8; 64]>>, // DEPRECATED
-    pub registry: Option<Url>,
-}


@@ -3,7 +3,6 @@ use std::sync::Arc;
 use clap::Parser;
 use futures::{StreamExt, stream};
-use models::PackageId;
 use patch_db::json_ptr::ROOT;
 use serde::{Deserialize, Serialize};
 use tokio::sync::Mutex;
@@ -11,7 +10,6 @@ use tracing::instrument;
 use ts_rs::TS;
 use super::target::BackupTargetId;
-use crate::PLATFORM;
 use crate::backup::os::OsBackup;
 use crate::context::setup::SetupResult;
 use crate::context::{RpcContext, SetupContext};
@@ -27,6 +25,7 @@ use crate::service::service_map::DownloadInstallFuture;
 use crate::setup::SetupExecuteProgress;
 use crate::system::sync_kiosk;
 use crate::util::serde::IoFormat;
+use crate::{PLATFORM, PackageId};
 #[derive(Deserialize, Serialize, Parser, TS)]
 #[serde(rename_all = "camelCase")]


@@ -9,7 +9,6 @@ use digest::OutputSizeUser;
 use digest::generic_array::GenericArray;
 use exver::Version;
 use imbl_value::InternedString;
-use models::{FromStrParser, PackageId};
 use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
 use serde::{Deserialize, Serialize};
 use sha2::Sha256;
@@ -18,6 +17,7 @@ use tracing::instrument;
 use ts_rs::TS;
 use self::cifs::CifsBackupTarget;
+use crate::PackageId;
 use crate::context::{CliContext, RpcContext};
 use crate::db::model::DatabaseModel;
 use crate::disk::mount::backup::BackupMountGuard;
@@ -27,10 +27,10 @@ use crate::disk::mount::filesystem::{FileSystem, MountType, ReadWrite};
 use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
 use crate::disk::util::PartitionInfo;
 use crate::prelude::*;
-use crate::util::VersionString;
 use crate::util::serde::{
     HandlerExtSerde, WithIoFormat, deserialize_from_str, display_serializable, serialize_display,
 };
+use crate::util::{FromStrParser, VersionString};
 pub mod cifs;
@@ -301,14 +301,14 @@ lazy_static::lazy_static! {
         Mutex::new(BTreeMap::new());
 }
-#[derive(Deserialize, Serialize, Parser, TS)]
+#[derive(Deserialize, Serialize, Parser)]
 #[serde(rename_all = "camelCase")]
 #[command(rename_all = "kebab-case")]
 pub struct MountParams {
     target_id: BackupTargetId,
     #[arg(long)]
     server_id: Option<String>,
-    password: String,
+    password: String, // TODO: rpassword
     #[arg(long)]
     allow_partial: bool,
 }


@@ -1,91 +1,85 @@
-use std::collections::VecDeque;
+use std::collections::{BTreeMap, VecDeque};
 use std::ffi::OsString;
 use std::path::Path;
-#[cfg(feature = "cli-container")]
 pub mod container_cli;
 pub mod deprecated;
-#[cfg(any(feature = "registry", feature = "cli-registry"))]
 pub mod registry;
-#[cfg(feature = "cli")]
 pub mod start_cli;
-#[cfg(feature = "startd")]
 pub mod start_init;
-#[cfg(feature = "startd")]
 pub mod startd;
-#[cfg(any(feature = "tunnel", feature = "cli-tunnel"))]
 pub mod tunnel;
-fn select_executable(name: &str) -> Option<fn(VecDeque<OsString>)> {
-    match name {
-        #[cfg(feature = "startd")]
-        "startd" => Some(startd::main),
-        #[cfg(feature = "startd")]
-        "embassyd" => Some(|_| deprecated::renamed("embassyd", "startd")),
-        #[cfg(feature = "startd")]
-        "embassy-init" => Some(|_| deprecated::removed("embassy-init")),
-        #[cfg(feature = "cli-startd")]
-        "start-cli" => Some(start_cli::main),
-        #[cfg(feature = "cli-startd")]
-        "embassy-cli" => Some(|_| deprecated::renamed("embassy-cli", "start-cli")),
-        #[cfg(feature = "cli-startd")]
-        "embassy-sdk" => Some(|_| deprecated::removed("embassy-sdk")),
-        #[cfg(feature = "cli-container")]
-        "start-container" => Some(container_cli::main),
-        #[cfg(feature = "registry")]
-        "start-registryd" => Some(registry::main),
-        #[cfg(feature = "cli-registry")]
-        "start-registry" => Some(registry::cli),
-        #[cfg(feature = "tunnel")]
-        "start-tunneld" => Some(tunnel::main),
-        #[cfg(feature = "cli-tunnel")]
-        "start-tunnel" => Some(tunnel::cli),
-        "contents" => Some(|_| {
-            #[cfg(feature = "startd")]
-            println!("startd");
-            #[cfg(feature = "cli-startd")]
-            println!("start-cli");
-            #[cfg(feature = "cli-container")]
-            println!("start-container");
-            #[cfg(feature = "registry")]
-            println!("start-registryd");
-            #[cfg(feature = "cli-registry")]
-            println!("start-registry");
-            #[cfg(feature = "tunnel")]
-            println!("start-tunneld");
-            #[cfg(feature = "cli-tunnel")]
-            println!("start-tunnel");
-        }),
-        _ => None,
-    }
-}
-pub fn startbox() {
-    let mut args = std::env::args_os().collect::<VecDeque<_>>();
-    for _ in 0..2 {
-        if let Some(s) = args.pop_front() {
-            if let Some(x) = Path::new(&*s)
-                .file_name()
-                .and_then(|s| s.to_str())
-                .and_then(|s| select_executable(&s))
-            {
-                args.push_front(s);
-                return x(args);
-            }
-        }
-    }
-    let args = std::env::args().collect::<VecDeque<_>>();
-    eprintln!(
-        "unknown executable: {}",
-        args.get(1)
-            .or_else(|| args.get(0))
-            .map(|s| s.as_str())
-            .unwrap_or("N/A")
-    );
-    std::process::exit(1);
-}
+#[derive(Default)]
+pub struct MultiExecutable(BTreeMap<&'static str, fn(VecDeque<OsString>)>);
+impl MultiExecutable {
+    pub fn enable_startd(&mut self) -> &mut Self {
+        self.0.insert("startd", startd::main);
+        self.0
+            .insert("embassyd", |_| deprecated::renamed("embassyd", "startd"));
+        self.0
+            .insert("embassy-init", |_| deprecated::removed("embassy-init"));
+        self
+    }
+    pub fn enable_start_cli(&mut self) -> &mut Self {
+        self.0.insert("start-cli", start_cli::main);
+        self.0.insert("embassy-cli", |_| {
+            deprecated::renamed("embassy-cli", "start-cli")
+        });
+        self.0
+            .insert("embassy-sdk", |_| deprecated::removed("embassy-sdk"));
+        self
+    }
+    pub fn enable_start_container(&mut self) -> &mut Self {
+        self.0.insert("start-container", container_cli::main);
+        self
+    }
+    pub fn enable_start_registryd(&mut self) -> &mut Self {
+        self.0.insert("start-registryd", registry::main);
+        self
+    }
+    pub fn enable_start_registry(&mut self) -> &mut Self {
+        self.0.insert("start-registry", registry::cli);
+        self
+    }
+    pub fn enable_start_tunneld(&mut self) -> &mut Self {
+        self.0.insert("start-tunneld", tunnel::main);
+        self
+    }
+    pub fn enable_start_tunnel(&mut self) -> &mut Self {
+        self.0.insert("start-tunnel", tunnel::cli);
+        self
+    }
+    fn select_executable(&self, name: &str) -> Option<fn(VecDeque<OsString>)> {
+        self.0.get(&name).copied()
+    }
+    pub fn execute(&self) {
+        let mut args = std::env::args_os().collect::<VecDeque<_>>();
+        for _ in 0..2 {
+            if let Some(s) = args.pop_front() {
+                if let Some(name) = Path::new(&*s).file_name().and_then(|s| s.to_str()) {
+                    if name == "--contents" {
+                        for name in self.0.keys() {
+                            println!("{name}");
+                        }
+                    }
+                    if let Some(x) = self.select_executable(&name) {
+                        args.push_front(s);
+                        return x(args);
+                    }
+                }
+            }
+        }
+        let args = std::env::args().collect::<VecDeque<_>>();
+        eprintln!(
+            "unknown executable: {}",
+            args.get(1)
+                .or_else(|| args.get(0))
+                .map(|s| s.as_str())
+                .unwrap_or("N/A")
+        );
+        std::process::exit(1);
+    }
+}


@@ -5,7 +5,6 @@ use std::time::Duration;
 use clap::Parser;
 use futures::FutureExt;
-use helpers::NonDetachingJoinHandle;
 use rpc_toolkit::CliApp;
 use tokio::signal::unix::signal;
 use tracing::instrument;
@@ -20,9 +19,10 @@ use crate::prelude::*;
 use crate::tunnel::context::{TunnelConfig, TunnelContext};
 use crate::tunnel::tunnel_router;
 use crate::tunnel::web::TunnelCertHandler;
+use crate::util::future::NonDetachingJoinHandle;
 use crate::util::logger::LOGGER;
-#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
+#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
 enum WebserverListener {
     Http,
     Https(SocketAddr),


@@ -1,7 +1,7 @@
 use helpers::Callback;
 use itertools::Itertools;
 use jsonpath_lib::Compiled;
-use models::PackageId;
+use crate::PackageId;
 use serde_json::Value;
 use crate::context::RpcContext;


@@ -24,7 +24,7 @@ use super::setup::CURRENT_SECRET;
 use crate::context::config::{ClientConfig, local_config_path};
 use crate::context::{DiagnosticContext, InitContext, InstallContext, RpcContext, SetupContext};
 use crate::developer::{OS_DEVELOPER_KEY_PATH, default_developer_key_path};
-use crate::middleware::auth::AuthContext;
+use crate::middleware::auth::local::LocalAuthContext;
 use crate::prelude::*;
 use crate::rpc_continuations::Guid;
 use crate::util::io::read_file_to_string;
@@ -35,7 +35,7 @@ pub struct CliContextSeed {
     pub base_url: Url,
     pub rpc_url: Url,
     pub registry_url: Option<Url>,
-    pub registry_hostname: Option<InternedString>,
+    pub registry_hostname: Vec<InternedString>,
     pub registry_listen: Option<SocketAddr>,
     pub tunnel_addr: Option<SocketAddr>,
     pub tunnel_listen: Option<SocketAddr>,
@@ -126,7 +126,7 @@ impl CliContext {
                     Ok::<_, Error>(registry)
                 })
                 .transpose()?,
-            registry_hostname: config.registry_hostname,
+            registry_hostname: config.registry_hostname.unwrap_or_default(),
             registry_listen: config.registry_listen,
             tunnel_addr: config.tunnel,
             tunnel_listen: config.tunnel_listen,
@@ -307,7 +307,7 @@ impl CallRemote<RpcContext> for CliContext {
             )
             .with_kind(crate::ErrorKind::Network)?;
         }
-        crate::middleware::signature::call_remote(
+        crate::middleware::auth::signature::call_remote(
             self,
             self.rpc_url.clone(),
             HeaderMap::new(),
@@ -320,7 +320,7 @@
 }
 impl CallRemote<DiagnosticContext> for CliContext {
     async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
-        crate::middleware::signature::call_remote(
+        crate::middleware::auth::signature::call_remote(
             self,
             self.rpc_url.clone(),
             HeaderMap::new(),
@@ -333,7 +333,7 @@ impl CallRemote<DiagnosticContext> for CliContext {
 }
 impl CallRemote<InitContext> for CliContext {
     async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
-        crate::middleware::signature::call_remote(
+        crate::middleware::auth::signature::call_remote(
             self,
             self.rpc_url.clone(),
             HeaderMap::new(),
@@ -346,7 +346,7 @@ impl CallRemote<InitContext> for CliContext {
 }
 impl CallRemote<SetupContext> for CliContext {
     async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
-        crate::middleware::signature::call_remote(
+        crate::middleware::auth::signature::call_remote(
             self,
             self.rpc_url.clone(),
             HeaderMap::new(),
@@ -359,7 +359,7 @@ impl CallRemote<SetupContext> for CliContext {
 }
 impl CallRemote<InstallContext> for CliContext {
     async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
-        crate::middleware::signature::call_remote(
+        crate::middleware::auth::signature::call_remote(
             self,
             self.rpc_url.clone(),
             HeaderMap::new(),


@@ -65,7 +65,7 @@ pub struct ClientConfig {
     #[arg(short = 'r', long)]
     pub registry: Option<Url>,
     #[arg(long)]
-    pub registry_hostname: Option<InternedString>,
+    pub registry_hostname: Option<Vec<InternedString>>,
     #[arg(skip)]
     pub registry_listen: Option<SocketAddr>,
     #[arg(short = 't', long)]


@@ -8,12 +8,10 @@ use std::sync::atomic::{AtomicBool, Ordering};
 use std::time::Duration;
 use chrono::{TimeDelta, Utc};
-use helpers::NonDetachingJoinHandle;
 use imbl::OrdMap;
 use imbl_value::InternedString;
 use itertools::Itertools;
 use josekit::jwk::Jwk;
-use models::{ActionId, PackageId};
 use reqwest::{Client, Proxy};
 use rpc_toolkit::yajrc::RpcError;
 use rpc_toolkit::{CallRemote, Context, Empty};
@@ -22,7 +20,6 @@ use tokio::time::Instant;
 use tracing::instrument;
 use super::setup::CURRENT_SECRET;
-use crate::DATA_DIR;
 use crate::account::AccountInfo;
 use crate::auth::Sessions;
 use crate::context::config::ServerConfig;
@@ -45,9 +42,11 @@ use crate::service::ServiceMap;
 use crate::service::action::update_tasks;
 use crate::service::effects::callbacks::ServiceCallbacks;
 use crate::shutdown::Shutdown;
+use crate::util::future::NonDetachingJoinHandle;
 use crate::util::io::delete_file;
 use crate::util::lshw::LshwDevice;
 use crate::util::sync::{SyncMutex, SyncRwLock, Watch};
+use crate::{ActionId, DATA_DIR, PackageId};
 pub struct RpcContextSeed {
     is_closed: AtomicBool,
@@ -389,12 +388,19 @@ impl RpcContext {
             .as_entries()?
             .into_iter()
             .map(|(_, pde)| {
-                Ok(pde.as_tasks().as_entries()?.into_iter().map(|(_, r)| {
-                    Ok::<_, Error>((
-                        r.as_task().as_package_id().de()?,
-                        r.as_task().as_action_id().de()?,
-                    ))
-                }))
+                Ok(pde
+                    .as_tasks()
+                    .as_entries()?
+                    .into_iter()
+                    .map(|(_, r)| {
+                        let t = r.as_task();
+                        Ok::<_, Error>(if t.as_input().transpose_ref().is_some() {
+                            Some((t.as_package_id().de()?, t.as_action_id().de()?))
+                        } else {
+                            None
+                        })
+                    })
+                    .filter_map_ok(|a| a))
             })
             .flatten_ok()
             .map(|a| a.and_then(|a| a))
@@ -416,46 +422,32 @@
                 }
             }
         }
-        for id in
-            self.db
-                .mutate::<Vec<PackageId>>(|db| {
-                    for (package_id, action_input) in &action_input {
-                        for (action_id, input) in action_input {
-                            for (_, pde) in
-                                db.as_public_mut().as_package_data_mut().as_entries_mut()?
-                            {
-                                pde.as_tasks_mut().mutate(|tasks| {
-                                    Ok(update_tasks(tasks, package_id, action_id, input, false))
-                                })?;
-                            }
-                        }
-                    }
-                    db.as_public()
-                        .as_package_data()
-                        .as_entries()?
-                        .into_iter()
-                        .filter_map(|(id, pkg)| {
-                            (|| {
-                                if pkg.as_tasks().de()?.into_iter().any(|(_, t)| {
-                                    t.active && t.task.severity == TaskSeverity::Critical
-                                }) {
-                                    Ok(Some(id))
-                                } else {
-                                    Ok(None)
-                                }
-                            })()
-                            .transpose()
-                        })
-                        .collect()
-                })
-                .await
-                .result?
-        {
-            let svc = self.services.get(&id).await;
-            if let Some(svc) = &*svc {
-                svc.stop(procedure_id.clone(), false).await?;
-            }
-        }
+        self.db
+            .mutate(|db| {
+                for (package_id, action_input) in &action_input {
+                    for (action_id, input) in action_input {
+                        for (_, pde) in db.as_public_mut().as_package_data_mut().as_entries_mut()? {
+                            pde.as_tasks_mut().mutate(|tasks| {
+                                Ok(update_tasks(tasks, package_id, action_id, input, false))
+                            })?;
+                        }
+                    }
+                }
+                for (_, pde) in db.as_public_mut().as_package_data_mut().as_entries_mut()? {
+                    if pde
+                        .as_tasks()
+                        .de()?
+                        .into_iter()
+                        .any(|(_, t)| t.active && t.task.severity == TaskSeverity::Critical)
+                    {
+                        pde.as_status_info_mut().stop()?;
+                    }
+                }
+                Ok(())
+            })
+            .await
+            .result?;
         check_tasks.complete();
         Ok(())


@@ -4,7 +4,6 @@ use std::sync::Arc;
 use std::time::Duration;
 use futures::{Future, StreamExt};
-use helpers::NonDetachingJoinHandle;
 use imbl_value::InternedString;
 use josekit::jwk::Jwk;
 use patch_db::PatchDb;
@@ -28,7 +27,7 @@ use crate::progress::FullProgressTracker;
 use crate::rpc_continuations::{Guid, RpcContinuation, RpcContinuations};
 use crate::setup::SetupProgress;
 use crate::shutdown::Shutdown;
-use crate::util::net::WebSocketExt;
+use crate::util::future::NonDetachingJoinHandle;
 lazy_static::lazy_static! {
     pub static ref CURRENT_SECRET: Jwk = Jwk::generate_ec_key(josekit::jwk::alg::ec::EcCurve::P256).unwrap_or_else(|e| {


@@ -1,14 +1,11 @@
 use clap::Parser;
-use color_eyre::eyre::eyre;
-use models::PackageId;
 use serde::{Deserialize, Serialize};
 use tracing::instrument;
 use ts_rs::TS;
-use crate::Error;
 use crate::context::RpcContext;
 use crate::prelude::*;
-use crate::rpc_continuations::Guid;
+use crate::{Error, PackageId};
 #[derive(Deserialize, Serialize, Parser, TS)]
 #[serde(rename_all = "camelCase")]
@@ -19,37 +16,51 @@ pub struct ControlParams {
 #[instrument(skip_all)]
 pub async fn start(ctx: RpcContext, ControlParams { id }: ControlParams) -> Result<(), Error> {
-    ctx.services
-        .get(&id)
-        .await
-        .as_ref()
-        .or_not_found(lazy_format!("Manager for {id}"))?
-        .start(Guid::new())
-        .await?;
+    ctx.db
+        .mutate(|db| {
+            db.as_public_mut()
+                .as_package_data_mut()
+                .as_idx_mut(&id)
+                .or_not_found(&id)?
+                .as_status_info_mut()
+                .as_desired_mut()
+                .map_mutate(|s| Ok(s.start()))
+        })
+        .await
+        .result?;
     Ok(())
 }
 pub async fn stop(ctx: RpcContext, ControlParams { id }: ControlParams) -> Result<(), Error> {
-    ctx.services
-        .get(&id)
-        .await
-        .as_ref()
-        .ok_or_else(|| Error::new(eyre!("Manager not found"), crate::ErrorKind::InvalidRequest))?
-        .stop(Guid::new(), true)
-        .await?;
+    ctx.db
+        .mutate(|db| {
+            db.as_public_mut()
+                .as_package_data_mut()
+                .as_idx_mut(&id)
+                .or_not_found(&id)?
+                .as_status_info_mut()
+                .stop()
+        })
+        .await
+        .result?;
     Ok(())
 }
 pub async fn restart(ctx: RpcContext, ControlParams { id }: ControlParams) -> Result<(), Error> {
-    ctx.services
-        .get(&id)
-        .await
-        .as_ref()
-        .ok_or_else(|| Error::new(eyre!("Manager not found"), crate::ErrorKind::InvalidRequest))?
-        .restart(Guid::new(), false)
-        .await?;
+    ctx.db
+        .mutate(|db| {
+            db.as_public_mut()
+                .as_package_data_mut()
+                .as_idx_mut(&id)
+                .or_not_found(&id)?
+                .as_status_info_mut()
+                .as_desired_mut()
+                .map_mutate(|s| Ok(s.restart()))
+        })
+        .await
+        .result?;
     Ok(())
 }


@@ -1,7 +1,6 @@
 pub mod model;
 pub mod prelude;
-use std::panic::UnwindSafe;
 use std::path::PathBuf;
 use std::sync::Arc;
 use std::time::Duration;
@@ -23,7 +22,6 @@ use ts_rs::TS;
 use crate::context::{CliContext, RpcContext};
 use crate::prelude::*;
 use crate::rpc_continuations::{Guid, RpcContinuation};
-use crate::util::net::WebSocketExt;
 use crate::util::serde::{HandlerExtSerde, apply_expr};
 lazy_static::lazy_static! {


@@ -4,7 +4,6 @@ use std::path::PathBuf;
 use chrono::{DateTime, Utc};
 use exver::VersionRange;
 use imbl_value::InternedString;
-use models::{ActionId, DataUrl, HealthCheckId, HostId, PackageId, ReplayId, ServiceInterfaceId};
 use patch_db::HasModel;
 use patch_db::json_ptr::JsonPointer;
 use reqwest::Url;
@@ -16,8 +15,10 @@ use crate::net::service_interface::ServiceInterface;
 use crate::prelude::*;
 use crate::progress::FullProgress;
 use crate::s9pk::manifest::Manifest;
-use crate::status::MainStatus;
+use crate::status::StatusInfo;
+use crate::util::DataUrl;
 use crate::util::serde::{Pem, is_partial_of};
+use crate::{ActionId, HealthCheckId, HostId, PackageId, ReplayId, ServiceInterfaceId};
 #[derive(Debug, Default, Deserialize, Serialize, TS)]
 #[ts(export)]
@@ -365,7 +366,7 @@ impl Default for ActionVisibility {
 pub struct PackageDataEntry {
     pub state_info: PackageState,
     pub s9pk: PathBuf,
-    pub status: MainStatus,
+    pub status_info: StatusInfo,
     #[ts(type = "string | null")]
     pub registry: Option<Url>,
     #[ts(type = "string")]


@@ -1,9 +1,9 @@
 use std::collections::{BTreeMap, HashSet};
-use models::PackageId;
 use patch_db::{HasModel, Value};
 use serde::{Deserialize, Serialize};
+use crate::PackageId;
 use crate::auth::Sessions;
 use crate::backup::target::cifs::CifsTargets;
 use crate::net::forward::AvailablePorts;

View File

@@ -1,6 +1,6 @@
 use std::collections::{BTreeMap, BTreeSet, VecDeque};
 use std::net::{IpAddr, Ipv4Addr, SocketAddr};
-use std::sync::Arc;
+use std::sync::{Arc, OnceLock};
 
 use chrono::{DateTime, Utc};
 use exver::{Version, VersionRange};
@@ -9,7 +9,6 @@ use imbl_value::InternedString;
 use ipnet::IpNet;
 use isocountry::CountryCode;
 use itertools::Itertools;
-use models::{GatewayId, PackageId};
 use openssl::hash::MessageDigest;
 use patch_db::{HasModel, Value};
 use serde::{Deserialize, Serialize};
@@ -31,7 +30,9 @@ use crate::util::cpupower::Governor;
 use crate::util::lshw::LshwDevice;
 use crate::util::serde::MaybeUtf8String;
 use crate::version::{Current, VersionT};
-use crate::{ARCH, PLATFORM};
+use crate::{ARCH, GatewayId, PLATFORM, PackageId};
+
+pub static DB_UI_SEED_CELL: OnceLock<&'static str> = OnceLock::new();
 
 #[derive(Debug, Deserialize, Serialize, HasModel, TS)]
 #[serde(rename_all = "camelCase")]
@@ -65,9 +66,10 @@ impl Public {
                 preferred_external_port: 80,
                 add_ssl: Some(AddSslOptions {
                     preferred_external_port: 443,
-                    add_x_forwarded_headers: false,
                     alpn: Some(AlpnInfo::Specified(vec![
+                        MaybeUtf8String("http/1.1".into()),
                         MaybeUtf8String("h2".into()),
-                        MaybeUtf8String("http/1.1".into()),
                     ])),
                 }),
                 secure: None,
@@ -123,20 +125,8 @@ impl Public {
                 kiosk,
             },
             package_data: AllPackageData::default(),
-            ui: {
-                #[cfg(feature = "startd")]
-                {
-                    serde_json::from_str(include_str!(concat!(
-                        env!("CARGO_MANIFEST_DIR"),
-                        "/../../web/patchdb-ui-seed.json"
-                    )))
-                    .with_kind(ErrorKind::Deserialization)?
-                }
-                #[cfg(not(feature = "startd"))]
-                {
-                    Value::Null
-                }
-            },
+            ui: serde_json::from_str(*DB_UI_SEED_CELL.get().unwrap_or(&"null"))
+                .with_kind(ErrorKind::Deserialization)?,
         })
     }
 }
@@ -260,11 +250,7 @@ impl NetworkInterfaceInfo {
     }
 
     pub fn secure(&self) -> bool {
-        self.secure.unwrap_or_else(|| {
-            self.ip_info.as_ref().map_or(false, |ip_info| {
-                ip_info.device_type == Some(NetworkInterfaceType::Wireguard)
-            }) && !self.public()
-        })
+        self.secure.unwrap_or(false)
     }
 }
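With this change the UI seed is no longer embedded at compile time via `include_str!`; the linking binary is expected to populate `DB_UI_SEED_CELL` once at startup, and `Public::init` falls back to `"null"` (i.e. `Value::Null`) when the cell is empty. A standalone sketch of that one-shot-cell pattern, with illustrative names rather than the crate's API:

```rust
use std::sync::OnceLock;

// Illustrative stand-in for DB_UI_SEED_CELL: set once at startup,
// read with a fallback wherever the seed is needed.
static UI_SEED: OnceLock<&'static str> = OnceLock::new();

fn ui_seed_json() -> &'static str {
    // falls back to "null" (deserializes as Value::Null) when unset
    UI_SEED.get().copied().unwrap_or("null")
}

fn main() {
    assert_eq!(ui_seed_json(), "null"); // nothing set yet -> fallback
    UI_SEED.set(r#"{"theme":"dark"}"#).unwrap();
    assert_eq!(ui_seed_json(), r#"{"theme":"dark"}"#);
    // a second set() is rejected; the first value wins
    assert!(UI_SEED.set("{}").is_err());
}
```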

View File

@@ -2,13 +2,12 @@ use std::collections::BTreeMap;
 use std::path::Path;
 
 use imbl_value::InternedString;
-use models::PackageId;
 use serde::{Deserialize, Serialize};
 use ts_rs::TS;
 
-use crate::Error;
 use crate::prelude::*;
 use crate::util::PathOrUrl;
+use crate::{Error, PackageId};
 
 #[derive(Clone, Debug, Default, Deserialize, Serialize, HasModel, TS)]
 #[model = "Model<Self>"]

View File

@@ -287,6 +287,7 @@ pub async fn mount_fs<P: AsRef<Path>>(
Command::new("cryptsetup") Command::new("cryptsetup")
.arg("-q") .arg("-q")
.arg("luksOpen") .arg("luksOpen")
.arg("--allow-discards")
.arg(format!("--key-file={}", PASSWORD_PATH)) .arg(format!("--key-file={}", PASSWORD_PATH))
.arg(format!("--keyfile-size={}", password.len())) .arg(format!("--keyfile-size={}", password.len()))
.arg(&blockdev_path) .arg(&blockdev_path)

View File

@@ -2,21 +2,21 @@ use std::path::{Path, PathBuf};
 use std::sync::Arc;
 
 use color_eyre::eyre::eyre;
-use helpers::AtomicFile;
-use models::PackageId;
 use tokio::io::AsyncWriteExt;
 use tracing::instrument;
 
 use super::guard::{GenericMountGuard, TmpMountGuard};
+use crate::PackageId;
 use crate::auth::check_password;
 use crate::backup::target::BackupInfo;
 use crate::disk::mount::filesystem::ReadWrite;
 use crate::disk::mount::filesystem::backupfs::BackupFS;
 use crate::disk::mount::guard::SubPath;
 use crate::disk::util::StartOsRecoveryInfo;
+use crate::prelude::*;
 use crate::util::crypto::{decrypt_slice, encrypt_slice};
+use crate::util::io::AtomicFile;
 use crate::util::serde::IoFormat;
-use crate::{Error, ErrorKind, ResultExt};
 
 #[derive(Clone, Debug)]
 pub struct BackupMountGuard<G: GenericMountGuard> {
@@ -184,18 +184,14 @@ impl<G: GenericMountGuard> BackupMountGuard<G> {
     #[instrument(skip_all)]
     pub async fn save(&self) -> Result<(), Error> {
         let metadata_path = self.path().join("metadata.json");
-        let mut file = AtomicFile::new(&metadata_path, None::<PathBuf>)
-            .await
-            .with_kind(ErrorKind::Filesystem)?;
+        let mut file = AtomicFile::new(&metadata_path, None::<PathBuf>).await?;
         file.write_all(&IoFormat::Json.to_vec(&self.metadata)?)
             .await?;
-        file.save().await.with_kind(ErrorKind::Filesystem)?;
-        let mut file = AtomicFile::new(&self.unencrypted_metadata_path, None::<PathBuf>)
-            .await
-            .with_kind(ErrorKind::Filesystem)?;
+        file.save().await?;
+        let mut file = AtomicFile::new(&self.unencrypted_metadata_path, None::<PathBuf>).await?;
         file.write_all(&IoFormat::Json.to_vec(&self.unencrypted_metadata)?)
             .await?;
-        file.save().await.with_kind(ErrorKind::Filesystem)?;
+        file.save().await?;
         Ok(())
     }
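`AtomicFile` now surfaces the crate's `Error` directly from `new` and `save`, so the `.with_kind(ErrorKind::Filesystem)` adapters disappear. For readers unfamiliar with what the helper guarantees, here is an illustrative stdlib-only sketch of the underlying write-to-temp-then-rename pattern (not the crate's implementation):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write to a sibling temp file, flush, then rename over the target.
// Readers of `path` see either the old contents or the new contents,
// never a partially written file.
fn atomic_write(path: &Path, data: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    let mut f = fs::File::create(&tmp)?;
    f.write_all(data)?;
    f.sync_all()?; // flush to disk before the rename makes it visible
    fs::rename(&tmp, path) // rename within one filesystem is atomic
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("metadata.json");
    atomic_write(&path, b"{\"version\":1}")?;
    atomic_write(&path, b"{\"version\":2}")?;
    assert_eq!(fs::read(&path)?, b"{\"version\":2}");
    Ok(())
}
```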

View File

@@ -19,6 +19,11 @@ pub enum FileType {
     Directory,
     Infer,
 }
+impl Default for FileType {
+    fn default() -> Self {
+        FileType::Directory
+    }
+}
 
 pub struct Bind<Src: AsRef<Path>> {
     src: Src,

View File

@@ -2,34 +2,85 @@ use std::ffi::OsStr;
 use std::fmt::Display;
 use std::os::unix::fs::MetadataExt;
 use std::path::Path;
+use std::str::FromStr;
 
+use clap::Parser;
+use clap::builder::ValueParserFactory;
 use digest::generic_array::GenericArray;
 use digest::{Digest, OutputSizeUser};
 use serde::{Deserialize, Serialize};
 use sha2::Sha256;
 use tokio::process::Command;
+use ts_rs::TS;
 
-use super::{FileSystem, MountType};
+use super::FileSystem;
+use crate::disk::mount::filesystem::default_mount_command;
 use crate::prelude::*;
-use crate::util::Invoke;
+use crate::util::{FromStrParser, Invoke};
+
+#[derive(Clone, Copy, Debug, Deserialize, Serialize, Parser, TS)]
+#[serde(rename_all = "camelCase")]
+pub struct IdMap {
+    pub from_id: u32,
+    pub to_id: u32,
+    pub range: u32,
+}
+impl IdMap {
+    pub fn stack(a: Vec<IdMap>, b: Vec<IdMap>) -> Vec<IdMap> {
+        let mut res = Vec::with_capacity(a.len() + b.len());
+        res.extend_from_slice(&a);
+        for mut b in b {
+            for a in &a {
+                if a.from_id <= b.to_id && a.from_id + a.range > b.to_id {
+                    b.to_id += a.to_id;
+                }
+            }
+            res.push(b);
+        }
+        res
+    }
+}
+impl FromStr for IdMap {
+    type Err = Error;
+    fn from_str(s: &str) -> Result<Self, Self::Err> {
+        let split = s.splitn(3, ":").collect::<Vec<_>>();
+        if let Some([u, k, r]) = split.get(0..3) {
+            Ok(Self {
+                from_id: u.parse()?,
+                to_id: k.parse()?,
+                range: r.parse()?,
+            })
+        } else if let Some([u, k]) = split.get(0..2) {
+            Ok(Self {
+                from_id: u.parse()?,
+                to_id: k.parse()?,
+                range: 1,
+            })
+        } else {
+            Err(Error::new(
+                eyre!("{s} is not a valid idmap"),
+                ErrorKind::ParseNumber,
+            ))
+        }
+    }
+}
+impl ValueParserFactory for IdMap {
+    type Parser = FromStrParser<IdMap>;
+    fn value_parser() -> Self::Parser {
+        <Self::Parser>::new()
+    }
+}
 
 #[derive(Debug, Deserialize, Serialize)]
 #[serde(rename_all = "camelCase")]
 pub struct IdMapped<Fs: FileSystem> {
     filesystem: Fs,
-    from_id: u32,
-    to_id: u32,
-    range: u32,
+    idmap: Vec<IdMap>,
 }
 impl<Fs: FileSystem> IdMapped<Fs> {
-    pub fn new(filesystem: Fs, from_id: u32, to_id: u32, range: u32) -> Self {
-        Self {
-            filesystem,
-            from_id,
-            to_id,
-            range,
-        }
+    pub fn new(filesystem: Fs, idmap: Vec<IdMap>) -> Self {
+        Self { filesystem, idmap }
     }
 }
 
 impl<Fs: FileSystem> FileSystem for IdMapped<Fs> {
@@ -44,12 +95,17 @@ impl<Fs: FileSystem> FileSystem for IdMapped<Fs> {
             .mount_options()
             .into_iter()
             .map(|a| Box::new(a) as Box<dyn Display>)
-            .chain(std::iter::once(Box::new(lazy_format!(
-                "X-mount.idmap=b:{}:{}:{}",
-                self.from_id,
-                self.to_id,
-                self.range,
-            )) as Box<dyn Display>))
+            .chain(if self.idmap.is_empty() {
+                None
+            } else {
+                use std::fmt::Write;
+
+                let mut option = "X-mount.idmap=".to_owned();
+                for i in &self.idmap {
+                    write!(&mut option, "b:{}:{}:{} ", i.from_id, i.to_id, i.range).unwrap();
+                }
+                Some(Box::new(option) as Box<dyn Display>)
+            })
     }
     async fn source(&self) -> Result<Option<impl AsRef<Path>>, Error> {
         self.filesystem.source().await
@@ -57,26 +113,28 @@ impl<Fs: FileSystem> FileSystem for IdMapped<Fs> {
     async fn pre_mount(&self, mountpoint: &Path) -> Result<(), Error> {
         self.filesystem.pre_mount(mountpoint).await?;
         let info = tokio::fs::metadata(mountpoint).await?;
-        let uid_in_range = self.from_id <= info.uid() && self.from_id + self.range > info.uid();
-        let gid_in_range = self.from_id <= info.gid() && self.from_id + self.range > info.gid();
-        if uid_in_range || gid_in_range {
-            Command::new("chown")
-                .arg(format!(
-                    "{uid}:{gid}",
-                    uid = if uid_in_range {
-                        self.to_id + info.uid() - self.from_id
-                    } else {
-                        info.uid()
-                    },
-                    gid = if gid_in_range {
-                        self.to_id + info.gid() - self.from_id
-                    } else {
-                        info.gid()
-                    },
-                ))
-                .arg(&mountpoint)
-                .invoke(crate::ErrorKind::Filesystem)
-                .await?;
+        for i in &self.idmap {
+            let uid_in_range = i.from_id <= info.uid() && i.from_id + i.range > info.uid();
+            let gid_in_range = i.from_id <= info.gid() && i.from_id + i.range > info.gid();
+            if uid_in_range || gid_in_range {
+                Command::new("chown")
+                    .arg(format!(
+                        "{uid}:{gid}",
+                        uid = if uid_in_range {
+                            i.to_id + info.uid() - i.from_id
+                        } else {
+                            info.uid()
+                        },
+                        gid = if gid_in_range {
+                            i.to_id + info.gid() - i.from_id
+                        } else {
+                            info.gid()
+                        },
+                    ))
+                    .arg(&mountpoint)
+                    .invoke(crate::ErrorKind::Filesystem)
+                    .await?;
+            }
         }
         Ok(())
     }
@@ -86,9 +144,12 @@ impl<Fs: FileSystem> FileSystem for IdMapped<Fs> {
         let mut sha = Sha256::new();
         sha.update("IdMapped");
         sha.update(self.filesystem.source_hash().await?);
-        sha.update(u32::to_be_bytes(self.from_id));
-        sha.update(u32::to_be_bytes(self.to_id));
-        sha.update(u32::to_be_bytes(self.range));
+        sha.update(usize::to_be_bytes(self.idmap.len()));
+        for i in &self.idmap {
+            sha.update(u32::to_be_bytes(i.from_id));
+            sha.update(u32::to_be_bytes(i.to_id));
+            sha.update(u32::to_be_bytes(i.range));
+        }
         Ok(sha.finalize())
     }
 }
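The new `IdMap` type accepts `from:to:range` (or `from:to`, defaulting the range to 1) on the command line, and the whole list is serialized into a single `X-mount.idmap=` option with `b:` (both uid and gid) entries separated by a trailing space. A standalone sketch of that parse/format round trip, simplified to `Option`-based errors instead of the crate's `Error` type:

```rust
// Standalone copy of the id-map syntax for illustration only;
// the real type lives in the diff above.
#[derive(Debug, PartialEq)]
struct IdMap {
    from_id: u32,
    to_id: u32,
    range: u32,
}

fn parse_idmap(s: &str) -> Option<IdMap> {
    let parts: Vec<&str> = s.splitn(3, ':').collect();
    match parts.as_slice() {
        [u, k, r] => Some(IdMap {
            from_id: u.parse().ok()?,
            to_id: k.parse().ok()?,
            range: r.parse().ok()?,
        }),
        // two-field form: map a single id (range defaults to 1)
        [u, k] => Some(IdMap {
            from_id: u.parse().ok()?,
            to_id: k.parse().ok()?,
            range: 1,
        }),
        _ => None,
    }
}

fn idmap_option(maps: &[IdMap]) -> String {
    use std::fmt::Write;
    let mut opt = "X-mount.idmap=".to_owned();
    for m in maps {
        // "b:" maps both uids and gids; entries are space-separated
        write!(&mut opt, "b:{}:{}:{} ", m.from_id, m.to_id, m.range).unwrap();
    }
    opt
}

fn main() {
    assert_eq!(
        parse_idmap("1000:0:1"),
        Some(IdMap { from_id: 1000, to_id: 0, range: 1 })
    );
    assert_eq!(parse_idmap("1000:0").unwrap().range, 1);
    assert!(parse_idmap("nope").is_none());
    let opt = idmap_option(&[IdMap { from_id: 0, to_id: 100000, range: 65536 }]);
    assert_eq!(opt, "X-mount.idmap=b:0:100000:65536 ");
}
```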

View File

@@ -4,14 +4,13 @@ use std::sync::{Arc, Weak};
 use futures::Future;
 use lazy_static::lazy_static;
-use models::ResultExt;
 use tokio::sync::Mutex;
 use tracing::instrument;
 
 use super::filesystem::{FileSystem, MountType, ReadOnly, ReadWrite};
 use super::util::unmount;
-use crate::Error;
 use crate::util::{Invoke, Never};
+use crate::{Error, ResultExt};
 
 pub const TMP_MOUNTPOINT: &'static str = "/media/startos/tmp";

View File

@@ -48,7 +48,6 @@ pub async fn bind<P0: AsRef<Path>, P1: AsRef<Path>>(
 pub async fn unmount<P: AsRef<Path>>(mountpoint: P, lazy: bool) -> Result<(), Error> {
     tracing::debug!("Unmounting {}.", mountpoint.as_ref().display());
     let mut cmd = tokio::process::Command::new("umount");
-    cmd.arg("-R");
     if lazy {
         cmd.arg("-l");
     }

View File

@@ -280,6 +280,9 @@ pub async fn list(os: &OsPartitionInfo) -> Result<Vec<DiskInfo>, Error> {
         .try_fold(
             BTreeMap::<PathBuf, DiskIndex>::new(),
             |mut disks, dir_entry| async move {
+                if dir_entry.file_type().await?.is_dir() {
+                    return Ok(disks);
+                }
                 if let Some(disk_path) = dir_entry.path().file_name().and_then(|s| s.to_str()) {
                     let (disk_path, part_path) = if let Some(end) = PARTITION_REGEX.find(disk_path) {
                         (

View File

@@ -1,4 +1,527 @@
-pub use models::{Error, ErrorKind, OptionExt, ResultExt};
+use std::fmt::{Debug, Display};
use axum::http::StatusCode;
use axum::http::uri::InvalidUri;
use color_eyre::eyre::eyre;
use num_enum::TryFromPrimitive;
use patch_db::Revision;
use rpc_toolkit::reqwest;
use rpc_toolkit::yajrc::{
INVALID_PARAMS_ERROR, INVALID_REQUEST_ERROR, METHOD_NOT_FOUND_ERROR, PARSE_ERROR, RpcError,
};
use serde::{Deserialize, Serialize};
use tokio::task::JoinHandle;
use tokio_rustls::rustls;
use ts_rs::TS;
use crate::InvalidId;
#[derive(Debug, Clone, Copy, PartialEq, Eq, TryFromPrimitive)]
#[repr(i32)]
pub enum ErrorKind {
Unknown = 1,
Filesystem = 2,
Docker = 3,
ConfigSpecViolation = 4,
ConfigRulesViolation = 5,
NotFound = 6,
IncorrectPassword = 7,
VersionIncompatible = 8,
Network = 9,
Registry = 10,
Serialization = 11,
Deserialization = 12,
Utf8 = 13,
ParseVersion = 14,
IncorrectDisk = 15,
// Nginx = 16,
Dependency = 17,
ParseS9pk = 18,
ParseUrl = 19,
DiskNotAvailable = 20,
BlockDevice = 21,
InvalidOnionAddress = 22,
Pack = 23,
ValidateS9pk = 24,
DiskCorrupted = 25, // Remove
Tor = 26,
ConfigGen = 27,
ParseNumber = 28,
Database = 29,
InvalidId = 30,
InvalidSignature = 31,
Backup = 32,
Restore = 33,
Authorization = 34,
AutoConfigure = 35,
Action = 36,
RateLimited = 37,
InvalidRequest = 38,
MigrationFailed = 39,
Uninitialized = 40,
ParseNetAddress = 41,
ParseSshKey = 42,
SoundError = 43,
ParseTimestamp = 44,
ParseSysInfo = 45,
Wifi = 46,
Journald = 47,
DiskManagement = 48,
OpenSsl = 49,
PasswordHashGeneration = 50,
DiagnosticMode = 51,
ParseDbField = 52,
Duplicate = 53,
MultipleErrors = 54,
Incoherent = 55,
InvalidBackupTargetId = 56,
ProductKeyMismatch = 57,
LanPortConflict = 58,
Javascript = 59,
Pem = 60,
TLSInit = 61,
Ascii = 62,
MissingHeader = 63,
Grub = 64,
Systemd = 65,
OpenSsh = 66,
Zram = 67,
Lshw = 68,
CpuSettings = 69,
Firmware = 70,
Timeout = 71,
Lxc = 72,
Cancelled = 73,
Git = 74,
DBus = 75,
InstallFailed = 76,
UpdateFailed = 77,
Smtp = 78,
}
impl ErrorKind {
pub fn as_str(&self) -> &'static str {
use ErrorKind::*;
match self {
Unknown => "Unknown Error",
Filesystem => "Filesystem I/O Error",
Docker => "Docker Error",
ConfigSpecViolation => "Config Spec Violation",
ConfigRulesViolation => "Config Rules Violation",
NotFound => "Not Found",
IncorrectPassword => "Incorrect Password",
VersionIncompatible => "Version Incompatible",
Network => "Network Error",
Registry => "Registry Error",
Serialization => "Serialization Error",
Deserialization => "Deserialization Error",
Utf8 => "UTF-8 Parse Error",
ParseVersion => "Version Parsing Error",
IncorrectDisk => "Incorrect Disk",
// Nginx => "Nginx Error",
Dependency => "Dependency Error",
ParseS9pk => "S9PK Parsing Error",
ParseUrl => "URL Parsing Error",
DiskNotAvailable => "Disk Not Available",
BlockDevice => "Block Device Error",
InvalidOnionAddress => "Invalid Onion Address",
Pack => "Pack Error",
ValidateS9pk => "S9PK Validation Error",
DiskCorrupted => "Disk Corrupted", // Remove
Tor => "Tor Daemon Error",
ConfigGen => "Config Generation Error",
ParseNumber => "Number Parsing Error",
Database => "Database Error",
InvalidId => "Invalid ID",
InvalidSignature => "Invalid Signature",
Backup => "Backup Error",
Restore => "Restore Error",
Authorization => "Unauthorized",
AutoConfigure => "Auto-Configure Error",
Action => "Action Failed",
RateLimited => "Rate Limited",
InvalidRequest => "Invalid Request",
MigrationFailed => "Migration Failed",
Uninitialized => "Uninitialized",
ParseNetAddress => "Net Address Parsing Error",
ParseSshKey => "SSH Key Parsing Error",
SoundError => "Sound Interface Error",
ParseTimestamp => "Timestamp Parsing Error",
ParseSysInfo => "System Info Parsing Error",
Wifi => "WiFi Internal Error",
Journald => "Journald Error",
DiskManagement => "Disk Management Error",
OpenSsl => "OpenSSL Internal Error",
PasswordHashGeneration => "Password Hash Generation Error",
DiagnosticMode => "Server is in Diagnostic Mode",
ParseDbField => "Database Field Parse Error",
Duplicate => "Duplication Error",
MultipleErrors => "Multiple Errors",
Incoherent => "Incoherent",
InvalidBackupTargetId => "Invalid Backup Target ID",
ProductKeyMismatch => "Incompatible Product Keys",
LanPortConflict => "Incompatible LAN Port Configuration",
Javascript => "Javascript Engine Error",
Pem => "PEM Encoding Error",
TLSInit => "TLS Backend Initialization Error",
Ascii => "ASCII Parse Error",
MissingHeader => "Missing Header",
Grub => "Grub Error",
Systemd => "Systemd Error",
OpenSsh => "OpenSSH Error",
Zram => "Zram Error",
Lshw => "LSHW Error",
CpuSettings => "CPU Settings Error",
Firmware => "Firmware Error",
Timeout => "Timeout Error",
Lxc => "LXC Error",
Cancelled => "Cancelled",
Git => "Git Error",
DBus => "DBus Error",
InstallFailed => "Install Failed",
UpdateFailed => "Update Failed",
Smtp => "SMTP Error",
}
}
}
impl Display for ErrorKind {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.as_str())
}
}
pub struct Error {
pub source: color_eyre::eyre::Error,
pub debug: Option<color_eyre::eyre::Error>,
pub kind: ErrorKind,
pub revision: Option<Revision>,
pub task: Option<JoinHandle<()>>,
}
impl Display for Error {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}: {:#}", self.kind.as_str(), self.source)
}
}
impl Debug for Error {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{}: {:?}",
self.kind.as_str(),
self.debug.as_ref().unwrap_or(&self.source)
)
}
}
impl Error {
pub fn new<E: Into<color_eyre::eyre::Error> + std::fmt::Debug + 'static>(
source: E,
kind: ErrorKind,
) -> Self {
let debug = (std::any::TypeId::of::<E>()
== std::any::TypeId::of::<color_eyre::eyre::Error>())
.then(|| eyre!("{source:?}"));
Error {
source: source.into(),
debug,
kind,
revision: None,
task: None,
}
}
pub fn clone_output(&self) -> Self {
Error {
source: eyre!("{}", self.source),
debug: self.debug.as_ref().map(|e| eyre!("{e}")),
kind: self.kind,
revision: self.revision.clone(),
task: None,
}
}
pub fn with_task(mut self, task: JoinHandle<()>) -> Self {
self.task = Some(task);
self
}
pub async fn wait(mut self) -> Self {
if let Some(task) = &mut self.task {
task.await.log_err();
}
self.task.take();
self
}
}
impl axum::response::IntoResponse for Error {
fn into_response(self) -> axum::response::Response {
let mut res = axum::Json(RpcError::from(self)).into_response();
*res.status_mut() = StatusCode::INTERNAL_SERVER_ERROR;
res
}
}
impl From<std::convert::Infallible> for Error {
fn from(value: std::convert::Infallible) -> Self {
match value {}
}
}
impl From<InvalidId> for Error {
fn from(err: InvalidId) -> Self {
Error::new(err, ErrorKind::InvalidId)
}
}
impl From<std::io::Error> for Error {
fn from(e: std::io::Error) -> Self {
Error::new(e, ErrorKind::Filesystem)
}
}
impl From<std::str::Utf8Error> for Error {
fn from(e: std::str::Utf8Error) -> Self {
Error::new(e, ErrorKind::Utf8)
}
}
impl From<std::string::FromUtf8Error> for Error {
fn from(e: std::string::FromUtf8Error) -> Self {
Error::new(e, ErrorKind::Utf8)
}
}
impl From<exver::ParseError> for Error {
fn from(e: exver::ParseError) -> Self {
Error::new(e, ErrorKind::ParseVersion)
}
}
impl From<rpc_toolkit::url::ParseError> for Error {
fn from(e: rpc_toolkit::url::ParseError) -> Self {
Error::new(e, ErrorKind::ParseUrl)
}
}
impl From<std::num::ParseIntError> for Error {
fn from(e: std::num::ParseIntError) -> Self {
Error::new(e, ErrorKind::ParseNumber)
}
}
impl From<std::num::ParseFloatError> for Error {
fn from(e: std::num::ParseFloatError) -> Self {
Error::new(e, ErrorKind::ParseNumber)
}
}
impl From<patch_db::Error> for Error {
fn from(e: patch_db::Error) -> Self {
Error::new(e, ErrorKind::Database)
}
}
impl From<ed25519_dalek::SignatureError> for Error {
fn from(e: ed25519_dalek::SignatureError) -> Self {
Error::new(e, ErrorKind::InvalidSignature)
}
}
impl From<std::net::AddrParseError> for Error {
fn from(e: std::net::AddrParseError) -> Self {
Error::new(e, ErrorKind::ParseNetAddress)
}
}
impl From<ipnet::AddrParseError> for Error {
fn from(e: ipnet::AddrParseError) -> Self {
Error::new(e, ErrorKind::ParseNetAddress)
}
}
impl From<openssl::error::ErrorStack> for Error {
fn from(e: openssl::error::ErrorStack) -> Self {
Error::new(eyre!("{}", e), ErrorKind::OpenSsl)
}
}
impl From<mbrman::Error> for Error {
fn from(e: mbrman::Error) -> Self {
Error::new(e, ErrorKind::DiskManagement)
}
}
impl From<gpt::GptError> for Error {
fn from(e: gpt::GptError) -> Self {
Error::new(e, ErrorKind::DiskManagement)
}
}
impl From<gpt::mbr::MBRError> for Error {
fn from(e: gpt::mbr::MBRError) -> Self {
Error::new(e, ErrorKind::DiskManagement)
}
}
impl From<InvalidUri> for Error {
fn from(e: InvalidUri) -> Self {
Error::new(eyre!("{}", e), ErrorKind::ParseUrl)
}
}
impl From<ssh_key::Error> for Error {
fn from(e: ssh_key::Error) -> Self {
Error::new(e, ErrorKind::OpenSsh)
}
}
impl From<reqwest::Error> for Error {
fn from(e: reqwest::Error) -> Self {
let kind = match e {
_ if e.is_builder() => ErrorKind::ParseUrl,
_ if e.is_decode() => ErrorKind::Deserialization,
_ => ErrorKind::Network,
};
Error::new(e, kind)
}
}
#[cfg(feature = "arti")]
impl From<arti_client::Error> for Error {
fn from(e: arti_client::Error) -> Self {
Error::new(e, ErrorKind::Tor)
}
}
impl From<torut::control::ConnError> for Error {
fn from(e: torut::control::ConnError) -> Self {
Error::new(e, ErrorKind::Tor)
}
}
impl From<zbus::Error> for Error {
fn from(e: zbus::Error) -> Self {
Error::new(e, ErrorKind::DBus)
}
}
impl From<rustls::Error> for Error {
fn from(e: rustls::Error) -> Self {
Error::new(e, ErrorKind::OpenSsl)
}
}
impl From<lettre::error::Error> for Error {
fn from(e: lettre::error::Error) -> Self {
Error::new(e, ErrorKind::Smtp)
}
}
impl From<lettre::transport::smtp::Error> for Error {
fn from(e: lettre::transport::smtp::Error) -> Self {
Error::new(e, ErrorKind::Smtp)
}
}
impl From<lettre::address::AddressError> for Error {
fn from(e: lettre::address::AddressError) -> Self {
Error::new(e, ErrorKind::Smtp)
}
}
impl From<hyper::Error> for Error {
fn from(e: hyper::Error) -> Self {
Error::new(e, ErrorKind::Network)
}
}
impl From<patch_db::value::Error> for Error {
fn from(value: patch_db::value::Error) -> Self {
match value.kind {
patch_db::value::ErrorKind::Serialization => {
Error::new(value.source, ErrorKind::Serialization)
}
patch_db::value::ErrorKind::Deserialization => {
Error::new(value.source, ErrorKind::Deserialization)
}
}
}
}
#[derive(Clone, Deserialize, Serialize, TS)]
pub struct ErrorData {
pub details: String,
pub debug: String,
}
impl Display for ErrorData {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Display::fmt(&self.details, f)
}
}
impl Debug for ErrorData {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Display::fmt(&self.debug, f)
}
}
impl std::error::Error for ErrorData {}
impl From<Error> for ErrorData {
fn from(value: Error) -> Self {
Self {
details: value.to_string(),
debug: format!("{:?}", value),
}
}
}
impl From<&RpcError> for ErrorData {
fn from(value: &RpcError) -> Self {
Self {
details: value
.data
.as_ref()
.and_then(|d| {
d.as_object()
.and_then(|d| {
d.get("details")
.and_then(|d| d.as_str().map(|s| s.to_owned()))
})
.or_else(|| d.as_str().map(|s| s.to_owned()))
})
.unwrap_or_else(|| value.message.clone().into_owned()),
debug: value
.data
.as_ref()
.and_then(|d| {
d.as_object()
.and_then(|d| {
d.get("debug")
.and_then(|d| d.as_str().map(|s| s.to_owned()))
})
.or_else(|| d.as_str().map(|s| s.to_owned()))
})
.unwrap_or_else(|| value.message.clone().into_owned()),
}
}
}
impl From<Error> for RpcError {
fn from(e: Error) -> Self {
let mut data_object = serde_json::Map::with_capacity(3);
data_object.insert("details".to_owned(), format!("{}", e.source).into());
data_object.insert("debug".to_owned(), format!("{:?}", e.source).into());
data_object.insert(
"revision".to_owned(),
match serde_json::to_value(&e.revision) {
Ok(a) => a,
Err(e) => {
tracing::warn!("Error serializing revision for Error object: {}", e);
serde_json::Value::Null
}
},
);
RpcError {
code: e.kind as i32,
message: e.kind.as_str().into(),
data: Some(
match serde_json::to_value(&ErrorData {
details: format!("{}", e.source),
debug: format!("{:?}", e.source),
}) {
Ok(a) => a,
Err(e) => {
tracing::warn!("Error serializing revision for Error object: {}", e);
serde_json::Value::Null
}
},
),
}
}
}
impl From<RpcError> for Error {
fn from(e: RpcError) -> Self {
Error::new(
ErrorData::from(&e),
if let Ok(kind) = e.code.try_into() {
kind
} else if e.code == METHOD_NOT_FOUND_ERROR.code {
ErrorKind::NotFound
} else if e.code == PARSE_ERROR.code
|| e.code == INVALID_PARAMS_ERROR.code
|| e.code == INVALID_REQUEST_ERROR.code
{
ErrorKind::Deserialization
} else {
ErrorKind::Unknown
},
)
}
}
#[derive(Debug, Default)]
pub struct ErrorCollection(Vec<Error>);
@@ -17,15 +540,11 @@
         }
     }
 
-    pub fn into_result(mut self) -> Result<(), Error> {
-        if self.0.len() <= 1 {
-            if let Some(err) = self.0.pop() {
-                Err(err)
-            } else {
-                Ok(())
-            }
+    pub fn into_result(self) -> Result<(), Error> {
+        if self.0.is_empty() {
+            Ok(())
         } else {
-            Err(Error::new(self, ErrorKind::MultipleErrors))
+            Err(Error::new(eyre!("{}", self), ErrorKind::MultipleErrors))
         }
     }
 }
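`into_result` now treats any non-empty collection as a `MultipleErrors` error wrapping the collection's `Display` output, instead of special-casing a single accumulated error. A minimal standalone sketch of this accumulate-then-report pattern (the `handle` helper and plain `String` errors here are illustrative, not the crate's API):

```rust
// Collect errors as they occur, then report them all at once.
#[derive(Default)]
struct ErrorCollection(Vec<String>);

impl ErrorCollection {
    // Record a failure and keep going; return the value on success.
    fn handle<T, E: std::fmt::Display>(&mut self, r: Result<T, E>) -> Option<T> {
        match r {
            Ok(t) => Some(t),
            Err(e) => {
                self.0.push(e.to_string());
                None
            }
        }
    }
    // Empty collection is Ok; anything else becomes one combined error.
    fn into_result(self) -> Result<(), String> {
        if self.0.is_empty() {
            Ok(())
        } else {
            Err(self.0.join("; "))
        }
    }
}

fn main() {
    let mut errs = ErrorCollection::default();
    assert_eq!(errs.handle::<_, String>(Ok(1)), Some(1));
    errs.handle::<i32, _>(Err("disk full".to_string()));
    errs.handle::<i32, _>(Err("network down".to_string()));
    assert_eq!(errs.into_result(), Err("disk full; network down".to_string()));
    assert!(ErrorCollection::default().into_result().is_ok());
}
```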
@@ -52,13 +571,108 @@ impl std::fmt::Display for ErrorCollection {
         Ok(())
     }
 }
impl std::error::Error for ErrorCollection {}
pub trait ResultExt<T, E>
where
Self: Sized,
{
fn with_kind(self, kind: ErrorKind) -> Result<T, Error>;
fn with_ctx<F: FnOnce(&E) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error>;
fn log_err(self) -> Option<T>;
}
impl<T, E> ResultExt<T, E> for Result<T, E>
where
color_eyre::eyre::Error: From<E>,
E: std::fmt::Debug + 'static,
{
fn with_kind(self, kind: ErrorKind) -> Result<T, Error> {
self.map_err(|e| Error::new(e, kind))
}
fn with_ctx<F: FnOnce(&E) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> {
self.map_err(|e| {
let (kind, ctx) = f(&e);
let debug = (std::any::TypeId::of::<E>()
== std::any::TypeId::of::<color_eyre::eyre::Error>())
.then(|| eyre!("{ctx}: {e:?}"));
let source = color_eyre::eyre::Error::from(e);
let with_ctx = format!("{ctx}: {source}");
let source = source.wrap_err(with_ctx);
Error {
kind,
source,
debug,
revision: None,
task: None,
}
})
}
fn log_err(self) -> Option<T> {
match self {
Ok(a) => Some(a),
Err(e) => {
let e: color_eyre::eyre::Error = e.into();
tracing::error!("{e}");
tracing::debug!("{e:?}");
None
}
}
}
}
impl<T> ResultExt<T, Error> for Result<T, Error> {
fn with_kind(self, kind: ErrorKind) -> Result<T, Error> {
self.map_err(|e| Error { kind, ..e })
}
fn with_ctx<F: FnOnce(&Error) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> {
self.map_err(|e| {
let (kind, ctx) = f(&e);
let source = e.source;
let with_ctx = format!("{ctx}: {source}");
let source = source.wrap_err(with_ctx);
let debug = e.debug.map(|e| {
let with_ctx = format!("{ctx}: {e}");
e.wrap_err(with_ctx)
});
Error {
kind,
source,
debug,
..e
}
})
}
fn log_err(self) -> Option<T> {
match self {
Ok(a) => Some(a),
Err(e) => {
tracing::error!("{e}");
tracing::debug!("{e:?}");
None
}
}
}
}
pub trait OptionExt<T>
where
Self: Sized,
{
fn or_not_found(self, message: impl std::fmt::Display) -> Result<T, Error>;
}
impl<T> OptionExt<T> for Option<T> {
fn or_not_found(self, message: impl std::fmt::Display) -> Result<T, Error> {
self.ok_or_else(|| Error::new(eyre!("{}", message), ErrorKind::NotFound))
}
}
 #[macro_export]
 macro_rules! ensure_code {
     ($x:expr, $c:expr, $fmt:expr $(, $arg:expr)*) => {
         if !($x) {
-            Err::<(), _>(crate::error::Error::new(color_eyre::eyre::eyre!($fmt, $($arg, )*), $c))?;
+            return Err(Error::new(color_eyre::eyre::eyre!($fmt, $($arg, )*), $c));
         }
     };
 }
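The macro now returns from the enclosing function directly instead of propagating through `Err(...)?`. A simplified stand-in (plain `String` errors and a hypothetical `check_port` caller, not the crate's macro) showing the early-return behavior:

```rust
// Simplified ensure_code!: on a false condition, return an error
// from the enclosing function immediately.
macro_rules! ensure_code {
    ($x:expr, $c:expr, $fmt:expr $(, $arg:expr)*) => {
        if !($x) {
            return Err(format!("[{}] {}", $c, format!($fmt $(, $arg)*)));
        }
    };
}

// Hypothetical caller: the macro's early return short-circuits validation.
fn check_port(port: u16) -> Result<u16, String> {
    ensure_code!(port >= 1024, "InvalidRequest", "port {} is reserved", port);
    Ok(port)
}

fn main() {
    assert_eq!(check_port(8080), Ok(8080));
    assert_eq!(
        check_port(80),
        Err("[InvalidRequest] port 80 is reserved".to_string())
    );
}
```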

View File

@@ -2,9 +2,9 @@ use std::convert::Infallible;
 use std::path::Path;
 use std::str::FromStr;
 
+use imbl_value::InternedString;
 use serde::{Deserialize, Serialize};
 use ts_rs::TS;
-use yasi::InternedString;
 
 #[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, TS)]
 #[ts(type = "string")]
View File

@@ -1,9 +1,9 @@
 use std::path::Path;
 use std::str::FromStr;
 
+use imbl_value::InternedString;
 use serde::{Deserialize, Deserializer, Serialize};
 use ts_rs::TS;
-use yasi::InternedString;
 
 use crate::{Id, InvalidId};

View File

@@ -5,7 +5,8 @@ use std::str::FromStr;
 use serde::{Deserialize, Deserializer, Serialize};
 use ts_rs::TS;
 
-use crate::{Id, InvalidId, PackageId, VersionString};
+use crate::util::VersionString;
+use crate::{Id, InvalidId, PackageId};
 
 #[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, TS)]
 #[ts(type = "string")]

View File

@@ -1,4 +1,4 @@
-use yasi::InternedString;
+use imbl_value::InternedString;
 
 #[derive(Debug, thiserror::Error)]
 #[error("Invalid ID: {0}")]

View File

@@ -1,9 +1,9 @@
 use std::borrow::Borrow;
 use std::str::FromStr;
 
+use imbl_value::InternedString;
 use regex::Regex;
 use serde::{Deserialize, Deserializer, Serialize, Serializer};
-use yasi::InternedString;
 
 mod action;
 mod gateway;

View File

@@ -2,9 +2,9 @@ use std::borrow::Borrow;
 use std::path::Path;
 use std::str::FromStr;
+use imbl_value::InternedString;
 use serde::{Deserialize, Serialize, Serializer};
 use ts_rs::TS;
-use yasi::InternedString;
 use crate::{Id, InvalidId, SYSTEM_ID};

View File

@@ -2,9 +2,9 @@ use std::convert::Infallible;
 use std::path::Path;
 use std::str::FromStr;
+use imbl_value::InternedString;
 use serde::{Deserialize, Serialize};
 use ts_rs::TS;
-use yasi::InternedString;
 #[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, TS)]
 #[ts(type = "string")]

View File

@@ -5,7 +5,8 @@ use rpc_toolkit::clap::builder::ValueParserFactory;
 use serde::{Deserialize, Deserializer, Serialize};
 use ts_rs::TS;
-use crate::{FromStrParser, Id};
+use crate::Id;
+use crate::util::FromStrParser;
 #[derive(Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, TS)]
 #[ts(export, type = "string")]

View File

@@ -7,7 +7,6 @@ use axum::extract::ws;
 use const_format::formatcp;
 use futures::{StreamExt, TryStreamExt};
 use itertools::Itertools;
-use models::ResultExt;
 use rpc_toolkit::{Context, Empty, HandlerArgs, HandlerExt, ParentHandler, from_fn_async};
 use serde::{Deserialize, Serialize};
 use tokio::process::Command;
@@ -21,7 +20,7 @@ use crate::db::model::Database;
 use crate::db::model::public::ServerStatus;
 use crate::developer::OS_DEVELOPER_KEY_PATH;
 use crate::hostname::Hostname;
-use crate::middleware::auth::AuthContext;
+use crate::middleware::auth::local::LocalAuthContext;
 use crate::net::gateway::UpgradableListener;
 use crate::net::net_controller::{NetController, NetService};
 use crate::net::socks::DEFAULT_SOCKS_LISTEN;
@@ -37,9 +36,8 @@ use crate::ssh::SSH_DIR;
 use crate::system::{get_mem_info, sync_kiosk};
 use crate::util::io::{IOHook, open_file};
 use crate::util::lshw::lshw;
-use crate::util::net::WebSocketExt;
 use crate::util::{Invoke, cpupower};
-use crate::{Error, MAIN_DATA, PACKAGE_DATA};
+use crate::{Error, MAIN_DATA, PACKAGE_DATA, ResultExt};
 pub const SYSTEM_REBUILD_PATH: &str = "/media/startos/config/system-rebuild";
 pub const STANDBY_MODE_PATH: &str = "/media/startos/config/standby";
@@ -303,8 +301,8 @@ pub async fn init(
     if tokio::fs::metadata(&downloading).await.is_ok() {
         tokio::fs::remove_dir_all(&downloading).await?;
     }
-    let tmp_docker = Path::new(PACKAGE_DATA).join(formatcp!("tmp/{CONTAINER_TOOL}"));
-    crate::disk::mount::util::bind(&tmp_docker, CONTAINER_DATADIR, false).await?;
+    let tmp_docker = Path::new(PACKAGE_DATA).join("tmp").join(*CONTAINER_TOOL);
+    crate::disk::mount::util::bind(&tmp_docker, *CONTAINER_DATADIR, false).await?;
     init_tmp.complete();
     let server_info = db.peek().await.into_public().into_server_info();

View File

@@ -10,7 +10,6 @@ use exver::VersionRange;
 use futures::StreamExt;
 use imbl_value::{InternedString, json};
 use itertools::Itertools;
-use models::{FromStrParser, VersionString};
 use reqwest::Url;
 use reqwest::header::{CONTENT_LENGTH, HeaderMap};
 use rpc_toolkit::HandlerArgs;
@@ -29,11 +28,11 @@ use crate::registry::context::{RegistryContext, RegistryUrlParams};
 use crate::registry::package::get::GetPackageResponse;
 use crate::rpc_continuations::{Guid, RpcContinuation};
 use crate::s9pk::manifest::PackageId;
+use crate::s9pk::v2::SIG_CONTEXT;
 use crate::upload::upload;
-use crate::util::Never;
 use crate::util::io::open_file;
-use crate::util::net::WebSocketExt;
 use crate::util::tui::choose;
+use crate::util::{FromStrParser, Never, VersionString};
 pub const PKG_ARCHIVE_DIR: &str = "package-data/archive";
 pub const PKG_PUBLIC_DIR: &str = "package-data/public";
@@ -154,6 +153,8 @@ pub async fn install(
         })?
         .s9pk;
+    asset.validate(SIG_CONTEXT, asset.all_signers())?;
     let progress_tracker = FullProgressTracker::new();
     let download_progress = progress_tracker.add_phase("Downloading".into(), Some(100));
     let download = ctx
let download = ctx let download = ctx

View File

@@ -44,6 +44,7 @@ pub mod disk;
 pub mod error;
 pub mod firmware;
 pub mod hostname;
+pub mod id;
 pub mod init;
 pub mod install;
 pub mod logs;
@@ -75,7 +76,8 @@ pub mod volume;
 use std::time::SystemTime;
 use clap::Parser;
-pub use error::{Error, ErrorKind, ResultExt};
+pub use error::{Error, ErrorKind, OptionExt, ResultExt};
+pub use id::*;
 use imbl_value::Value;
 use rpc_toolkit::yajrc::RpcError;
 use rpc_toolkit::{

View File

@@ -1,18 +1,19 @@
 use std::convert::Infallible;
 use std::ops::{Deref, DerefMut};
+use std::path::Path;
 use std::process::Stdio;
 use std::str::FromStr;
 use std::time::{Duration, UNIX_EPOCH};
-use axum::extract::ws::{self, WebSocket};
+use axum::extract::ws;
+use crate::util::net::WebSocket;
 use chrono::{DateTime, Utc};
 use clap::builder::ValueParserFactory;
 use clap::{Args, FromArgMatches, Parser};
 use color_eyre::eyre::eyre;
 use futures::stream::BoxStream;
-use futures::{Future, FutureExt, Stream, StreamExt, TryStreamExt};
+use futures::{Future, Stream, StreamExt, TryStreamExt};
 use itertools::Itertools;
-use models::{FromStrParser, PackageId};
 use rpc_toolkit::yajrc::RpcError;
 use rpc_toolkit::{
     CallRemote, Context, Empty, HandlerArgs, HandlerExt, HandlerFor, ParentHandler, from_fn_async,
@@ -25,14 +26,13 @@ use tokio_stream::wrappers::LinesStream;
 use tokio_tungstenite::tungstenite::Message;
 use tracing::instrument;
+use crate::PackageId;
 use crate::context::{CliContext, RpcContext};
 use crate::error::ResultExt;
-use crate::lxc::ContainerId;
 use crate::prelude::*;
 use crate::rpc_continuations::{Guid, RpcContinuation, RpcContinuations};
-use crate::util::Invoke;
-use crate::util::net::WebSocketExt;
 use crate::util::serde::Reversible;
+use crate::util::{FromStrParser, Invoke};
@@ -100,8 +100,8 @@ async fn ws_handler(
                 return stream.normal_close("complete").await;
             }
         },
-        msg = stream.try_next() => {
-            if msg.with_kind(crate::ErrorKind::Network)?.is_none() {
+        msg = stream.recv() => {
+            if msg.transpose().with_kind(crate::ErrorKind::Network)?.is_none() {
                 return Ok(())
             }
         }
@@ -223,7 +223,7 @@ fn deserialize_log_message<'de, D: serde::de::Deserializer<'de>>(
 pub enum LogSource {
     Kernel,
     Unit(&'static str),
-    Container(ContainerId),
+    Package(PackageId),
 }
 pub const SYSTEM_UNIT: &str = "startd";
@@ -499,22 +499,10 @@ fn logs_follow<
 }
 async fn get_package_id(
-    ctx: &RpcContext,
+    _: &RpcContext,
     PackageIdParams { id }: PackageIdParams,
 ) -> Result<LogSource, Error> {
-    let container_id = ctx
-        .services
-        .get(&id)
-        .await
-        .as_ref()
-        .map(|x| x.container_id())
-        .ok_or_else(|| {
-            Error::new(
-                eyre!("No service found with id: {}", id),
-                ErrorKind::NotFound,
-            )
-        })??;
-    Ok(LogSource::Container(container_id))
+    Ok(LogSource::Package(id))
 }
 pub fn package_logs() -> ParentHandler<RpcContext, LogsParams<PackageIdParams>> {
@@ -596,16 +584,8 @@ pub async fn journalctl(
 }
 fn gen_journalctl_command(id: &LogSource) -> Command {
-    let mut cmd = match id {
-        LogSource::Container(container_id) => {
-            let mut cmd = Command::new("lxc-attach");
-            cmd.arg(format!("{}", container_id))
-                .arg("--")
-                .arg("journalctl");
-            cmd
-        }
-        _ => Command::new("journalctl"),
-    };
+    let mut cmd = Command::new("journalctl");
     cmd.kill_on_drop(true);
     cmd.arg("--output=json");
@@ -618,8 +598,11 @@ fn gen_journalctl_command(id: &LogSource) -> Command {
             cmd.arg("-u");
             cmd.arg(id);
         }
-        LogSource::Container(_container_id) => {
-            cmd.arg("-u").arg("container-runtime.service");
+        LogSource::Package(id) => {
+            cmd.arg("-u")
+                .arg("container-runtime.service")
+                .arg("-D")
+                .arg(Path::new("/media/startos/data/package-data/logs").join(id));
         }
     };
     cmd
@@ -715,16 +698,11 @@ pub async fn follow_logs<Context: AsRef<RpcContinuations>>(
     .add(
         guid.clone(),
         RpcContinuation::ws(
-            Box::new(move |socket| {
-                ws_handler(first_entry, stream, socket)
-                    .map(|x| match x {
-                        Ok(_) => (),
-                        Err(e) => {
-                            tracing::error!("Error in log stream: {}", e);
-                        }
-                    })
-                    .boxed()
-            }),
+            move |socket| async move {
+                if let Err(e) = ws_handler(first_entry, stream, socket).await {
+                    tracing::error!("Error in log stream: {}", e);
+                }
+            },
             Duration::from_secs(30),
         ),
    )

View File

@@ -5,11 +5,10 @@ use std::sync::{Arc, Weak};
 use std::time::Duration;
 use clap::builder::ValueParserFactory;
-use futures::{AsyncWriteExt, StreamExt};
+use futures::StreamExt;
-use imbl_value::{InOMap, InternedString};
+use imbl_value::InternedString;
-use models::{FromStrParser, InvalidId, PackageId};
 use rpc_toolkit::yajrc::RpcError;
-use rpc_toolkit::{GenericRpcMethod, RpcRequest, RpcResponse};
+use rpc_toolkit::{RpcRequest, RpcResponse};
 use serde::{Deserialize, Serialize};
 use tokio::io::{AsyncBufReadExt, BufReader};
 use tokio::process::Command;
@@ -17,10 +16,10 @@ use tokio::sync::Mutex;
 use tokio::time::Instant;
 use ts_rs::TS;
-use crate::context::{CliContext, RpcContext};
+use crate::context::RpcContext;
 use crate::disk::mount::filesystem::bind::Bind;
 use crate::disk::mount::filesystem::block_dev::BlockDev;
-use crate::disk::mount::filesystem::idmapped::IdMapped;
+use crate::disk::mount::filesystem::idmapped::{IdMap, IdMapped};
 use crate::disk::mount::filesystem::overlayfs::OverlayGuard;
 use crate::disk::mount::filesystem::{MountType, ReadOnly, ReadWrite};
 use crate::disk::mount::guard::{GenericMountGuard, MountGuard, TmpMountGuard};
@@ -30,7 +29,8 @@ use crate::rpc_continuations::{Guid, RpcContinuation};
 use crate::service::ServiceStats;
 use crate::util::io::open_file;
 use crate::util::rpc_client::UnixRpcClient;
-use crate::util::{Invoke, new_guid};
+use crate::util::{FromStrParser, Invoke, new_guid};
+use crate::{InvalidId, PackageId};
 const LXC_CONTAINER_DIR: &str = "/var/lib/lxc";
 const RPC_DIR: &str = "media/startos/rpc"; // must not be absolute path
@@ -185,9 +185,11 @@ impl LxcContainer {
         TmpMountGuard::mount(
             &IdMapped::new(
                 BlockDev::new("/usr/lib/startos/container-runtime/rootfs.squashfs"),
-                0,
-                100000,
-                65536,
+                vec![IdMap {
+                    from_id: 0,
+                    to_id: 100000,
+                    range: 65536,
+                }],
             ),
             ReadOnly,
         )
@@ -366,6 +368,7 @@ impl LxcContainer {
             }
             tokio::time::sleep(Duration::from_millis(100)).await;
         }
+        tracing::info!("Connected to socket in {:?}", started.elapsed());
         Ok(UnixRpcClient::new(sock_path))
     }
 }
@@ -424,7 +427,7 @@ pub async fn connect(ctx: &RpcContext, container: &LxcContainer) -> Result<Guid,
     |mut ws| async move {
         if let Err(e) = async {
             loop {
-                match ws.recv().await {
                     None => break,
                     Some(Ok(Message::Text(txt))) => {
                         let mut id = None;

View File

@@ -1,7 +0,0 @@
-fn main() {
-    #[cfg(feature = "backtrace-on-stack-overflow")]
-    unsafe {
-        backtrace_on_stack_overflow::enable()
-    };
-    startos::bins::startbox()
-}

View File

@@ -0,0 +1,8 @@
+use startos::bins::MultiExecutable;
+
+fn main() {
+    MultiExecutable::default()
+        .enable_start_registry()
+        .enable_start_registryd()
+        .execute()
+}

View File

@@ -0,0 +1,19 @@
+use startos::bins::MultiExecutable;
+use startos::s9pk::v2::pack::PREFER_DOCKER;
+
+fn main() {
+    if !std::env::var("STARTOS_USE_PODMAN").map_or(false, |v| {
+        let v = v.trim();
+        if ["1", "true", "y", "yes"].into_iter().any(|x| v == x) {
+            true
+        } else if ["0", "false", "n", "no"].into_iter().any(|x| v == x) {
+            false
+        } else {
+            tracing::warn!("Unknown value for STARTOS_USE_PODMAN: {v}");
+            false
+        }
+    }) {
+        PREFER_DOCKER.set(true).ok();
+    }
+    MultiExecutable::default().enable_start_cli().execute()
+}

View File

@@ -0,0 +1,7 @@
+use startos::bins::MultiExecutable;
+
+fn main() {
+    MultiExecutable::default()
+        .enable_start_container()
+        .execute()
+}

View File

@@ -0,0 +1,29 @@
+use startos::bins::MultiExecutable;
+
+fn main() {
+    startos::net::static_server::UI_CELL
+        .set(include_dir::include_dir!(
+            "$CARGO_MANIFEST_DIR/../../web/dist/static/ui"
+        ))
+        .ok();
+    startos::net::static_server::SETUP_WIZARD_CELL
+        .set(include_dir::include_dir!(
+            "$CARGO_MANIFEST_DIR/../../web/dist/static/setup-wizard"
+        ))
+        .ok();
+    startos::net::static_server::INSTALL_WIZARD_CELL
+        .set(include_dir::include_dir!(
+            "$CARGO_MANIFEST_DIR/../../web/dist/static/install-wizard"
+        ))
+        .ok();
+    startos::db::model::public::DB_UI_SEED_CELL
+        .set(include_str!(concat!(
+            env!("CARGO_MANIFEST_DIR"),
+            "/../../web/patchdb-ui-seed.json"
+        )))
+        .ok();
+    MultiExecutable::default()
+        .enable_startd()
+        .enable_start_cli()
+        .execute()
+}

View File

@@ -0,0 +1,13 @@
+use startos::bins::MultiExecutable;
+
+fn main() {
+    startos::tunnel::context::TUNNEL_UI_CELL
+        .set(include_dir::include_dir!(
+            "$CARGO_MANIFEST_DIR/../../web/dist/static/start-tunnel"
+        ))
+        .ok();
+    MultiExecutable::default()
+        .enable_start_tunnel()
+        .enable_start_tunneld()
+        .execute()
+}

View File

@@ -0,0 +1,101 @@
+use base64::Engine;
+use basic_cookies::Cookie;
+use http::HeaderValue;
+use http::header::COOKIE;
+use rand::random;
+use rpc_toolkit::yajrc::{RpcError, RpcResponse};
+use rpc_toolkit::{Context, Empty, Middleware};
+use tokio::io::AsyncWriteExt;
+use tokio::process::Command;
+
+use crate::context::RpcContext;
+use crate::prelude::*;
+use crate::util::Invoke;
+use crate::util::io::{create_file_mod, read_file_to_string};
+use crate::util::serde::BASE64;
+
+pub trait LocalAuthContext: Context {
+    const LOCAL_AUTH_COOKIE_PATH: &str;
+    const LOCAL_AUTH_COOKIE_OWNERSHIP: &str;
+    fn init_auth_cookie() -> impl Future<Output = Result<(), Error>> + Send {
+        async {
+            let mut file = create_file_mod(Self::LOCAL_AUTH_COOKIE_PATH, 0o640).await?;
+            file.write_all(BASE64.encode(random::<[u8; 32]>()).as_bytes())
+                .await?;
+            file.sync_all().await?;
+            drop(file);
+            Command::new("chown")
+                .arg(Self::LOCAL_AUTH_COOKIE_OWNERSHIP)
+                .arg(Self::LOCAL_AUTH_COOKIE_PATH)
+                .invoke(crate::ErrorKind::Filesystem)
+                .await?;
+            Ok(())
+        }
+    }
+}
+
+impl LocalAuthContext for RpcContext {
+    const LOCAL_AUTH_COOKIE_PATH: &str = "/run/startos/rpc.authcookie";
+    const LOCAL_AUTH_COOKIE_OWNERSHIP: &str = "root:startos";
+}
+
+fn unauthorized() -> Error {
+    Error::new(eyre!("UNAUTHORIZED"), crate::ErrorKind::Authorization)
+}
+
+async fn check_from_header<C: LocalAuthContext>(header: Option<&HeaderValue>) -> Result<(), Error> {
+    if let Some(cookie_header) = header {
+        let cookies = Cookie::parse(
+            cookie_header
+                .to_str()
+                .with_kind(crate::ErrorKind::Authorization)?,
+        )
+        .with_kind(crate::ErrorKind::Authorization)?;
+        if let Some(cookie) = cookies.iter().find(|c| c.get_name() == "local") {
+            return check_cookie::<C>(cookie).await;
+        }
+    }
+    Err(unauthorized())
+}
+
+async fn check_cookie<C: LocalAuthContext>(local: &Cookie<'_>) -> Result<(), Error> {
+    if let Ok(token) = read_file_to_string(C::LOCAL_AUTH_COOKIE_PATH).await {
+        if local.get_value() == &*token {
+            return Ok(());
+        }
+    }
+    Err(unauthorized())
+}
+
+#[derive(Clone)]
+pub struct LocalAuth {
+    cookie: Option<HeaderValue>,
+}
+
+impl LocalAuth {
+    pub fn new() -> Self {
+        Self { cookie: None }
+    }
+}
+
+impl<C: LocalAuthContext> Middleware<C> for LocalAuth {
+    type Metadata = Empty;
+    async fn process_http_request(
+        &mut self,
+        _: &C,
+        request: &mut axum::extract::Request,
+    ) -> Result<(), axum::response::Response> {
+        self.cookie = request.headers().get(COOKIE).cloned();
+        Ok(())
+    }
+    async fn process_rpc_request(
+        &mut self,
+        _: &C,
+        _: Self::Metadata,
+        _: &mut rpc_toolkit::RpcRequest,
+    ) -> Result<(), rpc_toolkit::RpcResponse> {
+        check_from_header::<C>(self.cookie.as_ref())
+            .await
+            .map_err(|e| RpcResponse::from(RpcError::from(e)))
+    }
+}

Some files were not shown because too many files have changed in this diff.