Compare commits


73 Commits

Author SHA1 Message Date
Aiden McClelland
f41710c892 dynamic subnet in port forward 2025-12-18 20:19:08 -07:00
Aiden McClelland
df3f79f282 fix ws timeouts 2025-12-18 14:54:19 -07:00
Aiden McClelland
f8df692865 Merge pull request #3079 from Start9Labs/hotfix/alpha.16
hotfixes for alpha.16
2025-12-18 11:32:47 -07:00
Aiden McClelland
0c6d3b188d Merge branch 'hotfix/alpha.16' of github.com:Start9Labs/start-os into hotfix/alpha.16 2025-12-18 11:31:51 -07:00
Aiden McClelland
e7a38863ab fix registry auth 2025-12-18 11:31:30 -07:00
Alex Inkin
720e0fcdab fix: keep uptime width constant and service table DOM cached (#3078)
* fix: keep uptime width constant and service table DOM cached

* show error status and fix columns spacing

* revert const

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-12-18 07:51:34 -07:00
Aiden McClelland
bf8ff84522 inconsequential ssl changes 2025-12-18 05:56:51 -07:00
Aiden McClelland
5a9510238e add map & eq to getServiceInterface 2025-12-18 04:30:08 -07:00
Aiden McClelland
7b3c74179b add local auth to registry 2025-12-18 04:29:34 -07:00
Aiden McClelland
cd70fa4c32 hotfixes for alpha.16 2025-12-18 04:22:56 -07:00
Aiden McClelland
83133ced6a consolidate crates 2025-12-17 21:15:24 -07:00
Aiden McClelland
6c5179a179 handle flavor atom version range 2025-12-17 14:18:43 -07:00
Aiden McClelland
e33ab39b85 hotfix 2025-12-17 12:17:22 -07:00
Aiden McClelland
9567bcec1b randomize default start-tunnel subnet 2025-12-16 17:34:23 -07:00
Aiden McClelland
550b16dc0b fix build for cargo deps 2025-12-16 17:33:55 -07:00
Matt Hill
5d8331b7f7 Feature/tor logs (#3077)
* add tor logs, rework services page, other small things

* feat: sortable service table and mobile view

---------

Co-authored-by: waterplea <alexander@inkin.ru>
2025-12-16 12:47:43 -07:00
Aiden McClelland
e35b643e51 use arm runner for riscv 2025-12-15 16:19:06 -07:00
Aiden McClelland
bc6a92677b readd riscv target 2025-12-15 16:18:44 -07:00
Aiden McClelland
f52072e6ec sdk beta.45 2025-12-15 15:23:05 -07:00
Remco Ros
9c43c43a46 fix: shutdown order (#3073)
* fix: race condition in Daemon.stop()

* fix: do not stop Daemon on context leave

* fix: remove duplicate Daemons.term calls

* feat: honor dependency order when terminating Daemons

* fixes, and remove started

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
2025-12-15 15:21:23 -07:00
Aiden McClelland
0430e0f930 alpha.16 (#3068)
* add support for idmapped mounts to start-sdk

* misc fixes

* misc fixes

* add default to textarea

* fix iptables masquerade rule

* fix textarea types

* more fixes

* better logging for rsync

* fix tty size

* fix wg conf generation for android

* disable file mounts on dependencies

* mostly there, some styling issues (#3069)

* mostly there, some styling issues

* fix: address comments (#3070)

* fix: address comments

* fix: fix

* show SSL for any address with secure protocol and ssl added

* better sorting and messaging

---------

Co-authored-by: Alex Inkin <alexander@inkin.ru>

* fixes for nextcloud

* allow sidebar navigation during service state transitions

* wip: x-forwarded headers

* implement x-forwarded-for proxy

* lowercase domain names and fix warning popover bug

* fix http2 websockets

* fix websocket retry behavior

* add arch filters to s9pk pack

* use docker for start-cli install

* add version range to package signer on registry

* fix rcs < 0

* fix user information parsing

* refactor service interface getters

* disable idmaps

* build fixes

* update docker login action

* streamline build

* add start-cli workflow

* rename

* riscv64gc

* fix ui packing

* no default features on cli

* make cli depend on GIT_HASH

* more build fixes

* more build fixes

* interpolate arch within dockerfile

* fix tests

* add launch ui to service page plus other small improvements (#3075)

* add launch ui to service page plus other small improvements

* revert translation disable

* add spinner to service list if service is healthy and loading

* chore: some visual tune up

* chore: update Taiga UI

---------

Co-authored-by: waterplea <alexander@inkin.ru>

* fix backups

* feat: use arm hosted runners and don't fail when apt package does not exist (#3076)

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
Co-authored-by: Shadowy Super Coder <musashidisciple@proton.me>
Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
Co-authored-by: Alex Inkin <alexander@inkin.ru>
Co-authored-by: Remco Ros <remcoros@live.nl>
2025-12-15 13:30:50 -07:00
Mariusz Kogen
b945243d1a refactor(tor-check): improve proxy support, error handling ... (#3072)
refactor(tor-check): improve proxy support, error handling, and output formatting
2025-12-15 18:14:46 +01:00
Aiden McClelland
d8484a8b26 add docker build for start-registry (#3067)
* add docker build for start-registry

* login to docker

* add workflow dependency

* fix path

* fix add

* fix gh actions permissions

* use apt-get
2025-12-05 18:12:09 -07:00
Matt Hill
3c27499795 Refactor/status info (#3066)
* refactor status info

* wip fe

* frontend changes and version bump

* fix tests and motd

* add registry workflow

* better starttunnel instructions

* placeholders for starttunnel tables

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
2025-12-02 23:31:02 +00:00
Remco Ros
7c772e873d fix: pass --allow-discards to luksOpen for trim support (#3064) 2025-12-02 22:00:10 +00:00
Matt Hill
db2fab245e better language and show wg config on device save (#3065)
* better language and show wg config on device save

* chore: fix

---------

Co-authored-by: waterplea <alexander@inkin.ru>
2025-12-02 21:42:14 +00:00
Matt Hill
a9c9917f1a better ST instructions 2025-12-01 18:03:07 -07:00
Matt Hill
23e2e9e9cc Update START-TUNNEL.md 2025-11-30 16:32:40 -07:00
Aiden McClelland
2369e92460 Update download link for StartTunnel installation 2025-11-28 14:28:54 -07:00
Aiden McClelland
a53b15f2a3 improve StartTunnel validation and GC (#3062)
* improve StartTunnel validation and GC

* update sdk formatting
2025-11-28 13:14:52 -07:00
Aiden McClelland
72eb8b1eb6 Update START-TUNNEL.md 2025-11-27 08:57:38 -07:00
Aiden McClelland
4db54f3b83 Update START-TUNNEL.md 2025-11-27 08:56:05 -07:00
Aiden McClelland
24eb27f005 minor bugfixes for alpha.14 (#3058)
* overwrite AllowedIPs in wg config
mute UnknownCA errors

* fix upgrade issues

* allow start9 user to access journal

* alpha.15

* sort actions lexicographically and show desc in marketplace details

* add registry package download cli command

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-11-26 16:23:08 -07:00
Matt Hill
009d76ea35 More FE fixes (#3056)
* tell user to restart server after kiosk change

* remove unused import

* dont show tor address on server setup

* chore: address comments

* revert mock

* chore: remove uptime block on mobile

* use the near-future tense (futur proche)

* chore: comments

* don't show loading on authorities tab

* chore: fix mobile unions

---------

Co-authored-by: waterplea <alexander@inkin.ru>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2025-11-25 23:43:19 +00:00
Aiden McClelland
6e8a425eb1 overwrite AllowedIPs in wg config (#3055)
mute UnknownCA errors
2025-11-21 11:30:21 -07:00
Aiden McClelland
66188d791b fix start-tunnel artifact upload 2025-11-20 10:53:23 -07:00
Aiden McClelland
015ff02d71 fix build 2025-11-20 01:05:50 -07:00
Aiden McClelland
10bfaf5415 fix start-tunnel build 2025-11-20 00:30:33 -07:00
Aiden McClelland
e3e0b85e0c Bugfix/alpha.13 (#3053)
* bugfixes for alpha.13

* minor fixes

* version bump

* start-tunnel workflow

* sdk beta 44

* defaultFilter

* fix reset-password on tunnel auth

* explicitly rebuild types

* fix typo

* ubuntu-latest runner

* add cleanup steps

* fix env on attach
2025-11-19 22:48:49 -07:00
Matt Hill
ad0632892e Various (#3051)
* tell user to restart server after kiosk change

* remove unused import

* dont show tor address on server setup

* chore: address comments

* revert mock

* chore: remove uptime block on mobile

* use the near-future tense (futur proche)

---------

Co-authored-by: waterplea <alexander@inkin.ru>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2025-11-19 10:35:07 -07:00
Aiden McClelland
f26791ba39 fix raspi fsck 2025-11-17 12:17:58 -07:00
Aiden McClelland
2fbaaebf44 Bugfixes for alpha.12 (#3049)
* squashfs-wip

* sdk fixes

* misc fixes

* bump sdk

* Include StartTunnel installation command

Added installation instructions for StartTunnel.

* CA instead of leaf for StartTunnel (#3046)

* updated docs for CA instead of cert

* generate ca instead of self-signed in start-tunnel

* Fix formatting in START-TUNNEL.md installation instructions

* Fix formatting in START-TUNNEL.md

* fix infinite loop

* add success message to install

* hide loopback and bridge gateways

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>

* prevent gateways from getting stuck empty

* fix set-password

* misc networking fixes

* build and efi fixes

* efi fixes

* alpha.13

* remove cross

* fix tests

* provide path to upgrade

* fix networkmanager issues

* remove squashfs before creating

---------

Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
2025-11-15 22:33:03 -07:00
StuPleb
edb916338c minor typos and grammar (#3047)
* minor typos and grammar

* added missing word - compute
2025-11-14 12:48:41 -07:00
Aiden McClelland
f7e947d37d Fix installation command for StartTunnel (#3048) 2025-11-14 12:42:05 -07:00
Aiden McClelland
a9e3d1ed75 Revise StartTunnel installation and update commands
Updated installation and update instructions for StartTunnel.
2025-11-14 12:39:53 -07:00
Matt Hill
ce97827c42 CA instead of leaf for StartTunnel (#3046)
* updated docs for CA instead of cert

* generate ca instead of self-signed in start-tunnel

* Fix formatting in START-TUNNEL.md installation instructions

* Fix formatting in START-TUNNEL.md

* fix infinite loop

* add success message to install

* hide loopback and bridge gateways

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2025-11-10 14:15:58 -07:00
Aiden McClelland
3efec07338 Include StartTunnel installation command
Added installation instructions for StartTunnel.
2025-11-07 20:00:31 +00:00
Aiden McClelland
68f401bfa3 Feature/start tunnel (#3037)
* fix live-build resolv.conf

* improved debuggability

* wip: start-tunnel

* fixes for trixie and tor

* non-free-firmware on trixie

* wip

* web server WIP

* wip: tls refactor

* FE patchdb, mocks, and most endpoints

* fix editing records and patch mocks

* refactor complete

* finish api

* build and formatter update

* minor change to viewing addresses and fix build

* fixes

* more providers

* endpoint for getting config

* fix tests

* api fixes

* wip: separate port forward controller into parts

* simplify iptables rules

* bump sdk

* misc fixes

* predict next subnet and ip, use wan ips, and form validation

* refactor: break big components apart and address todos (#3043)

* refactor: break big components apart and address todos

* starttunnel readme, fix pf mocks, fix adding tor domain in startos

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* better tui

* tui tweaks

* fix: address comments

* better regex for subnet

* fixes

* better validation

* handle rpc errors

* build fixes

* fix: address comments (#3044)

* fix: address comments

* fix unread notification mocks

* fix row click for notification

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* fix raspi build

* fix build

* fix build

* fix build

* fix build

* try to fix build

* fix tests

* fix tests

* fix rsync tests

* delete useless effectful test

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
Co-authored-by: Alex Inkin <alexander@inkin.ru>
2025-11-07 10:12:05 +00:00
Matt Hill
1ea525feaa make textarea rows configurable (#3042)
* make textarea rows configurable

* add comments

* better defaults
2025-10-31 11:46:49 -06:00
Alex Inkin
57c4a7527e fix: make CPU meter not go to 11 (#3038) 2025-10-30 16:13:39 -06:00
Aiden McClelland
5aa9c045e1 fix live-build resolv.conf (#3035)
* fix live-build resolv.conf

* improved debuggability
2025-09-24 22:44:25 -06:00
Matt Hill
6f1900f3bb limit adding gateway to StartTunnel, better copy around Tor SSL (#3033)
* limit adding gateway to StartTunnel, better copy around Tor SSL

* properly differentiate ssl

* exclude disconnected gateways

* better error handling

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
2025-09-24 13:22:26 -06:00
Aiden McClelland
bc62de795e bugfixes for alpha.10 (#3032)
* bugfixes for alpha.10

* bump raspi kernel

* rpi kernel bump

* alpha.11
2025-09-23 22:42:17 +00:00
Alex Inkin
c62ca4b183 fix: make long dropdown options wrap (#3031) 2025-09-21 06:06:33 -06:00
Alex Inkin
876e5bc683 fix: fix overflowing interface table (#3027) 2025-09-20 07:10:58 -06:00
Alex Inkin
b99f3b73cd fix: make logs page take up all space (#3030) 2025-09-20 07:10:28 -06:00
Matt Hill
7eecf29449 fix dep error display, show starting if any health check starting, show disabled health check message, remove loader from service list, animated dots, better color (#3025)
* refactor addresses to not need gateways array

* fix dep error display, show starting if any health check starting, show disabled health check message, remove loader from service list, animated dots, better color

* fix: fix action results textfields

---------

Co-authored-by: waterplea <alexander@inkin.ru>
2025-09-17 10:32:20 -06:00
Mariusz Kogen
1d331d7810 Fix file permissions for developer key and auth cookie (#3024)
* fix permissions

* include read for group
2025-09-16 09:09:33 -06:00
Aiden McClelland
68414678d8 sdk updates; beta.39 (#3022)
* sdk updates; beta.39

* beta.40
2025-09-11 15:47:48 -06:00
Aiden McClelland
2f6b9dac26 Bugfix/dns recursion (#3023)
* fix dns recursion and localhost

* additional fix
2025-09-11 15:47:38 -06:00
Aiden McClelland
d1812d875b fix dns recursion and localhost (#3021) 2025-09-11 12:35:12 -06:00
Aiden McClelland
723dea100f add more gateway info to hostnameInfo (#3019) 2025-09-10 12:16:35 -06:00
Matt Hill
c4419ed31f show correct gateway name when adding public domain 2025-09-10 09:57:39 -06:00
Matt Hill
754ab86e51 only show http for tor if protocol is http 2025-09-10 09:36:03 -06:00
Mariusz Kogen
04dab532cd Motd Redesign - Visual and Structural Upgrade (#3018)
New 040 motd
2025-09-10 06:36:27 +00:00
Matt Hill
add01ebc68 Gateways, domains, and new service interface (#3001)
* add support for inbound proxies

* backend changes

* fix file type

* proxy -> tunnel, implement backend apis

* wip start-tunneld

* add domains and gateways, remove routers, fix docs links

* dont show hidden actions

* show and test dns

* edit instead of change acme and change gateway

* refactor: domains page

* refactor: gateways page

* domains and acme refactor

* certificate authorities

* refactor public/private gateways

* fix fe types

* domains mostly finished

* refactor: add file control to form service

* add ip util to sdk

* domains api + migration

* start service interface page, WIP

* different options for clearnet domains

* refactor: styles for interfaces page

* minor

* better placeholder for no addresses

* start sorting addresses

* best address logic

* comments

* fix unnecessary export

* MVP of service interface page

* domains preferred

* fix: address comments

* only translations left

* wip: start-tunnel & fix build

* forms for adding domain, rework things based on new ideas

* fix: dns testing

* public domain, max width, descriptions for dns

* nix StartOS domains, implement public and private domains at interface scope

* restart tor instead of reset

* better icon for restart tor

* dns

* fix sort functions for public and private domains

* with todos

* update types

* clean up tech debt, bump dependencies

* revert to ts-rs v9

* fix all types

* fix dns form

* add missing translations

* it builds

* fix: comments (#3009)

* fix: comments

* undo default

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* fix: refactor legacy components (#3010)

* fix: comments

* fix: refactor legacy components

* remove default again

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* more translations

* wip

* fix deadlock

* could work

* simple renaming

* placeholder for empty service interfaces table

* honor hidden form values

* remove logs

* reason instead of description

* fix dns

* misc fixes

* implement toggling gateways for service interface

* fix showing dns records

* move status column in service list

* remove unnecessary truthy check

* refactor: refactor forms components and remove legacy Taiga UI package (#3012)

* handle wh file uploads

* wip: debugging tor

* socks5 proxy working

* refactor: fix multiple comments (#3013)

* refactor: fix multiple comments

* styling changes, add documentation to sidebar

* translations for dns page

* refactor: subtle colors

* rearrange service page

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* fix file_stream and remove non-terminating test

* clean up logs

* support for sccache

* fix gha sccache

* more marketplace translations

* install wizard clarity

* stub hostnameInfo in migration

* fix address info after setup, fix styling on SI page, new 040 release notes

* remove tor logs from os

* misc fixes

* reset tor still not functioning...

* update ts

* minor styling and wording

* chore: some fixes (#3015)

* fix gateway renames

* different handling for public domains

* styling fixes

* whole navbar should not be clickable on service show page

* timeout getState request

* remove links from changelog

* misc fixes from pairing

* use custom name for gateway in more places

* fix dns parsing

* closes #3003

* closes #2999

* chore: some fixes (#3017)

* small copy change

* revert hardcoded error for testing

* dont require port forward if gateway is public

* use old wan ip when not available

* fix .const hanging on undefined

* fix test

* fix doc test

* fix renames

* update deps

* allow specifying dependency metadata directly

* temporarily make dependencies not clickable in marketplace listings

* fix socks bind

* fix test

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: waterplea <alexander@inkin.ru>
2025-09-10 03:43:51 +00:00
Mariusz Kogen
1cc9a1a30b build(cli): harden build-cli.sh (zig check, env defaults, GIT_HASH) (#3016) 2025-09-09 14:18:52 -06:00
Dominion5254
92a1de7500 remove entire service package directory on hard uninstall (#3007)
* remove entire service package directory on hard uninstall

* fix package path
2025-08-12 15:46:01 -06:00
Alex Inkin
a6fedcff80 fix: extract correct manifest in updating state (#3004) 2025-08-01 22:51:38 -06:00
Alex Inkin
55eb999305 fix: update notifications design (#3000) 2025-07-29 22:17:59 -06:00
Aiden McClelland
377b7b12ce update/alpha.9 (#2988)
* import marketplace preview for sideload

* fix: improve state service (#2977)

* fix: fix sideload DI

* fix: update Angular

* fix: cleanup

* fix: fix version selection

* Bump node version to fix build for Angular

* misc fixes
- update node to v22
- fix chroot-and-upgrade access to prune-images
- don't self-migrate legacy packages
- #2985
- move dataVersion to volume folder
- remove "instructions.md" from s9pk
- add "docsUrl" to manifest

* version bump

* include flavor when clicking view listing from updates tab

* closes #2980

* fix: fix select button

* bring back ssh keys

* fix: drop 'portal' from all routes

* fix: implement longtap action to select table rows

* fix description for ssh page

* replace instructions with docsLink and refactor marketplace preview

* delete unused translations

* fix patchdb diffing algorithm

* continue refactor of marketplace lib show components

* Booting StartOS instead of Setting up your server on init

* misc fixes
- closes #2990
- closes #2987

* fix build

* docsUrl and clickable service headers

* don't cleanup after update until new service install succeeds

* update types

* misc fixes

* beta.35

* sdkversion, githash for sideload, correct logs for init, startos pubkey display

* bring back reboot button on install

* misc fixes

* beta.36

* better handling of setup and init for websocket errors

* reopen init and setup logs even on graceful closure

* better logging, misc fixes

* fix build

* dont let package stats hang

* dont show docsurl in marketplace if no docsurl

* re-add needs-config

* show error if init fails, shorten hover state on header icons

* fix operator precedence

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
Co-authored-by: Alex Inkin <alexander@inkin.ru>
Co-authored-by: Mariusz Kogen <k0gen@pm.me>
2025-07-18 18:31:12 +00:00
gStart9
ba2906a42e Fallback DNS/NTP server privacy enhancements (#2992) 2025-07-16 21:31:52 +00:00
gStart9
ee27f14be0 Update/kiosk mode firefox settings (#2973)
* Privacy settings for firefox's kiosk mode

* Re-add extensions.update.enabled, false

* Don't enable letterboxing by default
2025-07-15 20:17:52 +00:00
809 changed files with 44359 additions and 32582 deletions

.github/workflows/start-cli.yaml (new file, 118 lines)

@@ -0,0 +1,118 @@
name: start-cli
on:
workflow_call:
workflow_dispatch:
inputs:
environment:
type: choice
description: Environment
options:
- NONE
- dev
- unstable
- dev-unstable
runner:
type: choice
description: Runner
options:
- standard
- fast
arch:
type: choice
description: Architecture
options:
- ALL
- x86_64
- x86_64-apple
- aarch64
- aarch64-apple
- riscv64
push:
branches:
- master
- next/*
pull_request:
branches:
- master
- next/*
env:
NODEJS_VERSION: "24.11.0"
ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
jobs:
compile:
name: Build Debian Package
strategy:
fail-fast: true
matrix:
triple: >-
${{
fromJson('{
"x86_64": ["x86_64-unknown-linux-musl"],
"x86_64-apple": ["x86_64-apple-darwin"],
"aarch64": ["aarch64-unknown-linux-musl"],
"x86_64-apple": ["aarch64-apple-darwin"],
"riscv64": ["riscv64gc-unknown-linux-musl"],
"ALL": ["x86_64-unknown-linux-musl", "x86_64-apple-darwin", "aarch64-unknown-linux-musl", "aarch64-apple-darwin", "riscv64gc-unknown-linux-musl"]
}')[github.event.inputs.platform || 'ALL']
}}
runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
steps:
- name: Cleaning up unnecessary files
run: |
sudo apt-get remove --purge -y mono-* \
ghc* cabal-install* \
dotnet* \
php* \
ruby* \
mysql-* \
postgresql-* \
azure-cli \
powershell \
google-cloud-sdk \
msodbcsql* mssql-tools* \
imagemagick* \
libgl1-mesa-dri \
google-chrome-stable \
firefox
sudo apt-get autoremove -y
sudo apt-get clean
- run: |
sudo mount -t tmpfs tmpfs .
if: ${{ github.event.inputs.runner == 'fast' }}
- uses: actions/checkout@v4
with:
submodules: recursive
- uses: actions/setup-node@v4
with:
node-version: ${{ env.NODEJS_VERSION }}
- name: Set up docker QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Configure sccache
uses: actions/github-script@v7
with:
script: |
core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
- name: Make
run: TARGET=${{ matrix.triple }} make cli
env:
PLATFORM: ${{ matrix.arch }}
SCCACHE_GHA_ENABLED: on
SCCACHE_GHA_VERSION: 0
- uses: actions/upload-artifact@v4
with:
name: start-cli_${{ matrix.triple }}
path: core/target/${{ matrix.triple }}/release/start-cli

.github/workflows/start-registry.yaml (new file, 203 lines)

@@ -0,0 +1,203 @@
name: Start-Registry
on:
workflow_call:
workflow_dispatch:
inputs:
environment:
type: choice
description: Environment
options:
- NONE
- dev
- unstable
- dev-unstable
runner:
type: choice
description: Runner
options:
- standard
- fast
arch:
type: choice
description: Architecture
options:
- ALL
- x86_64
- aarch64
- riscv64
push:
branches:
- master
- next/*
pull_request:
branches:
- master
- next/*
env:
NODEJS_VERSION: "24.11.0"
ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
jobs:
compile:
name: Build Debian Package
strategy:
fail-fast: true
matrix:
arch: >-
${{
fromJson('{
"x86_64": ["x86_64"],
"aarch64": ["aarch64"],
"riscv64": ["riscv64"],
"ALL": ["x86_64", "aarch64", "riscv64"]
}')[github.event.inputs.platform || 'ALL']
}}
runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
steps:
- name: Cleaning up unnecessary files
run: |
sudo apt-get remove --purge -y mono-* \
ghc* cabal-install* \
dotnet* \
php* \
ruby* \
mysql-* \
postgresql-* \
azure-cli \
powershell \
google-cloud-sdk \
msodbcsql* mssql-tools* \
imagemagick* \
libgl1-mesa-dri \
google-chrome-stable \
firefox
sudo apt-get autoremove -y
sudo apt-get clean
- run: |
sudo mount -t tmpfs tmpfs .
if: ${{ github.event.inputs.runner == 'fast' }}
- uses: actions/checkout@v4
with:
submodules: recursive
- uses: actions/setup-node@v4
with:
node-version: ${{ env.NODEJS_VERSION }}
- name: Set up docker QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Configure sccache
uses: actions/github-script@v7
with:
script: |
core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
- name: Make
run: make registry-deb
env:
PLATFORM: ${{ matrix.arch }}
SCCACHE_GHA_ENABLED: on
SCCACHE_GHA_VERSION: 0
- uses: actions/upload-artifact@v4
with:
name: start-registry_${{ matrix.arch }}.deb
path: results/start-registry-*_${{ matrix.arch }}.deb
create-image:
name: Create Docker Image
needs: [compile]
permissions:
contents: read
packages: write
runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
steps:
- name: Cleaning up unnecessary files
run: |
sudo apt-get remove --purge -y google-chrome-stable firefox mono-devel
sudo apt-get autoremove -y
sudo apt-get clean
- run: |
sudo mount -t tmpfs tmpfs .
if: ${{ github.event.inputs.runner == 'fast' }}
- name: Set up docker QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: "Login to GitHub Container Registry"
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{github.actor}}
password: ${{secrets.GITHUB_TOKEN}}
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: ghcr.io/Start9Labs/startos-registry
tags: |
type=raw,value=${{ github.ref_name }}
- name: Download debian package
uses: actions/download-artifact@v4
with:
pattern: start-registry_*.deb
- name: Map matrix.arch to docker platform
run: |
platforms=""
for deb in *.deb; do
filename=$(basename "$deb" .deb)
arch="${filename#*_}"
case "$arch" in
x86_64)
platform="linux/amd64"
;;
aarch64)
platform="linux/arm64"
;;
riscv64)
platform="linux/riscv64"
;;
*)
echo "Unknown architecture: $arch" >&2
exit 1
;;
esac
if [ -z "$platforms" ]; then
platforms="$platform"
else
platforms="$platforms,$platform"
fi
done
echo "DOCKER_PLATFORM=$platforms" >> "$GITHUB_ENV"
- run: |
cat | docker buildx build --platform "$DOCKER_PLATFORM" --push -t ${{ steps.meta.outputs.tags }} -f - . << 'EOF'
FROM debian:trixie
ADD *.deb .
RUN apt-get install -y ./*_$(uname -m).deb && rm *.deb
VOLUME /var/lib/startos
ENV RUST_LOG=startos=debug
ENTRYPOINT ["start-registryd"]
EOF
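The "Map matrix.arch to docker platform" step above derives the `--platform` argument for `docker buildx build` from the downloaded `.deb` filenames. The same mapping can be sketched as standalone functions (hypothetical names, extracted from the workflow step for illustration):

```shell
#!/bin/sh
# Translate a StartOS architecture name into a Docker platform string,
# mirroring the case statement in the workflow step.
to_docker_platform() {
    case "$1" in
        x86_64)  echo "linux/amd64" ;;
        aarch64) echo "linux/arm64" ;;
        riscv64) echo "linux/riscv64" ;;
        *) echo "Unknown architecture: $1" >&2; return 1 ;;
    esac
}

# Join several architectures into the comma-separated list that
# `docker buildx build --platform` expects.
join_platforms() {
    platforms=""
    for arch in "$@"; do
        platform=$(to_docker_platform "$arch") || return 1
        if [ -z "$platforms" ]; then
            platforms="$platform"
        else
            platforms="$platforms,$platform"
        fi
    done
    echo "$platforms"
}

join_platforms x86_64 aarch64 riscv64   # -> linux/amd64,linux/arm64,linux/riscv64
```

In the workflow itself the architecture is parsed from each `.deb` filename (`arch="${filename#*_}"`) rather than passed as an argument, and the result is exported via `$GITHUB_ENV`.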

.github/workflows/start-tunnel.yaml (new file, 114 lines)

@@ -0,0 +1,114 @@
name: Start-Tunnel
on:
workflow_call:
workflow_dispatch:
inputs:
environment:
type: choice
description: Environment
options:
- NONE
- dev
- unstable
- dev-unstable
runner:
type: choice
description: Runner
options:
- standard
- fast
arch:
type: choice
description: Architecture
options:
- ALL
- x86_64
- aarch64
- riscv64
push:
branches:
- master
- next/*
pull_request:
branches:
- master
- next/*
env:
NODEJS_VERSION: "24.11.0"
ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
jobs:
compile:
name: Build Debian Package
strategy:
fail-fast: true
matrix:
arch: >-
${{
fromJson('{
"x86_64": ["x86_64"],
"aarch64": ["aarch64"],
"riscv64": ["riscv64"],
"ALL": ["x86_64", "aarch64", "riscv64"]
}')[github.event.inputs.platform || 'ALL']
}}
runs-on: ${{ fromJson('["ubuntu-latest", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
steps:
- name: Cleaning up unnecessary files
run: |
sudo apt-get remove --purge -y mono-* \
ghc* cabal-install* \
dotnet* \
php* \
ruby* \
mysql-* \
postgresql-* \
azure-cli \
powershell \
google-cloud-sdk \
msodbcsql* mssql-tools* \
imagemagick* \
libgl1-mesa-dri \
google-chrome-stable \
firefox
sudo apt-get autoremove -y
sudo apt-get clean
- run: |
sudo mount -t tmpfs tmpfs .
if: ${{ github.event.inputs.runner == 'fast' }}
- uses: actions/checkout@v4
with:
submodules: recursive
- uses: actions/setup-node@v4
with:
node-version: ${{ env.NODEJS_VERSION }}
- name: Set up docker QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Configure sccache
uses: actions/github-script@v7
with:
script: |
core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
- name: Make
run: make tunnel-deb
env:
PLATFORM: ${{ matrix.arch }}
SCCACHE_GHA_ENABLED: on
SCCACHE_GHA_VERSION: 0
- uses: actions/upload-artifact@v4
with:
name: start-tunnel_${{ matrix.arch }}.deb
path: results/start-tunnel-*_${{ matrix.arch }}.deb
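All three workflows expand their build matrix the same way: a `fromJson` object maps the Architecture input to a list of targets, with `ALL` fanning out to every arch (note the expression indexes with `github.event.inputs.platform` even though the dispatch input is declared as `arch`, so on manual dispatch the lookup falls back to `ALL`). A hedged shell sketch of that fan-out (hypothetical helper name):

```shell
#!/bin/sh
# Sketch of the matrix expansion:
#   fromJson('{"x86_64": ["x86_64"], ..., "ALL": [...]}')[input || 'ALL']
# A single-arch input yields a one-element matrix; 'ALL' (or a missing
# input, e.g. on push/pull_request events) yields every architecture.
expand_matrix() {
    case "${1:-ALL}" in
        x86_64)  echo "x86_64" ;;
        aarch64) echo "aarch64" ;;
        riscv64) echo "riscv64" ;;
        ALL)     echo "x86_64 aarch64 riscv64" ;;
        *) echo "unknown arch: $1" >&2; return 1 ;;
    esac
}

expand_matrix riscv64   # -> riscv64
expand_matrix ""        # -> x86_64 aarch64 riscv64
```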


@@ -28,6 +28,7 @@ on:
- aarch64
- aarch64-nonfree
- raspberrypi
- riscv64
deploy:
type: choice
description: Deploy
@@ -45,7 +46,7 @@ on:
- next/*
env:
NODEJS_VERSION: "20.16.0"
NODEJS_VERSION: "24.11.0"
ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
jobs:
@@ -62,11 +63,48 @@ jobs:
"aarch64": ["aarch64"],
"aarch64-nonfree": ["aarch64"],
"raspberrypi": ["aarch64"],
"ALL": ["x86_64", "aarch64"]
"riscv64": ["riscv64"],
"ALL": ["x86_64", "aarch64", "riscv64"]
}')[github.event.inputs.platform || 'ALL']
}}
runs-on: ${{ fromJson('["ubuntu-22.04", "buildjet-32vcpu-ubuntu-2204"]')[github.event.inputs.runner == 'fast'] }}
runs-on: >-
${{
fromJson(
format(
'["{0}", "{1}"]',
fromJson('{
"x86_64": "ubuntu-latest",
"aarch64": "ubuntu-24.04-arm",
"riscv64": "ubuntu-latest"
}')[matrix.arch],
fromJson('{
"x86_64": "buildjet-32vcpu-ubuntu-2204",
"aarch64": "buildjet-32vcpu-ubuntu-2204-arm",
"riscv64": "buildjet-32vcpu-ubuntu-2204"
}')[matrix.arch]
)
)[github.event.inputs.runner == 'fast']
}}
steps:
- name: Cleaning up unnecessary files
run: |
sudo apt-get remove --purge -y azure-cli || true
sudo apt-get remove --purge -y firefox || true
sudo apt-get remove --purge -y ghc-* || true
sudo apt-get remove --purge -y google-cloud-sdk || true
sudo apt-get remove --purge -y google-chrome-stable || true
sudo apt-get remove --purge -y powershell || true
sudo apt-get remove --purge -y php* || true
sudo apt-get remove --purge -y ruby* || true
sudo apt-get remove --purge -y mono-* || true
sudo apt-get autoremove -y
sudo apt-get clean
sudo rm -rf /usr/lib/jvm # All JDKs
sudo rm -rf /usr/local/.ghcup # Haskell toolchain
sudo rm -rf /usr/local/lib/android # Android SDK/NDK, emulator
sudo rm -rf /usr/share/dotnet # .NET SDKs
sudo rm -rf /usr/share/swift # Swift toolchain (if present)
sudo rm -rf "$AGENT_TOOLSDIRECTORY" # Pre-cached tool cache (Go, Node, etc.)
- run: |
sudo mount -t tmpfs tmpfs .
if: ${{ github.event.inputs.runner == 'fast' }}
@@ -93,8 +131,18 @@ jobs:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Configure sccache
uses: actions/github-script@v7
with:
script: |
core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
- name: Make
run: make ARCH=${{ matrix.arch }} compiled-${{ matrix.arch }}.tar
env:
SCCACHE_GHA_ENABLED: on
SCCACHE_GHA_VERSION: 0
- uses: actions/upload-artifact@v4
with:
@@ -112,7 +160,7 @@ jobs:
format(
'[
["{0}"],
["x86_64", "x86_64-nonfree", "aarch64", "aarch64-nonfree", "raspberrypi"]
["x86_64", "x86_64-nonfree", "aarch64", "aarch64-nonfree", "riscv64", "raspberrypi"]
]',
github.event.inputs.platform || 'ALL'
)
@@ -122,13 +170,22 @@ jobs:
${{
fromJson(
format(
'["ubuntu-22.04", "{0}"]',
'["{0}", "{1}"]',
fromJson('{
"x86_64": "ubuntu-latest",
"x86_64-nonfree": "ubuntu-latest",
"aarch64": "ubuntu-24.04-arm",
"aarch64-nonfree": "ubuntu-24.04-arm",
"raspberrypi": "ubuntu-24.04-arm",
"riscv64": "ubuntu-24.04-arm",
}')[matrix.platform],
fromJson('{
"x86_64": "buildjet-8vcpu-ubuntu-2204",
"x86_64-nonfree": "buildjet-8vcpu-ubuntu-2204",
"aarch64": "buildjet-8vcpu-ubuntu-2204-arm",
"aarch64-nonfree": "buildjet-8vcpu-ubuntu-2204-arm",
"raspberrypi": "buildjet-8vcpu-ubuntu-2204-arm",
"riscv64": "buildjet-8vcpu-ubuntu-2204",
}')[matrix.platform]
)
)[github.event.inputs.runner == 'fast']
@@ -142,11 +199,29 @@ jobs:
"aarch64": "aarch64",
"aarch64-nonfree": "aarch64",
"raspberrypi": "aarch64",
"riscv64": "riscv64",
}')[matrix.platform]
}}
steps:
- name: Free space
run: rm -rf /opt/hostedtoolcache*
run: |
sudo apt-get remove --purge -y azure-cli || true
sudo apt-get remove --purge -y firefox || true
sudo apt-get remove --purge -y ghc-* || true
sudo apt-get remove --purge -y google-cloud-sdk || true
sudo apt-get remove --purge -y google-chrome-stable || true
sudo apt-get remove --purge -y powershell || true
sudo apt-get remove --purge -y php* || true
sudo apt-get remove --purge -y ruby* || true
sudo apt-get remove --purge -y mono-* || true
sudo apt-get autoremove -y
sudo apt-get clean
sudo rm -rf /usr/lib/jvm # All JDKs
sudo rm -rf /usr/local/.ghcup # Haskell toolchain
sudo rm -rf /usr/local/lib/android # Android SDK/NDK, emulator
sudo rm -rf /usr/share/dotnet # .NET SDKs
sudo rm -rf /usr/share/swift # Swift toolchain (if present)
sudo rm -rf "$AGENT_TOOLSDIRECTORY" # Pre-cached tool cache (Go, Node, etc.)
if: ${{ github.event.inputs.runner != 'fast' }}
- uses: actions/checkout@v4
@@ -253,7 +328,7 @@ jobs:
index:
if: ${{ github.event.inputs.deploy != '' && github.event.inputs.deploy != 'NONE' }}
needs: [image]
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
steps:
- run: >-
curl "https://${{


@@ -11,13 +11,13 @@ on:
- next/*
env:
NODEJS_VERSION: "20.16.0"
NODEJS_VERSION: "24.11.0"
ENVIRONMENT: dev-unstable
jobs:
test:
name: Run Automated Tests
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:

.gitignore vendored

@@ -1,8 +1,5 @@
.DS_Store
.idea
system-images/binfmt/binfmt.tar
system-images/compat/compat.tar
system-images/util/util.tar
/*.img
/*.img.gz
/*.img.xz


@@ -25,9 +25,9 @@ docker buildx create --use
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh # proceed with default installation
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/master/install.sh | bash
source ~/.bashrc
nvm install 22
nvm use 22
nvm alias default 22 # this prevents your machine from reverting back to another version
nvm install 24
nvm use 24
nvm alias default 24 # this prevents your machine from reverting back to another version
```
## Cloning the repository

Makefile

@@ -1,32 +1,45 @@
ls-files = $(shell git ls-files --cached --others --exclude-standard $1)
PROFILE = release
PLATFORM_FILE := $(shell ./check-platform.sh)
ENVIRONMENT_FILE := $(shell ./check-environment.sh)
GIT_HASH_FILE := $(shell ./check-git-hash.sh)
VERSION_FILE := $(shell ./check-version.sh)
BASENAME := $(shell ./basename.sh)
BASENAME := $(shell PROJECT=startos ./basename.sh)
PLATFORM := $(shell if [ -f ./PLATFORM.txt ]; then cat ./PLATFORM.txt; else echo unknown; fi)
ARCH := $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo aarch64; else echo $(PLATFORM) | sed 's/-nonfree$$//g'; fi)
RUST_ARCH := $(shell if [ "$(ARCH)" = "riscv64" ]; then echo riscv64gc; else echo $(ARCH); fi)
REGISTRY_BASENAME := $(shell PROJECT=start-registry PLATFORM=$(ARCH) ./basename.sh)
TUNNEL_BASENAME := $(shell PROJECT=start-tunnel PLATFORM=$(ARCH) ./basename.sh)
IMAGE_TYPE=$(shell if [ "$(PLATFORM)" = raspberrypi ]; then echo img; else echo iso; fi)
WEB_UIS := web/dist/raw/ui/index.html web/dist/raw/setup-wizard/index.html web/dist/raw/install-wizard/index.html
COMPRESSED_WEB_UIS := web/dist/static/ui/index.html web/dist/static/setup-wizard/index.html web/dist/static/install-wizard/index.html
FIRMWARE_ROMS := ./firmware/$(PLATFORM) $(shell jq --raw-output '.[] | select(.platform[] | contains("$(PLATFORM)")) | "./firmware/$(PLATFORM)/" + .id + ".rom.gz"' build/lib/firmware.json)
BUILD_SRC := $(shell git ls-files build) build/lib/depends build/lib/conflicts $(FIRMWARE_ROMS)
DEBIAN_SRC := $(shell git ls-files debian/)
IMAGE_RECIPE_SRC := $(shell git ls-files image-recipe/)
BUILD_SRC := $(call ls-files, build) build/lib/depends build/lib/conflicts $(FIRMWARE_ROMS)
IMAGE_RECIPE_SRC := $(call ls-files, image-recipe/)
STARTD_SRC := core/startos/startd.service $(BUILD_SRC)
COMPAT_SRC := $(shell git ls-files system-images/compat/)
UTILS_SRC := $(shell git ls-files system-images/utils/)
BINFMT_SRC := $(shell git ls-files system-images/binfmt/)
CORE_SRC := $(shell git ls-files core) $(shell git ls-files --recurse-submodules patch-db) $(GIT_HASH_FILE)
WEB_SHARED_SRC := $(shell git ls-files web/projects/shared) $(shell git ls-files web/projects/marketplace) $(shell ls -p web/ | grep -v / | sed 's/^/web\//g') web/node_modules/.package-lock.json web/config.json patch-db/client/dist/index.js sdk/baseDist/package.json web/patchdb-ui-seed.json sdk/dist/package.json
WEB_UI_SRC := $(shell git ls-files web/projects/ui)
WEB_SETUP_WIZARD_SRC := $(shell git ls-files web/projects/setup-wizard)
WEB_INSTALL_WIZARD_SRC := $(shell git ls-files web/projects/install-wizard)
CORE_SRC := $(call ls-files, core) $(shell git ls-files --recurse-submodules patch-db) $(GIT_HASH_FILE)
WEB_SHARED_SRC := $(call ls-files, web/projects/shared) $(call ls-files, web/projects/marketplace) $(shell ls -p web/ | grep -v / | sed 's/^/web\//g') web/node_modules/.package-lock.json web/config.json patch-db/client/dist/index.js sdk/baseDist/package.json web/patchdb-ui-seed.json sdk/dist/package.json
WEB_UI_SRC := $(call ls-files, web/projects/ui)
WEB_SETUP_WIZARD_SRC := $(call ls-files, web/projects/setup-wizard)
WEB_INSTALL_WIZARD_SRC := $(call ls-files, web/projects/install-wizard)
WEB_START_TUNNEL_SRC := $(call ls-files, web/projects/start-tunnel)
PATCH_DB_CLIENT_SRC := $(shell git ls-files --recurse-submodules patch-db/client)
GZIP_BIN := $(shell which pigz || which gzip)
TAR_BIN := $(shell which gtar || which tar)
COMPILED_TARGETS := core/target/$(ARCH)-unknown-linux-musl/release/startbox core/target/$(ARCH)-unknown-linux-musl/release/containerbox system-images/compat/docker-images/$(ARCH).tar system-images/utils/docker-images/$(ARCH).tar system-images/binfmt/docker-images/$(ARCH).tar container-runtime/rootfs.$(ARCH).squashfs
ALL_TARGETS := $(STARTD_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE) $(COMPILED_TARGETS) cargo-deps/$(ARCH)-unknown-linux-musl/release/startos-backup-fs $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo cargo-deps/aarch64-unknown-linux-musl/release/pi-beep; fi) $(shell /bin/bash -c 'if [[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]; then echo cargo-deps/$(ARCH)-unknown-linux-musl/release/tokio-console; fi') $(PLATFORM_FILE)
REBUILD_TYPES = 1
COMPILED_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox core/target/$(RUST_ARCH)-unknown-linux-musl/release/start-container container-runtime/rootfs.$(ARCH).squashfs
STARTOS_TARGETS := $(STARTD_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE) $(COMPILED_TARGETS) cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs $(PLATFORM_FILE) \
$(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then \
echo cargo-deps/aarch64-unknown-linux-musl/release/pi-beep; \
fi) \
$(shell /bin/bash -c 'if [[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]; then \
echo cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/flamegraph; \
fi') \
$(shell /bin/bash -c 'if [[ "${ENVIRONMENT}" =~ (^|-)console($$|-) ]]; then \
echo cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/tokio-console; \
fi')
REGISTRY_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox core/startos/start-registryd.service
TUNNEL_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox core/startos/start-tunneld.service
ifeq ($(REMOTE),)
mkdir = mkdir -p $1
@@ -49,18 +62,16 @@ endif
.DELETE_ON_ERROR:
.PHONY: all metadata install clean format cli uis ui reflash deb $(IMAGE_TYPE) squashfs wormhole wormhole-deb test test-core test-sdk test-container-runtime registry
.PHONY: all metadata install clean format install-cli cli uis ui reflash deb $(IMAGE_TYPE) squashfs wormhole wormhole-deb test test-core test-sdk test-container-runtime registry install-registry tunnel install-tunnel ts-bindings
all: $(ALL_TARGETS)
all: $(STARTOS_TARGETS)
touch:
touch $(ALL_TARGETS)
touch $(STARTOS_TARGETS)
metadata: $(VERSION_FILE) $(PLATFORM_FILE) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE)
clean:
rm -f system-images/**/*.tar
rm -rf system-images/compat/target
rm -rf core/target
rm -rf core/startos/bindings
rm -rf web/.angular
@@ -95,44 +106,86 @@ test: | test-core test-sdk test-container-runtime
test-core: $(CORE_SRC) $(ENVIRONMENT_FILE)
./core/run-tests.sh
test-sdk: $(shell git ls-files sdk) sdk/base/lib/osBindings/index.ts
test-sdk: $(call ls-files, sdk) sdk/base/lib/osBindings/index.ts
cd sdk && make test
test-container-runtime: container-runtime/node_modules/.package-lock.json $(shell git ls-files container-runtime/src) container-runtime/package.json container-runtime/tsconfig.json
test-container-runtime: container-runtime/node_modules/.package-lock.json $(call ls-files, container-runtime/src) container-runtime/package.json container-runtime/tsconfig.json
cd container-runtime && npm test
cli:
cd core && ./install-cli.sh
install-cli: $(GIT_HASH_FILE)
./core/build-cli.sh --install
registry:
cd core && ./build-registrybox.sh
cli: $(GIT_HASH_FILE)
./core/build-cli.sh
registry: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox
install-registry: $(REGISTRY_TARGETS)
$(call mkdir,$(DESTDIR)/usr/bin)
$(call cp,core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox,$(DESTDIR)/usr/bin/start-registrybox)
$(call ln,/usr/bin/start-registrybox,$(DESTDIR)/usr/bin/start-registryd)
$(call ln,/usr/bin/start-registrybox,$(DESTDIR)/usr/bin/start-registry)
$(call mkdir,$(DESTDIR)/lib/systemd/system)
$(call cp,core/startos/start-registryd.service,$(DESTDIR)/lib/systemd/system/start-registryd.service)
core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox: $(CORE_SRC) $(ENVIRONMENT_FILE)
ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build-registrybox.sh
tunnel: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox
install-tunnel: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox core/startos/start-tunneld.service
$(call mkdir,$(DESTDIR)/usr/bin)
$(call cp,core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox,$(DESTDIR)/usr/bin/start-tunnelbox)
$(call ln,/usr/bin/start-tunnelbox,$(DESTDIR)/usr/bin/start-tunneld)
$(call ln,/usr/bin/start-tunnelbox,$(DESTDIR)/usr/bin/start-tunnel)
$(call mkdir,$(DESTDIR)/lib/systemd/system)
$(call cp,core/startos/start-tunneld.service,$(DESTDIR)/lib/systemd/system/start-tunneld.service)
$(call mkdir,$(DESTDIR)/usr/lib/startos/scripts)
$(call cp,build/lib/scripts/forward-port,$(DESTDIR)/usr/lib/startos/scripts/forward-port)
core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox: $(CORE_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) web/dist/static/start-tunnel/index.html
ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build-tunnelbox.sh
deb: results/$(BASENAME).deb
debian/control: build/lib/depends build/lib/conflicts
./debuild/control.sh
results/$(BASENAME).deb: dpkg-build.sh $(DEBIAN_SRC) $(ALL_TARGETS)
results/$(BASENAME).deb: dpkg-build.sh $(call ls-files,debian/startos) $(STARTOS_TARGETS)
PLATFORM=$(PLATFORM) REQUIRES=debian ./build/os-compat/run-compat.sh ./dpkg-build.sh
registry-deb: results/$(REGISTRY_BASENAME).deb
results/$(REGISTRY_BASENAME).deb: dpkg-build.sh $(call ls-files,debian/start-registry) $(REGISTRY_TARGETS)
PROJECT=start-registry PLATFORM=$(ARCH) REQUIRES=debian ./build/os-compat/run-compat.sh ./dpkg-build.sh
tunnel-deb: results/$(TUNNEL_BASENAME).deb
results/$(TUNNEL_BASENAME).deb: dpkg-build.sh $(call ls-files,debian/start-tunnel) $(TUNNEL_TARGETS) build/lib/scripts/forward-port
PROJECT=start-tunnel PLATFORM=$(ARCH) REQUIRES=debian DEPENDS=wireguard-tools,iptables,conntrack ./build/os-compat/run-compat.sh ./dpkg-build.sh
$(IMAGE_TYPE): results/$(BASENAME).$(IMAGE_TYPE)
squashfs: results/$(BASENAME).squashfs
results/$(BASENAME).$(IMAGE_TYPE) results/$(BASENAME).squashfs: $(IMAGE_RECIPE_SRC) results/$(BASENAME).deb
REQUIRES=debian ./build/os-compat/run-compat.sh ./image-recipe/run-local-build.sh "results/$(BASENAME).deb"
./image-recipe/run-local-build.sh "results/$(BASENAME).deb"
# For creating os images. DO NOT USE
install: $(ALL_TARGETS)
install: $(STARTOS_TARGETS)
$(call mkdir,$(DESTDIR)/usr/bin)
$(call mkdir,$(DESTDIR)/usr/sbin)
$(call cp,core/target/$(ARCH)-unknown-linux-musl/release/startbox,$(DESTDIR)/usr/bin/startbox)
$(call cp,core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox,$(DESTDIR)/usr/bin/startbox)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/startd)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/start-cli)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/start-sdk)
if [ "$(PLATFORM)" = "raspberrypi" ]; then $(call cp,cargo-deps/aarch64-unknown-linux-musl/release/pi-beep,$(DESTDIR)/usr/bin/pi-beep); fi
if /bin/bash -c '[[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]'; then $(call cp,cargo-deps/$(ARCH)-unknown-linux-musl/release/tokio-console,$(DESTDIR)/usr/bin/tokio-console); fi
$(call cp,cargo-deps/$(ARCH)-unknown-linux-musl/release/startos-backup-fs,$(DESTDIR)/usr/bin/startos-backup-fs)
if /bin/bash -c '[[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]'; then \
$(call cp,cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/flamegraph,$(DESTDIR)/usr/bin/flamegraph); \
fi
if /bin/bash -c '[[ "${ENVIRONMENT}" =~ (^|-)console($$|-) ]]'; then \
$(call cp,cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/tokio-console,$(DESTDIR)/usr/bin/tokio-console); \
fi
$(call cp,cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs,$(DESTDIR)/usr/bin/startos-backup-fs)
$(call ln,/usr/bin/startos-backup-fs,$(DESTDIR)/usr/sbin/mount.backup-fs)
$(call mkdir,$(DESTDIR)/lib/systemd/system)
@@ -149,13 +202,9 @@ install: $(ALL_TARGETS)
$(call cp,GIT_HASH.txt,$(DESTDIR)/usr/lib/startos/GIT_HASH.txt)
$(call cp,VERSION.txt,$(DESTDIR)/usr/lib/startos/VERSION.txt)
$(call mkdir,$(DESTDIR)/usr/lib/startos/system-images)
$(call cp,system-images/compat/docker-images/$(ARCH).tar,$(DESTDIR)/usr/lib/startos/system-images/compat.tar)
$(call cp,system-images/utils/docker-images/$(ARCH).tar,$(DESTDIR)/usr/lib/startos/system-images/utils.tar)
$(call cp,firmware/$(PLATFORM),$(DESTDIR)/usr/lib/startos/firmware)
update-overlay: $(ALL_TARGETS)
update-overlay: $(STARTOS_TARGETS)
@echo "\033[33m!!! THIS WILL ONLY REFLASH YOUR DEVICE IN MEMORY !!!\033[0m"
@echo "\033[33mALL CHANGES WILL BE REVERTED IF YOU RESTART THE DEVICE\033[0m"
@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
@@ -164,10 +213,10 @@ update-overlay: $(ALL_TARGETS)
$(MAKE) install REMOTE=$(REMOTE) SSHPASS=$(SSHPASS) PLATFORM=$(PLATFORM)
$(call ssh,"sudo systemctl start startd")
wormhole: core/target/$(ARCH)-unknown-linux-musl/release/startbox
wormhole: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox
@echo "Paste the following command into the shell of your StartOS server:"
@echo
@wormhole send core/target/$(ARCH)-unknown-linux-musl/release/startbox 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo /usr/lib/startos/scripts/chroot-and-upgrade \"cd /usr/bin && rm startbox && wormhole receive --accept-file %s && chmod +x startbox\"\n", $$3 }'
@wormhole send core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo /usr/lib/startos/scripts/chroot-and-upgrade \"cd /usr/bin && rm startbox && wormhole receive --accept-file %s && chmod +x startbox\"\n", $$3 }'
wormhole-deb: results/$(BASENAME).deb
@echo "Paste the following command into the shell of your StartOS server:"
@@ -179,18 +228,18 @@ wormhole-squashfs: results/$(BASENAME).squashfs
$(eval SQFS_SIZE := $(shell du -s --bytes results/$(BASENAME).squashfs | awk '{print $$1}'))
@echo "Paste the following command into the shell of your StartOS server:"
@echo
@wormhole send results/$(BASENAME).squashfs 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo sh -c '"'"'/usr/lib/startos/scripts/prune-images $(SQFS_SIZE) && /usr/lib/startos/scripts/prune-boot && cd /media/startos/images && wormhole receive --accept-file %s && CHECKSUM=$(SQFS_SUM) /usr/lib/startos/scripts/use-img ./$(BASENAME).squashfs'"'"'\n", $$3 }'
@wormhole send results/$(BASENAME).squashfs 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo sh -c '"'"'/usr/lib/startos/scripts/prune-images $(SQFS_SIZE) && /usr/lib/startos/scripts/prune-boot && cd /media/startos/images && wormhole receive --accept-file %s && CHECKSUM=$(SQFS_SUM) /usr/lib/startos/scripts/upgrade ./$(BASENAME).squashfs'"'"'\n", $$3 }'
update: $(ALL_TARGETS)
update: $(STARTOS_TARGETS)
@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
$(call ssh,'sudo /usr/lib/startos/scripts/chroot-and-upgrade --create')
$(MAKE) install REMOTE=$(REMOTE) SSHPASS=$(SSHPASS) DESTDIR=/media/startos/next PLATFORM=$(PLATFORM)
$(call ssh,'sudo /media/startos/next/usr/lib/startos/scripts/chroot-and-upgrade --no-sync "apt-get install -y $(shell cat ./build/lib/depends)"')
update-startbox: core/target/$(ARCH)-unknown-linux-musl/release/startbox # only update binary (faster than full update)
update-startbox: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox # only update binary (faster than full update)
@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
$(call ssh,'sudo /usr/lib/startos/scripts/chroot-and-upgrade --create')
$(call cp,core/target/$(ARCH)-unknown-linux-musl/release/startbox,/media/startos/next/usr/bin/startbox)
$(call cp,core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox,/media/startos/next/usr/bin/startbox)
$(call ssh,'sudo /media/startos/next/usr/lib/startos/scripts/chroot-and-upgrade --no-sync true')
update-deb: results/$(BASENAME).deb # better than update, but only available from debian
@@ -207,9 +256,9 @@ update-squashfs: results/$(BASENAME).squashfs
$(call ssh,'/usr/lib/startos/scripts/prune-images $(SQFS_SIZE)')
$(call ssh,'/usr/lib/startos/scripts/prune-boot')
$(call cp,results/$(BASENAME).squashfs,/media/startos/images/next.rootfs)
$(call ssh,'sudo CHECKSUM=$(SQFS_SUM) /usr/lib/startos/scripts/use-img /media/startos/images/next.rootfs')
$(call ssh,'sudo CHECKSUM=$(SQFS_SUM) /usr/lib/startos/scripts/upgrade /media/startos/images/next.rootfs')
emulate-reflash: $(ALL_TARGETS)
emulate-reflash: $(STARTOS_TARGETS)
@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
$(call ssh,'sudo /usr/lib/startos/scripts/chroot-and-upgrade --create')
$(MAKE) install REMOTE=$(REMOTE) SSHPASS=$(SSHPASS) DESTDIR=/media/startos/next PLATFORM=$(PLATFORM)
@@ -230,56 +279,46 @@ container-runtime/node_modules/.package-lock.json: container-runtime/package-loc
npm --prefix container-runtime ci
touch container-runtime/node_modules/.package-lock.json
sdk/base/lib/osBindings/index.ts: $(shell if [ "$(REBUILD_TYPES)" -ne 0 ]; then echo core/startos/bindings/index.ts; fi)
ts-bindings: core/startos/bindings/index.ts
mkdir -p sdk/base/lib/osBindings
rsync -ac --delete core/startos/bindings/ sdk/base/lib/osBindings/
touch sdk/base/lib/osBindings/index.ts
core/startos/bindings/index.ts: $(shell git ls-files core) $(ENVIRONMENT_FILE)
core/startos/bindings/index.ts: $(call ls-files, core) $(ENVIRONMENT_FILE)
rm -rf core/startos/bindings
./core/build-ts.sh
ls core/startos/bindings/*.ts | sed 's/core\/startos\/bindings\/\([^.]*\)\.ts/export { \1 } from ".\/\1";/g' | grep -v '"./index"' | tee core/startos/bindings/index.ts
npm --prefix sdk exec -- prettier --config ./sdk/base/package.json -w ./core/startos/bindings/*.ts
touch core/startos/bindings/index.ts
sdk/dist/package.json sdk/baseDist/package.json: $(shell git ls-files sdk) sdk/base/lib/osBindings/index.ts
sdk/dist/package.json sdk/baseDist/package.json: $(call ls-files, sdk) sdk/base/lib/osBindings/index.ts
(cd sdk && make bundle)
touch sdk/dist/package.json
touch sdk/baseDist/package.json
# TODO: make container-runtime its own makefile?
container-runtime/dist/index.js: container-runtime/node_modules/.package-lock.json $(shell git ls-files container-runtime/src) container-runtime/package.json container-runtime/tsconfig.json
container-runtime/dist/index.js: container-runtime/node_modules/.package-lock.json $(call ls-files, container-runtime/src) container-runtime/package.json container-runtime/tsconfig.json
npm --prefix container-runtime run build
container-runtime/dist/node_modules/.package-lock.json container-runtime/dist/package.json container-runtime/dist/package-lock.json: container-runtime/package.json container-runtime/package-lock.json sdk/dist/package.json container-runtime/install-dist-deps.sh
./container-runtime/install-dist-deps.sh
touch container-runtime/dist/node_modules/.package-lock.json
container-runtime/rootfs.$(ARCH).squashfs: container-runtime/debian.$(ARCH).squashfs container-runtime/container-runtime.service container-runtime/update-image.sh container-runtime/deb-install.sh container-runtime/dist/index.js container-runtime/dist/node_modules/.package-lock.json core/target/$(ARCH)-unknown-linux-musl/release/containerbox
ARCH=$(ARCH) REQUIRES=linux ./build/os-compat/run-compat.sh ./container-runtime/update-image.sh
container-runtime/rootfs.$(ARCH).squashfs: container-runtime/debian.$(ARCH).squashfs container-runtime/container-runtime.service container-runtime/update-image.sh container-runtime/deb-install.sh container-runtime/dist/index.js container-runtime/dist/node_modules/.package-lock.json core/target/$(RUST_ARCH)-unknown-linux-musl/release/start-container
ARCH=$(ARCH) REQUIRES=qemu ./build/os-compat/run-compat.sh ./container-runtime/update-image.sh
build/lib/depends build/lib/conflicts: build/dpkg-deps/*
build/dpkg-deps/generate.sh
build/lib/depends build/lib/conflicts: $(ENVIRONMENT_FILE) $(PLATFORM_FILE) $(shell ls build/dpkg-deps/*)
PLATFORM=$(PLATFORM) ARCH=$(ARCH) build/dpkg-deps/generate.sh
$(FIRMWARE_ROMS): build/lib/firmware.json download-firmware.sh $(PLATFORM_FILE)
./download-firmware.sh $(PLATFORM)
system-images/compat/docker-images/$(ARCH).tar: $(COMPAT_SRC)
cd system-images/compat && make docker-images/$(ARCH).tar && touch docker-images/$(ARCH).tar
core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox: $(CORE_SRC) $(COMPRESSED_WEB_UIS) web/patchdb-ui-seed.json $(ENVIRONMENT_FILE)
ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build-startbox.sh
touch core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox
system-images/utils/docker-images/$(ARCH).tar: $(UTILS_SRC)
cd system-images/utils && make docker-images/$(ARCH).tar && touch docker-images/$(ARCH).tar
system-images/binfmt/docker-images/$(ARCH).tar: $(BINFMT_SRC)
cd system-images/binfmt && make docker-images/$(ARCH).tar && touch docker-images/$(ARCH).tar
core/target/$(ARCH)-unknown-linux-musl/release/startbox: $(CORE_SRC) $(COMPRESSED_WEB_UIS) web/patchdb-ui-seed.json $(ENVIRONMENT_FILE)
ARCH=$(ARCH) ./core/build-startbox.sh
touch core/target/$(ARCH)-unknown-linux-musl/release/startbox
core/target/$(ARCH)-unknown-linux-musl/release/containerbox: $(CORE_SRC) $(ENVIRONMENT_FILE)
ARCH=$(ARCH) ./core/build-containerbox.sh
touch core/target/$(ARCH)-unknown-linux-musl/release/containerbox
core/target/$(RUST_ARCH)-unknown-linux-musl/release/start-container: $(CORE_SRC) $(ENVIRONMENT_FILE)
ARCH=$(ARCH) ./core/build-start-container.sh
touch core/target/$(RUST_ARCH)-unknown-linux-musl/release/start-container
web/package-lock.json: web/package.json sdk/baseDist/package.json
npm --prefix web i
@@ -306,8 +345,12 @@ web/dist/raw/install-wizard/index.html: $(WEB_INSTALL_WIZARD_SRC) $(WEB_SHARED_S
npm --prefix web run build:install
touch web/dist/raw/install-wizard/index.html
$(COMPRESSED_WEB_UIS): $(WEB_UIS) $(ENVIRONMENT_FILE)
./compress-uis.sh
web/dist/raw/start-tunnel/index.html: $(WEB_START_TUNNEL_SRC) $(WEB_SHARED_SRC) web/.angular/.updated
npm --prefix web run build:tunnel
touch web/dist/raw/start-tunnel/index.html
web/dist/static/%/index.html: web/dist/raw/%/index.html
./compress-uis.sh $*
web/config.json: $(GIT_HASH_FILE) web/config-sample.json
jq '.useMocks = false' web/config-sample.json | jq '.gitHash = "$(shell cat GIT_HASH.txt)"' > web/config.json
@@ -331,11 +374,17 @@ uis: $(WEB_UIS)
# this is a convenience step to build the UI
ui: web/dist/raw/ui
cargo-deps/aarch64-unknown-linux-musl/release/pi-beep:
cargo-deps/aarch64-unknown-linux-musl/release/pi-beep: ./build-cargo-dep.sh
ARCH=aarch64 ./build-cargo-dep.sh pi-beep
cargo-deps/$(ARCH)-unknown-linux-musl/release/tokio-console:
ARCH=$(ARCH) PREINSTALL="apk add musl-dev pkgconfig" ./build-cargo-dep.sh tokio-console
cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/tokio-console: ./build-cargo-dep.sh
ARCH=$(ARCH) ./build-cargo-dep.sh tokio-console
touch $@
cargo-deps/$(ARCH)-unknown-linux-musl/release/startos-backup-fs:
ARCH=$(ARCH) PREINSTALL="apk add fuse3 fuse3-dev fuse3-static musl-dev pkgconfig" ./build-cargo-dep.sh --git https://github.com/Start9Labs/start-fs.git startos-backup-fs
cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs: ./build-cargo-dep.sh
ARCH=$(ARCH) ./build-cargo-dep.sh --git https://github.com/Start9Labs/start-fs.git startos-backup-fs
touch $@
cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/flamegraph: ./build-cargo-dep.sh
ARCH=$(ARCH) ./build-cargo-dep.sh flamegraph
touch $@

START-TUNNEL.md Normal file

@@ -0,0 +1,95 @@
# StartTunnel
A self-hosted WireGuard VPN optimized for creating VLANs and reverse tunneling to personal servers.
You can think of StartTunnel as a "virtual router in the cloud".
Use it for private remote access to self-hosted services running on a personal server, or to expose self-hosted services to the public Internet without revealing the host server's IP address.
## Features
- **Create Subnets**: Each subnet creates a private, virtual local area network (VLAN), similar to the LAN created by a home router.
- **Add Devices**: When you add a device (server, phone, laptop) to a subnet, it receives a LAN IP address on that subnet as well as a unique WireGuard config that must be copied, downloaded, or scanned into the device.
- **Forward Ports**: Forwarding a port creates a "reverse tunnel", exposing a specific port on a specific device to the public Internet.
## Installation
1. Rent a low-cost VPS. For most use cases, the cheapest option should be enough.
- It must have a dedicated public IP address.
- For compute (CPU), memory (RAM), and storage (disk), choose the minimum spec.
- For transfer (bandwidth), it depends on (1) your use case and (2) your home Internet's _upload_ speed. Even if you intend to serve large files or stream content from your server, there is no reason to pay for speeds that exceed your home Internet's upload speed.
1. Provision the VPS with the latest version of Debian.
1. Access the VPS via SSH.
1. Run the StartTunnel install script:
curl -fsSL https://start9labs.github.io/start-tunnel | sh
1. [Initialize the web interface](#web-interface) (recommended)
## Updating
Simply re-run the install command:
```sh
curl -fsSL https://start9labs.github.io/start-tunnel | sh
```
## CLI
By default, StartTunnel is managed via the `start-tunnel` command line interface, which is self-documented.
```
start-tunnel --help
```
## Web Interface
Enable the web interface (recommended in most cases) to access your StartTunnel from the browser or via API.
1. Initialize the web interface.
start-tunnel web init
1. If your VPS has multiple public IP addresses, you will be prompted to select the IP address at which to host the web interface.
1. When prompted, enter the port at which to host the web interface. The default is 8443, and we recommend using it. If you change the default, choose an uncommon port to avoid future conflicts.
1. To access your StartTunnel web interface securely over HTTPS, you need an SSL certificate. When prompted, select whether to autogenerate a certificate or provide your own. _This is only for accessing your StartTunnel web interface_.
1. You will receive a success message with 3 pieces of information:
- **<https://IP:port>**: the URL where you can reach your personal web interface.
- **Password**: an autogenerated password for your interface. If you lose/forget it, you can reset it using the start-tunnel CLI.
- **Root Certificate Authority**: the Root CA of your StartTunnel instance.
1. If you autogenerated your SSL certificate, visiting the `https://IP:port` URL in the browser will warn you that the website is insecure. This is expected. You have two options for getting past this warning:
- Option 1 (recommended): [Trust your StartTunnel Root CA on your connecting device](#trusting-your-starttunnel-root-ca).
- Option 2: Bypass the warning in the browser, creating a one-time security exception.
### Trusting your StartTunnel Root CA
1. Copy the contents of your Root CA (starting with -----BEGIN CERTIFICATE----- and ending with -----END CERTIFICATE-----).
2. Open a text editor:
- Linux: gedit, nano, or any editor
- Mac: TextEdit
- Windows: Notepad
3. Paste the contents of your Root CA.
4. Save the file with a `.crt` extension (e.g. `start-tunnel.crt`), making sure it saves as plain text, not rich text.
5. Trust the Root CA on your client device(s):
- [Linux](https://staging.docs.start9.com/device-guides/linux/ca.html)
- [Mac](https://staging.docs.start9.com/device-guides/mac/ca.html)
- [Windows](https://staging.docs.start9.com/device-guides/windows/ca.html)
- [Android/Graphene](https://staging.docs.start9.com/device-guides/android/ca.html)
- [iOS](https://staging.docs.start9.com/device-guides/ios/ca.html)
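As an aside, the save-and-trust flow above can be sanity-checked from a terminal. The sketch below generates a throwaway certificate only so the `openssl` check has something to act on; with your real Root CA you would start from the saved `.crt` instead. The Debian-specific trust commands are shown as comments because they require root.

```shell
# Generate a throwaway self-signed CA so the check below has input
# (with a real StartTunnel Root CA, skip this and use the saved .crt).
openssl req -x509 -newkey rsa:2048 -nodes \
  -subj "/CN=StartTunnel Local Root CA" \
  -keyout /tmp/ca.key -out /tmp/start-tunnel.crt -days 1
# Confirm the file parses as a PEM certificate before trusting it
openssl x509 -in /tmp/start-tunnel.crt -noout -subject
# On Debian-based systems, trusting it system-wide would then be:
#   sudo cp /tmp/start-tunnel.crt /usr/local/share/ca-certificates/start-tunnel.crt
#   sudo update-ca-certificates
```

If the `openssl x509` step errors out, the pasted file was likely saved as rich text or is missing one of the BEGIN/END markers.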

agents/VERSION_BUMP.md Normal file

@@ -0,0 +1,201 @@
# StartOS Version Bump Guide
This document explains how to bump the StartOS version across the entire codebase.
## Overview
When bumping from version `X.Y.Z-alpha.N` to `X.Y.Z-alpha.N+1`, you need to update files in multiple locations across the repository. The `// VERSION_BUMP` comment markers indicate where changes are needed.
## Files to Update
### 1. Core Rust Crate Version
**File: `core/startos/Cargo.toml`**
Update the version string (line ~18):
```toml
version = "0.4.0-alpha.15" # VERSION_BUMP
```
**File: `core/Cargo.lock`**
This file is auto-generated. After updating `Cargo.toml`, run:
```bash
cd core
cargo check
```
This will update the version in `Cargo.lock` automatically.
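Since the marked line is a one-line change, the edit itself can be scripted. A minimal sketch against a scratch copy of the line (the version numbers are just the example values from this guide):

```shell
# Scratch copy of the marked Cargo.toml line (example versions from this guide)
printf 'version = "0.4.0-alpha.15" # VERSION_BUMP\n' > /tmp/cargo_version_line
# Bump alpha.15 -> alpha.16 in place
sed -i 's/alpha\.15/alpha.16/' /tmp/cargo_version_line
cat /tmp/cargo_version_line
# In the real repo, follow this with `cd core && cargo check` to refresh Cargo.lock
```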
### 2. Create New Version Migration Module
**File: `core/startos/src/version/vX_Y_Z_alpha_N+1.rs`**
Create a new version file by copying the previous version and updating:
```rust
use exver::{PreReleaseSegment, VersionRange};
use super::v0_3_5::V0_3_0_COMPAT;
use super::{VersionT, v0_4_0_alpha_14}; // Update to previous version
use crate::prelude::*;
lazy_static::lazy_static! {
static ref V0_4_0_alpha_15: exver::Version = exver::Version::new(
[0, 4, 0],
[PreReleaseSegment::String("alpha".into()), 15.into()] // Update number
);
}
#[derive(Clone, Copy, Debug, Default)]
pub struct Version;
impl VersionT for Version {
type Previous = v0_4_0_alpha_14::Version; // Update to previous version
type PreUpRes = ();
async fn pre_up(self) -> Result<Self::PreUpRes, Error> {
Ok(())
}
fn semver(self) -> exver::Version {
V0_4_0_alpha_15.clone() // Update version name
}
fn compat(self) -> &'static VersionRange {
&V0_3_0_COMPAT
}
#[instrument(skip_all)]
fn up(self, _db: &mut Value, _: Self::PreUpRes) -> Result<Value, Error> {
// Add migration logic here if needed
Ok(Value::Null)
}
fn down(self, _db: &mut Value) -> Result<(), Error> {
// Add rollback logic here if needed
Ok(())
}
}
```
### 3. Update Version Module Registry
**File: `core/startos/src/version/mod.rs`**
Make changes in **5 locations**:
#### Location 1: Module Declaration (~line 57)
Add the new module after the previous version:
```rust
mod v0_4_0_alpha_14;
mod v0_4_0_alpha_15; // Add this
```
#### Location 2: Current Type Alias (~line 59)
Update the `Current` type and move the `// VERSION_BUMP` comment:
```rust
pub type Current = v0_4_0_alpha_15::Version; // VERSION_BUMP
```
#### Location 3: Version Enum (~line 175)
Remove `// VERSION_BUMP` from the previous version, add new variant, add comment:
```rust
V0_4_0_alpha_14(Wrapper<v0_4_0_alpha_14::Version>),
V0_4_0_alpha_15(Wrapper<v0_4_0_alpha_15::Version>), // VERSION_BUMP
Other(exver::Version),
```
#### Location 4: as_version_t() Match (~line 233)
Remove `// VERSION_BUMP`, add new match arm, add comment:
```rust
Self::V0_4_0_alpha_14(v) => DynVersion(Box::new(v.0)),
Self::V0_4_0_alpha_15(v) => DynVersion(Box::new(v.0)), // VERSION_BUMP
Self::Other(v) => {
```
#### Location 5: as_exver() Match (~line 284, inside #[cfg(test)])
Remove `// VERSION_BUMP`, add new match arm, add comment:
```rust
Version::V0_4_0_alpha_14(Wrapper(x)) => x.semver(),
Version::V0_4_0_alpha_15(Wrapper(x)) => x.semver(), // VERSION_BUMP
Version::Other(x) => x.clone(),
```
### 4. SDK TypeScript Version
**File: `sdk/package/lib/StartSdk.ts`**
Update the OSVersion constant (~line 64):
```typescript
export const OSVersion = testTypeVersion("0.4.0-alpha.15");
```
### 5. Web UI Package Version
**File: `web/package.json`**
Update the version field:
```json
{
"name": "startos-ui",
"version": "0.4.0-alpha.15",
...
}
```
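Rather than editing the field by hand, a `sed` one-liner can stamp it. The sketch below runs against a throwaway copy so it is safe to try; in the repo the target would be `web/package.json`:

```bash
# Throwaway stand-in for web/package.json so this runs standalone.
tmp=$(mktemp)
printf '{\n  "name": "startos-ui",\n  "version": "0.4.0-alpha.15"\n}\n' > "$tmp"

# Rewrite whatever version is present with the new tag
# (GNU sed; on macOS use `sed -i ''`).
sed -i 's/"version": "[^"]*"/"version": "0.4.0-alpha.16"/' "$tmp"
grep '"version"' "$tmp"
```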
**File: `web/package-lock.json`**
This file is auto-generated, but it is often faster to update by hand: find every `"startos-ui"` entry and update its adjacent version field. Alternatively, running `npm install --package-lock-only` in `web/` regenerates the lockfile from `package.json` without touching `node_modules`.
## Verification Step
Run a full build to confirm everything compiles and the versions are consistent:
```bash
make
```
## VERSION_BUMP Comment Pattern
The `// VERSION_BUMP` comment serves as a marker for where to make changes next time:
- Always **remove** it from the old location
- **Add** the new version entry
- **Move** the comment to mark the new location
This pattern helps you quickly find all the places that need updating in the next version bump.
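Because the markers are ordinary comments, `grep` can enumerate every outstanding location. Demonstrated here on a throwaway tree with hypothetical contents; against the repo you would point it at `core/`, `sdk/`, and `web/`:

```bash
# Fixture files standing in for the real Cargo.toml and version/mod.rs.
tmp=$(mktemp -d)
printf 'version = "0.4.0-alpha.16" # VERSION_BUMP\n' > "$tmp/Cargo.toml"
printf 'pub type Current = v0_4_0_alpha_16::Version; // VERSION_BUMP\n' > "$tmp/mod.rs"

# One hit per location that will need attention on the next bump.
grep -rn 'VERSION_BUMP' "$tmp"
count=$(grep -rl 'VERSION_BUMP' "$tmp" | wc -l)
rm -rf "$tmp"
```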
## Summary Checklist
- [ ] Update `core/startos/Cargo.toml` version
- [ ] Create new `core/startos/src/version/vX_Y_Z_alpha_N+1.rs` file
- [ ] Update `core/startos/src/version/mod.rs` in 5 locations
- [ ] Run `cargo check` to update `core/Cargo.lock`
- [ ] Update `sdk/package/lib/StartSdk.ts` OSVersion
- [ ] Update `web/package.json` and `web/package-lock.json` version
- [ ] Verify all changes compile/build successfully
## Migration Logic
The `up()` and `down()` methods in the version file handle database migrations:
- **up()**: Migrates the database from the previous version to this version
- **down()**: Rolls back from this version to the previous version
- **pre_up()**: Runs before migration, useful for pre-migration checks or data gathering
If no migration is needed, return `Ok(Value::Null)` for `up()` and `Ok(())` for `down()`.
For complex migrations, you may need to:
1. Update `type PreUpRes` to pass data between `pre_up()` and `up()`
2. Implement database transformations in the `up()` method
3. Implement reverse transformations in `down()` for rollback support


@@ -1,5 +1,7 @@
#!/bin/bash
PROJECT=${PROJECT:-"startos"}
cd "$(dirname "${BASH_SOURCE[0]}")"
PLATFORM="$(if [ -f ./PLATFORM.txt ]; then cat ./PLATFORM.txt; else echo unknown; fi)"
@@ -16,4 +18,4 @@ if [ -n "$STARTOS_ENV" ]; then
VERSION_FULL="$VERSION_FULL~${STARTOS_ENV}"
fi
echo -n "startos-${VERSION_FULL}_${PLATFORM}"
echo -n "${PROJECT}-${VERSION_FULL}_${PLATFORM}"


@@ -8,27 +8,22 @@ if [ "$0" != "./build-cargo-dep.sh" ]; then
exit 1
fi
USE_TTY=
if tty -s; then
USE_TTY="-it"
fi
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
DOCKER_PLATFORM="linux/${ARCH}"
if [ "$ARCH" = aarch64 ] || [ "$ARCH" = arm64 ]; then
DOCKER_PLATFORM="linux/arm64"
elif [ "$ARCH" = x86_64 ]; then
DOCKER_PLATFORM="linux/amd64"
RUST_ARCH="$ARCH"
if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc"
fi
mkdir -p cargo-deps
alias 'rust-musl-builder'='docker run $USE_TTY --platform=${DOCKER_PLATFORM} --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)"/cargo-deps:/home/rust/src -w /home/rust/src -P rust:alpine'
PREINSTALL=${PREINSTALL:-true}
source core/builder-alias.sh
rust-musl-builder sh -c "$PREINSTALL && cargo install $* --target-dir /home/rust/src --target=$ARCH-unknown-linux-musl"
sudo chown -R $USER cargo-deps
sudo chown -R $USER ~/.cargo
RUSTFLAGS="-C target-feature=+crt-static"
rust-zig-builder cargo-zigbuild install $* --target-dir /workdir/cargo-deps/ --target=$RUST_ARCH-unknown-linux-musl
if [ "$(ls -nd "cargo-deps/$RUST_ARCH-unknown-linux-musl/release/${!#}" | awk '{ print $3 }')" != "$UID" ]; then
rust-zig-builder sh -c "chown -R $UID:$UID cargo-deps && chown -R $UID:$UID /usr/local/cargo"
fi


@@ -7,9 +7,9 @@ bmon
btrfs-progs
ca-certificates
cifs-utils
conntrack
cryptsetup
curl
dnsutils
dmidecode
dnsutils
dosfstools
@@ -19,6 +19,7 @@ exfatprogs
flashrom
fuse3
grub-common
grub-efi
htop
httpdirfs
iotop
@@ -41,7 +42,6 @@ nvme-cli
nyx
openssh-server
podman
postgresql
psmisc
qemu-guest-agent
rfkill


@@ -5,11 +5,15 @@ set -e
cd "$(dirname "${BASH_SOURCE[0]}")"
IFS="-" read -ra FEATURES <<< "$ENVIRONMENT"
FEATURES+=("${ARCH}")
if [ "$ARCH" != "$PLATFORM" ]; then
FEATURES+=("${PLATFORM}")
fi
feature_file_checker='
/^#/ { next }
/^\+ [a-z0-9]+$/ { next }
/^- [a-z0-9]+$/ { next }
/^\+ [a-z0-9.-]+$/ { next }
/^- [a-z0-9.-]+$/ { next }
{ exit 1 }
'
@@ -30,8 +34,8 @@ for type in conflicts depends; do
for feature in ${FEATURES[@]}; do
file="$feature.$type"
if [ -f $file ]; then
if grep "^- $pkg$" $file; then
SKIP=1
if grep "^- $pkg$" $file > /dev/null; then
SKIP=yes
fi
fi
done


@@ -0,0 +1,10 @@
- grub-efi
+ parted
+ raspberrypi-net-mods
+ raspberrypi-sys-mods
+ raspi-config
+ raspi-firmware
+ raspi-utils
+ rpi-eeprom
+ rpi-update
+ rpi.gpio-common


@@ -1,2 +1,3 @@
+ gdb
+ heaptrack
+ heaptrack
+ linux-perf


@@ -0,0 +1 @@
+ grub-pc-bin


@@ -1,34 +1,123 @@
#!/bin/sh
printf "\n"
printf "Welcome to\n"
cat << "ASCII"
███████
█ █ █
█ █ █ █
█ █ █ █
█ █ █ █
█ █ █ █
█ █
███████
parse_essential_db_info() {
DB_DUMP="/tmp/startos_db.json"
_____ __ ___ __ __
(_ | /\ |__) | / \(_
__) | / \| \ | \__/__)
ASCII
printf " v$(cat /usr/lib/startos/VERSION.txt)\n\n"
printf " %s (%s %s)\n" "$(uname -o)" "$(uname -r)" "$(uname -m)"
printf " Git Hash: $(cat /usr/lib/startos/GIT_HASH.txt)"
if [ -n "$(cat /usr/lib/startos/ENVIRONMENT.txt)" ]; then
printf " ~ $(cat /usr/lib/startos/ENVIRONMENT.txt)\n"
else
printf "\n"
if command -v start-cli >/dev/null 2>&1; then
start-cli db dump > "$DB_DUMP" 2>/dev/null || return 1
else
return 1
fi
if command -v jq >/dev/null 2>&1 && [ -f "$DB_DUMP" ]; then
HOSTNAME=$(jq -r '.value.serverInfo.hostname // "unknown"' "$DB_DUMP" 2>/dev/null)
VERSION=$(jq -r '.value.serverInfo.version // "unknown"' "$DB_DUMP" 2>/dev/null)
RAM_BYTES=$(jq -r '.value.serverInfo.ram // 0' "$DB_DUMP" 2>/dev/null)
WAN_IP=$(jq -r '.value.serverInfo.network.gateways[].ipInfo.wanIp // "unknown"' "$DB_DUMP" 2>/dev/null | head -1)
NTP_SYNCED=$(jq -r '.value.serverInfo.ntpSynced // false' "$DB_DUMP" 2>/dev/null)
if [ "$RAM_BYTES" != "0" ] && [ "$RAM_BYTES" != "null" ]; then
RAM_GB=$(echo "scale=1; $RAM_BYTES / 1073741824" | bc 2>/dev/null || echo "unknown")
else
RAM_GB="unknown"
fi
RUNNING_SERVICES=$(jq -r '[.value.packageData[] | select(.statusInfo.started != null)] | length' "$DB_DUMP" 2>/dev/null)
TOTAL_SERVICES=$(jq -r '.value.packageData | length' "$DB_DUMP" 2>/dev/null)
rm -f "$DB_DUMP"
return 0
else
rm -f "$DB_DUMP" 2>/dev/null
return 1
fi
}
DB_INFO_AVAILABLE=0
if parse_essential_db_info; then
DB_INFO_AVAILABLE=1
fi
printf "\n"
printf " * Documentation: https://docs.start9.com\n"
printf " * Management: https://%s.local\n" "$(hostname)"
printf " * Support: https://start9.com/contact\n"
printf " * Source Code: https://github.com/Start9Labs/start-os\n"
printf " * License: MIT\n"
printf "\n"
if [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$VERSION" != "unknown" ]; then
version_display="v$VERSION"
else
version_display="v$(cat /usr/lib/startos/VERSION.txt 2>/dev/null || echo 'unknown')"
fi
printf "\n\033[1;37m ▄▄▀▀▀▀▀▄▄\033[0m\n"
printf "\033[1;37m ▄▀ ▄ ▀▄ ▄▄▄▄▄ ▄▄▄▄▄▄▄ ▄ ▄▄▄▄▄ ▄▄▄▄▄▄▄ \033[1;31m▄██████▄ ▄██████\033[0m\n"
printf "\033[1;37m █ █ █ █ █ █ █ █ █ ▀▄ █ \033[1;31m██ ██ ██ \033[0m\n"
printf "\033[1;37m█ █ █ █ ▀▄▄▄▄ █ █ █ █ ▄▄▄▀ █ \033[1;31m██ ██ ▀█████▄\033[0m\n"
printf "\033[1;37m█ █ █ █ █ █ █ █ █ ▀▄ █ \033[1;31m██ ██ ██\033[0m\n"
printf "\033[1;37m █ █ █ █ ▄▄▄▄▄▀ █ █ █ █ ▀▄ █ \033[1;31m▀██████▀ ██████▀\033[0m\n"
printf "\033[1;37m █ █\033[0m\n"
printf "\033[1;37m ▀▀▄▄▄▀▀ $version_display\033[0m\n\n"
uptime_str=$(uptime | awk -F'up ' '{print $2}' | awk -F',' '{print $1}' | sed 's/^ *//')
if [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$RAM_GB" != "unknown" ]; then
memory_used=$(free -m | awk 'NR==2{printf "%.0fMB", $3}')
memory_display="$memory_used / ${RAM_GB}GB"
else
memory_display=$(free -m | awk 'NR==2{printf "%.0fMB / %.0fMB", $3, $2}')
fi
root_usage=$(df -h / | awk 'NR==2{printf "%s (%s free)", $5, $4}')
if [ -d "/media/startos/data/package-data" ]; then
data_usage=$(df -h /media/startos/data/package-data | awk 'NR==2{printf "%s (%s free)", $5, $4}')
else
data_usage="N/A"
fi
if [ "$DB_INFO_AVAILABLE" -eq 1 ]; then
services_text="$RUNNING_SERVICES/$TOTAL_SERVICES running"
else
services_text="Unknown"
fi
local_ip=$(ip route get 1.1.1.1 2>/dev/null | awk '{for(i=1;i<=NF;i++) if($i=="src") print $(i+1)}' | head -1)
if [ -z "$local_ip" ]; then local_ip="N/A"; fi
if [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$WAN_IP" != "unknown" ]; then
wan_ip="$WAN_IP"
else
wan_ip="N/A"
fi
printf " \033[1;37m┌─ SYSTEM STATUS ───────────────────────────────────────────────────┐\033[0m\n"
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Uptime:" "$uptime_str" "Memory:" "$memory_display"
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Root:" "$root_usage" "Data:" "$data_usage"
if [ "$DB_INFO_AVAILABLE" -eq 1 ]; then
if [ "$RUNNING_SERVICES" -eq "$TOTAL_SERVICES" ] && [ "$TOTAL_SERVICES" -gt 0 ]; then
printf " \033[1;37m│\033[0m %-8s \033[0;32m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Services:" "$services_text" "WAN:" "$wan_ip"
elif [ "$RUNNING_SERVICES" -gt 0 ]; then
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Services:" "$services_text" "WAN:" "$wan_ip"
else
printf " \033[1;37m│\033[0m %-8s \033[0;31m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Services:" "$services_text" "WAN:" "$wan_ip"
fi
else
printf " \033[1;37m│\033[0m %-8s \033[0;37m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Services:" "$services_text" "WAN:" "$wan_ip"
fi
if [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$NTP_SYNCED" = "true" ]; then
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;32m%-23s\033[0m \033[1;37m│\033[0m\n" "Local:" "$local_ip" "NTP:" "Synced"
elif [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$NTP_SYNCED" = "false" ]; then
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;31m%-23s\033[0m \033[1;37m│\033[0m\n" "Local:" "$local_ip" "NTP:" "Not Synced"
else
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;37m%-23s\033[0m \033[1;37m│\033[0m\n" "Local:" "$local_ip" "NTP:" "Unknown"
fi
printf " \033[1;37m└───────────────────────────────────────────────────────────────────┘\033[0m"
if [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$HOSTNAME" != "unknown" ]; then
web_url="https://$HOSTNAME.local"
else
web_url="https://$(hostname).local"
fi
printf "\n \033[1;37m┌──────────────────────────────────────────────────── QUICK ACCESS ─┐\033[0m\n"
printf " \033[1;37m│\033[0m Web Interface: \033[0;36m%-50s\033[0m \033[1;37m│\033[0m\n" "$web_url"
printf " \033[1;37m│\033[0m Documentation: \033[0;36m%-50s\033[0m \033[1;37m│\033[0m\n" "https://staging.docs.start9.com"
printf " \033[1;37m│\033[0m Support: \033[0;36m%-50s\033[0m \033[1;37m│\033[0m\n" "https://start9.com/contact"
printf " \033[1;37m└───────────────────────────────────────────────────────────────────┘\033[0m\n\n"


@@ -1,6 +1,6 @@
#!/bin/bash
SOURCE_DIR="$(dirname "${BASH_SOURCE[0]}")"
SOURCE_DIR="$(dirname $(realpath "${BASH_SOURCE[0]}"))"
if [ "$UID" -ne 0 ]; then
>&2 echo 'Must be run as root'
@@ -10,24 +10,24 @@ fi
POSITIONAL_ARGS=()
while [[ $# -gt 0 ]]; do
case $1 in
--no-sync)
NO_SYNC=1
shift
;;
--create)
ONLY_CREATE=1
shift
;;
-*|--*)
echo "Unknown option $1"
exit 1
;;
*)
POSITIONAL_ARGS+=("$1") # save positional arg
shift # past argument
;;
esac
case $1 in
--no-sync)
NO_SYNC=1
shift
;;
--create)
ONLY_CREATE=1
shift
;;
-*|--*)
echo "Unknown option $1"
exit 1
;;
*)
POSITIONAL_ARGS+=("$1") # save positional arg
shift # past argument
;;
esac
done
set -- "${POSITIONAL_ARGS[@]}" # restore positional parameters
@@ -35,7 +35,7 @@ set -- "${POSITIONAL_ARGS[@]}" # restore positional parameters
if [ -z "$NO_SYNC" ]; then
echo 'Syncing...'
umount -R /media/startos/next 2> /dev/null
umount -R /media/startos/upper 2> /dev/null
umount /media/startos/upper 2> /dev/null
rm -rf /media/startos/upper /media/startos/next
mkdir /media/startos/upper
mount -t tmpfs tmpfs /media/startos/upper
@@ -43,8 +43,6 @@ if [ -z "$NO_SYNC" ]; then
mount -t overlay \
-olowerdir=/media/startos/current,upperdir=/media/startos/upper/data,workdir=/media/startos/upper/work \
overlay /media/startos/next
mkdir -p /media/startos/next/media/startos/root
mount --bind /media/startos/root /media/startos/next/media/startos/root
fi
if [ -n "$ONLY_CREATE" ]; then
@@ -56,12 +54,18 @@ mkdir -p /media/startos/next/dev
mkdir -p /media/startos/next/sys
mkdir -p /media/startos/next/proc
mkdir -p /media/startos/next/boot
mkdir -p /media/startos/next/media/startos/root
mount --bind /run /media/startos/next/run
mount --bind /tmp /media/startos/next/tmp
mount --bind /dev /media/startos/next/dev
mount --bind /sys /media/startos/next/sys
mount --bind /proc /media/startos/next/proc
mount --bind /boot /media/startos/next/boot
mount --bind /media/startos/root /media/startos/next/media/startos/root
if mountpoint /sys/firmware/efi/efivars 2> /dev/null; then
mount --bind /sys/firmware/efi/efivars /media/startos/next/sys/firmware/efi/efivars
fi
if [ -z "$*" ]; then
chroot /media/startos/next
@@ -71,6 +75,10 @@ else
CHROOT_RES=$?
fi
if mountpoint /media/startos/next/sys/firmware/efi/efivars 2> /dev/null; then
umount /media/startos/next/sys/firmware/efi/efivars
fi
umount /media/startos/next/run
umount /media/startos/next/tmp
umount /media/startos/next/dev
@@ -87,11 +95,12 @@ if [ "$CHROOT_RES" -eq 0 ]; then
echo 'Upgrading...'
rm -f /media/startos/images/next.squashfs
if ! time mksquashfs /media/startos/next /media/startos/images/next.squashfs -b 4096 -comp gzip; then
umount -R /media/startos/next
umount -R /media/startos/upper
rm -rf /media/startos/upper /media/startos/next
exit 1
umount -l /media/startos/next
umount -l /media/startos/upper
rm -rf /media/startos/upper /media/startos/next
exit 1
fi
hash=$(b3sum /media/startos/images/next.squashfs | head -c 32)
mv /media/startos/images/next.squashfs /media/startos/images/${hash}.rootfs
@@ -103,5 +112,5 @@ if [ "$CHROOT_RES" -eq 0 ]; then
fi
umount -R /media/startos/next
umount -R /media/startos/upper
umount /media/startos/upper
rm -rf /media/startos/upper /media/startos/next


@@ -27,7 +27,6 @@ user_pref("browser.crashReports.unsubmittedCheck.autoSubmit2", false);
user_pref("browser.newtabpage.activity-stream.feeds.asrouterfeed", false);
user_pref("browser.newtabpage.activity-stream.feeds.topsites", false);
user_pref("browser.newtabpage.activity-stream.showSponsoredTopSites", false);
user_pref("browser.onboarding.enabled", false);
user_pref("browser.ping-centre.telemetry", false);
user_pref("browser.pocket.enabled", false);
user_pref("browser.safebrowsing.blockedURIs.enabled", false);
@@ -43,7 +42,7 @@ user_pref("browser.startup.homepage_override.mstone", "ignore");
user_pref("browser.theme.content-theme", 0);
user_pref("browser.theme.toolbar-theme", 0);
user_pref("browser.urlbar.groupLabels.enabled", false);
user_pref("browser.urlbar.suggest.searches" false);
user_pref("browser.urlbar.suggest.searches", false);
user_pref("datareporting.policy.firstRunURL", "");
user_pref("datareporting.healthreport.service.enabled", false);
user_pref("datareporting.healthreport.uploadEnabled", false);
@@ -52,10 +51,9 @@ user_pref("dom.securecontext.allowlist_onions", true);
user_pref("dom.securecontext.whitelist_onions", true);
user_pref("experiments.enabled", false);
user_pref("experiments.activeExperiment", false);
user_pref("experiments.supported", false);
user_pref("extensions.activeThemeID", "firefox-compact-dark@mozilla.org");
user_pref("extensions.blocklist.enabled", false);
user_pref("extensions.getAddons.cache.enabled", false);
user_pref("extensions.htmlaboutaddons.recommendations.enabled", false);
user_pref("extensions.pocket.enabled", false);
user_pref("extensions.update.enabled", false);
user_pref("extensions.shield-recipe-client.enabled", false);
@@ -66,9 +64,15 @@ user_pref("messaging-system.rsexperimentloader.enabled", false);
user_pref("network.allow-experiments", false);
user_pref("network.captive-portal-service.enabled", false);
user_pref("network.connectivity-service.enabled", false);
user_pref("network.proxy.autoconfig_url", "file:///usr/lib/startos/proxy.pac");
user_pref("network.proxy.socks", "10.0.3.1");
user_pref("network.proxy.socks_port", 9050);
user_pref("network.proxy.socks_version", 5);
user_pref("network.proxy.socks_remote_dns", true);
user_pref("network.proxy.type", 2);
user_pref("network.proxy.type", 1);
user_pref("privacy.resistFingerprinting", true);
//Enable letterboxing if we want the window size sent to the server to snap to common resolutions:
//user_pref("privacy.resistFingerprinting.letterboxing", true);
user_pref("privacy.trackingprotection.enabled", true);
user_pref("signon.rememberSignons", false);
user_pref("toolkit.telemetry.archive.enabled", false);
user_pref("toolkit.telemetry.bhrPing.enabled", false);
@@ -81,6 +85,17 @@ user_pref("toolkit.telemetry.shutdownPingSender.enabled", false);
user_pref("toolkit.telemetry.unified", false);
user_pref("toolkit.telemetry.updatePing.enabled", false);
user_pref("toolkit.telemetry.cachedClientID", "");
//Blocking automatic Mozilla CDN server requests
user_pref("extensions.getAddons.showPane", false);
user_pref("extensions.getAddons.cache.enabled", false);
//user_pref("services.settings.server", ""); // Remote settings server (HSTS preload updates and Cerfiticate Revocation Lists are fetched)
user_pref("browser.aboutHomeSnippets.updateUrl", "");
user_pref("browser.newtabpage.activity-stream.feeds.snippets", false);
user_pref("browser.newtabpage.activity-stream.feeds.section.topstories", false);
user_pref("browser.newtabpage.activity-stream.feeds.system.topstories", false);
user_pref("browser.newtabpage.activity-stream.feeds.discoverystreamfeed", false);
user_pref("browser.safebrowsing.provider.mozilla.updateURL", "");
user_pref("browser.safebrowsing.provider.mozilla.gethashURL", "");
EOF
ln -sf /usr/lib/$(uname -m)-linux-gnu/pkcs11/p11-kit-trust.so /usr/lib/firefox-esr/libnssckbi.so


@@ -1,26 +1,51 @@
#!/bin/bash
if [ -z "$iiface" ] || [ -z "$oiface" ] || [ -z "$sip" ] || [ -z "$dip" ] || [ -z "$sport" ] || [ -z "$dport" ]; then
if [ -z "$sip" ] || [ -z "$dip" ] || [ -z "$dprefix" ] || [ -z "$sport" ] || [ -z "$dport" ]; then
>&2 echo 'missing required env var'
exit 1
fi
kind="-A"
NAME="F$(echo "$sip:$sport -> $dip/$dprefix:$dport" | sha256sum | head -c 15)"
for kind in INPUT FORWARD ACCEPT; do
if ! iptables -C $kind -j "${NAME}_${kind}" 2> /dev/null; then
iptables -N "${NAME}_${kind}" 2> /dev/null
iptables -A $kind -j "${NAME}_${kind}"
fi
done
for kind in PREROUTING INPUT OUTPUT POSTROUTING; do
if ! iptables -t nat -C $kind -j "${NAME}_${kind}" 2> /dev/null; then
iptables -t nat -N "${NAME}_${kind}" 2> /dev/null
iptables -t nat -A $kind -j "${NAME}_${kind}"
fi
done
err=0
trap 'err=1' ERR
for kind in INPUT FORWARD ACCEPT; do
iptables -F "${NAME}_${kind}" 2> /dev/null
done
for kind in PREROUTING INPUT OUTPUT POSTROUTING; do
iptables -t nat -F "${NAME}_${kind}" 2> /dev/null
done
if [ "$UNDO" = 1 ]; then
kind="-D"
conntrack -D -p tcp -d $sip --dport $sport || true # conntrack returns exit 1 if no connections are active
conntrack -D -p udp -d $sip --dport $sport || true # conntrack returns exit 1 if no connections are active
exit $err
fi
iptables -t nat "$kind" POSTROUTING -o $iiface -j MASQUERADE
iptables -t nat "$kind" PREROUTING -i $iiface -p tcp --dport $sport -j DNAT --to-destination $dip:$dport
iptables -t nat "$kind" PREROUTING -i $iiface -p udp --dport $sport -j DNAT --to-destination $dip:$dport
iptables -t nat "$kind" PREROUTING -i $oiface -s $dip/24 -d $sip -p tcp --dport $sport -j DNAT --to-destination $dip:$dport
iptables -t nat "$kind" PREROUTING -i $oiface -s $dip/24 -d $sip -p udp --dport $sport -j DNAT --to-destination $dip:$dport
iptables -t nat "$kind" POSTROUTING -o $oiface -s $dip/24 -d $dip/32 -p tcp --dport $dport -j SNAT --to-source $sip:$sport
iptables -t nat "$kind" POSTROUTING -o $oiface -s $dip/24 -d $dip/32 -p udp --dport $dport -j SNAT --to-source $sip:$sport
iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
iptables -t nat -A ${NAME}_OUTPUT -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
iptables -t nat -A ${NAME}_OUTPUT -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
iptables -t nat -A ${NAME}_PREROUTING -s "$dip/$dprefix" -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
iptables -t nat -A ${NAME}_PREROUTING -s "$dip/$dprefix" -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
iptables -t nat -A ${NAME}_POSTROUTING -s "$dip/$dprefix" -d "$dip" -p tcp --dport "$dport" -j MASQUERADE
iptables -t nat -A ${NAME}_POSTROUTING -s "$dip/$dprefix" -d "$dip" -p udp --dport "$dport" -j MASQUERADE
iptables -t nat "$kind" PREROUTING -i $iiface -s $sip/32 -d $sip -p tcp --dport $sport -j DNAT --to-destination $dip:$dport
iptables -t nat "$kind" PREROUTING -i $iiface -s $sip/32 -d $sip -p udp --dport $sport -j DNAT --to-destination $dip:$dport
iptables -t nat "$kind" POSTROUTING -o $oiface -s $sip/32 -d $dip/32 -p tcp --dport $dport -j SNAT --to-source $sip:$sport
iptables -t nat "$kind" POSTROUTING -o $oiface -s $sip/32 -d $dip/32 -p udp --dport $dport -j SNAT --to-source $sip:$sport
iptables -A ${NAME}_FORWARD -d $dip -p tcp --dport $dport -m state --state NEW -j ACCEPT
iptables -A ${NAME}_FORWARD -d $dip -p udp --dport $dport -m state --state NEW -j ACCEPT
exit $err


@@ -83,6 +83,7 @@ local_mount_root()
if [ -d "$image" ]; then
mount -r --bind $image /lower
elif [ -f "$image" ]; then
modprobe loop
modprobe squashfs
mount -r $image /lower
else


@@ -1,36 +1,64 @@
#!/bin/bash
fail=$(printf " [\033[31m fail \033[0m]")
pass=$(printf " [\033[32m pass \033[0m]")
# --- Config ---
# Colors (using printf to ensure compatibility)
GRAY=$(printf '\033[90m')
GREEN=$(printf '\033[32m')
RED=$(printf '\033[31m')
NC=$(printf '\033[0m') # No Color
# Proxies to test
proxies=(
"Host Tor|127.0.1.1:9050"
"Startd Tor|10.0.3.1:9050"
)
# Default URLs
onion_list=(
"The Tor Project|http://2gzyxa5ihm7nsggfxnu52rck2vv4rvmdlkiu3zzui5du4xyclen53wid.onion"
"Start9|http://privacy34kn4ez3y3nijweec6w4g54i3g54sdv7r5mr6soma3w4begyd.onion"
"Mempool|http://mempoolhqx4isw62xs7abwphsq7ldayuidyx2v2oethdhhj6mlo2r6ad.onion"
"DuckDuckGo|https://duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion"
"Brave Search|https://search.brave4u7jddbv7cyviptqjc7jusxh72uik7zt6adtckl5f4nwy2v72qd.onion"
)
# Check if ~/.startos/tor-check.list exists and read its contents if available
if [ -f ~/.startos/tor-check.list ]; then
while IFS= read -r line; do
# Check if the line starts with a #
if [[ ! "$line" =~ ^# ]]; then
onion_list+=("$line")
# Load custom list
[ -f ~/.startos/tor-check.list ] && readarray -t custom_list < <(grep -v '^#' ~/.startos/tor-check.list) && onion_list+=("${custom_list[@]}")
# --- Functions ---
print_line() { printf "${GRAY}────────────────────────────────────────${NC}\n"; }
# --- Main ---
echo "Testing Onion Connections..."
for proxy_info in "${proxies[@]}"; do
proxy_name="${proxy_info%%|*}"
proxy_addr="${proxy_info#*|}"
print_line
printf "${GRAY}Proxy: %s (%s)${NC}\n" "$proxy_name" "$proxy_addr"
for data in "${onion_list[@]}"; do
name="${data%%|*}"
url="${data#*|}"
# Capture verbose output + http code.
# --no-progress-meter: Suppresses the "0 0 0" stats but keeps -v output
output=$(curl -v --no-progress-meter --max-time 15 --socks5-hostname "$proxy_addr" "$url" 2>&1)
exit_code=$?
if [ $exit_code -eq 0 ]; then
printf " ${GREEN}[pass]${NC} %s (%s)\n" "$name" "$url"
else
printf " ${RED}[fail]${NC} %s (%s)\n" "$name" "$url"
printf " ${RED}↳ Curl Error %s${NC}\n" "$exit_code"
# Print the last 4 lines of verbose log to show the specific handshake error
# We look for lines starting with '*' or '>' or '<' to filter out junk if any remains
echo "$output" | tail -n 4 | sed "s/^/ ${GRAY}/"
fi
done < ~/.startos/tor-check.list
fi
echo "Testing connection to Onion Pages ..."
for data in "${onion_list[@]}"; do
name="${data%%|*}"
url="${data#*|}"
if curl --socks5-hostname localhost:9050 "$url" > /dev/null 2>&1; then
echo " ${pass}: $name ($url) "
else
echo " ${fail}: $name ($url) "
fi
done
done
echo
echo "Done."
print_line
# Reset color just in case
printf "${NC}"

build/lib/scripts/upgrade Executable file

@@ -0,0 +1,82 @@
#!/bin/bash
set -e
SOURCE_DIR="$(dirname $(realpath "${BASH_SOURCE[0]}"))"
if [ "$UID" -ne 0 ]; then
>&2 echo 'Must be run as root'
exit 1
fi
if ! [ -f "$1" ]; then
>&2 echo "usage: $0 <SQUASHFS>"
exit 1
fi
echo 'Upgrading...'
hash=$(b3sum $1 | head -c 32)
if [ -n "$2" ] && [ "$hash" != "$CHECKSUM" ]; then
>&2 echo 'Checksum mismatch'
exit 2
fi
unsquashfs -f -d / $1 boot
umount -R /media/startos/next 2> /dev/null || true
umount /media/startos/upper 2> /dev/null || true
umount /media/startos/lower 2> /dev/null || true
mkdir -p /media/startos/upper
mount -t tmpfs tmpfs /media/startos/upper
mkdir -p /media/startos/lower /media/startos/upper/data /media/startos/upper/work /media/startos/next
mount $1 /media/startos/lower
mount -t overlay \
-olowerdir=/media/startos/lower,upperdir=/media/startos/upper/data,workdir=/media/startos/upper/work \
overlay /media/startos/next
mkdir -p /media/startos/next/run
mkdir -p /media/startos/next/dev
mkdir -p /media/startos/next/sys
mkdir -p /media/startos/next/proc
mkdir -p /media/startos/next/boot
mkdir -p /media/startos/next/media/startos/root
mount --bind /run /media/startos/next/run
mount --bind /tmp /media/startos/next/tmp
mount --bind /dev /media/startos/next/dev
mount --bind /sys /media/startos/next/sys
mount --bind /proc /media/startos/next/proc
mount --bind /boot /media/startos/next/boot
mount --bind /media/startos/root /media/startos/next/media/startos/root
if mountpoint /boot/efi 2> /dev/null; then
mkdir -p /media/startos/next/boot/efi
mount --bind /boot/efi /media/startos/next/boot/efi
fi
if mountpoint /sys/firmware/efi/efivars 2> /dev/null; then
mount --bind /sys/firmware/efi/efivars /media/startos/next/sys/firmware/efi/efivars
fi
chroot /media/startos/next bash -e << "EOF"
if [ -f /boot/grub/grub.cfg ]; then
grub-install /dev/$(eval $(lsblk -o MOUNTPOINT,PKNAME -P | grep 'MOUNTPOINT="/media/startos/root"') && echo $PKNAME)
update-grub
fi
EOF
sync
umount -Rl /media/startos/next
umount /media/startos/upper
umount /media/startos/lower
mv $1 /media/startos/images/${hash}.rootfs
ln -rsf /media/startos/images/${hash}.rootfs /media/startos/config/current.rootfs
sync
echo 'System upgrade complete. Reboot to apply changes...'


@@ -1,61 +0,0 @@
#!/bin/bash
set -e
if [ "$UID" -ne 0 ]; then
>&2 echo 'Must be run as root'
exit 1
fi
if [ -z "$1" ]; then
>&2 echo "usage: $0 <SQUASHFS>"
exit 1
fi
VERSION=$(unsquashfs -cat $1 /usr/lib/startos/VERSION.txt)
GIT_HASH=$(unsquashfs -cat $1 /usr/lib/startos/GIT_HASH.txt)
B3SUM=$(b3sum $1 | head -c 32)
if [ -n "$CHECKSUM" ] && [ "$CHECKSUM" != "$B3SUM" ]; then
>&2 echo "CHECKSUM MISMATCH"
exit 2
fi
mv $1 /media/startos/images/${B3SUM}.rootfs
ln -rsf /media/startos/images/${B3SUM}.rootfs /media/startos/config/current.rootfs
unsquashfs -n -f -d / /media/startos/images/${B3SUM}.rootfs boot
umount -R /media/startos/next 2> /dev/null || true
umount -R /media/startos/lower 2> /dev/null || true
umount -R /media/startos/upper 2> /dev/null || true
rm -rf /media/startos/lower /media/startos/upper /media/startos/next
mkdir /media/startos/upper
mount -t tmpfs tmpfs /media/startos/upper
mkdir -p /media/startos/lower /media/startos/upper/data /media/startos/upper/work /media/startos/next
mount /media/startos/images/${B3SUM}.rootfs /media/startos/lower
mount -t overlay \
-olowerdir=/media/startos/lower,upperdir=/media/startos/upper/data,workdir=/media/startos/upper/work \
overlay /media/startos/next
mkdir -p /media/startos/next/media/startos/root
mount --bind /media/startos/root /media/startos/next/media/startos/root
mkdir -p /media/startos/next/dev
mkdir -p /media/startos/next/sys
mkdir -p /media/startos/next/proc
mkdir -p /media/startos/next/boot
mount --bind /dev /media/startos/next/dev
mount --bind /sys /media/startos/next/sys
mount --bind /proc /media/startos/next/proc
mount --bind /boot /media/startos/next/boot
chroot /media/startos/next update-grub2
umount -R /media/startos/next
umount -R /media/startos/upper
umount -R /media/startos/lower
rm -rf /media/startos/lower /media/startos/upper /media/startos/next
sync
reboot


@@ -1,4 +1,4 @@
FROM debian:bookworm
FROM debian:forky
RUN apt-get update && \
apt-get install -y \
@@ -25,7 +25,8 @@ RUN apt-get update && \
systemd-container \
systemd-sysv \
dbus \
dbus-user-session
dbus-user-session \
nodejs
RUN systemctl mask \
systemd-firstboot.service \
@@ -38,17 +39,6 @@ RUN git config --global --add safe.directory /root/start-os
RUN mkdir -p /etc/debspawn && \
echo "AllowUnsafePermissions=true" > /etc/debspawn/global.toml
ENV NVM_DIR=~/.nvm
ENV NODE_VERSION=22
RUN mkdir -p $NVM_DIR && \
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/master/install.sh | bash && \
. $NVM_DIR/nvm.sh \
nvm install $NODE_VERSION && \
nvm use $NODE_VERSION && \
nvm alias default $NODE_VERSION && \
ln -s $(which node) /usr/bin/node && \
ln -s $(which npm) /usr/bin/npm
RUN mkdir -p /root/start-os
WORKDIR /root/start-os

View File

@@ -1,6 +1,6 @@
#!/bin/bash
if [ "$FORCE_COMPAT" = 1 ] || ( [ "$REQUIRES" = "linux" ] && [ "$(uname -s)" != "Linux" ] ) || ( [ "$REQUIRES" = "debian" ] && ! which dpkg > /dev/null ); then
if [ "$FORCE_COMPAT" = 1 ] || ( [ "$REQUIRES" = "linux" ] && [ "$(uname -s)" != "Linux" ] ) || ( [ "$REQUIRES" = "debian" ] && ! which dpkg > /dev/null ) || ( [ "$REQUIRES" = "qemu" ] && ! which qemu-$ARCH > /dev/null ); then
project_pwd="$(cd "$(dirname "${BASH_SOURCE[0]}")"/../.. && pwd)/"
pwd="$(pwd)/"
if ! [[ "$pwd" = "$project_pwd"* ]]; then
@@ -18,9 +18,9 @@ if [ "$FORCE_COMPAT" = 1 ] || ( [ "$REQUIRES" = "linux" ] && [ "$(uname -s)" !=
docker run -d --rm --name os-compat --privileged --security-opt apparmor=unconfined -v "${project_pwd}:/root/start-os" -v /lib/modules:/lib/modules:ro start9/build-env
while ! docker exec os-compat systemctl is-active --quiet multi-user.target 2> /dev/null; do sleep .5; done
docker exec -eARCH -eENVIRONMENT -ePLATFORM -eGIT_BRANCH_AS_HASH $USE_TTY -w "/root/start-os${rel_pwd}" os-compat $@
docker exec -eARCH -eENVIRONMENT -ePLATFORM -eGIT_BRANCH_AS_HASH -ePROJECT -eDEPENDS -eCONFLICTS $USE_TTY -w "/root/start-os${rel_pwd}" os-compat $@
code=$?
docker stop os-compat
docker stop os-compat > /dev/null
exit $code
else
exec $@

View File

@@ -4,13 +4,17 @@ cd "$(dirname "${BASH_SOURCE[0]}")"
set -e
rm -rf web/dist/static
STATIC_DIR=web/dist/static/$1
RAW_DIR=web/dist/raw/$1
mkdir -p $STATIC_DIR
rm -rf $STATIC_DIR
if ! [[ "$ENVIRONMENT" =~ (^|-)dev($|-) ]]; then
find web/dist/raw -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 gzip -kf
find web/dist/raw -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 brotli -kf
find $RAW_DIR -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 gzip -kf
find $RAW_DIR -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 brotli -kf
for file in $(find web/dist/raw -type f -not -name '*.gz' -and -not -name '*.br'); do
for file in $(find $RAW_DIR -type f -not -name '*.gz' -and -not -name '*.br'); do
raw_size=$(du $file | awk '{print $1 * 512}')
gz_size=$(du $file.gz | awk '{print $1 * 512}')
br_size=$(du $file.br | awk '{print $1 * 512}')
@@ -23,4 +27,5 @@ if ! [[ "$ENVIRONMENT" =~ (^|-)dev($|-) ]]; then
done
fi
cp -r web/dist/raw web/dist/static
cp -r $RAW_DIR $STATIC_DIR

View File

@@ -3,4 +3,4 @@ Description=StartOS Container Runtime Failure Handler
[Service]
Type=oneshot
ExecStart=/usr/bin/start-cli rebuild
ExecStart=/usr/bin/start-container rebuild

View File

@@ -4,6 +4,7 @@ OnFailure=container-runtime-failure.service
[Service]
Type=simple
Environment=RUST_LOG=startos=debug
ExecStart=/usr/bin/node --experimental-detect-module --trace-warnings --unhandled-rejections=warn /usr/lib/startos/init/index.js
Restart=no

View File

@@ -6,13 +6,9 @@ mkdir -p /run/systemd/resolve
echo "nameserver 8.8.8.8" > /run/systemd/resolve/stub-resolv.conf
apt-get update
apt-get install -y curl rsync qemu-user-static
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.bashrc
nvm install 22
ln -s $(which node) /usr/bin/node
apt-get install -y curl rsync qemu-user-static nodejs
sed -i '/\(^\|#\)DNSStubListener=/c\DNSStubListener=no' /etc/systemd/resolved.conf
sed -i '/\(^\|#\)Storage=/c\Storage=persistent' /etc/systemd/journald.conf
sed -i '/\(^\|#\)Compress=/c\Compress=yes' /etc/systemd/journald.conf
sed -i '/\(^\|#\)SystemMaxUse=/c\SystemMaxUse=1G' /etc/systemd/journald.conf
@@ -20,4 +16,7 @@ sed -i '/\(^\|#\)ForwardToSyslog=/c\ForwardToSyslog=no' /etc/systemd/journald.co
systemctl enable container-runtime.service
rm -rf /run/systemd
rm -rf /run/systemd
rm -f /etc/resolv.conf
echo "nameserver 10.0.3.1" > /etc/resolv.conf

View File

@@ -3,7 +3,7 @@ cd "$(dirname "${BASH_SOURCE[0]}")"
set -e
DISTRO=debian
VERSION=bookworm
VERSION=trixie
ARCH=${ARCH:-$(uname -m)}
FLAVOR=default

View File

@@ -38,7 +38,7 @@
},
"../sdk/dist": {
"name": "@start9labs/start-sdk",
"version": "0.4.0-beta.33",
"version": "0.4.0-beta.45",
"license": "MIT",
"dependencies": {
"@iarna/toml": "^3.0.0",
@@ -110,6 +110,7 @@
"integrity": "sha512-l+lkXCHS6tQEc5oUpK28xBOZ6+HwaH7YwoYQbLFiYb4nS2/l1tKnZEtEWkD0GuiYdvArf9qBS0XlQGXzPMsNqQ==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@ampproject/remapping": "^2.2.0",
"@babel/code-frame": "^7.26.2",
@@ -1200,6 +1201,7 @@
"dev": true,
"hasInstallScript": true,
"license": "Apache-2.0",
"peer": true,
"dependencies": {
"@swc/counter": "^0.1.3",
"@swc/types": "^0.1.17"
@@ -2143,6 +2145,7 @@
}
],
"license": "MIT",
"peer": true,
"dependencies": {
"caniuse-lite": "^1.0.30001688",
"electron-to-chromium": "^1.5.73",
@@ -3990,6 +3993,7 @@
"integrity": "sha512-NIy3oAFp9shda19hy4HK0HRTWKtPJmGdnvywu01nOqNC2vZg+Z+fvJDxpMQA88eb2I9EcafcdjYgsDthnYTvGw==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@jest/core": "^29.7.0",
"@jest/types": "^29.6.3",
@@ -6556,6 +6560,7 @@
"integrity": "sha512-84MVSjMEHP+FQRPy3pX9sTVV/INIex71s9TL2Gm5FG/WG1SqXeKyZ0k7/blY/4FdOzI12CBy1vGc4og/eus0fw==",
"dev": true,
"license": "Apache-2.0",
"peer": true,
"bin": {
"tsc": "bin/tsc",
"tsserver": "bin/tsserver"

View File

@@ -35,13 +35,13 @@ const SOCKET_PATH = "/media/startos/rpc/host.sock"
let hostSystemId = 0
export type EffectContext = {
procedureId: string | null
eventId: string | null
callbacks?: CallbackHolder
constRetry?: () => void
}
const rpcRoundFor =
(procedureId: string | null) =>
(eventId: string | null) =>
<K extends T.EffectMethod | "clearCallbacks">(
method: K,
params: Record<string, unknown>,
@@ -52,7 +52,7 @@ const rpcRoundFor =
JSON.stringify({
id,
method,
params: { ...params, procedureId: procedureId || undefined },
params: { ...params, eventId: eventId ?? undefined },
}) + "\n",
)
})
@@ -103,8 +103,9 @@ const rpcRoundFor =
}
export function makeEffects(context: EffectContext): Effects {
const rpcRound = rpcRoundFor(context.procedureId)
const rpcRound = rpcRoundFor(context.eventId)
const self: Effects = {
eventId: context.eventId,
child: (name) =>
makeEffects({ ...context, callbacks: context.callbacks?.child(name) }),
constRetry: context.constRetry,
@@ -288,6 +289,7 @@ export function makeEffects(context: EffectContext): Effects {
getStatus(...[o]: Parameters<T.Effects["getStatus"]>) {
return rpcRound("get-status", o) as ReturnType<T.Effects["getStatus"]>
},
/// DEPRECATED
setMainStatus(o: { status: "running" | "stopped" }): Promise<null> {
return rpcRound("set-main-status", o) as ReturnType<
T.Effects["setHealth"]
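The hunks above rename `procedureId` to `eventId` and show the runtime's wire format: each request is a JSON-RPC object serialized onto the unix socket, followed by a newline. A minimal sketch of that framing — `frameRequest` is a hypothetical helper, not part of the codebase:

```typescript
// Sketch of the newline-delimited JSON-RPC framing used by rpcRoundFor.
// `frameRequest` is illustrative only; the real code builds this inline.
function frameRequest(
  eventId: string | null,
  id: number,
  method: string,
  params: Record<string, unknown>,
): string {
  // `eventId ?? undefined` omits the key entirely when there is no event,
  // mirroring `params: { ...params, eventId: eventId ?? undefined }` above.
  return (
    JSON.stringify({
      id,
      method,
      params: { ...params, eventId: eventId ?? undefined },
    }) + "\n"
  )
}
```

`JSON.stringify` drops object keys whose value is `undefined`, which is why `?? undefined` is enough to keep a null `eventId` off the wire.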

View File

@@ -146,6 +146,7 @@ const handleRpc = (id: IdType, result: Promise<RpcResult>) =>
const hasId = object({ id: idType }).test
export class RpcListener {
shouldExit = false
unixSocketServer = net.createServer(async (server) => {})
private _system: System | undefined
private callbacks: CallbackHolder | undefined
@@ -158,6 +159,8 @@ export class RpcListener {
this.unixSocketServer.listen(SOCKET_PATH)
console.log("Listening on %s", SOCKET_PATH)
this.unixSocketServer.on("connection", (s) => {
let id: IdType = null
const captureId = <X>(x: X) => {
@@ -208,6 +211,11 @@ export class RpcListener {
.catch(mapError)
.then(logData("response"))
.then(writeDataToSocket)
.then((_) => {
if (this.shouldExit) {
process.exit(0)
}
})
.catch((e) => {
console.error(`Major error in socket handling: ${e}`)
console.debug(`Data in: ${a.toString()}`)
@@ -242,11 +250,11 @@ export class RpcListener {
.when(runType, async ({ id, params }) => {
const system = this.system
const procedure = jsonPath.unsafeCast(params.procedure)
const { input, timeout, id: procedureId } = params
const { input, timeout, id: eventId } = params
const result = this.getResult(
procedure,
system,
procedureId,
eventId,
timeout,
input,
)
@@ -256,11 +264,11 @@ export class RpcListener {
.when(sandboxRunType, async ({ id, params }) => {
const system = this.system
const procedure = jsonPath.unsafeCast(params.procedure)
const { input, timeout, id: procedureId } = params
const { input, timeout, id: eventId } = params
const result = this.getResult(
procedure,
system,
procedureId,
eventId,
timeout,
input,
)
@@ -275,7 +283,7 @@ export class RpcListener {
const callbacks =
this.callbacks?.getChild("main") || this.callbacks?.child("main")
const effects = makeEffects({
procedureId: null,
eventId: null,
callbacks,
})
return handleRpc(
@@ -284,10 +292,13 @@ export class RpcListener {
)
})
.when(stopType, async ({ id }) => {
this.callbacks?.removeChild("main")
return handleRpc(
id,
this.system.stop().then((result) => ({ result })),
this.system.stop().then((result) => {
this.callbacks?.removeChild("main")
return { result }
}),
)
})
.when(exitType, async ({ id, params }) => {
@@ -304,10 +315,11 @@ export class RpcListener {
}
await this._system.exit(
makeEffects({
procedureId: params.id,
eventId: params.id,
}),
target,
)
this.shouldExit = true
}
})().then((result) => ({ result })),
)
@@ -320,14 +332,14 @@ export class RpcListener {
const system = await this.getDependencies.system()
this.callbacks = new CallbackHolder(
makeEffects({
procedureId: params.id,
eventId: params.id,
}),
)
const callbacks = this.callbacks.child("init")
console.error("Initializing...")
await system.init(
makeEffects({
procedureId: params.id,
eventId: params.id,
callbacks,
}),
params.kind,
@@ -399,7 +411,7 @@ export class RpcListener {
private getResult(
procedure: typeof jsonPath._TYPE,
system: System,
procedureId: string,
eventId: string,
timeout: number | null | undefined,
input: any,
) {
@@ -410,7 +422,7 @@ export class RpcListener {
}
const callbacks = this.callbacks?.child(procedure)
const effects = makeEffects({
procedureId,
eventId,
callbacks,
})
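One behavioral detail in the hunk above is easy to miss: `process.exit(0)` is chained after `writeDataToSocket`, so the reply to the `exit` RPC is handed to the socket before the process dies. The ordering can be sketched as follows — `writeResponse` and `exitFn` are illustrative names, not the actual API:

```typescript
// Respond-then-exit pattern: the exit flag is only honored after the
// response has been written, so the caller still receives its reply.
async function handleRequest(
  writeResponse: (data: string) => Promise<void>,
  shouldExit: () => boolean,
  exitFn: (code: number) => void,
  response: string,
): Promise<void> {
  await writeResponse(response) // reply is flushed first
  if (shouldExit()) exitFn(0) // only then may the process exit
}
```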

View File

@@ -118,7 +118,7 @@ export class DockerProcedureContainer extends Drop {
subpath: volumeMount.path,
readonly: volumeMount.readonly,
volumeId: volumeMount["volume-id"],
filetype: "directory",
idmap: [],
},
})
} else if (volumeMount.type === "backup") {

View File

@@ -120,6 +120,7 @@ export class MainLoop {
? {
preferredExternalPort: lanConf.external,
alpn: { specified: ["http/1.1"] },
addXForwardedHeaders: false,
}
: null,
})
@@ -133,7 +134,7 @@ export class MainLoop {
delete this.mainEvent
delete this.healthLoops
await main?.daemon
.stop()
.term()
.catch((e: unknown) => console.error(`Main loop error`, utils.asError(e)))
this.effects.setMainStatus({ status: "stopped" })
if (healthLoops) healthLoops.forEach((x) => clearInterval(x.interval))

View File

@@ -289,6 +289,7 @@ function convertProperties(
const DEFAULT_REGISTRY = "https://registry.start9.com"
export class SystemForEmbassy implements System {
private version: ExtendedVersion
currentRunning: MainLoop | undefined
static async of(manifestLocation: string = MANIFEST_LOCATION) {
const moduleCode = await import(EMBASSY_JS_LOCATION)
@@ -310,7 +311,27 @@ export class SystemForEmbassy implements System {
constructor(
readonly manifest: Manifest,
readonly moduleCode: Partial<U.ExpectedExports>,
) {}
) {
this.version = ExtendedVersion.parseEmver(manifest.version)
if (
this.manifest.id === "bitcoind" &&
this.manifest.title.toLowerCase().includes("knots")
)
this.version.flavor = "knots"
if (
this.manifest.id === "lnd" ||
this.manifest.id === "ride-the-lightning" ||
this.manifest.id === "datum"
) {
this.version.upstream.prerelease = ["beta"]
} else if (
this.manifest.id === "lightning-terminal" ||
this.manifest.id === "robosats"
) {
this.version.upstream.prerelease = ["alpha"]
}
}
async init(
effects: Effects,
@@ -394,27 +415,9 @@ export class SystemForEmbassy implements System {
reason: "This service must be configured before it can be run",
})
}
const version = ExtendedVersion.parseEmver(this.manifest.version)
if (
this.manifest.id === "bitcoind" &&
this.manifest.title.toLowerCase().includes("knots")
)
version.flavor = "knots"
if (
this.manifest.id === "lnd" ||
this.manifest.id === "ride-the-lightning" ||
this.manifest.id === "datum"
) {
version.upstream.prerelease = ["beta"]
} else if (
this.manifest.id === "lightning-terminal" ||
this.manifest.id === "robosats"
) {
version.upstream.prerelease = ["alpha"]
}
await effects.setDataVersion({
version: version.toString(),
version: this.version.toString(),
})
// @FullMetal: package hacks go here
}
@@ -453,6 +456,7 @@ export class SystemForEmbassy implements System {
addSsl = {
preferredExternalPort: lanPortNum,
alpn: { specified: [] },
addXForwardedHeaders: false,
}
}
return [
@@ -506,13 +510,18 @@ export class SystemForEmbassy implements System {
): Promise<T.ActionInput | null> {
if (actionId === "config") {
const config = await this.getConfig(effects, timeoutMs)
return { spec: config.spec, value: config.config }
return {
eventId: effects.eventId!,
spec: config.spec,
value: config.config,
}
} else if (actionId === "properties") {
return null
} else {
const oldSpec = this.manifest.actions?.[actionId]?.["input-spec"]
if (!oldSpec) return null
return {
eventId: effects.eventId!,
spec: transformConfigSpec(oldSpec as OldConfigSpec),
value: null,
}
@@ -599,10 +608,7 @@ export class SystemForEmbassy implements System {
timeoutMs?: number | null,
): Promise<void> {
await this.currentRunning?.clean({ timeout: timeoutMs ?? undefined })
if (
target &&
!overlaps(target, ExtendedVersion.parseEmver(this.manifest.version))
) {
if (target) {
await this.migration(effects, { to: target }, timeoutMs ?? null)
}
await effects.setMainStatus({ status: "stopped" })
@@ -823,6 +829,7 @@ export class SystemForEmbassy implements System {
let migration
let args: [string, ...string[]]
if ("from" in version) {
if (overlaps(this.version, version.from)) return null
args = [version.from.toString(), "from"]
if (!this.manifest.migrations) return { configured: true }
migration = Object.entries(this.manifest.migrations.from)
@@ -832,6 +839,7 @@ export class SystemForEmbassy implements System {
)
.find(([versionEmver, _]) => overlaps(versionEmver, version.from))
} else {
if (overlaps(this.version, version.to)) return null
args = [version.to.toString(), "to"]
if (!this.manifest.migrations) return { configured: true }
migration = Object.entries(this.manifest.migrations.to)
@@ -881,7 +889,6 @@ export class SystemForEmbassy implements System {
effects: Effects,
timeoutMs: number | null,
): Promise<PropertiesReturn> {
// TODO BLU-J set the properties ever so often
const setConfigValue = this.manifest.properties
if (!setConfigValue) throw new Error("There is no properties")
if (setConfigValue.type === "docker") {
@@ -1036,17 +1043,17 @@ export class SystemForEmbassy implements System {
volumeId: "embassy",
subpath: null,
readonly: true,
filetype: "directory",
idmap: [],
},
})
configFile
.withPath(`/media/embassy/${id}/config.json`)
.read()
.onChange(effects, async (oldConfig: U.Config) => {
if (!oldConfig) return
if (!oldConfig) return { cancel: false }
const moduleCode = await this.moduleCode
const method = moduleCode?.dependencies?.[id]?.autoConfigure
if (!method) return
if (!method) return { cancel: true }
const newConfig = (await method(
polyfillEffects(effects, this.manifest),
JSON.parse(JSON.stringify(oldConfig)),
@@ -1073,6 +1080,7 @@ export class SystemForEmbassy implements System {
},
})
}
return { cancel: false }
})
}
}
@@ -1183,7 +1191,7 @@ async function updateConfig(
volumeId: "embassy",
subpath: null,
readonly: true,
filetype: "directory",
idmap: [],
},
})
const remoteConfig = configFile
@@ -1230,14 +1238,14 @@ async function updateConfig(
const url: string =
filled === null || filled.addressInfo === null
? ""
: catchFn(() =>
utils.hostnameInfoToAddress(
specValue.target === "lan-address"
? filled.addressInfo!.localHostnames[0] ||
filled.addressInfo!.onionHostnames[0]
: filled.addressInfo!.onionHostnames[0] ||
filled.addressInfo!.localHostnames[0],
),
: catchFn(
() =>
(specValue.target === "lan-address"
? filled.addressInfo!.filter({ kind: "mdns" }) ||
filled.addressInfo!.onion
: filled.addressInfo!.onion ||
filled.addressInfo!.filter({ kind: "mdns" })
).hostnames[0].hostname.value,
) || ""
mutConfigValue[key] = url
}
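The `.onChange(...)` handler above now returns `{ cancel: boolean }`: `{ cancel: true }` when there is no `autoConfigure` method (so the watch can be dropped), `{ cancel: false }` otherwise. A minimal watcher honoring that protocol might look like this — `Watcher` is illustrative, not the start-sdk API:

```typescript
// Handlers returning { cancel: true } are unsubscribed; the rest stay.
type ChangeHandler<T> = (value: T) => Promise<{ cancel: boolean }>

class Watcher<T> {
  private handlers: ChangeHandler<T>[] = []

  onChange(handler: ChangeHandler<T>): void {
    this.handlers.push(handler)
  }

  async emit(value: T): Promise<void> {
    const keep: ChangeHandler<T>[] = []
    for (const h of this.handlers) {
      const { cancel } = await h(value)
      if (!cancel) keep.push(h) // cancelled handlers are dropped
    }
    this.handlers = keep
  }

  get size(): number {
    return this.handlers.length
  }
}
```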

View File

@@ -32,7 +32,7 @@ export class SystemForStartOs implements System {
target: ExtendedVersion | VersionRange | null,
timeoutMs: number | null = null,
): Promise<void> {
// TODO: stop?
await this.stop()
return void (await this.abi.uninit({ effects, target }))
}
@@ -68,22 +68,18 @@ export class SystemForStartOs implements System {
try {
if (this.runningMain || this.starting) return
this.starting = true
effects.constRetry = utils.once(() => effects.restart())
effects.constRetry = utils.once(() => {
console.debug(".const() triggered")
effects.restart()
})
let mainOnTerm: (() => Promise<void>) | undefined
const started = async (onTerm: () => Promise<void>) => {
await effects.setMainStatus({ status: "running" })
mainOnTerm = onTerm
return null
}
const daemons = await (
await this.abi.main({
effects,
started,
})
).build()
this.runningMain = {
stop: async () => {
if (mainOnTerm) await mainOnTerm()
await daemons.term()
},
}
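Above, `effects.constRetry` is wrapped in `utils.once(...)` so that repeated `.const()` triggers restart the service at most once. A minimal equivalent of `once` — illustrative; the real `utils.once` may differ:

```typescript
// Hypothetical stand-in for utils.once: runs fn on the first call and
// returns the cached result on every later call.
function once<T>(fn: () => T): () => T {
  let cached: { value: T } | undefined
  return () => {
    if (!cached) cached = { value: fn() }
    return cached.value
  }
}
```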

View File

@@ -7,6 +7,12 @@ const getDependencies: AllGetDependencies = {
system: getSystem,
}
for (let s of ["SIGTERM", "SIGINT", "SIGHUP"]) {
process.on(s, (s) => {
console.log(`Caught ${s}`)
})
}
new RpcListener(getDependencies)
/**

View File

@@ -4,7 +4,12 @@ cd "$(dirname "${BASH_SOURCE[0]}")"
set -e
if mountpoint -q tmp/combined; then sudo umount -R tmp/combined; fi
RUST_ARCH="$ARCH"
if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc"
fi
if mountpoint -q tmp/combined; then sudo umount -l tmp/combined; fi
if mountpoint -q tmp/lower; then sudo umount tmp/lower; fi
sudo rm -rf tmp
mkdir -p tmp/lower tmp/upper tmp/work tmp/combined
@@ -21,15 +26,15 @@ fi
QEMU=
if [ "$ARCH" != "$(uname -m)" ]; then
QEMU=/usr/bin/qemu-${ARCH}-static
if ! which qemu-$ARCH-static > /dev/null; then
>&2 echo qemu-user-static is required for cross-platform builds
QEMU=/usr/bin/qemu-${ARCH}
if ! which qemu-$ARCH > /dev/null; then
>&2 echo qemu-user is required for cross-platform builds
sudo umount tmp/combined
sudo umount tmp/lower
sudo rm -rf tmp
exit 1
fi
sudo cp $(which qemu-$ARCH-static) tmp/combined${QEMU}
sudo cp $(which qemu-$ARCH) tmp/combined${QEMU}
fi
sudo mkdir -p tmp/combined/usr/lib/startos/
@@ -39,8 +44,10 @@ sudo cp container-runtime.service tmp/combined/lib/systemd/system/container-runt
sudo chown 0:0 tmp/combined/lib/systemd/system/container-runtime.service
sudo cp container-runtime-failure.service tmp/combined/lib/systemd/system/container-runtime-failure.service
sudo chown 0:0 tmp/combined/lib/systemd/system/container-runtime-failure.service
sudo cp ../core/target/$ARCH-unknown-linux-musl/release/containerbox tmp/combined/usr/bin/start-cli
sudo chown 0:0 tmp/combined/usr/bin/start-cli
sudo cp ../core/target/${RUST_ARCH}-unknown-linux-musl/release/start-container tmp/combined/usr/bin/start-container
echo -e '#!/bin/bash\nexec start-container "$@"' | sudo tee tmp/combined/usr/bin/start-cli # TODO: remove
sudo chmod +x tmp/combined/usr/bin/start-cli
sudo chown 0:0 tmp/combined/usr/bin/start-container
echo container-runtime | sha256sum | head -c 32 | cat - <(echo) | sudo tee tmp/combined/etc/machine-id
cat deb-install.sh | sudo systemd-nspawn --console=pipe -D tmp/combined $QEMU /bin/bash
sudo truncate -s 0 tmp/combined/etc/machine-id

core/Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -1,3 +1,3 @@
[workspace]
members = ["helpers", "models", "startos"]
members = ["startos"]

core/Cross.toml Normal file
View File

@@ -0,0 +1,2 @@
[build]
pre-build = ["apt-get update && apt-get install -y rsync"]

View File

@@ -2,53 +2,78 @@
cd "$(dirname "${BASH_SOURCE[0]}")"
source ./builder-alias.sh
set -ea
INSTALL=false
while [[ $# -gt 0 ]]; do
case $1 in
--install)
INSTALL=true
shift
;;
*)
>&2 echo "Unknown option: $1"
exit 1
;;
esac
done
shopt -s expand_aliases
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
else
if [ "$PROFILE" != "debug" ]; then
>&2 echo "Unknown profile $PROFILE: falling back to debug..."
PROFILE=debug
fi
fi
if [ -z "${ARCH:-}" ]; then
ARCH=$(uname -m)
fi
if [ "$ARCH" = "arm64" ]; then
ARCH="aarch64"
fi
if [ -z "$KERNEL_NAME" ]; then
KERNEL_NAME=$(uname -s)
RUST_ARCH="$ARCH"
if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc"
fi
if [ -z "$TARGET" ]; then
if [ "$KERNEL_NAME" = "Linux" ]; then
TARGET="$ARCH-unknown-linux-musl"
elif [ "$KERNEL_NAME" = "Darwin" ]; then
TARGET="$ARCH-apple-darwin"
else
>&2 echo "unknown kernel $KERNEL_NAME"
exit 1
fi
if [ -z "${KERNEL_NAME:-}" ]; then
KERNEL_NAME=$(uname -s)
fi
USE_TTY=
if tty -s; then
USE_TTY="-it"
if [ -z "${TARGET:-}" ]; then
if [ "$KERNEL_NAME" = "Linux" ]; then
TARGET="$RUST_ARCH-unknown-linux-musl"
elif [ "$KERNEL_NAME" = "Darwin" ]; then
TARGET="$RUST_ARCH-apple-darwin"
else
>&2 echo "unknown kernel $KERNEL_NAME"
exit 1
fi
fi
cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
FEATURES="$(echo "${ENVIRONMENT:-}" | sed 's/-/,/g')"
RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
if [[ "${ENVIRONMENT:-}" =~ (^|-)console($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
if which zig > /dev/null && [ "$ENFORCE_USE_DOCKER" != 1 ]; then
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
RUSTFLAGS=$RUSTFLAGS sh -c "cd core && cargo zigbuild --release --no-default-features --features cli,$FEATURES --locked --bin start-cli --target=$TARGET"
else
alias 'rust-zig-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/cargo-zigbuild'
RUSTFLAGS=$RUSTFLAGS rust-zig-builder sh -c "cd core && cargo zigbuild --release --no-default-features --features cli,$FEATURES --locked --bin start-cli --target=$TARGET"
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin start-cli --target=$TARGET
if [ "$(ls -nd "core/target/$TARGET/$PROFILE/start-cli" | awk '{ print $3 }')" != "$UID" ]; then
rust-zig-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /usr/local/cargo"
fi
if [ "$(ls -nd core/target/$TARGET/release/start-cli | awk '{ print $3 }')" != "$UID" ]; then
rust-zig-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
fi
if [ "$INSTALL" = "true" ]; then
cp "core/target/$TARGET/$PROFILE/start-cli" ~/.cargo/bin/start-cli
fi

View File

@@ -1,36 +0,0 @@
#!/bin/bash
cd "$(dirname "${BASH_SOURCE[0]}")"
set -ea
shopt -s expand_aliases
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
if [ "$ARCH" = "arm64" ]; then
ARCH="aarch64"
fi
USE_TTY=
if tty -s; then
USE_TTY="-it"
fi
cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-musl-builder sh -c "cd core && cargo build --release --no-default-features --features container-runtime,$FEATURES --locked --bin containerbox --target=$ARCH-unknown-linux-musl"
if [ "$(ls -nd core/target/$ARCH-unknown-linux-musl/release/containerbox | awk '{ print $3 }')" != "$UID" ]; then
rust-musl-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
fi

View File

@@ -2,9 +2,21 @@
cd "$(dirname "${BASH_SOURCE[0]}")"
source ./builder-alias.sh
set -ea
shopt -s expand_aliases
PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
else
if [ "$PROFILE" != "debug" ]; then
>&2 echo "Unknown profile $PROFILE: falling back to debug..."
PROFILE=debug
fi
fi
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
@@ -13,24 +25,22 @@ if [ "$ARCH" = "arm64" ]; then
ARCH="aarch64"
fi
USE_TTY=
if tty -s; then
USE_TTY="-it"
RUST_ARCH="$ARCH"
if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc"
fi
cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-musl-builder sh -c "cd core && cargo build --release --no-default-features --features cli,registry,$FEATURES --locked --bin registrybox --target=$ARCH-unknown-linux-musl"
if [ "$(ls -nd core/target/$ARCH-unknown-linux-musl/release/registrybox | awk '{ print $3 }')" != "$UID" ]; then
rust-musl-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
fi
rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin registrybox --target=$RUST_ARCH-unknown-linux-musl
if [ "$(ls -nd "core/target/$RUST_ARCH-unknown-linux-musl/$PROFILE/registrybox" | awk '{ print $3 }')" != "$UID" ]; then
rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID /usr/local/cargo"
fi

core/build-start-container.sh Executable file
View File

@@ -0,0 +1,46 @@
#!/bin/bash
cd "$(dirname "${BASH_SOURCE[0]}")"
source ./builder-alias.sh
set -ea
shopt -s expand_aliases
PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
else
if [ "$PROFILE" != "debug" ]; then
>&2 echo "Unknown profile $PROFILE: falling back to debug..."
PROFILE=debug
fi
fi
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
if [ "$ARCH" = "arm64" ]; then
ARCH="aarch64"
fi
RUST_ARCH="$ARCH"
if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc"
fi
cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin start-container --target=$RUST_ARCH-unknown-linux-musl
if [ "$(ls -nd "core/target/$RUST_ARCH-unknown-linux-musl/$PROFILE/start-container" | awk '{ print $3 }')" != "$UID" ]; then
rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID /usr/local/cargo"
fi

View File

@@ -2,9 +2,21 @@
cd "$(dirname "${BASH_SOURCE[0]}")"
source ./builder-alias.sh
set -ea
shopt -s expand_aliases
PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
else
if [ "$PROFILE" != "debug" ]; then
>&2 echo "Unknown profile $PROFILE: falling back to debug..."
PROFILE=debug
fi
fi
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
@@ -13,24 +25,22 @@ if [ "$ARCH" = "arm64" ]; then
ARCH="aarch64"
fi
USE_TTY=
if tty -s; then
USE_TTY="-it"
RUST_ARCH="$ARCH"
if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc"
fi
cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-musl-builder sh -c "cd core && cargo build --release --no-default-features --features cli,daemon,$FEATURES --locked --bin startbox --target=$ARCH-unknown-linux-musl"
if [ "$(ls -nd core/target/$ARCH-unknown-linux-musl/release/startbox | awk '{ print $3 }')" != "$UID" ]; then
rust-musl-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin startbox --target=$RUST_ARCH-unknown-linux-musl
if [ "$(ls -nd "core/target/$RUST_ARCH-unknown-linux-musl/$PROFILE/startbox" | awk '{ print $3 }')" != "$UID" ]; then
rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID /usr/local/cargo"
fi

View File

@@ -2,35 +2,43 @@
cd "$(dirname "${BASH_SOURCE[0]}")"
source ./builder-alias.sh
set -ea
shopt -s expand_aliases
PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
else
if [ "$PROFILE" != "debug" ]; then
>&2 echo "Unknown profile $PROFILE: falling back to debug..."
PROFILE=debug
fi
fi
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
if [ "$ARCH" = "arm64" ]; then
ARCH="aarch64"
ARCH="aarch64"
fi
USE_TTY=
if tty -s; then
USE_TTY="-it"
RUST_ARCH="$ARCH"
if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc"
fi
cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-musl-builder sh -c "cd core && cargo test --release --features=test,$FEATURES 'export_bindings_' && chown \$UID:\$UID startos/bindings"
if [ "$(ls -nd core/startos/bindings | awk '{ print $3 }')" != "$UID" ]; then
rust-musl-builder sh -c "cd core && chown -R $UID:$UID startos/bindings && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
rust-zig-builder cargo test --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features test,$FEATURES --locked 'export_bindings_'
if [ "$(ls -nd "core/startos/bindings" | awk '{ print $3 }')" != "$UID" ]; then
rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID core/startos/bindings && chown -R $UID:$UID /usr/local/cargo"
fi

core/build-tunnelbox.sh Executable file
View File

@@ -0,0 +1,46 @@
#!/bin/bash
cd "$(dirname "${BASH_SOURCE[0]}")"
source ./builder-alias.sh
set -ea
shopt -s expand_aliases
PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
else
if [ "$PROFILE" != "debug" ]; then
>&2 echo "Unknown profile $PROFILE: falling back to debug..."
PROFILE=debug
fi
fi
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
if [ "$ARCH" = "arm64" ]; then
ARCH="aarch64"
fi
RUST_ARCH="$ARCH"
if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc"
fi
cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=$FEATURES --locked --bin tunnelbox --target=$RUST_ARCH-unknown-linux-musl
if [ "$(ls -nd "core/target/$RUST_ARCH-unknown-linux-musl/$PROFILE/tunnelbox" | awk '{ print $3 }')" != "$UID" ]; then
rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID /usr/local/cargo"
fi

core/builder-alias.sh Normal file
View File

@@ -0,0 +1,8 @@
#!/bin/bash
USE_TTY=
if tty -s; then
USE_TTY="-it"
fi
alias 'rust-zig-builder'='docker run '"$USE_TTY"' --rm -e "RUSTFLAGS=$RUSTFLAGS" -e "PKG_CONFIG_SYSROOT_DIR=/opt/sysroot/$ARCH" -e PKG_CONFIG_PATH="" -e PKG_CONFIG_LIBDIR="/opt/sysroot/$ARCH/usr/lib/pkgconfig" -e "AWS_LC_SYS_CMAKE_TOOLCHAIN_FILE_riscv64gc_unknown_linux_musl=/root/cmake-overrides/toolchain-riscv64-musl-clang.cmake" -e SCCACHE_GHA_ENABLED -e SCCACHE_GHA_VERSION -e ACTIONS_RESULTS_URL -e ACTIONS_RUNTIME_TOKEN -v "$HOME/.cargo/registry":/usr/local/cargo/registry -v "$HOME/.cargo/git":/usr/local/cargo/git -v "$HOME/.cache/sccache":/root/.cache/sccache -v "$HOME/.cache/cargo-zigbuild:/root/.cache/cargo-zigbuild" -v "$(pwd)":/workdir -w /workdir start9/cargo-zigbuild'


@@ -1,19 +0,0 @@
[package]
name = "helpers"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
color-eyre = "0.6.2"
futures = "0.3.28"
lazy_async_pool = "0.3.3"
models = { path = "../models" }
pin-project = "1.1.3"
rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git", branch = "master" }
serde = { version = "1.0", features = ["derive", "rc"] }
serde_json = "1.0"
tokio = { version = "1", features = ["full"] }
tokio-stream = { version = "0.1.14", features = ["io-util", "sync"] }
tracing = "0.1.39"


@@ -1,31 +0,0 @@
use std::task::Poll;
use tokio::io::{AsyncRead, ReadBuf};
#[pin_project::pin_project]
pub struct ByteReplacementReader<R> {
pub replace: u8,
pub with: u8,
#[pin]
pub inner: R,
}
impl<R: AsyncRead> AsyncRead for ByteReplacementReader<R> {
fn poll_read(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
buf: &mut ReadBuf<'_>,
) -> std::task::Poll<std::io::Result<()>> {
let this = self.project();
match this.inner.poll_read(cx, buf) {
Poll::Ready(Ok(())) => {
for idx in 0..buf.filled().len() {
if buf.filled()[idx] == *this.replace {
buf.filled_mut()[idx] = *this.with;
}
}
Poll::Ready(Ok(()))
}
a => a,
}
}
}
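The `poll_read` override above only rewrites bytes that the inner reader has already placed in the buffer; the substitution itself is independent of async I/O and can be sketched synchronously (the function name here is illustrative, not part of the crate):

```rust
/// Replace every occurrence of `replace` with `with` in `buf`,
/// mirroring the loop inside `ByteReplacementReader::poll_read`.
/// (Standalone sketch; the real type does this inside an AsyncRead wrapper.)
fn replace_bytes(buf: &mut [u8], replace: u8, with: u8) {
    for b in buf.iter_mut() {
        if *b == replace {
            *b = with;
        }
    }
}
```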


@@ -1,262 +0,0 @@
use std::future::Future;
use std::ops::{Deref, DerefMut};
use std::path::{Path, PathBuf};
use std::time::Duration;
use color_eyre::eyre::{eyre, Context, Error};
use futures::future::BoxFuture;
use futures::FutureExt;
use models::ResultExt;
use tokio::fs::File;
use tokio::sync::oneshot;
use tokio::task::{JoinError, JoinHandle, LocalSet};
mod byte_replacement_reader;
mod rsync;
mod script_dir;
pub use byte_replacement_reader::*;
pub use rsync::*;
pub use script_dir::*;
pub fn const_true() -> bool {
true
}
pub fn to_tmp_path(path: impl AsRef<Path>) -> Result<PathBuf, Error> {
let path = path.as_ref();
if let (Some(parent), Some(file_name)) =
(path.parent(), path.file_name().and_then(|f| f.to_str()))
{
Ok(parent.join(format!(".{}.tmp", file_name)))
} else {
Err(eyre!("invalid path: {}", path.display()))
}
}
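The tmp-path convention above produces a hidden sibling file next to the target. A stand-alone sketch of the same naming rule (hypothetical helper name, `Option` in place of the crate's error type):

```rust
use std::path::{Path, PathBuf};

/// Sketch of the `.{name}.tmp` sibling convention used by `to_tmp_path`.
fn tmp_sibling(path: &Path) -> Option<PathBuf> {
    let parent = path.parent()?;
    let name = path.file_name()?.to_str()?;
    Some(parent.join(format!(".{}.tmp", name)))
}
```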
pub async fn canonicalize(
path: impl AsRef<Path> + Send + Sync,
create_parent: bool,
) -> Result<PathBuf, Error> {
fn create_canonical_folder<'a>(
path: impl AsRef<Path> + Send + Sync + 'a,
) -> BoxFuture<'a, Result<PathBuf, Error>> {
async move {
let path = canonicalize(path, true).await?;
tokio::fs::create_dir(&path)
.await
.with_context(|| path.display().to_string())?;
Ok(path)
}
.boxed()
}
let path = path.as_ref();
if tokio::fs::metadata(path).await.is_err() {
let parent = path.parent().unwrap_or(Path::new("."));
if let Some(file_name) = path.file_name() {
if create_parent && tokio::fs::metadata(parent).await.is_err() {
return Ok(create_canonical_folder(parent).await?.join(file_name));
} else {
return Ok(tokio::fs::canonicalize(parent)
.await
.with_context(|| parent.display().to_string())?
.join(file_name));
}
}
}
tokio::fs::canonicalize(&path)
.await
.with_context(|| path.display().to_string())
}
#[pin_project::pin_project(PinnedDrop)]
pub struct NonDetachingJoinHandle<T>(#[pin] JoinHandle<T>);
impl<T> NonDetachingJoinHandle<T> {
pub async fn wait_for_abort(self) -> Result<T, JoinError> {
self.abort();
self.await
}
}
impl<T> From<JoinHandle<T>> for NonDetachingJoinHandle<T> {
fn from(t: JoinHandle<T>) -> Self {
NonDetachingJoinHandle(t)
}
}
impl<T> Deref for NonDetachingJoinHandle<T> {
type Target = JoinHandle<T>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl<T> DerefMut for NonDetachingJoinHandle<T> {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
#[pin_project::pinned_drop]
impl<T> PinnedDrop for NonDetachingJoinHandle<T> {
fn drop(self: std::pin::Pin<&mut Self>) {
let this = self.project();
this.0.into_ref().get_ref().abort()
}
}
impl<T> Future for NonDetachingJoinHandle<T> {
type Output = Result<T, JoinError>;
fn poll(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Self::Output> {
let this = self.project();
this.0.poll(cx)
}
}
pub struct AtomicFile {
tmp_path: PathBuf,
path: PathBuf,
file: Option<File>,
}
impl AtomicFile {
pub async fn new(
path: impl AsRef<Path> + Send + Sync,
tmp_path: Option<impl AsRef<Path> + Send + Sync>,
) -> Result<Self, Error> {
let path = canonicalize(&path, true).await?;
let tmp_path = if let Some(tmp_path) = tmp_path {
canonicalize(&tmp_path, true).await?
} else {
to_tmp_path(&path)?
};
let file = File::create(&tmp_path)
.await
.with_context(|| tmp_path.display().to_string())?;
Ok(Self {
tmp_path,
path,
file: Some(file),
})
}
pub async fn rollback(mut self) -> Result<(), Error> {
drop(self.file.take());
tokio::fs::remove_file(&self.tmp_path)
.await
.with_context(|| format!("rm {}", self.tmp_path.display()))?;
Ok(())
}
pub async fn save(mut self) -> Result<(), Error> {
use tokio::io::AsyncWriteExt;
if let Some(file) = self.file.as_mut() {
file.flush().await?;
file.shutdown().await?;
file.sync_all().await?;
}
drop(self.file.take());
tokio::fs::rename(&self.tmp_path, &self.path)
.await
.with_context(|| {
format!("mv {} -> {}", self.tmp_path.display(), self.path.display())
})?;
Ok(())
}
}
impl std::ops::Deref for AtomicFile {
type Target = File;
fn deref(&self) -> &Self::Target {
self.file.as_ref().unwrap()
}
}
impl std::ops::DerefMut for AtomicFile {
fn deref_mut(&mut self) -> &mut Self::Target {
self.file.as_mut().unwrap()
}
}
impl Drop for AtomicFile {
fn drop(&mut self) {
if let Some(file) = self.file.take() {
drop(file);
let path = std::mem::take(&mut self.tmp_path);
tokio::spawn(async move { tokio::fs::remove_file(path).await.log_err() });
}
}
}
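The write-to-tmp-then-rename flow that `AtomicFile` implements asynchronously has a straightforward synchronous analogue with `std::fs`; this is an illustrative sketch, not the crate's API:

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

/// Synchronous sketch of the `AtomicFile::save` path: write the full
/// contents to a hidden sibling, sync to disk, then atomically rename
/// over the destination so readers never observe a partial file.
fn atomic_write(path: &Path, contents: &[u8]) -> std::io::Result<()> {
    let name = path
        .file_name()
        .and_then(|n| n.to_str())
        .ok_or_else(|| std::io::Error::new(std::io::ErrorKind::InvalidInput, "invalid path"))?;
    let tmp = path.with_file_name(format!(".{}.tmp", name));
    let mut file = fs::File::create(&tmp)?;
    file.write_all(contents)?;
    file.sync_all()?; // flush to disk before the rename, as save() does
    drop(file);
    fs::rename(&tmp, path) // atomic when tmp and path share a filesystem
}
```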
pub struct TimedResource<T: 'static + Send> {
handle: NonDetachingJoinHandle<Option<T>>,
ready: oneshot::Sender<()>,
}
impl<T: 'static + Send> TimedResource<T> {
pub fn new(resource: T, timer: Duration) -> Self {
let (send, recv) = oneshot::channel();
let handle = tokio::spawn(async move {
tokio::select! {
_ = tokio::time::sleep(timer) => {
drop(resource);
None
},
_ = recv => Some(resource),
}
});
Self {
handle: handle.into(),
ready: send,
}
}
pub fn new_with_destructor<
Fn: FnOnce(T) -> Fut + Send + 'static,
Fut: Future<Output = ()> + Send,
>(
resource: T,
timer: Duration,
destructor: Fn,
) -> Self {
let (send, recv) = oneshot::channel();
let handle = tokio::spawn(async move {
tokio::select! {
_ = tokio::time::sleep(timer) => {
destructor(resource).await;
None
},
_ = recv => Some(resource),
}
});
Self {
handle: handle.into(),
ready: send,
}
}
pub async fn get(self) -> Option<T> {
let _ = self.ready.send(());
self.handle.await.unwrap()
}
pub fn is_timed_out(&self) -> bool {
self.ready.is_closed()
}
}
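The `select!` race above, a timer against a readiness signal, has a simple synchronous analogue using a channel with `recv_timeout` (illustrative only; the real type is async and also supports a custom destructor):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Synchronous sketch of `TimedResource`: a worker thread holds the
/// resource and drops it unless `get` claims it before the deadline.
struct TimedResource<T: Send + 'static> {
    ready: mpsc::Sender<()>,
    handle: thread::JoinHandle<Option<T>>,
}

impl<T: Send + 'static> TimedResource<T> {
    fn new(resource: T, timer: Duration) -> Self {
        let (ready, recv) = mpsc::channel();
        let handle = thread::spawn(move || match recv.recv_timeout(timer) {
            Ok(()) => Some(resource), // claimed in time
            Err(_) => None,           // timed out: discard the resource
        });
        Self { ready, handle }
    }

    fn get(self) -> Option<T> {
        let _ = self.ready.send(());
        self.handle.join().unwrap()
    }
}
```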
pub async fn spawn_local<
T: 'static + Send,
F: FnOnce() -> Fut + Send + 'static,
Fut: Future<Output = T> + 'static,
>(
fut: F,
) -> NonDetachingJoinHandle<T> {
let (send, recv) = tokio::sync::oneshot::channel();
std::thread::spawn(move || {
tokio::runtime::Builder::new_current_thread()
.enable_all()
.build()
.unwrap()
.block_on(async move {
let set = LocalSet::new();
send.send(set.spawn_local(fut()).into())
.unwrap_or_else(|_| unreachable!());
set.await
})
});
recv.await.unwrap()
}


@@ -1,82 +0,0 @@
use std::sync::Arc;
use color_eyre::Report;
use models::InterfaceId;
use models::PackageId;
use serde_json::Value;
use tokio::sync::mpsc;
pub struct RuntimeDropped;
pub struct Callback {
id: Arc<String>,
sender: mpsc::UnboundedSender<(Arc<String>, Vec<Value>)>,
}
impl Callback {
pub fn new(id: String, sender: mpsc::UnboundedSender<(Arc<String>, Vec<Value>)>) -> Self {
Self {
id: Arc::new(id),
sender,
}
}
pub fn is_listening(&self) -> bool {
!self.sender.is_closed()
}
pub fn call(&self, args: Vec<Value>) -> Result<(), RuntimeDropped> {
self.sender
.send((self.id.clone(), args))
.map_err(|_| RuntimeDropped)
}
}
#[derive(serde::Deserialize, serde::Serialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct AddressSchemaOnion {
pub id: InterfaceId,
pub external_port: u16,
}
#[derive(serde::Deserialize, serde::Serialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct AddressSchemaLocal {
pub id: InterfaceId,
pub external_port: u16,
}
#[derive(serde::Deserialize, serde::Serialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct Address(pub String);
#[derive(serde::Deserialize, serde::Serialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct Domain;
#[derive(serde::Deserialize, serde::Serialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct Name;
#[async_trait::async_trait]
#[allow(unused_variables)]
pub trait OsApi: Send + Sync + 'static {
async fn get_service_config(
&self,
id: PackageId,
path: &str,
callback: Option<Callback>,
) -> Result<Vec<Value>, Report>;
async fn bind_local(
&self,
internal_port: u16,
address_schema: AddressSchemaLocal,
) -> Result<Address, Report>;
async fn bind_onion(
&self,
internal_port: u16,
address_schema: AddressSchemaOnion,
) -> Result<Address, Report>;
async fn unbind_local(&self, id: InterfaceId, external: u16) -> Result<(), Report>;
async fn unbind_onion(&self, id: InterfaceId, external: u16) -> Result<(), Report>;
fn set_started(&self) -> Result<(), Report>;
async fn restart(&self) -> Result<(), Report>;
async fn start(&self) -> Result<(), Report>;
async fn stop(&self) -> Result<(), Report>;
}


@@ -1,17 +0,0 @@
use std::path::{Path, PathBuf};
use models::{PackageId, VersionString};
pub const PKG_SCRIPT_DIR: &str = "package-data/scripts";
pub fn script_dir<P: AsRef<Path>>(
datadir: P,
pkg_id: &PackageId,
version: &VersionString,
) -> PathBuf {
datadir
.as_ref()
.join(&*PKG_SCRIPT_DIR)
.join(pkg_id)
.join(version.as_str())
}
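The resulting layout is `<datadir>/package-data/scripts/<pkg-id>/<version>`. A quick check with plain strings standing in for the `PackageId`/`VersionString` types (the datadir in the test is a hypothetical example):

```rust
use std::path::{Path, PathBuf};

/// Sketch of `script_dir` with plain strings in place of
/// `PackageId` / `VersionString`.
fn script_dir(datadir: &Path, pkg_id: &str, version: &str) -> PathBuf {
    datadir
        .join("package-data/scripts")
        .join(pkg_id)
        .join(version)
}
```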


@@ -1,19 +0,0 @@
#!/bin/bash
cd "$(dirname "${BASH_SOURCE[0]}")"
set -ea
shopt -s expand_aliases
web="../web/dist/static"
[ -d "$web" ] || mkdir -p "$web"
if [ -z "$PLATFORM" ]; then
PLATFORM=$(uname -m)
fi
if [ "$PLATFORM" = "arm64" ]; then
PLATFORM="aarch64"
fi
cargo install --path=./startos --no-default-features --features=cli,docker,registry --bin start-cli --locked


@@ -1,44 +0,0 @@
[package]
name = "models"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
axum = "0.8.4"
base64 = "0.22.1"
color-eyre = "0.6.2"
ed25519-dalek = { version = "2.0.0", features = ["serde"] }
gpt = "4.1.0"
lazy_static = "1.4"
mbrman = "0.6.0"
exver = { version = "0.2.0", git = "https://github.com/Start9Labs/exver-rs.git", features = [
"serde",
] }
ipnet = "2.8.0"
num_enum = "0.7.1"
openssl = { version = "0.10.57", features = ["vendored"] }
patch-db = { version = "*", path = "../../patch-db/patch-db", features = [
"trace",
] }
rand = "0.9.1"
regex = "1.10.2"
reqwest = "0.12"
rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git", branch = "master" }
rustls = "0.23"
serde = { version = "1.0", features = ["derive", "rc"] }
serde_json = "1.0"
sqlx = { version = "0.8.6", features = [
"chrono",
"runtime-tokio-rustls",
"postgres",
] }
ssh-key = "0.6.2"
ts-rs = { git = "https://github.com/dr-bonez/ts-rs.git", branch = "feature/top-level-as" } # "8"
thiserror = "2.0"
tokio = { version = "1", features = ["full"] }
torut = { git = "https://github.com/Start9Labs/torut.git", branch = "update/dependencies" }
tracing = "0.1.39"
yasi = "0.1.5"
zbus = "5"


@@ -1,645 +0,0 @@
use std::fmt::{Debug, Display};
use axum::http::uri::InvalidUri;
use axum::http::StatusCode;
use color_eyre::eyre::eyre;
use num_enum::TryFromPrimitive;
use patch_db::Revision;
use rpc_toolkit::reqwest;
use rpc_toolkit::yajrc::{
RpcError, INVALID_PARAMS_ERROR, INVALID_REQUEST_ERROR, METHOD_NOT_FOUND_ERROR, PARSE_ERROR,
};
use serde::{Deserialize, Serialize};
use tokio::task::JoinHandle;
use crate::InvalidId;
#[derive(Debug, Clone, Copy, PartialEq, Eq, TryFromPrimitive)]
#[repr(i32)]
pub enum ErrorKind {
Unknown = 1,
Filesystem = 2,
Docker = 3,
ConfigSpecViolation = 4,
ConfigRulesViolation = 5,
NotFound = 6,
IncorrectPassword = 7,
VersionIncompatible = 8,
Network = 9,
Registry = 10,
Serialization = 11,
Deserialization = 12,
Utf8 = 13,
ParseVersion = 14,
IncorrectDisk = 15,
// Nginx = 16,
Dependency = 17,
ParseS9pk = 18,
ParseUrl = 19,
DiskNotAvailable = 20,
BlockDevice = 21,
InvalidOnionAddress = 22,
Pack = 23,
ValidateS9pk = 24,
DiskCorrupted = 25, // Remove
Tor = 26,
ConfigGen = 27,
ParseNumber = 28,
Database = 29,
InvalidId = 30,
InvalidSignature = 31,
Backup = 32,
Restore = 33,
Authorization = 34,
AutoConfigure = 35,
Action = 36,
RateLimited = 37,
InvalidRequest = 38,
MigrationFailed = 39,
Uninitialized = 40,
ParseNetAddress = 41,
ParseSshKey = 42,
SoundError = 43,
ParseTimestamp = 44,
ParseSysInfo = 45,
Wifi = 46,
Journald = 47,
DiskManagement = 48,
OpenSsl = 49,
PasswordHashGeneration = 50,
DiagnosticMode = 51,
ParseDbField = 52,
Duplicate = 53,
MultipleErrors = 54,
Incoherent = 55,
InvalidBackupTargetId = 56,
ProductKeyMismatch = 57,
LanPortConflict = 58,
Javascript = 59,
Pem = 60,
TLSInit = 61,
Ascii = 62,
MissingHeader = 63,
Grub = 64,
Systemd = 65,
OpenSsh = 66,
Zram = 67,
Lshw = 68,
CpuSettings = 69,
Firmware = 70,
Timeout = 71,
Lxc = 72,
Cancelled = 73,
Git = 74,
DBus = 75,
InstallFailed = 76,
UpdateFailed = 77,
}
impl ErrorKind {
pub fn as_str(&self) -> &'static str {
use ErrorKind::*;
match self {
Unknown => "Unknown Error",
Filesystem => "Filesystem I/O Error",
Docker => "Docker Error",
ConfigSpecViolation => "Config Spec Violation",
ConfigRulesViolation => "Config Rules Violation",
NotFound => "Not Found",
IncorrectPassword => "Incorrect Password",
VersionIncompatible => "Version Incompatible",
Network => "Network Error",
Registry => "Registry Error",
Serialization => "Serialization Error",
Deserialization => "Deserialization Error",
Utf8 => "UTF-8 Parse Error",
ParseVersion => "Version Parsing Error",
IncorrectDisk => "Incorrect Disk",
// Nginx => "Nginx Error",
Dependency => "Dependency Error",
ParseS9pk => "S9PK Parsing Error",
ParseUrl => "URL Parsing Error",
DiskNotAvailable => "Disk Not Available",
BlockDevice => "Block Device Error",
InvalidOnionAddress => "Invalid Onion Address",
Pack => "Pack Error",
ValidateS9pk => "S9PK Validation Error",
DiskCorrupted => "Disk Corrupted", // Remove
Tor => "Tor Daemon Error",
ConfigGen => "Config Generation Error",
ParseNumber => "Number Parsing Error",
Database => "Database Error",
InvalidId => "Invalid ID",
InvalidSignature => "Invalid Signature",
Backup => "Backup Error",
Restore => "Restore Error",
Authorization => "Unauthorized",
AutoConfigure => "Auto-Configure Error",
Action => "Action Failed",
RateLimited => "Rate Limited",
InvalidRequest => "Invalid Request",
MigrationFailed => "Migration Failed",
Uninitialized => "Uninitialized",
ParseNetAddress => "Net Address Parsing Error",
ParseSshKey => "SSH Key Parsing Error",
SoundError => "Sound Interface Error",
ParseTimestamp => "Timestamp Parsing Error",
ParseSysInfo => "System Info Parsing Error",
Wifi => "WiFi Internal Error",
Journald => "Journald Error",
DiskManagement => "Disk Management Error",
OpenSsl => "OpenSSL Internal Error",
PasswordHashGeneration => "Password Hash Generation Error",
DiagnosticMode => "Server is in Diagnostic Mode",
ParseDbField => "Database Field Parse Error",
Duplicate => "Duplication Error",
MultipleErrors => "Multiple Errors",
Incoherent => "Incoherent",
InvalidBackupTargetId => "Invalid Backup Target ID",
ProductKeyMismatch => "Incompatible Product Keys",
LanPortConflict => "Incompatible LAN Port Configuration",
Javascript => "Javascript Engine Error",
Pem => "PEM Encoding Error",
TLSInit => "TLS Backend Initialization Error",
Ascii => "ASCII Parse Error",
MissingHeader => "Missing Header",
Grub => "Grub Error",
Systemd => "Systemd Error",
OpenSsh => "OpenSSH Error",
Zram => "Zram Error",
Lshw => "LSHW Error",
CpuSettings => "CPU Settings Error",
Firmware => "Firmware Error",
Timeout => "Timeout Error",
Lxc => "LXC Error",
Cancelled => "Cancelled",
Git => "Git Error",
DBus => "DBus Error",
InstallFailed => "Install Failed",
UpdateFailed => "Update Failed",
}
}
}
impl Display for ErrorKind {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.as_str())
}
}
#[derive(Debug)]
pub struct Error {
pub source: color_eyre::eyre::Error,
pub kind: ErrorKind,
pub revision: Option<Revision>,
pub task: Option<JoinHandle<()>>,
}
impl Display for Error {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}: {}", self.kind.as_str(), self.source)
}
}
impl Error {
pub fn new<E: Into<color_eyre::eyre::Error>>(source: E, kind: ErrorKind) -> Self {
Error {
source: source.into(),
kind,
revision: None,
task: None,
}
}
pub fn clone_output(&self) -> Self {
Error {
source: ErrorData {
details: format!("{}", self.source),
debug: format!("{:?}", self.source),
}
.into(),
kind: self.kind,
revision: self.revision.clone(),
task: None,
}
}
pub fn with_task(mut self, task: JoinHandle<()>) -> Self {
self.task = Some(task);
self
}
pub async fn wait(mut self) -> Self {
if let Some(task) = &mut self.task {
task.await.log_err();
}
self.task.take();
self
}
}
impl axum::response::IntoResponse for Error {
fn into_response(self) -> axum::response::Response {
let mut res = axum::Json(RpcError::from(self)).into_response();
*res.status_mut() = StatusCode::INTERNAL_SERVER_ERROR;
res
}
}
impl From<std::convert::Infallible> for Error {
fn from(value: std::convert::Infallible) -> Self {
match value {}
}
}
impl From<InvalidId> for Error {
fn from(err: InvalidId) -> Self {
Error::new(err, ErrorKind::InvalidId)
}
}
impl From<std::io::Error> for Error {
fn from(e: std::io::Error) -> Self {
Error::new(e, ErrorKind::Filesystem)
}
}
impl From<std::str::Utf8Error> for Error {
fn from(e: std::str::Utf8Error) -> Self {
Error::new(e, ErrorKind::Utf8)
}
}
impl From<std::string::FromUtf8Error> for Error {
fn from(e: std::string::FromUtf8Error) -> Self {
Error::new(e, ErrorKind::Utf8)
}
}
impl From<exver::ParseError> for Error {
fn from(e: exver::ParseError) -> Self {
Error::new(e, ErrorKind::ParseVersion)
}
}
impl From<rpc_toolkit::url::ParseError> for Error {
fn from(e: rpc_toolkit::url::ParseError) -> Self {
Error::new(e, ErrorKind::ParseUrl)
}
}
impl From<std::num::ParseIntError> for Error {
fn from(e: std::num::ParseIntError) -> Self {
Error::new(e, ErrorKind::ParseNumber)
}
}
impl From<std::num::ParseFloatError> for Error {
fn from(e: std::num::ParseFloatError) -> Self {
Error::new(e, ErrorKind::ParseNumber)
}
}
impl From<patch_db::Error> for Error {
fn from(e: patch_db::Error) -> Self {
Error::new(e, ErrorKind::Database)
}
}
impl From<sqlx::Error> for Error {
fn from(e: sqlx::Error) -> Self {
Error::new(e, ErrorKind::Database)
}
}
impl From<ed25519_dalek::SignatureError> for Error {
fn from(e: ed25519_dalek::SignatureError) -> Self {
Error::new(e, ErrorKind::InvalidSignature)
}
}
impl From<std::net::AddrParseError> for Error {
fn from(e: std::net::AddrParseError) -> Self {
Error::new(e, ErrorKind::ParseNetAddress)
}
}
impl From<torut::control::ConnError> for Error {
fn from(e: torut::control::ConnError) -> Self {
Error::new(e, ErrorKind::Tor)
}
}
impl From<ipnet::AddrParseError> for Error {
fn from(e: ipnet::AddrParseError) -> Self {
Error::new(e, ErrorKind::ParseNetAddress)
}
}
impl From<openssl::error::ErrorStack> for Error {
fn from(e: openssl::error::ErrorStack) -> Self {
Error::new(eyre!("{}", e), ErrorKind::OpenSsl)
}
}
impl From<mbrman::Error> for Error {
fn from(e: mbrman::Error) -> Self {
Error::new(e, ErrorKind::DiskManagement)
}
}
impl From<gpt::GptError> for Error {
fn from(e: gpt::GptError) -> Self {
Error::new(e, ErrorKind::DiskManagement)
}
}
impl From<gpt::mbr::MBRError> for Error {
fn from(e: gpt::mbr::MBRError) -> Self {
Error::new(e, ErrorKind::DiskManagement)
}
}
impl From<InvalidUri> for Error {
fn from(e: InvalidUri) -> Self {
Error::new(eyre!("{}", e), ErrorKind::ParseUrl)
}
}
impl From<ssh_key::Error> for Error {
fn from(e: ssh_key::Error) -> Self {
Error::new(e, ErrorKind::OpenSsh)
}
}
impl From<reqwest::Error> for Error {
fn from(e: reqwest::Error) -> Self {
let kind = match e {
_ if e.is_builder() => ErrorKind::ParseUrl,
_ if e.is_decode() => ErrorKind::Deserialization,
_ => ErrorKind::Network,
};
Error::new(e, kind)
}
}
impl From<torut::onion::OnionAddressParseError> for Error {
fn from(e: torut::onion::OnionAddressParseError) -> Self {
Error::new(e, ErrorKind::Tor)
}
}
impl From<zbus::Error> for Error {
fn from(e: zbus::Error) -> Self {
Error::new(e, ErrorKind::DBus)
}
}
impl From<rustls::Error> for Error {
fn from(e: rustls::Error) -> Self {
Error::new(e, ErrorKind::OpenSsl)
}
}
impl From<patch_db::value::Error> for Error {
fn from(value: patch_db::value::Error) -> Self {
match value.kind {
patch_db::value::ErrorKind::Serialization => {
Error::new(value.source, ErrorKind::Serialization)
}
patch_db::value::ErrorKind::Deserialization => {
Error::new(value.source, ErrorKind::Deserialization)
}
}
}
}
#[derive(Clone, Deserialize, Serialize)]
pub struct ErrorData {
pub details: String,
pub debug: String,
}
impl Display for ErrorData {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Display::fmt(&self.details, f)
}
}
impl Debug for ErrorData {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Display::fmt(&self.debug, f)
}
}
impl std::error::Error for ErrorData {}
impl From<Error> for ErrorData {
fn from(value: Error) -> Self {
Self {
details: value.to_string(),
debug: format!("{:?}", value),
}
}
}
impl From<&RpcError> for ErrorData {
fn from(value: &RpcError) -> Self {
Self {
details: value
.data
.as_ref()
.and_then(|d| {
d.as_object()
.and_then(|d| {
d.get("details")
.and_then(|d| d.as_str().map(|s| s.to_owned()))
})
.or_else(|| d.as_str().map(|s| s.to_owned()))
})
.unwrap_or_else(|| value.message.clone().into_owned()),
debug: value
.data
.as_ref()
.and_then(|d| {
d.as_object()
.and_then(|d| {
d.get("debug")
.and_then(|d| d.as_str().map(|s| s.to_owned()))
})
.or_else(|| d.as_str().map(|s| s.to_owned()))
})
.unwrap_or_else(|| value.message.clone().into_owned()),
}
}
}
impl From<Error> for RpcError {
fn from(e: Error) -> Self {
let mut data_object = serde_json::Map::with_capacity(3);
data_object.insert("details".to_owned(), format!("{}", e.source).into());
data_object.insert("debug".to_owned(), format!("{:?}", e.source).into());
data_object.insert(
"revision".to_owned(),
match serde_json::to_value(&e.revision) {
Ok(a) => a,
Err(e) => {
tracing::warn!("Error serializing revision for Error object: {}", e);
serde_json::Value::Null
}
},
);
RpcError {
code: e.kind as i32,
message: e.kind.as_str().into(),
data: Some(
match serde_json::to_value(&ErrorData {
details: format!("{}", e.source),
debug: format!("{:?}", e.source),
}) {
Ok(a) => a,
Err(e) => {
tracing::warn!("Error serializing error data for Error object: {}", e);
serde_json::Value::Null
}
},
),
}
}
}
impl From<RpcError> for Error {
fn from(e: RpcError) -> Self {
Error::new(
ErrorData::from(&e),
if let Ok(kind) = e.code.try_into() {
kind
} else if e.code == METHOD_NOT_FOUND_ERROR.code {
ErrorKind::NotFound
} else if e.code == PARSE_ERROR.code
|| e.code == INVALID_PARAMS_ERROR.code
|| e.code == INVALID_REQUEST_ERROR.code
{
ErrorKind::Deserialization
} else {
ErrorKind::Unknown
},
)
}
}
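The fallback branches above key off the standard JSON-RPC 2.0 error codes. A stand-alone sketch of the same classification, with string labels standing in for `ErrorKind`:

```rust
/// Standard JSON-RPC 2.0 error codes, as matched by the fallback
/// branches in `From<RpcError> for Error`.
const PARSE_ERROR: i32 = -32700;
const INVALID_REQUEST: i32 = -32600;
const METHOD_NOT_FOUND: i32 = -32601;
const INVALID_PARAMS: i32 = -32602;

/// Map a JSON-RPC error code to an ErrorKind-style label.
/// (Positive codes that match an ErrorKind discriminant are handled
/// by `try_into` in the real impl and omitted from this sketch.)
fn classify(code: i32) -> &'static str {
    match code {
        METHOD_NOT_FOUND => "NotFound",
        PARSE_ERROR | INVALID_PARAMS | INVALID_REQUEST => "Deserialization",
        _ => "Unknown",
    }
}
```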
#[derive(Debug, Default)]
pub struct ErrorCollection(Vec<Error>);
impl ErrorCollection {
pub fn new() -> Self {
Self::default()
}
pub fn handle<T, E: Into<Error>>(&mut self, result: Result<T, E>) -> Option<T> {
match result {
Ok(a) => Some(a),
Err(e) => {
self.0.push(e.into());
None
}
}
}
pub fn into_result(self) -> Result<(), Error> {
if self.0.is_empty() {
Ok(())
} else {
Err(Error::new(eyre!("{}", self), ErrorKind::MultipleErrors))
}
}
}
impl From<ErrorCollection> for Result<(), Error> {
fn from(e: ErrorCollection) -> Self {
e.into_result()
}
}
impl<T, E: Into<Error>> Extend<Result<T, E>> for ErrorCollection {
fn extend<I: IntoIterator<Item = Result<T, E>>>(&mut self, iter: I) {
for item in iter {
self.handle(item);
}
}
}
impl std::fmt::Display for ErrorCollection {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
for (idx, e) in self.0.iter().enumerate() {
if idx > 0 {
write!(f, "; ")?;
}
write!(f, "{}", e)?;
}
Ok(())
}
}
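`ErrorCollection`'s accumulate-then-join behavior can be shown with a minimal analogue over plain strings (types and names here are illustrative):

```rust
/// Minimal analogue of `ErrorCollection` over plain strings:
/// `handle` swallows errors into the list, and `into_result`
/// joins them with "; " as the Display impl above does.
#[derive(Default)]
struct Errors(Vec<String>);

impl Errors {
    fn handle<T>(&mut self, res: Result<T, String>) -> Option<T> {
        match res {
            Ok(v) => Some(v),
            Err(e) => {
                self.0.push(e);
                None
            }
        }
    }

    fn into_result(self) -> Result<(), String> {
        if self.0.is_empty() {
            Ok(())
        } else {
            Err(self.0.join("; "))
        }
    }
}
```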
pub trait ResultExt<T, E>
where
Self: Sized,
{
fn with_kind(self, kind: ErrorKind) -> Result<T, Error>;
fn with_ctx<F: FnOnce(&E) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error>;
fn log_err(self) -> Option<T>;
}
impl<T, E> ResultExt<T, E> for Result<T, E>
where
color_eyre::eyre::Error: From<E>,
{
fn with_kind(self, kind: ErrorKind) -> Result<T, Error> {
self.map_err(|e| Error {
source: e.into(),
kind,
revision: None,
task: None,
})
}
fn with_ctx<F: FnOnce(&E) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> {
self.map_err(|e| {
let (kind, ctx) = f(&e);
let source = color_eyre::eyre::Error::from(e);
let ctx = format!("{}: {}", ctx, source);
let source = source.wrap_err(ctx);
Error {
kind,
source,
revision: None,
task: None,
}
})
}
fn log_err(self) -> Option<T> {
match self {
Ok(a) => Some(a),
Err(e) => {
let e: color_eyre::eyre::Error = e.into();
tracing::error!("{e}");
tracing::debug!("{e:?}");
None
}
}
}
}
impl<T> ResultExt<T, Error> for Result<T, Error> {
fn with_kind(self, kind: ErrorKind) -> Result<T, Error> {
self.map_err(|e| Error {
source: e.source,
kind,
revision: e.revision,
task: e.task,
})
}
fn with_ctx<F: FnOnce(&Error) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> {
self.map_err(|e| {
let (kind, ctx) = f(&e);
let source = e.source;
let ctx = format!("{}: {}", ctx, source);
let source = source.wrap_err(ctx);
Error {
kind,
source,
revision: e.revision,
task: e.task,
}
})
}
fn log_err(self) -> Option<T> {
match self {
Ok(a) => Some(a),
Err(e) => {
tracing::error!("{e}");
tracing::debug!("{e:?}");
None
}
}
}
}
pub trait OptionExt<T>
where
Self: Sized,
{
fn or_not_found(self, message: impl std::fmt::Display) -> Result<T, Error>;
}
impl<T> OptionExt<T> for Option<T> {
fn or_not_found(self, message: impl std::fmt::Display) -> Result<T, Error> {
self.ok_or_else(|| Error::new(eyre!("{}", message), ErrorKind::NotFound))
}
}
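The same extension-trait pattern, sketched with a `String` error so it stands alone (trait name and message format are illustrative):

```rust
/// Sketch of the `or_not_found` extension with a String error in
/// place of the crate's Error type.
trait OptionExtSketch<T> {
    fn or_not_found(self, message: &str) -> Result<T, String>;
}

impl<T> OptionExtSketch<T> for Option<T> {
    fn or_not_found(self, message: &str) -> Result<T, String> {
        self.ok_or_else(|| format!("Not Found: {}", message))
    }
}
```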
#[macro_export]
macro_rules! ensure_code {
($x:expr, $c:expr, $fmt:expr $(, $arg:expr)*) => {
if !($x) {
return Err(Error::new(color_eyre::eyre::eyre!($fmt, $($arg, )*), $c));
}
};
}
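`ensure_code!` is an early-return guard. A simplified stand-alone version that bails with a `(code, message)` tuple instead of the crate's `Error` (the port check and code 58, `LanPortConflict`, are illustrative):

```rust
/// Simplified stand-alone version of `ensure_code!`: return early from
/// the enclosing function with `(code, message)` when the check fails.
macro_rules! ensure_code {
    ($x:expr, $c:expr, $fmt:expr $(, $arg:expr)*) => {
        if !($x) {
            return Err(($c, format!($fmt, $($arg),*)));
        }
    };
}

fn check_port(port: u16) -> Result<(), (i32, String)> {
    ensure_code!(port >= 1024, 58, "reserved port: {}", port);
    Ok(())
}
```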


@@ -1,15 +0,0 @@
mod clap;
mod data_url;
mod errors;
mod id;
mod mime;
mod procedure_name;
mod version;
pub use clap::*;
pub use data_url::*;
pub use errors::*;
pub use id::*;
pub use mime::*;
pub use procedure_name::*;
pub use version::*;


@@ -2,9 +2,21 @@
cd "$(dirname "${BASH_SOURCE[0]}")"
source ./builder-alias.sh
set -ea
shopt -s expand_aliases
PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
else
if [ "$PROFILE" != "debug" ]; then
>&2 echo "Unknown profile $PROFILE: falling back to debug..."
PROFILE=debug
fi
fi
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
@@ -22,15 +34,12 @@ cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-musl-builder sh -c "apt-get update && apt-get install -y rsync && cd core && cargo test --release --features=test,$FEATURES --workspace --locked --target=$ARCH-unknown-linux-musl -- --skip export_bindings_ && chown \$UID:\$UID target"
if [ "$(ls -nd core/target | awk '{ print $3 }')" != "$UID" ]; then
rust-musl-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
fi
rust-zig-builder cargo test --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=test,$FEATURES --workspace --locked --lib -- --skip export_bindings_
rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID /usr/local/cargo"


@@ -2,20 +2,20 @@
authors = ["Aiden McClelland <me@drbonez.dev>"]
description = "The core of StartOS"
documentation = "https://docs.rs/start-os"
edition = "2021"
edition = "2024"
keywords = [
"self-hosted",
"raspberry-pi",
"privacy",
"bitcoin",
"full-node",
"lightning",
"privacy",
"raspberry-pi",
"self-hosted",
]
license = "MIT"
name = "start-os"
readme = "README.md"
repository = "https://github.com/Start9Labs/start-os"
version = "0.4.0-alpha.8" # VERSION_BUMP
license = "MIT"
version = "0.4.0-alpha.16" # VERSION_BUMP
[lib]
name = "startos"
@@ -23,47 +23,68 @@ path = "src/lib.rs"
[[bin]]
name = "startbox"
path = "src/main.rs"
path = "src/main/startbox.rs"
[[bin]]
name = "start-cli"
path = "src/main.rs"
path = "src/main/start-cli.rs"
[[bin]]
name = "containerbox"
path = "src/main.rs"
name = "start-container"
path = "src/main/start-container.rs"
[[bin]]
name = "registrybox"
path = "src/main.rs"
path = "src/main/registrybox.rs"
[[bin]]
name = "tunnelbox"
path = "src/main/tunnelbox.rs"
[features]
cli = []
container-runtime = ["procfs", "pty-process"]
daemon = ["mail-send"]
registry = []
default = ["cli", "daemon", "registry", "container-runtime"]
arti = [
"arti-client",
"safelog",
"tor-cell",
"tor-hscrypto",
"tor-hsservice",
"tor-keymgr",
"tor-llcrypto",
"tor-proto",
"tor-rtcompat",
]
console = ["console-subscriber", "tokio/tracing"]
default = []
dev = []
unstable = ["console-subscriber", "tokio/tracing"]
docker = []
test = []
unstable = ["backtrace-on-stack-overflow"]
[dependencies]
aes = { version = "0.7.5", features = ["ctr"] }
arti-client = { version = "0.33", features = [
"compression",
"ephemeral-keystore",
"experimental-api",
"onion-service-client",
"onion-service-service",
"rustls",
"static",
"tokio",
], default-features = false, git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
async-acme = { version = "0.6.0", git = "https://github.com/dr-bonez/async-acme.git", features = [
"use_rustls",
"use_tokio",
] }
async-compression = { version = "0.4.4", features = [
"gzip",
async-compression = { version = "0.4.32", features = [
"brotli",
"gzip",
"tokio",
"zstd",
] }
async-stream = "0.3.5"
async-trait = "0.1.74"
axum = { version = "0.8.4", features = ["ws"] }
barrage = "0.2.3"
backhand = "0.21.0"
axum = { version = "0.8.4", features = ["ws", "http2"] }
backtrace-on-stack-overflow = { version = "0.3.0", optional = true }
base32 = "0.5.0"
base64 = "0.22.1"
base64ct = "1.6.0"
@@ -73,21 +94,24 @@ bytes = "1"
chrono = { version = "0.4.31", features = ["serde"] }
clap = { version = "4.4.12", features = ["string"] }
color-eyre = "0.6.2"
console = "0.15.7"
console-subscriber = { version = "0.4.1", optional = true }
console = "0.16.2"
console-subscriber = { version = "0.5.0", optional = true }
const_format = "0.2.34"
cookie = "0.18.0"
cookie_store = "0.21.0"
cookie_store = "0.22.0"
curve25519-dalek = "4.1.3"
der = { version = "0.7.9", features = ["derive", "pem"] }
digest = "0.10.7"
divrem = "1.0.0"
ed25519 = { version = "2.2.3", features = ["pkcs8", "pem", "alloc"] }
ed25519-dalek = { version = "2.1.1", features = [
dns-lookup = "3.0.1"
ed25519 = { version = "2.2.3", features = ["alloc", "pem", "pkcs8"] }
ed25519-dalek = { version = "2.2.0", features = [
"digest",
"hazmat",
"pkcs8",
"rand_core",
"serde",
"zeroize",
"rand_core",
"digest",
"pkcs8",
] }
ed25519-dalek-v1 = { package = "ed25519-dalek", version = "1" }
exver = { version = "0.2.0", git = "https://github.com/Start9Labs/exver-rs.git", features = [
@@ -97,33 +121,35 @@ fd-lock-rs = "0.1.4"
form_urlencoded = "1.2.1"
futures = "0.3.28"
gpt = "4.1.0"
helpers = { path = "../helpers" }
hex = "0.4.3"
hickory-client = "0.25.2"
hickory-server = "0.25.2"
hmac = "0.12.1"
http = "1.0.0"
http-body-util = "0.1"
hyper = { version = "1.5", features = ["server", "http1", "http2"] }
hyper = { version = "1.5", features = ["http1", "http2", "server"] }
hyper-util = { version = "0.1.10", features = [
"http1",
"http2",
"server",
"server-auto",
"server-graceful",
"service",
"http1",
"http2",
"tokio",
] }
id-pool = { version = "0.2.2", default-features = false, features = [
"serde",
"u16",
] }
imbl = "4.0.1"
imbl-value = "0.2.0"
iddqd = "0.3.14"
imbl = { version = "6", features = ["serde", "small-chunks"] }
imbl-value = { version = "0.4.3", features = ["ts-rs"] }
include_dir = { version = "0.7.3", features = ["metadata"] }
indexmap = { version = "2.0.2", features = ["serde"] }
indicatif = { version = "0.17.7", features = ["tokio"] }
indicatif = { version = "0.18.3", features = ["tokio"] }
inotify = "0.11.0"
integer-encoding = { version = "4.0.0", features = ["tokio_async"] }
ipnet = { version = "2.8.0", features = ["serde"] }
iprange = { version = "0.6.7", features = ["serde"] }
isocountry = "0.3.2"
itertools = "0.14.0"
jaq-core = "0.10.1"
@@ -133,11 +159,20 @@ jsonpath_lib = { git = "https://github.com/Start9Labs/jsonpath.git" }
lazy_async_pool = "0.3.3"
lazy_format = "2.0"
lazy_static = "1.4.0"
lettre = { version = "0.11.18", default-features = false, features = [
"aws-lc-rs",
"builder",
"hostname",
"pool",
"rustls-platform-verifier",
"smtp-transport",
"tokio1-rustls",
] }
libc = "0.2.149"
log = "0.4.20"
mio = "1"
mbrman = "0.6.0"
models = { version = "*", path = "../models" }
miette = { version = "7.6.0", features = ["fancy"] }
mio = "1"
new_mime_guess = "4"
nix = { version = "0.30.1", features = [
"fs",
@@ -150,8 +185,8 @@ nix = { version = "0.30.1", features = [
] }
nom = "8.0.0"
num = "0.4.1"
num_enum = "0.7.0"
num_cpus = "1.16.0"
num_enum = "0.7.0"
once_cell = "1.19.0"
openssh-keys = "0.6.2"
openssl = { version = "0.10.57", features = ["vendored"] }
@@ -163,72 +198,83 @@ pbkdf2 = "0.12.2"
pin-project = "1.1.3"
pkcs8 = { version = "0.10.2", features = ["std"] }
prettytable-rs = "0.10.0"
procfs = { version = "0.17.0", optional = true }
proptest = "1.3.1"
proptest-derive = "0.5.0"
pty-process = { version = "0.5.1", optional = true }
proptest-derive = "0.7.0"
qrcode = "0.14.1"
rand = "0.9.0"
r3bl_tui = "0.7.6"
rand = "0.9.2"
regex = "1.10.2"
reqwest = { version = "0.12.4", features = ["stream", "json", "socks"] }
reqwest_cookie_store = "0.8.0"
reqwest = { version = "0.12.25", features = [
"json",
"socks",
"stream",
"http2",
] }
reqwest_cookie_store = "0.9.0"
rpassword = "7.2.0"
rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git", branch = "master" }
rust-argon2 = "2.0.0"
rustyline-async = "0.4.1"
rust-argon2 = "3.0.0"
rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git" }
safelog = { version = "0.4.8", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
semver = { version = "1.0.20", features = ["serde"] }
serde = { version = "1.0", features = ["derive", "rc"] }
serde_cbor = { package = "ciborium", version = "0.2.1" }
serde_json = "1.0"
serde_toml = { package = "toml", version = "0.8.2" }
serde_urlencoded = "0.7"
serde_with = { version = "3.4.0", features = ["macros", "json"] }
serde_toml = { package = "toml", version = "0.9.9+spec-1.0.0" }
serde_yaml = { package = "serde_yml", version = "0.0.12" }
sha-crypt = "0.5.0"
sha2 = "0.10.2"
shell-words = "1"
signal-hook = "0.3.17"
simple-logging = "2.0.2"
socket2 = "0.5.7"
socket2 = { version = "0.6.0", features = ["all"] }
socks5-impl = { version = "0.7.2", features = ["client", "server"] }
sqlx = { version = "0.8.6", features = [
"chrono",
"runtime-tokio-rustls",
"postgres",
] }
"runtime-tokio-rustls",
], default-features = false }
sscanf = "0.4.1"
ssh-key = { version = "0.6.2", features = ["ed25519"] }
tar = "0.4.40"
termion = "4.0.5"
thiserror = "2.0.12"
textwrap = "0.16.1"
thiserror = "2.0.12"
tokio = { version = "1.38.1", features = ["full"] }
tokio-rustls = "0.26.0"
tokio-socks = "0.5.1"
tokio-stream = { version = "0.1.14", features = ["io-util", "sync", "net"] }
tokio-rustls = "0.26.4"
tokio-stream = { version = "0.1.14", features = ["io-util", "net", "sync"] }
tokio-tar = { git = "https://github.com/dr-bonez/tokio-tar.git" }
tokio-tungstenite = { version = "0.26.2", features = ["native-tls", "url"] }
tokio-util = { version = "0.7.9", features = ["io"] }
torut = { git = "https://github.com/Start9Labs/torut.git", branch = "update/dependencies", features = [
"serialize",
] }
tor-cell = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-hscrypto = { version = "0.33", features = [
"full",
], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-hsservice = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-keymgr = { version = "0.33", features = [
"ephemeral-keystore",
], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-llcrypto = { version = "0.33", features = [
"full",
], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-proto = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-rtcompat = { version = "0.33", features = [
"rustls",
"tokio",
], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
torut = "0.2.1"
tower-service = "0.3.3"
tracing = "0.1.39"
tracing-error = "0.2.0"
tracing-futures = "0.2.5"
tracing-journald = "0.3.0"
tracing-subscriber = { version = "0.3.17", features = ["env-filter"] }
trust-dns-server = "0.23.1"
ts-rs = { git = "https://github.com/dr-bonez/ts-rs.git", branch = "feature/top-level-as" } # "8.1.0"
typed-builder = "0.21.0"
unix-named-pipe = "0.2.0"
ts-rs = "9.0.1"
typed-builder = "0.23.2"
url = { version = "2.4.1", features = ["serde"] }
urlencoding = "2.1.3"
uuid = { version = "1.4.1", features = ["v4"] }
visit-rs = "0.1.1"
x25519-dalek = { version = "2.0.1", features = ["static_secrets"] }
zbus = "5.1.1"
zeroize = "1.6.0"
mail-send = { git = "https://github.com/dr-bonez/mail-send.git", branch = "main", optional = true }
rustls = "0.23.20"
rustls-pki-types = { version = "1.10.1", features = ["alloc"] }
[target.'cfg(target_os = "linux")'.dependencies]
procfs = "0.18.0"
pty-process = "0.5.1"
[profile.test]
opt-level = 3


@@ -1,12 +1,14 @@
use std::collections::BTreeMap;
use std::time::SystemTime;
use imbl_value::InternedString;
use openssl::pkey::{PKey, Private};
use openssl::x509::X509;
use torut::onion::TorSecretKeyV3;
use crate::db::model::DatabaseModel;
use crate::hostname::{generate_hostname, generate_id, Hostname};
use crate::net::ssl::{generate_key, make_root_cert};
use crate::hostname::{Hostname, generate_hostname, generate_id};
use crate::net::ssl::{gen_nistp256, make_root_cert};
use crate::net::tor::TorSecretKey;
use crate::prelude::*;
use crate::util::serde::Pem;
@@ -19,28 +21,28 @@ fn hash_password(password: &str) -> Result<String, Error> {
.with_kind(crate::ErrorKind::PasswordHashGeneration)
}
#[derive(Debug, Clone)]
#[derive(Clone)]
pub struct AccountInfo {
pub server_id: String,
pub hostname: Hostname,
pub password: String,
pub tor_keys: Vec<TorSecretKeyV3>,
pub tor_keys: Vec<TorSecretKey>,
pub root_ca_key: PKey<Private>,
pub root_ca_cert: X509,
pub ssh_key: ssh_key::PrivateKey,
pub compat_s9pk_key: ed25519_dalek::SigningKey,
pub developer_key: ed25519_dalek::SigningKey,
}
impl AccountInfo {
pub fn new(password: &str, start_time: SystemTime) -> Result<Self, Error> {
let server_id = generate_id();
let hostname = generate_hostname();
let tor_key = vec![TorSecretKeyV3::generate()];
let root_ca_key = generate_key()?;
let tor_key = vec![TorSecretKey::generate()];
let root_ca_key = gen_nistp256()?;
let root_ca_cert = make_root_cert(&root_ca_key, &hostname, start_time)?;
let ssh_key = ssh_key::PrivateKey::from(ssh_key::private::Ed25519Keypair::random(
&mut ssh_key::rand_core::OsRng::default(),
));
let compat_s9pk_key =
let developer_key =
ed25519_dalek::SigningKey::generate(&mut ssh_key::rand_core::OsRng::default());
Ok(Self {
server_id,
@@ -50,7 +52,7 @@ impl AccountInfo {
root_ca_key,
root_ca_cert,
ssh_key,
compat_s9pk_key,
developer_key,
})
}
@@ -74,7 +76,7 @@ impl AccountInfo {
let root_ca_key = cert_store.as_root_key().de()?.0;
let root_ca_cert = cert_store.as_root_cert().de()?.0;
let ssh_key = db.as_private().as_ssh_privkey().de()?.0;
let compat_s9pk_key = db.as_private().as_compat_s9pk_key().de()?.0;
let compat_s9pk_key = db.as_private().as_developer_key().de()?.0;
Ok(Self {
server_id,
@@ -84,7 +86,7 @@ impl AccountInfo {
root_ca_key,
root_ca_cert,
ssh_key,
compat_s9pk_key,
developer_key: compat_s9pk_key,
})
}
@@ -103,27 +105,36 @@ impl AccountInfo {
&self
.tor_keys
.iter()
.map(|tor_key| tor_key.public().get_onion_address())
.map(|tor_key| tor_key.onion_address())
.collect(),
)?;
server_info.as_password_hash_mut().ser(&self.password)?;
db.as_private_mut().as_password_mut().ser(&self.password)?;
db.as_private_mut()
.as_ssh_privkey_mut()
.ser(Pem::new_ref(&self.ssh_key))?;
db.as_private_mut()
.as_compat_s9pk_key_mut()
.ser(Pem::new_ref(&self.compat_s9pk_key))?;
.as_developer_key_mut()
.ser(Pem::new_ref(&self.developer_key))?;
let key_store = db.as_private_mut().as_key_store_mut();
for tor_key in &self.tor_keys {
key_store.as_onion_mut().insert_key(tor_key)?;
}
let cert_store = key_store.as_local_certs_mut();
cert_store
.as_root_key_mut()
.ser(Pem::new_ref(&self.root_ca_key))?;
cert_store
.as_root_cert_mut()
.ser(Pem::new_ref(&self.root_ca_cert))?;
if cert_store.as_root_cert().de()?.0 != self.root_ca_cert {
cert_store
.as_root_key_mut()
.ser(Pem::new_ref(&self.root_ca_key))?;
cert_store
.as_root_cert_mut()
.ser(Pem::new_ref(&self.root_ca_cert))?;
let int_key = crate::net::ssl::gen_nistp256()?;
let int_cert =
crate::net::ssl::make_int_cert((&self.root_ca_key, &self.root_ca_cert), &int_key)?;
cert_store.as_int_key_mut().ser(&Pem(int_key))?;
cert_store.as_int_cert_mut().ser(&Pem(int_cert))?;
cert_store.as_leaves_mut().ser(&BTreeMap::new())?;
}
Ok(())
}
@@ -131,4 +142,17 @@ impl AccountInfo {
self.password = hash_password(password)?;
Ok(())
}
pub fn hostnames(&self) -> impl IntoIterator<Item = InternedString> + Send + '_ {
[
self.hostname.no_dot_host_name(),
self.hostname.local_domain_name(),
]
.into_iter()
.chain(
self.tor_keys
.iter()
.map(|k| InternedString::from_display(&k.onion_address())),
)
}
}
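The new `hostnames()` method above chains the server's base hostnames with onion addresses derived from its Tor keys. A minimal std-only sketch of that iterator-chaining shape (plain strings stand in for the real `Hostname`/`TorSecretKey` types, which are assumptions here):

```rust
// Sketch of the pattern behind AccountInfo::hostnames(): a fixed array of
// base names chained with names derived from a list of keys.
fn hostnames(hostname: &str, tor_onions: &[&str]) -> Vec<String> {
    [hostname.to_string(), format!("{hostname}.local")]
        .into_iter()
        .chain(tor_onions.iter().map(|k| format!("{k}.onion")))
        .collect()
}

fn main() {
    let names = hostnames("start-demo", &["abc123"]);
    assert_eq!(names, ["start-demo", "start-demo.local", "abc123.onion"]);
    println!("{names:?}");
}
```

Returning `impl IntoIterator + Send + '_` as in the diff avoids the intermediate `Vec` allocation; the sketch collects only to make the result easy to inspect.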


@@ -1,21 +1,21 @@
use std::fmt;
use clap::{CommandFactory, FromArgMatches, Parser};
pub use models::ActionId;
use models::{PackageId, ReplayId};
use qrcode::QrCode;
use rpc_toolkit::{from_fn_async, Context, HandlerExt, ParentHandler};
use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
use tracing::instrument;
use ts_rs::TS;
pub use crate::ActionId;
use crate::context::{CliContext, RpcContext};
use crate::db::model::package::TaskSeverity;
use crate::prelude::*;
use crate::rpc_continuations::Guid;
use crate::util::serde::{
display_serializable, HandlerExtSerde, StdinDeserializable, WithIoFormat,
HandlerExtSerde, StdinDeserializable, WithIoFormat, display_serializable,
};
use crate::{PackageId, ReplayId};
pub fn action_api<C: Context>() -> ParentHandler<C> {
ParentHandler::new()
@@ -52,6 +52,8 @@ pub fn action_api<C: Context>() -> ParentHandler<C> {
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct ActionInput {
#[serde(default)]
pub event_id: Guid,
#[ts(type = "Record<string, unknown>")]
pub spec: Value,
#[ts(type = "Record<string, unknown> | null")]
@@ -270,6 +272,7 @@ pub fn display_action_result<T: Serialize>(
#[serde(rename_all = "camelCase")]
pub struct RunActionParams {
pub package_id: PackageId,
pub event_id: Option<Guid>,
pub action_id: ActionId,
#[ts(optional, type = "any")]
pub input: Option<Value>,
@@ -278,6 +281,7 @@ pub struct RunActionParams {
#[derive(Parser)]
struct CliRunActionParams {
pub package_id: PackageId,
pub event_id: Option<Guid>,
pub action_id: ActionId,
#[command(flatten)]
pub input: StdinDeserializable<Option<Value>>,
@@ -286,12 +290,14 @@ impl From<CliRunActionParams> for RunActionParams {
fn from(
CliRunActionParams {
package_id,
event_id,
action_id,
input,
}: CliRunActionParams,
) -> Self {
Self {
package_id,
event_id,
action_id,
input: input.0,
}
@@ -331,6 +337,7 @@ pub async fn run_action(
ctx: RpcContext,
RunActionParams {
package_id,
event_id,
action_id,
input,
}: RunActionParams,
@@ -340,7 +347,11 @@ pub async fn run_action(
.await
.as_ref()
.or_not_found(lazy_format!("Manager for {}", package_id))?
.run_action(Guid::new(), action_id, input.unwrap_or_default())
.run_action(
event_id.unwrap_or_default(),
action_id,
input.unwrap_or_default(),
)
.await
.map(|res| res.map(ActionResult::upcast))
}


@@ -3,29 +3,27 @@ use std::collections::BTreeMap;
use chrono::{DateTime, Utc};
use clap::Parser;
use color_eyre::eyre::eyre;
use imbl_value::{json, InternedString};
use imbl_value::{InternedString, json};
use itertools::Itertools;
use josekit::jwk::Jwk;
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{from_fn_async, Context, HandlerArgs, HandlerExt, ParentHandler};
use rpc_toolkit::{CallRemote, Context, HandlerArgs, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
use tokio::io::AsyncWriteExt;
use tracing::instrument;
use ts_rs::TS;
use crate::context::{CliContext, RpcContext};
use crate::db::model::DatabaseModel;
use crate::middleware::auth::{
AsLogoutSessionId, HasLoggedOutSessions, HashSessionToken, LoginRes,
use crate::middleware::auth::session::{
AsLogoutSessionId, HasLoggedOutSessions, HashSessionToken, LoginRes, SessionAuthContext,
};
use crate::prelude::*;
use crate::util::crypto::EncryptedWire;
use crate::util::io::create_file_mod;
use crate::util::serde::{display_serializable, HandlerExtSerde, WithIoFormat};
use crate::{ensure_code, Error, ResultExt};
use crate::util::serde::{HandlerExtSerde, WithIoFormat, display_serializable};
use crate::{Error, ResultExt, ensure_code};
#[derive(Debug, Clone, Default, Deserialize, Serialize, TS)]
#[ts(as = "BTreeMap::<String, Session>")]
pub struct Sessions(pub BTreeMap<InternedString, Session>);
impl Sessions {
pub fn new() -> Self {
@@ -112,31 +110,34 @@ impl std::str::FromStr for PasswordType {
})
}
}
pub fn auth<C: Context>() -> ParentHandler<C> {
pub fn auth<C: Context, AC: SessionAuthContext>() -> ParentHandler<C>
where
CliContext: CallRemote<AC>,
{
ParentHandler::new()
.subcommand(
"login",
from_fn_async(login_impl)
from_fn_async(login_impl::<AC>)
.with_metadata("login", Value::Bool(true))
.no_cli(),
)
.subcommand(
"login",
from_fn_async(cli_login)
from_fn_async(cli_login::<AC>)
.no_display()
.with_about("Log in to StartOS server"),
.with_about("Log in a new auth session"),
)
.subcommand(
"logout",
from_fn_async(logout)
from_fn_async(logout::<AC>)
.with_metadata("get_session", Value::Bool(true))
.no_display()
.with_about("Log out of StartOS server")
.with_about("Log out of current auth session")
.with_call_remote::<CliContext>(),
)
.subcommand(
"session",
session::<C>().with_about("List or kill StartOS sessions"),
session::<C, AC>().with_about("List or kill auth sessions"),
)
.subcommand(
"reset-password",
@@ -146,7 +147,7 @@ pub fn auth<C: Context>() -> ParentHandler<C> {
"reset-password",
from_fn_async(cli_reset_password)
.no_display()
.with_about("Reset StartOS password"),
.with_about("Reset password"),
)
.subcommand(
"get-pubkey",
@@ -172,17 +173,20 @@ fn gen_pwd() {
}
#[instrument(skip_all)]
async fn cli_login(
async fn cli_login<C: SessionAuthContext>(
HandlerArgs {
context: ctx,
parent_method,
method,
..
}: HandlerArgs<CliContext>,
) -> Result<(), RpcError> {
) -> Result<(), RpcError>
where
CliContext: CallRemote<C>,
{
let password = rpassword::prompt_password("Password: ")?;
ctx.call_remote::<RpcContext>(
ctx.call_remote::<C>(
&parent_method.into_iter().chain(method).join("."),
json!({
"password": password,
@@ -210,39 +214,31 @@ pub fn check_password(hash: &str, password: &str) -> Result<(), Error> {
Ok(())
}
pub fn check_password_against_db(db: &DatabaseModel, password: &str) -> Result<(), Error> {
let pw_hash = db.as_private().as_password().de()?;
check_password(&pw_hash, password)?;
Ok(())
}
#[derive(Deserialize, Serialize, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
pub struct LoginParams {
password: Option<PasswordType>,
password: String,
#[ts(skip)]
#[serde(rename = "__auth_userAgent")] // from Auth middleware
#[serde(rename = "__Auth_userAgent")] // from Auth middleware
user_agent: Option<String>,
#[serde(default)]
ephemeral: bool,
}
#[instrument(skip_all)]
pub async fn login_impl(
ctx: RpcContext,
pub async fn login_impl<C: SessionAuthContext>(
ctx: C,
LoginParams {
password,
user_agent,
ephemeral,
}: LoginParams,
) -> Result<LoginRes, Error> {
let password = password.unwrap_or_default().decrypt(&ctx)?;
let tok = if ephemeral {
check_password_against_db(&ctx.db.peek().await, &password)?;
C::check_password(&ctx.db().peek().await, &password)?;
let hash_token = HashSessionToken::new();
ctx.ephemeral_sessions.mutate(|s| {
ctx.ephemeral_sessions().mutate(|s| {
s.0.insert(
hash_token.hashed().clone(),
Session {
@@ -254,11 +250,11 @@ pub async fn login_impl(
});
Ok(hash_token.to_login_res())
} else {
ctx.db
ctx.db()
.mutate(|db| {
check_password_against_db(db, &password)?;
C::check_password(db, &password)?;
let hash_token = HashSessionToken::new();
db.as_private_mut().as_sessions_mut().insert(
C::access_sessions(db).insert(
hash_token.hashed(),
&Session {
logged_in: Utc::now(),
@@ -273,12 +269,7 @@ pub async fn login_impl(
.result
}?;
if tokio::fs::metadata("/media/startos/config/overlay/etc/shadow")
.await
.is_err()
{
write_shadow(&password).await?;
}
ctx.post_login_hook(&password).await?;
Ok(tok)
}
@@ -288,12 +279,12 @@ pub async fn login_impl(
#[command(rename_all = "kebab-case")]
pub struct LogoutParams {
#[ts(skip)]
#[serde(rename = "__auth_session")] // from Auth middleware
#[serde(rename = "__Auth_session")] // from Auth middleware
session: InternedString,
}
pub async fn logout(
ctx: RpcContext,
pub async fn logout<C: SessionAuthContext>(
ctx: C,
LogoutParams { session }: LogoutParams,
) -> Result<Option<HasLoggedOutSessions>, Error> {
Ok(Some(
@@ -321,22 +312,25 @@ pub struct SessionList {
sessions: Sessions,
}
pub fn session<C: Context>() -> ParentHandler<C> {
pub fn session<C: Context, AC: SessionAuthContext>() -> ParentHandler<C>
where
CliContext: CallRemote<AC>,
{
ParentHandler::new()
.subcommand(
"list",
from_fn_async(list)
from_fn_async(list::<AC>)
.with_metadata("get_session", Value::Bool(true))
.with_display_serializable()
.with_custom_display_fn(|handle, result| display_sessions(handle.params, result))
.with_about("Display all server sessions")
.with_about("Display all auth sessions")
.with_call_remote::<CliContext>(),
)
.subcommand(
"kill",
from_fn_async(kill)
from_fn_async(kill::<AC>)
.no_display()
.with_about("Terminate existing server session(s)")
.with_about("Terminate existing auth session(s)")
.with_call_remote::<CliContext>(),
)
}
@@ -379,18 +373,18 @@ fn display_sessions(params: WithIoFormat<ListParams>, arg: SessionList) -> Resul
pub struct ListParams {
#[arg(skip)]
#[ts(skip)]
#[serde(rename = "__auth_session")] // from Auth middleware
#[serde(rename = "__Auth_session")] // from Auth middleware
session: Option<InternedString>,
}
// #[command(display(display_sessions))]
#[instrument(skip_all)]
pub async fn list(
ctx: RpcContext,
pub async fn list<C: SessionAuthContext>(
ctx: C,
ListParams { session, .. }: ListParams,
) -> Result<SessionList, Error> {
let mut sessions = ctx.db.peek().await.into_private().into_sessions().de()?;
ctx.ephemeral_sessions.peek(|s| {
let mut sessions = C::access_sessions(&mut ctx.db().peek().await).de()?;
ctx.ephemeral_sessions().peek(|s| {
sessions
.0
.extend(s.0.iter().map(|(k, v)| (k.clone(), v.clone())))
@@ -424,7 +418,10 @@ pub struct KillParams {
}
#[instrument(skip_all)]
pub async fn kill(ctx: RpcContext, KillParams { ids }: KillParams) -> Result<(), Error> {
pub async fn kill<C: SessionAuthContext>(
ctx: C,
KillParams { ids }: KillParams,
) -> Result<(), Error> {
HasLoggedOutSessions::new(ids.into_iter().map(KillSessionId::new), &ctx).await?;
Ok(())
}
@@ -480,30 +477,19 @@ pub async fn reset_password_impl(
let old_password = old_password.unwrap_or_default().decrypt(&ctx)?;
let new_password = new_password.unwrap_or_default().decrypt(&ctx)?;
let mut account = ctx.account.write().await;
if !argon2::verify_encoded(&account.password, old_password.as_bytes())
.with_kind(crate::ErrorKind::IncorrectPassword)?
{
return Err(Error::new(
eyre!("Incorrect Password"),
crate::ErrorKind::IncorrectPassword,
));
}
account.set_password(&new_password)?;
let account_password = &account.password;
let account = account.clone();
ctx.db
.mutate(|d| {
d.as_public_mut()
.as_server_info_mut()
.as_password_hash_mut()
.ser(account_password)?;
account.save(d)?;
Ok(())
})
.await
.result
let account = ctx.account.mutate(|account| {
if !argon2::verify_encoded(&account.password, old_password.as_bytes())
.with_kind(crate::ErrorKind::IncorrectPassword)?
{
return Err(Error::new(
eyre!("Incorrect Password"),
crate::ErrorKind::IncorrectPassword,
));
}
account.set_password(&new_password)?;
Ok(account.clone())
})?;
ctx.db.mutate(|d| account.save(d)).await.result
}
#[instrument(skip_all)]

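The `reset_password_impl` rewrite above migrates from holding an async `RwLock` guard (`ctx.account.write().await`) to a closure-based `peek`/`mutate` API, so guards can never escape or be held across awaits. A hypothetical std-only sketch of that wrapper shape (the real type in StartOS differs; the name `SyncCell` is an assumption):

```rust
use std::sync::Mutex;

// Hypothetical sketch of a peek/mutate closure API: access only ever happens
// inside a closure, so the lock guard cannot leak out of the call.
struct SyncCell<T>(Mutex<T>);

impl<T> SyncCell<T> {
    fn new(value: T) -> Self {
        Self(Mutex::new(value))
    }
    // Read-only access through a closure.
    fn peek<R>(&self, f: impl FnOnce(&T) -> R) -> R {
        f(&self.0.lock().unwrap())
    }
    // Mutation through a closure; the closure's return value propagates out,
    // which is how the diff returns the updated account snapshot.
    fn mutate<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        f(&mut self.0.lock().unwrap())
    }
}

fn main() {
    let account = SyncCell::new(String::from("old-hash"));
    let snapshot = account.mutate(|pw| {
        *pw = String::from("new-hash");
        pw.clone()
    });
    assert_eq!(snapshot, "new-hash");
    assert_eq!(account.peek(|pw| pw.clone()), "new-hash");
    println!("ok");
}
```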

@@ -5,17 +5,15 @@ use std::sync::Arc;
use chrono::Utc;
use clap::Parser;
use color_eyre::eyre::eyre;
use helpers::AtomicFile;
use imbl::OrdSet;
use models::PackageId;
use serde::{Deserialize, Serialize};
use tokio::io::AsyncWriteExt;
use tracing::instrument;
use ts_rs::TS;
use super::target::{BackupTargetId, PackageBackupInfo};
use super::PackageBackupReport;
use crate::auth::check_password_against_db;
use super::target::{BackupTargetId, PackageBackupInfo};
use crate::PackageId;
use crate::backup::os::OsBackup;
use crate::backup::{BackupReport, ServerBackupReport};
use crate::context::RpcContext;
@@ -24,9 +22,10 @@ use crate::db::model::{Database, DatabaseModel};
use crate::disk::mount::backup::BackupMountGuard;
use crate::disk::mount::filesystem::ReadWrite;
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::notifications::{notify, NotificationLevel};
use crate::middleware::auth::session::SessionAuthContext;
use crate::notifications::{NotificationLevel, notify};
use crate::prelude::*;
use crate::util::io::dir_copy;
use crate::util::io::{AtomicFile, dir_copy};
use crate::util::serde::IoFormat;
use crate::version::VersionT;
@@ -170,7 +169,7 @@ pub async fn backup_all(
let ((fs, package_ids, server_id), status_guard) = (
ctx.db
.mutate(|db| {
check_password_against_db(db, &password)?;
RpcContext::check_password(db, &password)?;
let fs = target_id.load(db)?;
let package_ids = if let Some(ids) = package_ids {
ids.into_iter().collect()
@@ -312,19 +311,14 @@ async fn perform_backup(
let ui = ctx.db.peek().await.into_public().into_ui().de()?;
let mut os_backup_file =
AtomicFile::new(backup_guard.path().join("os-backup.json"), None::<PathBuf>)
.await
.with_kind(ErrorKind::Filesystem)?;
AtomicFile::new(backup_guard.path().join("os-backup.json"), None::<PathBuf>).await?;
os_backup_file
.write_all(&IoFormat::Json.to_vec(&OsBackup {
account: ctx.account.read().await.clone(),
account: ctx.account.peek(|a| a.clone()),
ui,
})?)
.await?;
os_backup_file
.save()
.await
.with_kind(ErrorKind::Filesystem)?;
os_backup_file.save().await?;
let luks_folder_old = backup_guard.path().join("luks.old");
if tokio::fs::metadata(&luks_folder_old).await.is_ok() {
@@ -342,7 +336,7 @@ async fn perform_backup(
let timestamp = Utc::now();
backup_guard.unencrypted_metadata.version = crate::version::Current::default().semver().into();
backup_guard.unencrypted_metadata.hostname = ctx.account.read().await.hostname.clone();
backup_guard.unencrypted_metadata.hostname = ctx.account.peek(|a| a.hostname.clone());
backup_guard.unencrypted_metadata.timestamp = timestamp.clone();
backup_guard.metadata.version = crate::version::Current::default().semver().into();
backup_guard.metadata.timestamp = Some(timestamp);

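The `perform_backup` hunk above writes `os-backup.json` through `AtomicFile`, so readers never observe a half-written file. A minimal std-only sketch of that write-then-rename pattern (the real helper also handles temp-dir placement and error mapping; this outline is an assumption):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write to a sibling temp path, flush to disk, then rename over the target.
// rename(2) is atomic within a single filesystem, so the target path always
// holds either the old contents or the complete new contents.
fn atomic_write(path: &Path, contents: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    let mut f = fs::File::create(&tmp)?;
    f.write_all(contents)?;
    f.sync_all()?; // ensure data hits disk before the rename makes it visible
    fs::rename(&tmp, path)
}

fn main() -> std::io::Result<()> {
    let target = std::env::temp_dir().join("os-backup.json");
    atomic_write(&target, b"{\"ok\":true}")?;
    assert_eq!(fs::read(&target)?, b"{\"ok\":true}");
    println!("wrote {}", target.display());
    Ok(())
}
```

This is also why the diff can drop the per-call `.with_kind(ErrorKind::Filesystem)` wrappers: the new `AtomicFile` evidently returns the crate's `Error` type directly.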

@@ -1,15 +1,12 @@
use std::collections::BTreeMap;
use chrono::{DateTime, Utc};
use models::{HostId, PackageId};
use reqwest::Url;
use rpc_toolkit::{from_fn_async, Context, HandlerExt, ParentHandler};
use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
use crate::PackageId;
use crate::context::CliContext;
#[allow(unused_imports)]
use crate::prelude::*;
use crate::util::serde::{Base32, Base64};
pub mod backup_bulk;
pub mod os;
@@ -58,13 +55,3 @@ pub fn package_backup<C: Context>() -> ParentHandler<C> {
.with_call_remote::<CliContext>(),
)
}
#[derive(Deserialize, Serialize)]
struct BackupMetadata {
pub timestamp: DateTime<Utc>,
#[serde(default)]
pub network_keys: BTreeMap<HostId, Base64<[u8; 32]>>,
#[serde(default)]
pub tor_keys: BTreeMap<HostId, Base32<[u8; 64]>>, // DEPRECATED
pub registry: Option<Url>,
}


@@ -4,10 +4,10 @@ use openssl::x509::X509;
use patch_db::Value;
use serde::{Deserialize, Serialize};
use ssh_key::private::Ed25519Keypair;
use torut::onion::TorSecretKeyV3;
use crate::account::AccountInfo;
use crate::hostname::{generate_hostname, generate_id, Hostname};
use crate::hostname::{Hostname, generate_hostname, generate_id};
use crate::net::tor::TorSecretKey;
use crate::prelude::*;
use crate::util::crypto::ed25519_expand_key;
use crate::util::serde::{Base32, Base64, Pem};
@@ -36,7 +36,7 @@ impl<'de> Deserialize<'de> for OsBackup {
v => {
return Err(serde::de::Error::custom(&format!(
"Unknown backup version {v}"
)))
)));
}
})
}
@@ -85,8 +85,11 @@ impl OsBackupV0 {
&mut ssh_key::rand_core::OsRng::default(),
ssh_key::Algorithm::Ed25519,
)?,
tor_keys: vec![TorSecretKeyV3::from(self.tor_key.0)],
compat_s9pk_key: ed25519_dalek::SigningKey::generate(
tor_keys: TorSecretKey::from_bytes(self.tor_key.0)
.ok()
.into_iter()
.collect(),
developer_key: ed25519_dalek::SigningKey::generate(
&mut ssh_key::rand_core::OsRng::default(),
),
},
@@ -116,8 +119,11 @@ impl OsBackupV1 {
root_ca_key: self.root_ca_key.0,
root_ca_cert: self.root_ca_cert.0,
ssh_key: ssh_key::PrivateKey::from(Ed25519Keypair::from_seed(&self.net_key.0)),
tor_keys: vec![TorSecretKeyV3::from(ed25519_expand_key(&self.net_key.0))],
compat_s9pk_key: ed25519_dalek::SigningKey::from_bytes(&self.net_key),
tor_keys: TorSecretKey::from_bytes(ed25519_expand_key(&self.net_key.0))
.ok()
.into_iter()
.collect(),
developer_key: ed25519_dalek::SigningKey::from_bytes(&self.net_key),
},
ui: self.ui,
}
@@ -134,7 +140,7 @@ struct OsBackupV2 {
root_ca_key: Pem<PKey<Private>>, // PEM Encoded OpenSSL Key
root_ca_cert: Pem<X509>, // PEM Encoded OpenSSL X509 Certificate
ssh_key: Pem<ssh_key::PrivateKey>, // PEM Encoded OpenSSH Key
tor_keys: Vec<TorSecretKeyV3>, // Base64 Encoded Ed25519 Expanded Secret Key
tor_keys: Vec<TorSecretKey>, // Base64 Encoded Ed25519 Expanded Secret Key
compat_s9pk_key: Pem<ed25519_dalek::SigningKey>, // PEM Encoded ED25519 Key
ui: Value, // JSON Value
}
@@ -149,7 +155,7 @@ impl OsBackupV2 {
root_ca_cert: self.root_ca_cert.0,
ssh_key: self.ssh_key.0,
tor_keys: self.tor_keys,
compat_s9pk_key: self.compat_s9pk_key.0,
developer_key: self.compat_s9pk_key.0,
},
ui: self.ui,
}
@@ -162,7 +168,7 @@ impl OsBackupV2 {
root_ca_cert: Pem(backup.account.root_ca_cert.clone()),
ssh_key: Pem(backup.account.ssh_key.clone()),
tor_keys: backup.account.tor_keys.clone(),
compat_s9pk_key: Pem(backup.account.compat_s9pk_key.clone()),
compat_s9pk_key: Pem(backup.account.developer_key.clone()),
ui: backup.ui.clone(),
}
}


@@ -2,8 +2,7 @@ use std::collections::BTreeMap;
use std::sync::Arc;
use clap::Parser;
use futures::{stream, StreamExt};
use models::PackageId;
use futures::{StreamExt, stream};
use patch_db::json_ptr::ROOT;
use serde::{Deserialize, Serialize};
use tokio::sync::Mutex;
@@ -26,7 +25,7 @@ use crate::service::service_map::DownloadInstallFuture;
use crate::setup::SetupExecuteProgress;
use crate::system::sync_kiosk;
use crate::util::serde::IoFormat;
use crate::PLATFORM;
use crate::{PLATFORM, PackageId};
#[derive(Deserialize, Serialize, Parser, TS)]
#[serde(rename_all = "camelCase")]


@@ -4,17 +4,17 @@ use std::path::{Path, PathBuf};
use clap::Parser;
use color_eyre::eyre::eyre;
use imbl_value::InternedString;
use rpc_toolkit::{from_fn_async, Context, HandlerExt, ParentHandler};
use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
use ts_rs::TS;
use super::{BackupTarget, BackupTargetId};
use crate::context::{CliContext, RpcContext};
use crate::db::model::DatabaseModel;
use crate::disk::mount::filesystem::cifs::Cifs;
use crate::disk::mount::filesystem::ReadOnly;
use crate::disk::mount::filesystem::cifs::Cifs;
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::disk::util::{recovery_info, StartOsRecoveryInfo};
use crate::disk::util::{StartOsRecoveryInfo, recovery_info};
use crate::prelude::*;
use crate::util::serde::KeyVal;


@@ -2,15 +2,14 @@ use std::collections::BTreeMap;
use std::path::{Path, PathBuf};
use chrono::{DateTime, Utc};
use clap::builder::ValueParserFactory;
use clap::Parser;
use clap::builder::ValueParserFactory;
use color_eyre::eyre::eyre;
use digest::generic_array::GenericArray;
use digest::OutputSizeUser;
use digest::generic_array::GenericArray;
use exver::Version;
use imbl_value::InternedString;
use models::{FromStrParser, PackageId};
use rpc_toolkit::{from_fn_async, Context, HandlerExt, ParentHandler};
use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
use sha2::Sha256;
use tokio::sync::Mutex;
@@ -18,6 +17,7 @@ use tracing::instrument;
use ts_rs::TS;
use self::cifs::CifsBackupTarget;
use crate::PackageId;
use crate::context::{CliContext, RpcContext};
use crate::db::model::DatabaseModel;
use crate::disk::mount::backup::BackupMountGuard;
@@ -28,9 +28,9 @@ use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::disk::util::PartitionInfo;
use crate::prelude::*;
use crate::util::serde::{
deserialize_from_str, display_serializable, serialize_display, HandlerExtSerde, WithIoFormat,
HandlerExtSerde, WithIoFormat, deserialize_from_str, display_serializable, serialize_display,
};
use crate::util::VersionString;
use crate::util::{FromStrParser, VersionString};
pub mod cifs;
@@ -301,14 +301,14 @@ lazy_static::lazy_static! {
Mutex::new(BTreeMap::new());
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct MountParams {
target_id: BackupTargetId,
#[arg(long)]
server_id: Option<String>,
password: String,
password: String, // TODO: rpassword
#[arg(long)]
allow_partial: bool,
}


@@ -1,68 +1,85 @@
use std::collections::VecDeque;
use std::collections::{BTreeMap, VecDeque};
use std::ffi::OsString;
use std::path::Path;
#[cfg(feature = "container-runtime")]
pub mod container_cli;
pub mod deprecated;
#[cfg(feature = "registry")]
pub mod registry;
#[cfg(feature = "cli")]
pub mod start_cli;
#[cfg(feature = "daemon")]
pub mod start_init;
#[cfg(feature = "daemon")]
pub mod startd;
pub mod tunnel;
fn select_executable(name: &str) -> Option<fn(VecDeque<OsString>)> {
match name {
#[cfg(feature = "cli")]
"start-cli" => Some(start_cli::main),
#[cfg(feature = "container-runtime")]
"start-cli" => Some(container_cli::main),
#[cfg(feature = "daemon")]
"startd" => Some(startd::main),
#[cfg(feature = "registry")]
"registry" => Some(registry::main),
"embassy-cli" => Some(|_| deprecated::renamed("embassy-cli", "start-cli")),
"embassy-sdk" => Some(|_| deprecated::renamed("embassy-sdk", "start-sdk")),
"embassyd" => Some(|_| deprecated::renamed("embassyd", "startd")),
"embassy-init" => Some(|_| deprecated::removed("embassy-init")),
"contents" => Some(|_| {
#[cfg(feature = "cli")]
println!("start-cli");
#[cfg(feature = "container-runtime")]
println!("start-cli (container)");
#[cfg(feature = "daemon")]
println!("startd");
#[cfg(feature = "registry")]
println!("registry");
}),
_ => None,
#[derive(Default)]
pub struct MultiExecutable(BTreeMap<&'static str, fn(VecDeque<OsString>)>);
impl MultiExecutable {
pub fn enable_startd(&mut self) -> &mut Self {
self.0.insert("startd", startd::main);
self.0
.insert("embassyd", |_| deprecated::renamed("embassyd", "startd"));
self.0
.insert("embassy-init", |_| deprecated::removed("embassy-init"));
self
}
pub fn enable_start_cli(&mut self) -> &mut Self {
self.0.insert("start-cli", start_cli::main);
self.0.insert("embassy-cli", |_| {
deprecated::renamed("embassy-cli", "start-cli")
});
self.0
.insert("embassy-sdk", |_| deprecated::removed("embassy-sdk"));
self
}
pub fn enable_start_container(&mut self) -> &mut Self {
self.0.insert("start-container", container_cli::main);
self
}
pub fn enable_start_registryd(&mut self) -> &mut Self {
self.0.insert("start-registryd", registry::main);
self
}
pub fn enable_start_registry(&mut self) -> &mut Self {
self.0.insert("start-registry", registry::cli);
self
}
pub fn enable_start_tunneld(&mut self) -> &mut Self {
self.0.insert("start-tunneld", tunnel::main);
self
}
pub fn enable_start_tunnel(&mut self) -> &mut Self {
self.0.insert("start-tunnel", tunnel::cli);
self
}
}
pub fn startbox() {
let mut args = std::env::args_os().collect::<VecDeque<_>>();
for _ in 0..2 {
if let Some(s) = args.pop_front() {
if let Some(x) = Path::new(&*s)
.file_name()
.and_then(|s| s.to_str())
.and_then(|s| select_executable(&s))
{
args.push_front(s);
return x(args);
fn select_executable(&self, name: &str) -> Option<fn(VecDeque<OsString>)> {
self.0.get(&name).copied()
}
pub fn execute(&self) {
let mut args = std::env::args_os().collect::<VecDeque<_>>();
for _ in 0..2 {
if let Some(s) = args.pop_front() {
if let Some(name) = Path::new(&*s).file_name().and_then(|s| s.to_str()) {
if name == "--contents" {
for name in self.0.keys() {
println!("{name}");
}
}
if let Some(x) = self.select_executable(&name) {
args.push_front(s);
return x(args);
}
}
}
}
let args = std::env::args().collect::<VecDeque<_>>();
eprintln!(
"unknown executable: {}",
args.get(1)
.or_else(|| args.get(0))
.map(|s| s.as_str())
.unwrap_or("N/A")
);
std::process::exit(1);
}
let args = std::env::args().collect::<VecDeque<_>>();
eprintln!(
"unknown executable: {}",
args.get(1)
.or_else(|| args.get(0))
.map(|s| s.as_str())
.unwrap_or("N/A")
);
std::process::exit(1);
}

View File

@@ -2,20 +2,26 @@ use std::ffi::OsString;
use clap::Parser;
use futures::FutureExt;
use rpc_toolkit::CliApp;
use tokio::signal::unix::signal;
use tracing::instrument;
use crate::context::CliContext;
use crate::context::config::ClientConfig;
use crate::net::web_server::{Acceptor, WebServer};
use crate::prelude::*;
use crate::registry::context::{RegistryConfig, RegistryContext};
use crate::registry::registry_router;
use crate::util::logger::LOGGER;
#[instrument(skip_all)]
async fn inner_main(config: &RegistryConfig) -> Result<(), Error> {
let server = async {
let ctx = RegistryContext::init(config).await?;
let mut server = WebServer::new(Acceptor::bind([ctx.listen]).await?);
server.serve_registry(ctx.clone());
let server = WebServer::new(
Acceptor::bind([ctx.listen]).await?,
registry_router(ctx.clone()),
);
let mut shutdown_recv = ctx.shutdown.subscribe();
@@ -85,3 +91,30 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
}
}
}
pub fn cli(args: impl IntoIterator<Item = OsString>) {
LOGGER.enable();
if let Err(e) = CliApp::new(
|cfg: ClientConfig| Ok(CliContext::init(cfg.load()?)?),
crate::registry::registry_api(),
)
.run(args)
{
match e.data {
Some(serde_json::Value::String(s)) => eprintln!("{}: {}", e.message, s),
Some(serde_json::Value::Object(o)) => {
if let Some(serde_json::Value::String(s)) = o.get("details") {
eprintln!("{}: {}", e.message, s);
if let Some(serde_json::Value::String(s)) = o.get("debug") {
tracing::debug!("{}", s)
}
}
}
Some(a) => eprintln!("{}: {}", e.message, a),
None => eprintln!("{}", e.message),
}
std::process::exit(e.code);
}
}

View File

@@ -3,8 +3,8 @@ use std::ffi::OsString;
use rpc_toolkit::CliApp;
use serde_json::Value;
use crate::context::config::ClientConfig;
use crate::context::CliContext;
use crate::context::config::ClientConfig;
use crate::util::logger::LOGGER;
use crate::version::{Current, VersionT};
@@ -17,7 +17,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
if let Err(e) = CliApp::new(
|cfg: ClientConfig| Ok(CliContext::init(cfg.load()?)?),
crate::expanded_api(),
crate::main_api(),
)
.run(args)
{

View File

@@ -1,4 +1,3 @@
use std::path::Path;
use std::sync::Arc;
use tokio::process::Command;
@@ -7,12 +6,13 @@ use tracing::instrument;
use crate::context::config::ServerConfig;
use crate::context::rpc::InitRpcContextPhases;
use crate::context::{DiagnosticContext, InitContext, InstallContext, RpcContext, SetupContext};
use crate::disk::REPAIR_DISK_PATH;
use crate::disk::fsck::RepairStrategy;
use crate::disk::main::DEFAULT_PASSWORD;
use crate::disk::REPAIR_DISK_PATH;
use crate::firmware::{check_for_firmware_update, update_firmware};
use crate::init::{InitPhases, STANDBY_MODE_PATH};
use crate::net::web_server::{UpgradableListener, WebServer};
use crate::net::gateway::UpgradableListener;
use crate::net::web_server::WebServer;
use crate::prelude::*;
use crate::progress::FullProgressTracker;
use crate::shutdown::Shutdown;
@@ -38,7 +38,7 @@ async fn setup_or_init(
let mut update_phase = handle.add_phase("Updating Firmware".into(), Some(10));
let mut reboot_phase = handle.add_phase("Rebooting".into(), Some(1));
server.serve_init(init_ctx);
server.serve_ui_for(init_ctx);
update_phase.start();
if let Err(e) = update_firmware(firmware).await {
@@ -48,7 +48,7 @@ async fn setup_or_init(
update_phase.complete();
reboot_phase.start();
return Ok(Err(Shutdown {
export_args: None,
disk_guid: None,
restart: true,
}));
}
@@ -94,7 +94,7 @@ async fn setup_or_init(
let ctx = InstallContext::init().await?;
server.serve_install(ctx.clone());
server.serve_ui_for(ctx.clone());
ctx.shutdown
.subscribe()
@@ -103,7 +103,7 @@ async fn setup_or_init(
.expect("context dropped");
return Ok(Err(Shutdown {
export_args: None,
disk_guid: None,
restart: true,
}));
}
@@ -114,10 +114,12 @@ async fn setup_or_init(
{
let ctx = SetupContext::init(server, config)?;
server.serve_setup(ctx.clone());
server.serve_ui_for(ctx.clone());
let mut shutdown = ctx.shutdown.subscribe();
shutdown.recv().await.expect("context dropped");
if let Some(shutdown) = shutdown.recv().await.expect("context dropped") {
return Ok(Err(shutdown));
}
tokio::task::yield_now().await;
if let Err(e) = Command::new("killall")
@@ -136,7 +138,7 @@ async fn setup_or_init(
return Err(Error::new(
eyre!("Setup mode exited before setup completed"),
ErrorKind::Unknown,
))
));
}
}))
} else {
@@ -148,7 +150,7 @@ async fn setup_or_init(
let init_phases = InitPhases::new(&handle);
let rpc_ctx_phases = InitRpcContextPhases::new(&handle);
server.serve_init(init_ctx);
server.serve_ui_for(init_ctx);
async {
disk_phase.start();
@@ -183,7 +185,7 @@ async fn setup_or_init(
let mut reboot_phase = handle.add_phase("Rebooting".into(), Some(1));
reboot_phase.start();
return Ok(Err(Shutdown {
export_args: Some((disk_guid, Path::new(DATA_DIR).to_owned())),
disk_guid: Some(disk_guid),
restart: true,
}));
}
@@ -246,7 +248,7 @@ pub async fn main(
e,
)?;
server.serve_diagnostic(ctx.clone());
server.serve_ui_for(ctx.clone());
let shutdown = ctx.shutdown.subscribe().recv().await.unwrap();

View File

@@ -12,8 +12,9 @@ use tracing::instrument;
use crate::context::config::ServerConfig;
use crate::context::rpc::InitRpcContextPhases;
use crate::context::{DiagnosticContext, InitContext, RpcContext};
use crate::net::network_interface::SelfContainedNetworkInterfaceListener;
use crate::net::web_server::{Acceptor, UpgradableListener, WebServer};
use crate::net::gateway::{BindTcp, SelfContainedNetworkInterfaceListener, UpgradableListener};
use crate::net::static_server::refresher;
use crate::net::web_server::{Acceptor, WebServer};
use crate::shutdown::Shutdown;
use crate::system::launch_metrics_task;
use crate::util::io::append_file;
@@ -38,7 +39,7 @@ async fn inner_main(
};
tokio::fs::write("/run/startos/initialized", "").await?;
server.serve_main(ctx.clone());
server.serve_ui_for(ctx.clone());
LOGGER.set_logfile(None);
handle.complete();
@@ -47,7 +48,7 @@ async fn inner_main(
let init_ctx = InitContext::init(config).await?;
let handle = init_ctx.progress.clone();
let rpc_ctx_phases = InitRpcContextPhases::new(&handle);
server.serve_init(init_ctx);
server.serve_ui_for(init_ctx);
let ctx = RpcContext::init(
&server.acceptor_setter(),
@@ -63,14 +64,14 @@ async fn inner_main(
)
.await?;
server.serve_main(ctx.clone());
server.serve_ui_for(ctx.clone());
handle.complete();
ctx
};
let (rpc_ctx, shutdown) = async {
crate::hostname::sync_hostname(&rpc_ctx.account.read().await.hostname).await?;
crate::hostname::sync_hostname(&rpc_ctx.account.peek(|a| a.hostname.clone())).await?;
let mut shutdown_recv = rpc_ctx.shutdown.subscribe();
@@ -132,8 +133,6 @@ async fn inner_main(
.await?;
rpc_ctx.shutdown().await?;
tracing::info!("RPC Context is dropped");
Ok(shutdown)
}
@@ -144,14 +143,15 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
let res = {
let rt = tokio::runtime::Builder::new_multi_thread()
.worker_threads(max(4, num_cpus::get()))
.worker_threads(max(1, num_cpus::get()))
.enable_all()
.build()
.expect("failed to initialize runtime");
let res = rt.block_on(async {
let mut server = WebServer::new(Acceptor::bind_upgradable(
SelfContainedNetworkInterfaceListener::bind(80),
));
let mut server = WebServer::new(
Acceptor::bind_upgradable(SelfContainedNetworkInterfaceListener::bind(BindTcp, 80)),
refresher(),
);
match inner_main(&mut server, &config).await {
Ok(a) => {
server.shutdown().await;
@@ -179,7 +179,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
e,
)?;
server.serve_diagnostic(ctx.clone());
server.serve_ui_for(ctx.clone());
let mut shutdown = ctx.shutdown.subscribe();

View File

@@ -0,0 +1,200 @@
use std::ffi::OsString;
use std::net::SocketAddr;
use std::sync::Arc;
use std::time::Duration;
use clap::Parser;
use futures::FutureExt;
use rpc_toolkit::CliApp;
use tokio::signal::unix::signal;
use tracing::instrument;
use visit_rs::Visit;
use crate::context::CliContext;
use crate::context::config::ClientConfig;
use crate::net::gateway::{Bind, BindTcp};
use crate::net::tls::TlsListener;
use crate::net::web_server::{Accept, Acceptor, MetadataVisitor, WebServer};
use crate::prelude::*;
use crate::tunnel::context::{TunnelConfig, TunnelContext};
use crate::tunnel::tunnel_router;
use crate::tunnel::web::TunnelCertHandler;
use crate::util::future::NonDetachingJoinHandle;
use crate::util::logger::LOGGER;
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum WebserverListener {
Http,
Https(SocketAddr),
}
impl<V: MetadataVisitor> Visit<V> for WebserverListener {
fn visit(&self, visitor: &mut V) -> <V as visit_rs::Visitor>::Result {
visitor.visit(self)
}
}
#[instrument(skip_all)]
async fn inner_main(config: &TunnelConfig) -> Result<(), Error> {
let server = async {
let ctx = TunnelContext::init(config).await?;
let listen = ctx.listen;
let server = WebServer::new(
Acceptor::bind_map_dyn([(WebserverListener::Http, listen)]).await?,
tunnel_router(ctx.clone()),
);
let acceptor_setter = server.acceptor_setter();
let https_db = ctx.db.clone();
let https_thread: NonDetachingJoinHandle<()> = tokio::spawn(async move {
let mut sub = https_db.subscribe("/webserver".parse().unwrap()).await;
while {
while let Err(e) = async {
let webserver = https_db.peek().await.into_webserver();
if webserver.as_enabled().de()? {
let addr = webserver.as_listen().de()?.or_not_found("listen address")?;
acceptor_setter.send_if_modified(|a| {
let key = WebserverListener::Https(addr);
if !a.contains_key(&key) {
match (|| {
Ok::<_, Error>(TlsListener::new(
BindTcp.bind(addr)?,
TunnelCertHandler {
db: https_db.clone(),
crypto_provider: Arc::new(tokio_rustls::rustls::crypto::ring::default_provider()),
},
))
})() {
Ok(l) => {
a.retain(|k, _| *k == WebserverListener::Http);
a.insert(key, l.into_dyn());
true
}
Err(e) => {
tracing::error!("error adding ssl listener: {e}");
tracing::debug!("{e:?}");
false
}
}
} else {
false
}
});
} else {
acceptor_setter.send_if_modified(|a| {
let before = a.len();
a.retain(|k, _| *k == WebserverListener::Http);
a.len() != before
});
}
Ok::<_, Error>(())
}
.await
{
tracing::error!("error updating webserver bind: {e}");
tracing::debug!("{e:?}");
tokio::time::sleep(Duration::from_secs(5)).await;
}
sub.recv().await.is_some()
} {}
})
.into();
let mut shutdown_recv = ctx.shutdown.subscribe();
let sig_handler_ctx = ctx;
let sig_handler: NonDetachingJoinHandle<()> = tokio::spawn(async move {
use tokio::signal::unix::SignalKind;
futures::future::select_all(
[
SignalKind::interrupt(),
SignalKind::quit(),
SignalKind::terminate(),
]
.iter()
.map(|s| {
async move {
signal(*s)
.unwrap_or_else(|_| panic!("register {:?} handler", s))
.recv()
.await
}
.boxed()
}),
)
.await;
sig_handler_ctx
.shutdown
.send(())
.map_err(|_| ())
.expect("send shutdown signal");
})
.into();
shutdown_recv
.recv()
.await
.with_kind(crate::ErrorKind::Unknown)?;
sig_handler.wait_for_abort().await.with_kind(ErrorKind::Unknown)?;
https_thread.wait_for_abort().await.with_kind(ErrorKind::Unknown)?;
Ok::<_, Error>(server)
}
.await?;
server.shutdown().await;
Ok(())
}
pub fn main(args: impl IntoIterator<Item = OsString>) {
LOGGER.enable();
let config = TunnelConfig::parse_from(args).load().unwrap();
let res = {
let rt = tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.expect("failed to initialize runtime");
rt.block_on(inner_main(&config))
};
match res {
Ok(()) => (),
Err(e) => {
eprintln!("{}", e.source);
tracing::debug!("{:?}", e.source);
drop(e.source);
std::process::exit(e.kind as i32)
}
}
}
pub fn cli(args: impl IntoIterator<Item = OsString>) {
LOGGER.enable();
if let Err(e) = CliApp::new(
|cfg: ClientConfig| Ok(CliContext::init(cfg.load()?)?),
crate::tunnel::api::tunnel_api(),
)
.run(args)
{
match e.data {
Some(serde_json::Value::String(s)) => eprintln!("{}: {}", e.message, s),
Some(serde_json::Value::Object(o)) => {
if let Some(serde_json::Value::String(s)) = o.get("details") {
eprintln!("{}: {}", e.message, s);
if let Some(serde_json::Value::String(s)) = o.get("debug") {
tracing::debug!("{}", s)
}
}
}
Some(a) => eprintln!("{}: {}", e.message, a),
None => eprintln!("{}", e.message),
}
std::process::exit(e.code);
}
}

View File

@@ -1,7 +1,7 @@
use helpers::Callback;
use itertools::Itertools;
use jsonpath_lib::Compiled;
use models::PackageId;
use crate::PackageId;
use serde_json::Value;
use crate::context::RpcContext;

View File

@@ -1,27 +1,33 @@
use std::fs::File;
use std::io::BufReader;
use std::net::SocketAddr;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use cookie_store::{CookieStore, RawCookie};
use cookie::{Cookie, Expiration, SameSite};
use cookie_store::CookieStore;
use http::HeaderMap;
use imbl_value::InternedString;
use josekit::jwk::Jwk;
use once_cell::sync::OnceCell;
use reqwest::Proxy;
use reqwest_cookie_store::CookieStoreMutex;
use rpc_toolkit::reqwest::{Client, Url};
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{call_remote_http, CallRemote, Context, Empty};
use rpc_toolkit::{CallRemote, Context, Empty};
use tokio::net::TcpStream;
use tokio::runtime::Runtime;
use tokio_tungstenite::{MaybeTlsStream, WebSocketStream};
use tracing::instrument;
use super::setup::CURRENT_SECRET;
use crate::context::config::{local_config_path, ClientConfig};
use crate::context::config::{ClientConfig, local_config_path};
use crate::context::{DiagnosticContext, InitContext, InstallContext, RpcContext, SetupContext};
use crate::middleware::auth::LOCAL_AUTH_COOKIE_PATH;
use crate::developer::{OS_DEVELOPER_KEY_PATH, default_developer_key_path};
use crate::middleware::auth::local::LocalAuthContext;
use crate::prelude::*;
use crate::rpc_continuations::Guid;
use crate::util::io::read_file_to_string;
#[derive(Debug)]
pub struct CliContextSeed {
@@ -29,6 +35,10 @@ pub struct CliContextSeed {
pub base_url: Url,
pub rpc_url: Url,
pub registry_url: Option<Url>,
pub registry_hostname: Vec<InternedString>,
pub registry_listen: Option<SocketAddr>,
pub tunnel_addr: Option<SocketAddr>,
pub tunnel_listen: Option<SocketAddr>,
pub client: Client,
pub cookie_store: Arc<CookieStoreMutex>,
pub cookie_path: PathBuf,
@@ -55,9 +65,8 @@ impl Drop for CliContextSeed {
true,
)
.unwrap();
let mut store = self.cookie_store.lock().unwrap();
store.remove("localhost", "", "local");
store.save_json(&mut *writer).unwrap();
let store = self.cookie_store.lock().unwrap();
cookie_store::serde::json::save(&store, &mut *writer).unwrap();
writer.sync_all().unwrap();
std::fs::rename(tmp, &self.cookie_path).unwrap();
}
@@ -85,26 +94,14 @@ impl CliContext {
.unwrap_or(Path::new("/"))
.join(".cookies.json")
});
let cookie_store = Arc::new(CookieStoreMutex::new({
let mut store = if cookie_path.exists() {
CookieStore::load_json(BufReader::new(
File::open(&cookie_path)
.with_ctx(|_| (ErrorKind::Filesystem, cookie_path.display()))?,
))
.map_err(|e| eyre!("{}", e))
.with_kind(crate::ErrorKind::Deserialization)?
} else {
CookieStore::default()
};
if let Ok(local) = std::fs::read_to_string(LOCAL_AUTH_COOKIE_PATH) {
store
.insert_raw(
&RawCookie::new("local", local),
&"http://localhost".parse()?,
)
.with_kind(crate::ErrorKind::Network)?;
}
store
let cookie_store = Arc::new(CookieStoreMutex::new(if cookie_path.exists() {
cookie_store::serde::json::load(BufReader::new(
File::open(&cookie_path)
.with_ctx(|_| (ErrorKind::Filesystem, cookie_path.display()))?,
))
.unwrap_or_default()
} else {
CookieStore::default()
}));
Ok(CliContext(Arc::new(CliContextSeed {
@@ -129,9 +126,17 @@ impl CliContext {
Ok::<_, Error>(registry)
})
.transpose()?,
registry_hostname: config.registry_hostname.unwrap_or_default(),
registry_listen: config.registry_listen,
tunnel_addr: config.tunnel,
tunnel_listen: config.tunnel_listen,
client: {
let mut builder = Client::builder().cookie_provider(cookie_store.clone());
if let Some(proxy) = config.proxy {
if let Some(proxy) = config.proxy.or_else(|| {
config
.socks_listen
.and_then(|socks| format!("socks5h://{socks}").parse::<Url>().log_err())
}) {
builder =
builder.proxy(Proxy::all(proxy).with_kind(crate::ErrorKind::ParseUrl)?)
}
@@ -139,14 +144,9 @@ impl CliContext {
},
cookie_store,
cookie_path,
developer_key_path: config.developer_key_path.unwrap_or_else(|| {
local_config_path()
.as_deref()
.unwrap_or_else(|| Path::new(super::config::CONFIG_PATH))
.parent()
.unwrap_or(Path::new("/"))
.join("developer.key.pem")
}),
developer_key_path: config
.developer_key_path
.unwrap_or_else(default_developer_key_path),
developer_key: OnceCell::new(),
})))
}
@@ -155,20 +155,26 @@ impl CliContext {
#[instrument(skip_all)]
pub fn developer_key(&self) -> Result<&ed25519_dalek::SigningKey, Error> {
self.developer_key.get_or_try_init(|| {
if !self.developer_key_path.exists() {
return Err(Error::new(eyre!("Developer Key does not exist! Please run `start-cli init` before running this command."), crate::ErrorKind::Uninitialized));
}
let pair = <ed25519::KeypairBytes as ed25519::pkcs8::DecodePrivateKey>::from_pkcs8_pem(
&std::fs::read_to_string(&self.developer_key_path)?,
)
.with_kind(crate::ErrorKind::Pem)?;
let secret = ed25519_dalek::SecretKey::try_from(&pair.secret_key[..]).map_err(|_| {
Error::new(
eyre!("pkcs8 key is of incorrect length"),
ErrorKind::OpenSsl,
for path in [Path::new(OS_DEVELOPER_KEY_PATH), &self.developer_key_path] {
if !path.exists() {
continue;
}
let pair = <ed25519::KeypairBytes as ed25519::pkcs8::DecodePrivateKey>::from_pkcs8_pem(
&std::fs::read_to_string(path)?,
)
})?;
Ok(secret.into())
.with_kind(crate::ErrorKind::Pem)?;
let secret = ed25519_dalek::SecretKey::try_from(&pair.secret_key[..]).map_err(|_| {
Error::new(
eyre!("pkcs8 key is of incorrect length"),
ErrorKind::OpenSsl,
)
})?;
return Ok(secret.into())
}
Err(Error::new(
eyre!("Developer Key does not exist! Please run `start-cli init-key` before running this command."),
crate::ErrorKind::Uninitialized
))
})
}
@@ -185,7 +191,7 @@ impl CliContext {
eyre!("Cannot parse scheme from base URL"),
crate::ErrorKind::ParseUrl,
)
.into())
.into());
}
};
url.set_scheme(ws_scheme)
@@ -228,23 +234,28 @@ impl CliContext {
&self,
method: &str,
params: Value,
) -> Result<Value, RpcError>
) -> Result<Value, Error>
where
Self: CallRemote<RemoteContext>,
{
<Self as CallRemote<RemoteContext, Empty>>::call_remote(&self, method, params, Empty {})
.await
.map_err(Error::from)
.with_ctx(|e| (e.kind, method))
}
pub async fn call_remote_with<RemoteContext, T>(
&self,
method: &str,
params: Value,
extra: T,
) -> Result<Value, RpcError>
) -> Result<Value, Error>
where
Self: CallRemote<RemoteContext, T>,
{
<Self as CallRemote<RemoteContext, T>>::call_remote(&self, method, params, extra).await
<Self as CallRemote<RemoteContext, T>>::call_remote(&self, method, params, extra)
.await
.map_err(Error::from)
.with_ctx(|e| (e.kind, method))
}
}
impl AsRef<Jwk> for CliContext {
@@ -274,40 +285,88 @@ impl Context for CliContext {
)
}
}
impl AsRef<Client> for CliContext {
fn as_ref(&self) -> &Client {
&self.client
}
}
impl CallRemote<RpcContext> for CliContext {
async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
call_remote_http(&self.client, self.rpc_url.clone(), method, params).await
if let Ok(local) = read_file_to_string(RpcContext::LOCAL_AUTH_COOKIE_PATH).await {
self.cookie_store
.lock()
.unwrap()
.insert_raw(
&Cookie::build(("local", local))
.domain("localhost")
.expires(Expiration::Session)
.same_site(SameSite::Strict)
.build(),
&"http://localhost".parse()?,
)
.with_kind(crate::ErrorKind::Network)?;
}
crate::middleware::auth::signature::call_remote(
self,
self.rpc_url.clone(),
HeaderMap::new(),
self.rpc_url.host_str(),
method,
params,
)
.await
}
}
impl CallRemote<DiagnosticContext> for CliContext {
async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
call_remote_http(&self.client, self.rpc_url.clone(), method, params).await
crate::middleware::auth::signature::call_remote(
self,
self.rpc_url.clone(),
HeaderMap::new(),
self.rpc_url.host_str(),
method,
params,
)
.await
}
}
impl CallRemote<InitContext> for CliContext {
async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
call_remote_http(&self.client, self.rpc_url.clone(), method, params).await
crate::middleware::auth::signature::call_remote(
self,
self.rpc_url.clone(),
HeaderMap::new(),
self.rpc_url.host_str(),
method,
params,
)
.await
}
}
impl CallRemote<SetupContext> for CliContext {
async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
call_remote_http(&self.client, self.rpc_url.clone(), method, params).await
crate::middleware::auth::signature::call_remote(
self,
self.rpc_url.clone(),
HeaderMap::new(),
self.rpc_url.host_str(),
method,
params,
)
.await
}
}
impl CallRemote<InstallContext> for CliContext {
async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
call_remote_http(&self.client, self.rpc_url.clone(), method, params).await
crate::middleware::auth::signature::call_remote(
self,
self.rpc_url.clone(),
HeaderMap::new(),
self.rpc_url.host_str(),
method,
params,
)
.await
}
}
#[test]
fn test() {
let ctx = CliContext::init(ClientConfig::default()).unwrap();
ctx.runtime().unwrap().block_on(async {
reqwest::Client::new()
.get("http://example.com")
.send()
.await
.unwrap();
});
}

View File

@@ -3,18 +3,16 @@ use std::net::SocketAddr;
use std::path::{Path, PathBuf};
use clap::Parser;
use imbl_value::InternedString;
use reqwest::Url;
use serde::de::DeserializeOwned;
use serde::{Deserialize, Serialize};
use sqlx::postgres::PgConnectOptions;
use sqlx::PgPool;
use crate::MAIN_DATA;
use crate::disk::OsPartitionInfo;
use crate::init::init_postgres;
use crate::prelude::*;
use crate::util::serde::IoFormat;
use crate::version::VersionT;
use crate::MAIN_DATA;
pub const DEVICE_CONFIG_PATH: &str = "/media/startos/config/config.yaml";
pub const CONFIG_PATH: &str = "/etc/startos/config.yaml";
@@ -58,7 +56,6 @@ pub trait ContextConfig: DeserializeOwned + Default {
#[derive(Debug, Default, Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
#[command(name = "start-cli")]
#[command(version = crate::version::Current::default().semver().to_string())]
pub struct ClientConfig {
#[arg(short = 'c', long)]
@@ -67,8 +64,18 @@ pub struct ClientConfig {
pub host: Option<Url>,
#[arg(short = 'r', long)]
pub registry: Option<Url>,
#[arg(long)]
pub registry_hostname: Option<Vec<InternedString>>,
#[arg(skip)]
pub registry_listen: Option<SocketAddr>,
#[arg(short = 't', long)]
pub tunnel: Option<SocketAddr>,
#[arg(skip)]
pub tunnel_listen: Option<SocketAddr>,
#[arg(short = 'p', long)]
pub proxy: Option<Url>,
#[arg(skip)]
pub socks_listen: Option<SocketAddr>,
#[arg(long)]
pub cookie_path: Option<PathBuf>,
#[arg(long)]
@@ -81,6 +88,8 @@ impl ContextConfig for ClientConfig {
fn merge_with(&mut self, other: Self) {
self.host = self.host.take().or(other.host);
self.registry = self.registry.take().or(other.registry);
self.registry_hostname = self.registry_hostname.take().or(other.registry_hostname);
self.tunnel = self.tunnel.take().or(other.tunnel);
self.proxy = self.proxy.take().or(other.proxy);
self.cookie_path = self.cookie_path.take().or(other.cookie_path);
self.developer_key_path = self.developer_key_path.take().or(other.developer_key_path);
@@ -107,15 +116,15 @@ pub struct ServerConfig {
#[arg(skip)]
pub os_partitions: Option<OsPartitionInfo>,
#[arg(long)]
pub tor_control: Option<SocketAddr>,
#[arg(long)]
pub tor_socks: Option<SocketAddr>,
pub socks_listen: Option<SocketAddr>,
#[arg(long)]
pub revision_cache_size: Option<usize>,
#[arg(long)]
pub disable_encryption: Option<bool>,
#[arg(long)]
pub multi_arch_s9pks: Option<bool>,
#[arg(long)]
pub developer_key_path: Option<PathBuf>,
}
impl ContextConfig for ServerConfig {
fn next(&mut self) -> Option<PathBuf> {
@@ -124,14 +133,14 @@ impl ContextConfig for ServerConfig {
fn merge_with(&mut self, other: Self) {
self.ethernet_interface = self.ethernet_interface.take().or(other.ethernet_interface);
self.os_partitions = self.os_partitions.take().or(other.os_partitions);
self.tor_control = self.tor_control.take().or(other.tor_control);
self.tor_socks = self.tor_socks.take().or(other.tor_socks);
self.socks_listen = self.socks_listen.take().or(other.socks_listen);
self.revision_cache_size = self
.revision_cache_size
.take()
.or(other.revision_cache_size);
self.disable_encryption = self.disable_encryption.take().or(other.disable_encryption);
self.multi_arch_s9pks = self.multi_arch_s9pks.take().or(other.multi_arch_s9pks);
self.developer_key_path = self.developer_key_path.take().or(other.developer_key_path);
}
}
@@ -151,16 +160,4 @@ impl ServerConfig {
Ok(db)
}
#[instrument(skip_all)]
pub async fn secret_store(&self) -> Result<PgPool, Error> {
init_postgres("/media/startos/data").await?;
let secret_store =
PgPool::connect_with(PgConnectOptions::new().database("secrets").username("root"))
.await?;
sqlx::migrate!()
.run(&secret_store)
.await
.with_kind(crate::ErrorKind::Database)?;
Ok(secret_store)
}
}

View File

@@ -1,15 +1,15 @@
use std::ops::Deref;
use std::sync::Arc;
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::Context;
use rpc_toolkit::yajrc::RpcError;
use tokio::sync::broadcast::Sender;
use tracing::instrument;
use crate::Error;
use crate::context::config::ServerConfig;
use crate::rpc_continuations::RpcContinuations;
use crate::shutdown::Shutdown;
use crate::Error;
pub struct DiagnosticContextSeed {
pub shutdown: Sender<Shutdown>,

View File

@@ -6,10 +6,10 @@ use tokio::sync::broadcast::Sender;
use tokio::sync::watch;
use tracing::instrument;
use crate::Error;
use crate::context::config::ServerConfig;
use crate::progress::FullProgressTracker;
use crate::rpc_continuations::RpcContinuations;
use crate::Error;
pub struct InitContextSeed {
pub config: ServerConfig,
@@ -25,10 +25,12 @@ impl InitContext {
#[instrument(skip_all)]
pub async fn init(cfg: &ServerConfig) -> Result<Self, Error> {
let (shutdown, _) = tokio::sync::broadcast::channel(1);
let mut progress = FullProgressTracker::new();
progress.enable_logging(true);
Ok(Self(Arc::new(InitContextSeed {
config: cfg.clone(),
error: watch::channel(None).0,
progress: FullProgressTracker::new(),
progress,
shutdown,
rpc_continuations: RpcContinuations::new(),
})))

View File

@@ -5,9 +5,9 @@ use rpc_toolkit::Context;
use tokio::sync::broadcast::Sender;
use tracing::instrument;
use crate::Error;
use crate::net::utils::find_eth_iface;
use crate::rpc_continuations::RpcContinuations;
use crate::Error;
pub struct InstallContextSeed {
pub ethernet_interface: String,

View File

@@ -1,22 +1,21 @@
use std::collections::{BTreeMap, BTreeSet};
use std::ffi::OsStr;
use std::future::Future;
use std::net::{Ipv4Addr, SocketAddr, SocketAddrV4};
use std::ops::Deref;
use std::sync::atomic::{AtomicBool, Ordering};
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::Duration;
use chrono::{TimeDelta, Utc};
use helpers::NonDetachingJoinHandle;
use imbl::OrdMap;
use imbl_value::InternedString;
use itertools::Itertools;
use josekit::jwk::Jwk;
use models::{ActionId, PackageId};
use reqwest::{Client, Proxy};
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{CallRemote, Context, Empty};
use tokio::sync::{broadcast, oneshot, watch, Mutex, RwLock};
use tokio::sync::{RwLock, broadcast, oneshot, watch};
use tokio::time::Instant;
use tracing::instrument;
@@ -24,24 +23,30 @@ use super::setup::CURRENT_SECRET;
use crate::account::AccountInfo;
use crate::auth::Sessions;
use crate::context::config::ServerConfig;
use crate::db::model::package::TaskSeverity;
use crate::db::model::Database;
use crate::db::model::package::TaskSeverity;
use crate::disk::OsPartitionInfo;
use crate::init::{check_time_is_synchronized, InitResult};
use crate::lxc::{ContainerId, LxcContainer, LxcManager};
use crate::init::{InitResult, check_time_is_synchronized};
use crate::install::PKG_ARCHIVE_DIR;
use crate::lxc::LxcManager;
use crate::net::gateway::UpgradableListener;
use crate::net::net_controller::{NetController, NetService};
use crate::net::socks::DEFAULT_SOCKS_LISTEN;
use crate::net::utils::{find_eth_iface, find_wifi_iface};
use crate::net::web_server::{UpgradableListener, WebServerAcceptorSetter};
use crate::net::web_server::WebServerAcceptorSetter;
use crate::net::wifi::WpaCli;
use crate::prelude::*;
use crate::progress::{FullProgressTracker, PhaseProgressTrackerHandle};
use crate::rpc_continuations::{Guid, OpenAuthedContinuations, RpcContinuations};
use crate::service::ServiceMap;
use crate::service::action::update_tasks;
use crate::service::effects::callbacks::ServiceCallbacks;
use crate::service::ServiceMap;
use crate::shutdown::Shutdown;
use crate::util::future::NonDetachingJoinHandle;
use crate::util::io::delete_file;
use crate::util::lshw::LshwDevice;
use crate::util::sync::{SyncMutex, Watch};
use crate::util::sync::{SyncMutex, SyncRwLock, Watch};
use crate::{ActionId, DATA_DIR, PackageId};
pub struct RpcContextSeed {
is_closed: AtomicBool,
@@ -52,7 +57,7 @@ pub struct RpcContextSeed {
pub ephemeral_sessions: SyncMutex<Sessions>,
pub db: TypedPatchDb<Database>,
pub sync_db: watch::Sender<u64>,
pub account: RwLock<AccountInfo>,
pub account: SyncRwLock<AccountInfo>,
pub net_controller: Arc<NetController>,
pub os_net_service: NetService,
pub s9pk_arch: Option<&'static str>,
@@ -60,7 +65,6 @@ pub struct RpcContextSeed {
pub cancellable_installs: SyncMutex<BTreeMap<PackageId, oneshot::Sender<()>>>,
pub metrics_cache: Watch<Option<crate::system::Metrics>>,
pub shutdown: broadcast::Sender<Option<Shutdown>>,
pub tor_socks: SocketAddr,
pub lxc_manager: Arc<LxcManager>,
pub open_authed_continuations: OpenAuthedContinuations<Option<InternedString>>,
pub rpc_continuations: RpcContinuations,
@@ -70,12 +74,11 @@ pub struct RpcContextSeed {
pub client: Client,
pub start_time: Instant,
pub crons: SyncMutex<BTreeMap<Guid, NonDetachingJoinHandle<()>>>,
// #[cfg(feature = "dev")]
pub dev: Dev,
}
pub struct Dev {
pub lxc: Mutex<BTreeMap<ContainerId, LxcContainer>>,
impl Drop for RpcContextSeed {
fn drop(&mut self) {
tracing::info!("RpcContext is dropped");
}
}
pub struct Hardware {
@@ -103,6 +106,7 @@ impl InitRpcContextPhases {
pub struct CleanupInitPhases {
cleanup_sessions: PhaseProgressTrackerHandle,
init_services: PhaseProgressTrackerHandle,
prune_s9pks: PhaseProgressTrackerHandle,
check_tasks: PhaseProgressTrackerHandle,
}
impl CleanupInitPhases {
@@ -110,6 +114,7 @@ impl CleanupInitPhases {
Self {
cleanup_sessions: handle.add_phase("Cleaning up sessions".into(), Some(1)),
init_services: handle.add_phase("Initializing services".into(), Some(10)),
prune_s9pks: handle.add_phase("Pruning S9PKs".into(), Some(1)),
check_tasks: handle.add_phase("Checking action requests".into(), Some(1)),
}
}
@@ -131,10 +136,7 @@ impl RpcContext {
run_migrations,
}: InitRpcContextPhases,
) -> Result<Self, Error> {
let tor_proxy = config.tor_socks.unwrap_or(SocketAddr::V4(SocketAddrV4::new(
Ipv4Addr::new(127, 0, 0, 1),
9050,
)));
let socks_proxy = config.socks_listen.unwrap_or(DEFAULT_SOCKS_LISTEN);
let (shutdown, _) = tokio::sync::broadcast::channel(1);
load_db.start();
@@ -156,18 +158,9 @@ impl RpcContext {
{
(net_ctrl, os_net_service)
} else {
let net_ctrl = Arc::new(
NetController::init(
db.clone(),
config
.tor_control
.unwrap_or(SocketAddr::from(([127, 0, 0, 1], 9051))),
tor_proxy,
&account.hostname,
)
.await?,
);
webserver.try_upgrade(|a| net_ctrl.net_iface.upgrade_listener(a))?;
let net_ctrl =
Arc::new(NetController::init(db.clone(), &account.hostname, socks_proxy).await?);
webserver.try_upgrade(|a| net_ctrl.net_iface.watcher.upgrade_listener(a))?;
let os_net_service = net_ctrl.os_bindings().await?;
(net_ctrl, os_net_service)
};
@@ -176,7 +169,7 @@ impl RpcContext {
let services = ServiceMap::default();
let metrics_cache = Watch::<Option<crate::system::Metrics>>::new(None);
let tor_proxy_url = format!("socks5h://{tor_proxy}");
let socks_proxy_url = format!("socks5h://{socks_proxy}");
let crons = SyncMutex::new(BTreeMap::new());
@@ -231,7 +224,7 @@ impl RpcContext {
ephemeral_sessions: SyncMutex::new(Sessions::new()),
sync_db: watch::Sender::new(db.sequence().await),
db,
account: RwLock::new(account),
account: SyncRwLock::new(account),
callbacks: net_controller.callbacks.clone(),
net_controller,
os_net_service,
@@ -244,7 +237,6 @@ impl RpcContext {
cancellable_installs: SyncMutex::new(BTreeMap::new()),
metrics_cache,
shutdown,
tor_socks: tor_proxy,
lxc_manager: Arc::new(LxcManager::new()),
open_authed_continuations: OpenAuthedContinuations::new(),
rpc_continuations: RpcContinuations::new(),
@@ -260,21 +252,11 @@ impl RpcContext {
})?,
),
client: Client::builder()
.proxy(Proxy::custom(move |url| {
if url.host_str().map_or(false, |h| h.ends_with(".onion")) {
Some(tor_proxy_url.clone())
} else {
None
}
}))
.proxy(Proxy::all(socks_proxy_url)?)
.build()
.with_kind(crate::ErrorKind::ParseUrl)?,
start_time: Instant::now(),
crons,
// #[cfg(feature = "dev")]
dev: Dev {
lxc: Mutex::new(BTreeMap::new()),
},
});
let res = Self(seed.clone());
@@ -291,7 +273,7 @@ impl RpcContext {
self.crons.mutate(|c| std::mem::take(c));
self.services.shutdown_all().await?;
self.is_closed.store(true, Ordering::SeqCst);
tracing::info!("RPC Context is shutdown");
tracing::info!("RpcContext is shutdown");
Ok(())
}
@@ -307,7 +289,8 @@ impl RpcContext {
&self,
CleanupInitPhases {
mut cleanup_sessions,
init_services,
mut init_services,
mut prune_s9pks,
mut check_tasks,
}: CleanupInitPhases,
) -> Result<(), Error> {
@@ -366,12 +349,38 @@ impl RpcContext {
});
cleanup_sessions.complete();
self.services.init(&self, init_services).await?;
tracing::info!("Initialized Services");
init_services.start();
self.services.init(&self).await?;
init_services.complete();
prune_s9pks.start();
let peek = self.db.peek().await;
let keep = peek
.as_public()
.as_package_data()
.as_entries()?
.into_iter()
.map(|(_, pde)| pde.as_s9pk().de())
.collect::<Result<BTreeSet<PathBuf>, Error>>()?;
let installed_dir = &Path::new(DATA_DIR).join(PKG_ARCHIVE_DIR).join("installed");
if tokio::fs::metadata(&installed_dir).await.is_ok() {
let mut dir = tokio::fs::read_dir(&installed_dir)
.await
.with_ctx(|_| (ErrorKind::Filesystem, lazy_format!("dir {installed_dir:?}")))?;
while let Some(file) = dir
.next_entry()
.await
.with_ctx(|_| (ErrorKind::Filesystem, lazy_format!("dir {installed_dir:?}")))?
{
let path = file.path();
if path.extension() == Some(OsStr::new("s9pk")) && !keep.contains(&path) {
delete_file(path).await?;
}
}
}
prune_s9pks.complete();
check_tasks.start();
let mut action_input: OrdMap<PackageId, BTreeMap<ActionId, Value>> = OrdMap::new();
let tasks: BTreeSet<_> = peek
.as_public()
@@ -379,12 +388,19 @@ impl RpcContext {
.as_entries()?
.into_iter()
.map(|(_, pde)| {
Ok(pde.as_tasks().as_entries()?.into_iter().map(|(_, r)| {
Ok::<_, Error>((
r.as_task().as_package_id().de()?,
r.as_task().as_action_id().de()?,
))
}))
Ok(pde
.as_tasks()
.as_entries()?
.into_iter()
.map(|(_, r)| {
let t = r.as_task();
Ok::<_, Error>(if t.as_input().transpose_ref().is_some() {
Some((t.as_package_id().de()?, t.as_action_id().de()?))
} else {
None
})
})
.filter_map_ok(|a| a))
})
.flatten_ok()
.map(|a| a.and_then(|a| a))
@@ -406,46 +422,32 @@ impl RpcContext {
}
}
}
for id in
self.db
.mutate::<Vec<PackageId>>(|db| {
for (package_id, action_input) in &action_input {
for (action_id, input) in action_input {
for (_, pde) in
db.as_public_mut().as_package_data_mut().as_entries_mut()?
{
pde.as_tasks_mut().mutate(|tasks| {
Ok(update_tasks(tasks, package_id, action_id, input, false))
})?;
}
self.db
.mutate(|db| {
for (package_id, action_input) in &action_input {
for (action_id, input) in action_input {
for (_, pde) in db.as_public_mut().as_package_data_mut().as_entries_mut()? {
pde.as_tasks_mut().mutate(|tasks| {
Ok(update_tasks(tasks, package_id, action_id, input, false))
})?;
}
}
db.as_public()
.as_package_data()
.as_entries()?
}
for (_, pde) in db.as_public_mut().as_package_data_mut().as_entries_mut()? {
if pde
.as_tasks()
.de()?
.into_iter()
.filter_map(|(id, pkg)| {
(|| {
if pkg.as_tasks().de()?.into_iter().any(|(_, t)| {
t.active && t.task.severity == TaskSeverity::Critical
}) {
Ok(Some(id))
} else {
Ok(None)
}
})()
.transpose()
})
.collect()
})
.await
.result?
{
let svc = self.services.get(&id).await;
if let Some(svc) = &*svc {
svc.stop(procedure_id.clone(), false).await?;
}
}
.any(|(_, t)| t.active && t.task.severity == TaskSeverity::Critical)
{
pde.as_status_info_mut().stop()?;
}
}
Ok(())
})
.await
.result?;
check_tasks.complete();
Ok(())
@@ -473,6 +475,11 @@ impl RpcContext {
<Self as CallRemote<RemoteContext, T>>::call_remote(&self, method, params, extra).await
}
}
impl AsRef<Client> for RpcContext {
fn as_ref(&self) -> &Client {
&self.client
}
}
impl AsRef<Jwk> for RpcContext {
fn as_ref(&self) -> &Jwk {
&CURRENT_SECRET


@@ -4,29 +4,30 @@ use std::sync::Arc;
use std::time::Duration;
use futures::{Future, StreamExt};
use helpers::NonDetachingJoinHandle;
use imbl_value::InternedString;
use josekit::jwk::Jwk;
use patch_db::PatchDb;
use rpc_toolkit::Context;
use serde::{Deserialize, Serialize};
use tokio::sync::OnceCell;
use tokio::sync::broadcast::Sender;
use tracing::instrument;
use ts_rs::TS;
use crate::MAIN_DATA;
use crate::account::AccountInfo;
use crate::context::RpcContext;
use crate::context::config::ServerConfig;
use crate::disk::OsPartitionInfo;
use crate::hostname::Hostname;
use crate::net::web_server::{UpgradableListener, WebServer, WebServerAcceptorSetter};
use crate::net::gateway::UpgradableListener;
use crate::net::web_server::{WebServer, WebServerAcceptorSetter};
use crate::prelude::*;
use crate::progress::FullProgressTracker;
use crate::rpc_continuations::{Guid, RpcContinuation, RpcContinuations};
use crate::setup::SetupProgress;
use crate::util::net::WebSocketExt;
use crate::shutdown::Shutdown;
use crate::util::future::NonDetachingJoinHandle;
lazy_static::lazy_static! {
pub static ref CURRENT_SECRET: Jwk = Jwk::generate_ec_key(josekit::jwk::alg::ec::EcCurve::P256).unwrap_or_else(|e| {
@@ -54,7 +55,7 @@ impl TryFrom<&AccountInfo> for SetupResult {
tor_addresses: value
.tor_keys
.iter()
.map(|tor_key| format!("https://{}", tor_key.public().get_onion_address()))
.map(|tor_key| format!("https://{}", tor_key.onion_address()))
.collect(),
hostname: value.hostname.clone(),
lan_address: value.hostname.lan_address(),
@@ -71,7 +72,8 @@ pub struct SetupContextSeed {
pub progress: FullProgressTracker,
pub task: OnceCell<NonDetachingJoinHandle<()>>,
pub result: OnceCell<Result<(SetupResult, RpcContext), Error>>,
pub shutdown: Sender<()>,
pub disk_guid: OnceCell<Arc<String>>,
pub shutdown: Sender<Option<Shutdown>>,
pub rpc_continuations: RpcContinuations,
}
@@ -84,6 +86,8 @@ impl SetupContext {
config: &ServerConfig,
) -> Result<Self, Error> {
let (shutdown, _) = tokio::sync::broadcast::channel(1);
let mut progress = FullProgressTracker::new();
progress.enable_logging(true);
Ok(Self(Arc::new(SetupContextSeed {
webserver: webserver.acceptor_setter(),
config: config.clone(),
@@ -94,9 +98,10 @@ impl SetupContext {
)
})?,
disable_encryption: config.disable_encryption.unwrap_or(false),
progress: FullProgressTracker::new(),
progress,
task: OnceCell::new(),
result: OnceCell::new(),
disk_guid: OnceCell::new(),
shutdown,
rpc_continuations: RpcContinuations::new(),
})))


@@ -1,14 +1,11 @@
use clap::Parser;
use color_eyre::eyre::eyre;
use models::PackageId;
use serde::{Deserialize, Serialize};
use tracing::instrument;
use ts_rs::TS;
use crate::context::RpcContext;
use crate::prelude::*;
use crate::rpc_continuations::Guid;
use crate::Error;
use crate::{Error, PackageId};
#[derive(Deserialize, Serialize, Parser, TS)]
#[serde(rename_all = "camelCase")]
@@ -19,37 +16,51 @@ pub struct ControlParams {
#[instrument(skip_all)]
pub async fn start(ctx: RpcContext, ControlParams { id }: ControlParams) -> Result<(), Error> {
ctx.services
.get(&id)
ctx.db
.mutate(|db| {
db.as_public_mut()
.as_package_data_mut()
.as_idx_mut(&id)
.or_not_found(&id)?
.as_status_info_mut()
.as_desired_mut()
.map_mutate(|s| Ok(s.start()))
})
.await
.as_ref()
.or_not_found(lazy_format!("Manager for {id}"))?
.start(Guid::new())
.await?;
.result?;
Ok(())
}
pub async fn stop(ctx: RpcContext, ControlParams { id }: ControlParams) -> Result<(), Error> {
ctx.services
.get(&id)
ctx.db
.mutate(|db| {
db.as_public_mut()
.as_package_data_mut()
.as_idx_mut(&id)
.or_not_found(&id)?
.as_status_info_mut()
.stop()
})
.await
.as_ref()
.ok_or_else(|| Error::new(eyre!("Manager not found"), crate::ErrorKind::InvalidRequest))?
.stop(Guid::new(), true)
.await?;
.result?;
Ok(())
}
pub async fn restart(ctx: RpcContext, ControlParams { id }: ControlParams) -> Result<(), Error> {
ctx.services
.get(&id)
ctx.db
.mutate(|db| {
db.as_public_mut()
.as_package_data_mut()
.as_idx_mut(&id)
.or_not_found(&id)?
.as_status_info_mut()
.as_desired_mut()
.map_mutate(|s| Ok(s.restart()))
})
.await
.as_ref()
.ok_or_else(|| Error::new(eyre!("Manager not found"), crate::ErrorKind::InvalidRequest))?
.restart(Guid::new(), false)
.await?;
.result?;
Ok(())
}


@@ -12,7 +12,7 @@ use itertools::Itertools;
use patch_db::json_ptr::{JsonPointer, ROOT};
use patch_db::{DiffPatch, Dump, Revision};
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{from_fn_async, Context, HandlerArgs, HandlerExt, ParentHandler};
use rpc_toolkit::{Context, HandlerArgs, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
use tokio::sync::mpsc::{self, UnboundedReceiver};
use tokio::sync::watch;
@@ -22,13 +22,32 @@ use ts_rs::TS;
use crate::context::{CliContext, RpcContext};
use crate::prelude::*;
use crate::rpc_continuations::{Guid, RpcContinuation};
use crate::util::net::WebSocketExt;
use crate::util::serde::{apply_expr, HandlerExtSerde};
use crate::util::serde::{HandlerExtSerde, apply_expr};
lazy_static::lazy_static! {
static ref PUBLIC: JsonPointer = "/public".parse().unwrap();
}
pub trait DbAccess<T>: Sized {
fn access<'a>(db: &'a Model<Self>) -> &'a Model<T>;
}
pub trait DbAccessMut<T>: DbAccess<T> {
fn access_mut<'a>(db: &'a mut Model<Self>) -> &'a mut Model<T>;
}
pub trait DbAccessByKey<T>: Sized {
type Key<'a>;
fn access_by_key<'a>(db: &'a Model<Self>, key: Self::Key<'_>) -> Option<&'a Model<T>>;
}
pub trait DbAccessMutByKey<T>: DbAccessByKey<T> {
fn access_mut_by_key<'a>(
db: &'a mut Model<Self>,
key: Self::Key<'_>,
) -> Option<&'a mut Model<T>>;
}
pub fn db<C: Context>() -> ParentHandler<C> {
ParentHandler::new()
.subcommand(
@@ -127,7 +146,7 @@ pub struct SubscribeParams {
#[ts(type = "string | null")]
pointer: Option<JsonPointer>,
#[ts(skip)]
#[serde(rename = "__auth_session")]
session: Option<InternedString>,
}


@@ -12,6 +12,7 @@ use crate::net::forward::AvailablePorts;
use crate::net::keys::KeyStore;
use crate::notifications::Notifications;
use crate::prelude::*;
use crate::sign::AnyVerifyingKey;
use crate::ssh::SshKeys;
use crate::util::serde::Pem;
@@ -33,6 +34,9 @@ impl Database {
private: Private {
key_store: KeyStore::new(account)?,
password: account.password.clone(),
auth_pubkeys: [AnyVerifyingKey::Ed25519((&account.developer_key).into())]
.into_iter()
.collect(),
ssh_privkey: Pem(account.ssh_key.clone()),
ssh_pubkeys: SshKeys::new(),
available_ports: AvailablePorts::new(),
@@ -40,7 +44,7 @@ impl Database {
notifications: Notifications::new(),
cifs: CifsTargets::new(),
package_stores: BTreeMap::new(),
compat_s9pk_key: Pem(account.compat_s9pk_key.clone()),
developer_key: Pem(account.developer_key.clone()),
}, // TODO
})
}


@@ -1,11 +1,11 @@
use std::collections::{BTreeMap, BTreeSet};
use std::path::PathBuf;
use chrono::{DateTime, Utc};
use exver::VersionRange;
use imbl_value::InternedString;
use models::{ActionId, DataUrl, HealthCheckId, HostId, PackageId, ReplayId, ServiceInterfaceId};
use patch_db::HasModel;
use patch_db::json_ptr::JsonPointer;
use reqwest::Url;
use serde::{Deserialize, Serialize};
use ts_rs::TS;
@@ -15,8 +15,10 @@ use crate::net::service_interface::ServiceInterface;
use crate::prelude::*;
use crate::progress::FullProgress;
use crate::s9pk::manifest::Manifest;
use crate::status::MainStatus;
use crate::util::serde::{is_partial_of, Pem};
use crate::status::StatusInfo;
use crate::util::DataUrl;
use crate::util::serde::{Pem, is_partial_of};
use crate::{ActionId, HealthCheckId, HostId, PackageId, ReplayId, ServiceInterfaceId};
#[derive(Debug, Default, Deserialize, Serialize, TS)]
#[ts(export)]
@@ -267,7 +269,7 @@ impl Model<PackageState> {
return Err(Error::new(
eyre!("could not determine package state to get manifest"),
ErrorKind::Database,
))
));
}
})
}
@@ -287,6 +289,7 @@ pub struct InstallingState {
#[ts(export)]
pub struct UpdatingState {
pub manifest: Manifest,
pub s9pk: PathBuf,
pub installing_info: InstallingInfo,
}
@@ -362,8 +365,8 @@ impl Default for ActionVisibility {
#[ts(export)]
pub struct PackageDataEntry {
pub state_info: PackageState,
pub data_version: Option<String>,
pub status: MainStatus,
pub s9pk: PathBuf,
pub status_info: StatusInfo,
#[ts(type = "string | null")]
pub registry: Option<Url>,
#[ts(type = "string")]
@@ -373,7 +376,6 @@ pub struct PackageDataEntry {
pub last_backup: Option<DateTime<Utc>>,
pub current_dependencies: CurrentDependencies,
pub actions: BTreeMap<ActionId, ActionMetadata>,
#[ts(as = "BTreeMap::<String, TaskEntry>")]
pub tasks: BTreeMap<ReplayId, TaskEntry>,
pub service_interfaces: BTreeMap<ServiceInterfaceId, ServiceInterface>,
pub hosts: Hosts,


@@ -1,15 +1,16 @@
use std::collections::BTreeMap;
use std::collections::{BTreeMap, HashSet};
use models::PackageId;
use patch_db::{HasModel, Value};
use serde::{Deserialize, Serialize};
use crate::PackageId;
use crate::auth::Sessions;
use crate::backup::target::cifs::CifsTargets;
use crate::net::forward::AvailablePorts;
use crate::net::keys::KeyStore;
use crate::notifications::Notifications;
use crate::prelude::*;
use crate::sign::AnyVerifyingKey;
use crate::ssh::SshKeys;
use crate::util::serde::Pem;
@@ -19,8 +20,9 @@ use crate::util::serde::Pem;
pub struct Private {
pub key_store: KeyStore,
pub password: String, // argon2 hash
#[serde(default = "generate_compat_key")]
pub compat_s9pk_key: Pem<ed25519_dalek::SigningKey>,
pub auth_pubkeys: HashSet<AnyVerifyingKey>,
#[serde(default = "generate_developer_key")]
pub developer_key: Pem<ed25519_dalek::SigningKey>,
pub ssh_privkey: Pem<ssh_key::PrivateKey>,
pub ssh_pubkeys: SshKeys,
pub available_ports: AvailablePorts,
@@ -31,7 +33,7 @@ pub struct Private {
pub package_stores: BTreeMap<PackageId, Value>,
}
pub fn generate_compat_key() -> Pem<ed25519_dalek::SigningKey> {
pub fn generate_developer_key() -> Pem<ed25519_dalek::SigningKey> {
Pem(ed25519_dalek::SigningKey::generate(
&mut ssh_key::rand_core::OsRng::default(),
))


@@ -1,23 +1,26 @@
use std::collections::{BTreeMap, BTreeSet};
use std::net::{IpAddr, Ipv4Addr};
use std::collections::{BTreeMap, BTreeSet, VecDeque};
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use std::sync::{Arc, OnceLock};
use chrono::{DateTime, Utc};
use exver::{Version, VersionRange};
use imbl::{OrdMap, OrdSet};
use imbl_value::InternedString;
use ipnet::IpNet;
use isocountry::CountryCode;
use itertools::Itertools;
use models::PackageId;
use openssl::hash::MessageDigest;
use patch_db::{HasModel, Value};
use serde::{Deserialize, Serialize};
use ts_rs::TS;
use crate::account::AccountInfo;
use crate::db::DbAccessByKey;
use crate::db::model::Database;
use crate::db::model::package::AllPackageData;
use crate::net::acme::AcmeProvider;
use crate::net::host::Host;
use crate::net::host::binding::{AddSslOptions, BindInfo, BindOptions, NetInfo};
use crate::net::utils::ipv6_is_local;
use crate::net::vhost::AlpnInfo;
use crate::prelude::*;
@@ -27,7 +30,9 @@ use crate::util::cpupower::Governor;
use crate::util::lshw::LshwDevice;
use crate::util::serde::MaybeUtf8String;
use crate::version::{Current, VersionT};
use crate::{ARCH, PLATFORM};
use crate::{ARCH, GatewayId, PLATFORM, PackageId};
pub static DB_UI_SEED_CELL: OnceLock<&'static str> = OnceLock::new();
#[derive(Debug, Deserialize, Serialize, HasModel, TS)]
#[serde(rename_all = "camelCase")]
@@ -50,7 +55,7 @@ impl Public {
hostname: account.hostname.no_dot_host_name(),
last_backup: None,
package_version_compat: Current::default().compat().clone(),
post_init_migration_todos: BTreeSet::new(),
post_init_migration_todos: BTreeMap::new(),
network: NetworkInfo {
host: Host {
bindings: [(
@@ -61,9 +66,10 @@ impl Public {
preferred_external_port: 80,
add_ssl: Some(AddSslOptions {
preferred_external_port: 443,
add_x_forwarded_headers: false,
alpn: Some(AlpnInfo::Specified(vec![
MaybeUtf8String("h2".into()),
MaybeUtf8String("http/1.1".into()),
])),
}),
secure: None,
@@ -71,26 +77,25 @@ impl Public {
net: NetInfo {
assigned_port: None,
assigned_ssl_port: Some(443),
public: false,
private_disabled: OrdSet::new(),
public_enabled: OrdSet::new(),
},
},
)]
.into_iter()
.collect(),
onions: account
.tor_keys
.iter()
.map(|k| k.public().get_onion_address())
.collect(),
domains: BTreeMap::new(),
onions: account.tor_keys.iter().map(|k| k.onion_address()).collect(),
public_domains: BTreeMap::new(),
private_domains: BTreeSet::new(),
hostname_info: BTreeMap::new(),
},
wifi: WifiInfo {
enabled: true,
..Default::default()
},
network_interfaces: BTreeMap::new(),
gateways: OrdMap::new(),
acme: BTreeMap::new(),
dns: Default::default(),
},
status_info: ServerStatus {
backup_progress: None,
@@ -120,11 +125,8 @@ impl Public {
kiosk,
},
package_data: AllPackageData::default(),
ui: serde_json::from_str(include_str!(concat!(
env!("CARGO_MANIFEST_DIR"),
"/../../web/patchdb-ui-seed.json"
)))
.with_kind(ErrorKind::Deserialization)?,
ui: serde_json::from_str(*DB_UI_SEED_CELL.get().unwrap_or(&"null"))
.with_kind(ErrorKind::Deserialization)?,
})
}
}
@@ -155,8 +157,8 @@ pub struct ServerInfo {
pub version: Version,
#[ts(type = "string")]
pub package_version_compat: VersionRange,
#[ts(type = "string[]")]
pub post_init_migration_todos: BTreeSet<Version>,
#[ts(type = "Record<string, unknown>")]
pub post_init_migration_todos: BTreeMap<Version, Value>,
#[ts(type = "string | null")]
pub last_backup: Option<DateTime<Utc>>,
pub network: NetworkInfo,
@@ -186,11 +188,24 @@ pub struct ServerInfo {
pub struct NetworkInfo {
pub wifi: WifiInfo,
pub host: Host,
#[ts(as = "BTreeMap::<String, NetworkInterfaceInfo>")]
#[ts(as = "BTreeMap::<GatewayId, NetworkInterfaceInfo>")]
#[serde(default)]
pub network_interfaces: BTreeMap<InternedString, NetworkInterfaceInfo>,
pub gateways: OrdMap<GatewayId, NetworkInterfaceInfo>,
#[serde(default)]
pub acme: BTreeMap<AcmeProvider, AcmeSettings>,
#[serde(default)]
pub dns: DnsSettings,
}
#[derive(Debug, Default, Deserialize, Serialize, HasModel, TS)]
#[serde(rename_all = "camelCase")]
#[model = "Model<Self>"]
#[ts(export)]
pub struct DnsSettings {
#[ts(type = "string[]")]
pub dhcp_servers: VecDeque<SocketAddr>,
#[ts(type = "string[] | null")]
pub static_servers: Option<VecDeque<SocketAddr>>,
}
#[derive(Clone, Debug, Default, Deserialize, Serialize, HasModel, TS)]
@@ -198,13 +213,14 @@ pub struct NetworkInfo {
#[model = "Model<Self>"]
#[ts(export)]
pub struct NetworkInterfaceInfo {
pub inbound: Option<bool>,
pub outbound: Option<bool>,
pub ip_info: Option<IpInfo>,
pub name: Option<InternedString>,
pub public: Option<bool>,
pub secure: Option<bool>,
pub ip_info: Option<Arc<IpInfo>>,
}
impl NetworkInterfaceInfo {
pub fn inbound(&self) -> bool {
self.inbound.unwrap_or_else(|| {
pub fn public(&self) -> bool {
self.public.unwrap_or_else(|| {
!self.ip_info.as_ref().map_or(true, |ip_info| {
let ip4s = ip_info
.subnets
@@ -218,11 +234,9 @@ impl NetworkInterfaceInfo {
})
.collect::<BTreeSet<_>>();
if !ip4s.is_empty() {
return ip4s.iter().all(|ip4| {
ip4.is_loopback()
|| (ip4.is_private() && !ip4.octets().starts_with(&[10, 59])) // reserving 10.59 for public wireguard configurations
|| ip4.is_link_local()
});
return ip4s
.iter()
.all(|ip4| ip4.is_loopback() || ip4.is_private() || ip4.is_link_local());
}
ip_info.subnets.iter().all(|ipnet| {
if let IpAddr::V6(ip6) = ipnet.addr() {
@@ -234,6 +248,10 @@ impl NetworkInterfaceInfo {
})
})
}
pub fn secure(&self) -> bool {
self.secure.unwrap_or(false)
}
}
#[derive(Clone, Debug, Default, PartialEq, Eq, Deserialize, Serialize, TS, HasModel)]
@@ -246,19 +264,25 @@ pub struct IpInfo {
pub scope_id: u32,
pub device_type: Option<NetworkInterfaceType>,
#[ts(type = "string[]")]
pub subnets: BTreeSet<IpNet>,
pub subnets: OrdSet<IpNet>,
#[ts(type = "string[]")]
pub lan_ip: OrdSet<IpAddr>,
pub wan_ip: Option<Ipv4Addr>,
#[ts(type = "string[]")]
pub ntp_servers: BTreeSet<InternedString>,
pub ntp_servers: OrdSet<InternedString>,
#[ts(type = "string[]")]
pub dns_servers: OrdSet<IpAddr>,
}
#[derive(Clone, Copy, Debug, PartialEq, Eq, Deserialize, Serialize, TS)]
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "kebab-case")]
pub enum NetworkInterfaceType {
Ethernet,
Wireless,
Bridge,
Wireguard,
Loopback,
}
#[derive(Debug, Deserialize, Serialize, HasModel, TS)]
@@ -268,6 +292,27 @@ pub enum NetworkInterfaceType {
pub struct AcmeSettings {
pub contact: Vec<String>,
}
impl DbAccessByKey<AcmeSettings> for Database {
type Key<'a> = &'a AcmeProvider;
fn access_by_key<'a>(
db: &'a Model<Self>,
key: Self::Key<'_>,
) -> Option<&'a Model<AcmeSettings>> {
db.as_public()
.as_server_info()
.as_network()
.as_acme()
.as_idx(key)
}
}
#[derive(Debug, Deserialize, Serialize, HasModel, TS)]
#[serde(rename_all = "camelCase")]
#[model = "Model<Self>"]
#[ts(export)]
pub struct DomainSettings {
pub gateway: GatewayId,
}
#[derive(Debug, Default, Deserialize, Serialize, HasModel, TS)]
#[model = "Model<Self>"]


@@ -1,8 +1,10 @@
use std::collections::{BTreeMap, BTreeSet};
use std::marker::PhantomData;
use std::str::FromStr;
use std::sync::Arc;
use chrono::{DateTime, Utc};
use imbl::OrdMap;
pub use imbl_value::Value;
use patch_db::value::InternedString;
pub use patch_db::{HasModel, MutateResult, PatchDb};
@@ -166,6 +168,21 @@ impl<T> Model<Option<T>> {
}
}
impl<T> Model<Arc<T>> {
pub fn deref(self) -> Model<T> {
use patch_db::ModelExt;
self.transmute(|a| a)
}
pub fn as_deref(&self) -> &Model<T> {
use patch_db::ModelExt;
self.transmute_ref(|a| a)
}
pub fn as_deref_mut(&mut self) -> &mut Model<T> {
use patch_db::ModelExt;
self.transmute_mut(|a| a)
}
}
pub trait Map: DeserializeOwned + Serialize {
type Key;
type Value;
@@ -191,11 +208,23 @@ impl<A, B> Map for BTreeMap<JsonKey<A>, B>
where
A: serde::Serialize + serde::de::DeserializeOwned + Ord,
B: serde::Serialize + serde::de::DeserializeOwned,
{
type Key = JsonKey<A>;
type Value = B;
fn key_str(key: &Self::Key) -> Result<impl AsRef<str>, Error> {
serde_json::to_string(&key.0).with_kind(ErrorKind::Serialization)
}
}
impl<A, B> Map for OrdMap<A, B>
where
A: serde::Serialize + serde::de::DeserializeOwned + Clone + Ord + AsRef<str>,
B: serde::Serialize + serde::de::DeserializeOwned + Clone,
{
type Key = A;
type Value = B;
fn key_str(key: &Self::Key) -> Result<impl AsRef<str>, Error> {
serde_json::to_string(key).with_kind(ErrorKind::Serialization)
Ok(key.as_ref())
}
}
@@ -203,13 +232,18 @@ impl<T: Map> Model<T>
where
T::Value: Serialize,
{
pub fn insert(&mut self, key: &T::Key, value: &T::Value) -> Result<(), Error> {
pub fn insert_model(
&mut self,
key: &T::Key,
value: Model<T::Value>,
) -> Result<Option<Model<T::Value>>, Error> {
use patch_db::ModelExt;
use serde::ser::Error;
let v = patch_db::value::to_value(value)?;
let v = value.into_value();
match &mut self.value {
Value::Object(o) => {
o.insert(T::key_string(key)?, v);
Ok(())
let prev = o.insert(T::key_string(key)?, v);
Ok(prev.map(|v| Model::from_value(v)))
}
v => Err(patch_db::value::Error {
source: patch_db::value::ErrorSource::custom(format!("expected object found {v}")),
@@ -218,6 +252,13 @@ where
.into()),
}
}
pub fn insert(
&mut self,
key: &T::Key,
value: &T::Value,
) -> Result<Option<Model<T::Value>>, Error> {
self.insert_model(key, Model::new(value)?)
}
pub fn upsert<F>(&mut self, key: &T::Key, value: F) -> Result<&mut Model<T::Value>, Error>
where
F: FnOnce() -> Result<T::Value, Error>,
@@ -244,22 +285,6 @@ where
.into()),
}
}
pub fn insert_model(&mut self, key: &T::Key, value: Model<T::Value>) -> Result<(), Error> {
use patch_db::ModelExt;
use serde::ser::Error;
let v = value.into_value();
match &mut self.value {
Value::Object(o) => {
o.insert(T::key_string(key)?, v);
Ok(())
}
v => Err(patch_db::value::Error {
source: patch_db::value::ErrorSource::custom(format!("expected object found {v}")),
kind: patch_db::value::ErrorKind::Serialization,
}
.into()),
}
}
}
impl<T: Map> Model<T>
@@ -424,6 +449,12 @@ impl<T> std::ops::DerefMut for JsonKey<T> {
&mut self.0
}
}
impl<T: DeserializeOwned> FromStr for JsonKey<T> {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
serde_json::from_str(s).with_kind(ErrorKind::Deserialization)
}
}
impl<T: Serialize> Serialize for JsonKey<T> {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
@@ -436,7 +467,7 @@ impl<T: Serialize> Serialize for JsonKey<T> {
}
}
// { "foo": "bar" } -> "{ \"foo\": \"bar\" }"
impl<'de, T: Serialize + DeserializeOwned> Deserialize<'de> for JsonKey<T> {
impl<'de, T: DeserializeOwned> Deserialize<'de> for JsonKey<T> {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,


@@ -1,13 +1,13 @@
use std::collections::BTreeMap;
use std::path::Path;
use imbl_value::InternedString;
use models::PackageId;
use serde::{Deserialize, Serialize};
use ts_rs::TS;
use crate::prelude::*;
use crate::util::PathOrUrl;
use crate::Error;
use crate::{Error, PackageId};
#[derive(Clone, Debug, Default, Deserialize, Serialize, HasModel, TS)]
#[model = "Model<Self>"]
@@ -24,20 +24,62 @@ impl Map for Dependencies {
}
}
#[derive(Clone, Debug, Deserialize, Serialize, HasModel, TS)]
#[derive(Clone, Debug, Deserialize, Serialize, HasModel)]
#[serde(rename_all = "camelCase")]
#[model = "Model<Self>"]
#[ts(export)]
pub struct DepInfo {
pub description: Option<String>,
pub optional: bool,
pub s9pk: Option<PathOrUrl>,
#[serde(flatten)]
pub metadata: Option<MetadataSrc>,
}
impl TS for DepInfo {
type WithoutGenerics = Self;
fn decl() -> String {
format!("type {} = {}", Self::name(), Self::inline())
}
fn decl_concrete() -> String {
Self::decl()
}
fn name() -> String {
"DepInfo".into()
}
fn inline() -> String {
"{ description: string | null, optional: boolean } & MetadataSrc".into()
}
fn inline_flattened() -> String {
Self::inline()
}
fn visit_dependencies(v: &mut impl ts_rs::TypeVisitor)
where
Self: 'static,
{
v.visit::<MetadataSrc>()
}
fn output_path() -> Option<&'static std::path::Path> {
Some(Path::new("DepInfo.ts"))
}
}
#[derive(Clone, Debug, Deserialize, Serialize, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
pub enum MetadataSrc {
Metadata(Metadata),
S9pk(Option<PathOrUrl>), // backwards compatibility
}
#[derive(Clone, Debug, Deserialize, Serialize, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
pub struct Metadata {
pub title: InternedString,
pub icon: PathOrUrl,
}
#[derive(Clone, Debug, Deserialize, Serialize, HasModel, TS)]
#[serde(rename_all = "camelCase")]
#[model = "Model<Self>"]
#[ts(export)]
pub struct DependencyMetadata {
#[ts(type = "string")]
pub title: InternedString,


@@ -1,40 +1,57 @@
use std::fs::File;
use std::io::Write;
use std::path::Path;
use std::path::{Path, PathBuf};
use ed25519::PublicKeyBytes;
use ed25519::pkcs8::EncodePrivateKey;
use ed25519_dalek::{SigningKey, VerifyingKey};
use tokio::io::AsyncWriteExt;
use tracing::instrument;
use crate::context::CliContext;
use crate::context::config::local_config_path;
use crate::prelude::*;
use crate::util::io::create_file_mod;
use crate::util::serde::Pem;
pub const OS_DEVELOPER_KEY_PATH: &str = "/run/startos/developer.key.pem";
pub fn default_developer_key_path() -> PathBuf {
local_config_path()
.as_deref()
.unwrap_or_else(|| Path::new(crate::context::config::CONFIG_PATH))
.parent()
.unwrap_or(Path::new("/"))
.join("developer.key.pem")
}
pub async fn write_developer_key(
secret: &ed25519_dalek::SigningKey,
path: impl AsRef<Path>,
) -> Result<(), Error> {
let keypair_bytes = ed25519::KeypairBytes {
secret_key: secret.to_bytes(),
public_key: Some(PublicKeyBytes(VerifyingKey::from(secret).to_bytes())),
};
let mut file = create_file_mod(path, 0o640).await?;
file.write_all(
keypair_bytes
.to_pkcs8_pem(base64ct::LineEnding::default())
.with_kind(crate::ErrorKind::Pem)?
.as_bytes(),
)
.await?;
file.sync_all().await?;
Ok(())
}
 #[instrument(skip_all)]
-pub fn init(ctx: CliContext) -> Result<(), Error> {
-    if !ctx.developer_key_path.exists() {
-        let parent = ctx.developer_key_path.parent().unwrap_or(Path::new("/"));
-        if !parent.exists() {
-            std::fs::create_dir_all(parent)
-                .with_ctx(|_| (crate::ErrorKind::Filesystem, parent.display().to_string()))?;
-        }
+pub async fn init(ctx: CliContext) -> Result<(), Error> {
+    if tokio::fs::metadata(OS_DEVELOPER_KEY_PATH).await.is_ok() {
+        println!("Developer key already exists at {}", OS_DEVELOPER_KEY_PATH);
+    } else if tokio::fs::metadata(&ctx.developer_key_path).await.is_err() {
         tracing::info!("Generating new developer key...");
         let secret = SigningKey::generate(&mut ssh_key::rand_core::OsRng::default());
         tracing::info!("Writing key to {}", ctx.developer_key_path.display());
-        let keypair_bytes = ed25519::KeypairBytes {
-            secret_key: secret.to_bytes(),
-            public_key: Some(PublicKeyBytes(VerifyingKey::from(&secret).to_bytes())),
-        };
-        let mut dev_key_file = File::create(&ctx.developer_key_path)
-            .with_ctx(|_| (ErrorKind::Filesystem, ctx.developer_key_path.display()))?;
-        dev_key_file.write_all(
-            keypair_bytes
-                .to_pkcs8_pem(base64ct::LineEnding::default())
-                .with_kind(crate::ErrorKind::Pem)?
-                .as_bytes(),
-        )?;
-        dev_key_file.sync_all()?;
+        write_developer_key(&secret, &ctx.developer_key_path).await?;
         println!(
             "New developer key generated at {}",
             ctx.developer_key_path.display()

View File

@@ -1,9 +1,8 @@
-use std::path::Path;
 use std::sync::Arc;
 use rpc_toolkit::yajrc::RpcError;
 use rpc_toolkit::{
-    from_fn, from_fn_async, CallRemoteHandler, Context, Empty, HandlerExt, ParentHandler,
+    CallRemoteHandler, Context, Empty, HandlerExt, ParentHandler, from_fn, from_fn_async,
 };
 use crate::context::{CliContext, DiagnosticContext, RpcContext};
@@ -12,7 +11,6 @@ use crate::init::SYSTEM_REBUILD_PATH;
 use crate::prelude::*;
 use crate::shutdown::Shutdown;
 use crate::util::io::delete_file;
-use crate::DATA_DIR;
 pub fn diagnostic<C: Context>() -> ParentHandler<C> {
     ParentHandler::new()
@@ -70,10 +68,7 @@ pub fn error(ctx: DiagnosticContext) -> Result<Arc<RpcError>, Error> {
 pub fn restart(ctx: DiagnosticContext) -> Result<(), Error> {
     ctx.shutdown
         .send(Shutdown {
-            export_args: ctx
-                .disk_guid
-                .clone()
-                .map(|guid| (guid, Path::new(DATA_DIR).to_owned())),
+            disk_guid: ctx.disk_guid.clone(),
             restart: true,
         })
         .map_err(|_| eyre!("receiver dropped"))

View File

@@ -4,9 +4,9 @@
 use tokio::process::Command;
 use tracing::instrument;
+use crate::Error;
 use crate::disk::fsck::RequiresReboot;
 use crate::util::Invoke;
-use crate::Error;
 #[instrument(skip_all)]
 pub async fn btrfs_check_readonly(logicalname: impl AsRef<Path>) -> Result<RequiresReboot, Error> {

View File

@@ -2,13 +2,13 @@ use std::ffi::OsStr;
 use std::path::Path;
 use color_eyre::eyre::eyre;
-use futures::future::BoxFuture;
 use futures::FutureExt;
+use futures::future::BoxFuture;
 use tokio::process::Command;
 use tracing::instrument;
-use crate::disk::fsck::RequiresReboot;
 use crate::Error;
+use crate::disk::fsck::RequiresReboot;
 #[instrument(skip_all)]
 pub async fn e2fsck_preen(

View File

@@ -3,10 +3,10 @@
 use color_eyre::eyre::eyre;
 use tokio::process::Command;
+use crate::Error;
 use crate::disk::fsck::btrfs::{btrfs_check_readonly, btrfs_check_repair};
 use crate::disk::fsck::ext4::{e2fsck_aggressive, e2fsck_preen};
 use crate::util::Invoke;
-use crate::Error;
 pub mod btrfs;
 pub mod ext4;
@@ -45,7 +45,7 @@ impl RepairStrategy {
             return Err(Error::new(
                 eyre!("Unknown filesystem {fs}"),
                 crate::ErrorKind::DiskManagement,
-            ))
+            ));
         }
     }
 }

View File

@@ -287,6 +287,7 @@ pub async fn mount_fs<P: AsRef<Path>>(
     Command::new("cryptsetup")
         .arg("-q")
        .arg("luksOpen")
+        .arg("--allow-discards")
         .arg(format!("--key-file={}", PASSWORD_PATH))
         .arg(format!("--keyfile-size={}", password.len()))
         .arg(&blockdev_path)
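The new `--allow-discards` flag lets the filesystem's TRIM/discard requests pass through the dm-crypt mapping to the underlying SSD; cryptsetup leaves this off by default because discard patterns reveal which sectors of the encrypted volume are unused. A minimal sketch of the same builder-style invocation using `std::process::Command` (the original uses `tokio::process::Command`, same API shape; keyfile, device, and mapper names here are hypothetical):

```rust
use std::process::Command;

// Assemble (but do not run) a luksOpen command line; actually opening a LUKS
// container requires root and a real block device.
fn luks_open_args(keyfile: &str, key_len: usize, blockdev: &str, name: &str) -> Vec<String> {
    let mut cmd = Command::new("cryptsetup");
    cmd.arg("-q")
        .arg("luksOpen")
        .arg("--allow-discards") // forward TRIM through the dm-crypt layer
        .arg(format!("--key-file={keyfile}"))
        .arg(format!("--keyfile-size={key_len}"))
        .arg(blockdev)
        .arg(name);
    // get_args yields the arguments (not the program name) in order
    cmd.get_args()
        .map(|a| a.to_string_lossy().into_owned())
        .collect()
}

fn main() {
    println!("{}", luks_open_args("/run/pw", 32, "/dev/sda2", "cryptdata").join(" "));
}
```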

Some files were not shown because too many files have changed in this diff.