Compare commits

...

36 Commits

Author SHA1 Message Date
Aiden McClelland
9322b3d07e be resilient to bad lshw output (#2390) 2023-08-08 17:36:14 -06:00
Lucy
55f5329817 update readme layout and assets (#2382)
* update readme layout and assets

* Update README.md

---------

Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
2023-08-02 21:45:14 -04:00
Matt Hill
79d92c30f8 Update README.md (#2381) 2023-08-02 12:37:07 -06:00
Aiden McClelland
73229501c2 Feature/hw filtering (#2368)
* update deno

* add proxy

* remove query params, now auto added by BE

* add hardware requirements and BE reg query params

* update query params for BE requests

* allow multiple arches in hw reqs

* explain git hash mismatch

* require lshw

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2023-08-02 09:52:38 -06:00
Reckless_Satoshi
32ca91a7c9 add qr code to insights->about->tor (#2379)
* add qr code to insights->about->tor

* fix address PR feedback from @elvece; inject modelCtrl in ctor
2023-08-01 17:06:47 -04:00
Aiden McClelland
9e03ac084e add cli & rpc to edit db with jq syntax (#2372)
* add cli & rpc to edit db with jq syntax

* build fixes

* fix build

* fix build

* update cargo.lock
2023-07-25 16:22:58 -06:00
Aiden McClelland
082c51109d fix missing parent dir (#2373) 2023-07-25 10:07:10 -06:00
Aiden McClelland
8f44c75dc3 switch back to github caching (#2371)
* switch back to github caching

* remove npm and cargo cache

* misc fixes
2023-07-25 10:06:57 -06:00
Aiden McClelland
234f0d75e8 mute unexpected eof & protect against fd leaks (#2369) 2023-07-20 17:40:30 +00:00
Lucy
564186a1f9 Fix/mistake reorganize (#2366)
* revert patch for string parsing fix due to out of date yq version

* reorganize conditionals

* use ng-container

* alertButton needs to be outside of template or container
2023-07-19 10:54:18 -06:00
Lucy
ccdb477dbb Fix/pwa refresh (#2359)
* fix ROFS error on os install

* attempt to prompt browser to update manifest data with id and modified start_url

* update icon with better shape for ios

* add additional options for refreshing on pwas

* add loader to pwa reload

* fix pwa icon and add icon for ios

* add logic for refresh display depending on if pwa

* fix build for ui; fix numeric parsing error on osx

* typo

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
2023-07-19 09:11:23 -06:00
Aiden McClelland
5f92f9e965 fix ROFS error on os install (#2364) 2023-07-19 08:50:02 -06:00
Aiden McClelland
c2db4390bb single platform builds (#2365) 2023-07-18 19:50:27 -06:00
Matt Hill
11c21b5259 Fix bugs (#2360)
* fix reset tor, delete http redirect, show message for tor http, update release notes

* potentially fix double req to registries

* change language around LAN and root ca

* link locally instead of docs
2023-07-18 12:38:52 -06:00
Aiden McClelland
3cd9e17e3f migrate tor address to https (#2358) 2023-07-18 12:08:34 -06:00
Aiden McClelland
1982ce796f update deno (#2361) 2023-07-18 11:59:00 -06:00
Aiden McClelland
825e18a551 version bump (#2357)
* version bump

* update welcome page

---------

Co-authored-by: Lucy Cifferello <12953208+elvece@users.noreply.github.com>
2023-07-14 14:58:19 -06:00
Aiden McClelland
9ff0128fb1 support http2 alpn handshake (#2354)
* support http2 alpn handshake

* fix protocol name

* switch to https for tor

* update setup wizard and main ui to accommodate https (#2356)

* update setup wizard and main ui to accommodate https

* update wording in download doc

* fix accidental conversion of tor https for services and allow ws still

* redirect to https if available

* fix replaces to only search at beginning and ignore localhost when checking for https

---------

Co-authored-by: Lucy <12953208+elvece@users.noreply.github.com>
2023-07-14 14:58:02 -06:00
Matt Hill
36c3617204 permit IP for cifs backups (#2342)
* permit IP for cifs backups

* allow ip instead of hostname (#2347)

---------

Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
2023-07-14 18:52:33 +00:00
Aiden McClelland
90a9db3a91 disable encryption for new raspi setups (#2348)
* disable encryption for new raspi setups

* use config instead of OS_ARCH

* fixes from testing
2023-07-14 18:30:52 +00:00
Aiden McClelland
59d6795d9e fix all references embassyd -> startd (#2355) 2023-07-14 18:29:20 +00:00
Aiden McClelland
2c07cf50fa better transfer progress (#2350)
* better transfer progress

* frontend for calculating transfer size

* fixes from testing

* improve internal api

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2023-07-13 19:40:53 -06:00
Aiden McClelland
cc0e525dc5 fix incoherent when removing (#2332)
* fix incoherent when removing

* include all packages for current dependents
2023-07-13 20:36:48 +00:00
Aiden McClelland
73bd973109 delete disk guid on reflash (#2334)
* delete disk guid on reflash

* delete unnecessary files before copy
2023-07-13 20:36:35 +00:00
Aiden McClelland
a7e501d874 pack compressed assets into single binary (#2344)
* pack compressed assets into single binary

* update naming

* tweaks

* fix build

* fix cargo lock

* rename CLI

* remove explicit ref name
2023-07-12 22:51:05 +00:00
Matt Hill
4676f0595c add reset password to UI (#2341) 2023-07-11 17:23:40 -06:00
kn0wmad
1d3d70e8d6 Update README.md (#2337)
* Update README.md

* Update README.md
2023-07-07 10:23:31 -06:00
Mariusz Kogen
bada88157e Auto-define the OS_ARCH variable. (#2329) 2023-06-30 20:34:10 +00:00
J H
13f3137701 fix: Make check-version posix compliant (#2331)
We found that we couldn't compile this on the mac arm os
2023-06-29 22:28:13 +00:00
Aiden McClelland
d3316ff6ff make it faster (#2328)
* make it faster

* better pipelining

* remove unnecessary test

* use tmpfs for debspawn

* don't download intermediate artifacts

* fix upload dir path

* switch to buildjet

* use buildjet cache on buildjet runner

* native builds when fast

* remove quotes

* always use buildjet cache

* remove newlines

* delete data after done with it

* skip aarch64 for fast dev builds

* don't tmpfs for arm

* don't try to remove debspawn tmpdir
2023-06-28 13:37:26 -06:00
kn0wmad
1b384e61b4 maint/minor UI typo fixes (#2330)
* Minor copy fixes

* Contact link fixes
2023-06-28 13:03:33 -06:00
Matt Hill
addea20cab Update README 2023-06-27 10:10:01 -06:00
Matt Hill
fac23f2f57 update README 2023-06-27 10:06:42 -06:00
Aiden McClelland
bffe1ccb3d use a more resourced runner for production builds (#2322) 2023-06-26 16:27:11 +00:00
Matt Hill
e577434fe6 Update bug-report.yml 2023-06-25 13:38:26 -06:00
Matt Hill
5d1d9827e4 Update bug-report.yml 2023-06-25 13:35:45 -06:00
135 changed files with 4625 additions and 3546 deletions

View File

@@ -12,25 +12,23 @@ body:
       options:
         - label: I have searched for [existing issues](https://github.com/start9labs/start-os/issues) that already report this problem.
           required: true
+  - type: input
+    attributes:
+      label: Server Hardware
+      description: On what hardware are you running StartOS? Please be as detailed as possible!
+      placeholder: Pi (8GB) w/ 32GB microSD & Samsung T7 SSD
+    validations:
+      required: true
   - type: input
     attributes:
       label: StartOS Version
       description: What version of StartOS are you running?
-      placeholder: e.g. 0.3.4.2
+      placeholder: e.g. 0.3.4.3
     validations:
       required: true
   - type: dropdown
     attributes:
-      label: Device
-      description: What device are you using to connect to your server?
-      options:
-        - Phone/tablet
-        - Laptop/Desktop
-    validations:
-      required: true
-  - type: dropdown
-    attributes:
-      label: Device OS
+      label: Client OS
       description: What operating system is your device running?
       options:
         - MacOS
@@ -45,7 +43,7 @@ body:
           required: true
   - type: input
     attributes:
-      label: Device OS Version
+      label: Client OS Version
       description: What version is your device OS?
     validations:
       required: true

View File

@@ -1,29 +0,0 @@
# This folder contains GitHub Actions workflows for building the project
## backend
Runs: manually (on: workflow_dispatch) or called by product-pipeline (on: workflow_call)
This workflow uses the docker/setup-buildx-action@v1 action to prepare the environment for aarch64 cross compilation using docker buildx.
When execution of aarch64 containers is required, the docker/setup-qemu-action@v1 action is added.
A matrix-strategy has been used to build for both x86_64 and aarch64 platforms in parallel.
### Running unit tests
Unit tests are run using [cargo-nextest](https://nexte.st/). First the sources are (cross-)compiled and archived. The archive is then run on the correct platform.
## frontend
Runs: manually (on: workflow_dispatch) or called by product-pipeline (on: workflow_call)
This workflow builds the frontends.
## product
Runs: when a pull request targets the master or next branch and when a change to the master or next branch is made
This workflow builds everything, re-using the backend and frontend workflows.
The download and extraction order of artifacts is relevant to `make`, as it checks the file timestamps to decide which targets need to be executed.
Result: eos.img
## a note on uploading artifacts
Artifacts are used to share data between jobs. File permissions are not maintained during artifact upload. Where file permissions are relevant, a workaround using tar has been used. See [here](https://github.com/actions/upload-artifact#maintaining-file-permissions-and-case-sensitive-files).
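The tar workaround described above can be sketched as a local round-trip. This is a minimal illustration, not the project's actual workflow steps; the file names are made up, and the `rm`/re-extract pair stands in for the artifact upload/download boundary:

```shell
# upload-artifact drops file modes, so an executable bit would be lost;
# tar the files first and upload the tarball instead.
mkdir -p demo && printf '#!/bin/sh\necho ok\n' > demo/run.sh
chmod +x demo/run.sh
tar -cf artifact.tar demo        # permissions are preserved inside the tar
rm -rf demo                      # simulate the upload/download boundary
tar -xf artifact.tar             # downstream job untars after download
./demo/run.sh                    # the exec bit survived the round-trip
```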

View File

@@ -1,233 +0,0 @@
name: Backend
on:
workflow_call:
workflow_dispatch:
env:
RUST_VERSION: "1.67.1"
ENVIRONMENT: "dev"
jobs:
build_libs:
name: Build libs
strategy:
fail-fast: false
matrix:
target: [x86_64, aarch64]
include:
- target: x86_64
snapshot_command: ./build-v8-snapshot.sh
artifact_name: js_snapshot
artifact_path: libs/js_engine/src/artifacts/JS_SNAPSHOT.bin
- target: aarch64
snapshot_command: ./build-arm-v8-snapshot.sh
artifact_name: arm_js_snapshot
artifact_path: libs/js_engine/src/artifacts/ARM_JS_SNAPSHOT.bin
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
with:
submodules: recursive
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
if: ${{ matrix.target == 'aarch64' }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
if: ${{ matrix.target == 'aarch64' }}
- name: "Install Rust"
run: |
rustup toolchain install ${{ env.RUST_VERSION }} --profile minimal --no-self-update
rustup default ${{ inputs.rust }}
shell: bash
if: ${{ matrix.target == 'x86_64' }}
- uses: actions/cache@v3
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
libs/target/
key: ${{ runner.os }}-cargo-libs-${{ matrix.target }}-${{ hashFiles('libs/Cargo.lock') }}
- name: Build v8 snapshot
run: ${{ matrix.snapshot_command }}
working-directory: libs
- uses: actions/upload-artifact@v3
with:
name: ${{ matrix.artifact_name }}
path: ${{ matrix.artifact_path }}
build_backend:
name: Build backend
strategy:
fail-fast: false
matrix:
target: [x86_64, aarch64]
include:
- target: x86_64
snapshot_download: js_snapshot
- target: aarch64
snapshot_download: arm_js_snapshot
runs-on: ubuntu-latest
timeout-minutes: 120
needs: build_libs
steps:
- uses: actions/checkout@v3
with:
submodules: recursive
- name: Download ${{ matrix.snapshot_download }} artifact
uses: actions/download-artifact@v3
with:
name: ${{ matrix.snapshot_download }}
path: libs/js_engine/src/artifacts/
- name: "Install Rust"
run: |
rustup toolchain install ${{ env.RUST_VERSION }} --profile minimal --no-self-update
rustup default ${{ inputs.rust }}
shell: bash
if: ${{ matrix.target == 'x86_64' }}
- uses: actions/cache@v3
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
backend/target/
key: ${{ runner.os }}-cargo-backend-${{ matrix.target }}-${{ hashFiles('backend/Cargo.lock') }}
- name: Install dependencies
run: |
sudo apt-get update
sudo apt-get install libavahi-client-dev
if: ${{ matrix.target == 'x86_64' }}
- name: Check Git Hash
run: ./check-git-hash.sh
- name: Check Environment
run: ./check-environment.sh
- name: Build backend
run: make ARCH=${{ matrix.target }} backend
- name: 'Tar files to preserve file permissions'
run: make ARCH=${{ matrix.target }} backend-${{ matrix.target }}.tar
- uses: actions/upload-artifact@v3
with:
name: backend-${{ matrix.target }}
path: backend-${{ matrix.target }}.tar
- name: Install nextest
uses: taiki-e/install-action@nextest
- name: Build and archive tests
run: cargo nextest archive --archive-file nextest-archive-${{ matrix.target }}.tar.zst --target ${{ matrix.target }}-unknown-linux-gnu
working-directory: backend
if: ${{ matrix.target == 'x86_64' }}
- name: Build and archive tests
run: |
docker run --rm \
-v "$HOME/.cargo/registry":/root/.cargo/registry \
-v "$(pwd)":/home/rust/src \
-P start9/rust-arm-cross:aarch64 \
sh -c 'cd /home/rust/src/backend &&
rustup install ${{ env.RUST_VERSION }} &&
rustup override set ${{ env.RUST_VERSION }} &&
rustup target add aarch64-unknown-linux-gnu &&
curl -LsSf https://get.nexte.st/latest/linux | tar zxf - -C ${CARGO_HOME:-~/.cargo}/bin &&
cargo nextest archive --archive-file nextest-archive-${{ matrix.target }}.tar.zst --target ${{ matrix.target }}-unknown-linux-gnu'
if: ${{ matrix.target == 'aarch64' }}
- name: Reset permissions
run: sudo chown -R $USER target
working-directory: backend
if: ${{ matrix.target == 'aarch64' }}
- name: Upload archive to workflow
uses: actions/upload-artifact@v3
with:
name: nextest-archive-${{ matrix.target }}
path: backend/nextest-archive-${{ matrix.target }}.tar.zst
run_tests_backend:
name: Test backend
strategy:
fail-fast: false
matrix:
target: [x86_64, aarch64]
include:
- target: x86_64
- target: aarch64
runs-on: ubuntu-latest
timeout-minutes: 60
needs: build_backend
env:
CARGO_TERM_COLOR: always
steps:
- uses: actions/checkout@v3
with:
submodules: recursive
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
if: ${{ matrix.target == 'aarch64' }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
if: ${{ matrix.target == 'aarch64' }}
- run: mkdir -p ~/.cargo/bin
if: ${{ matrix.target == 'x86_64' }}
- name: Install nextest
uses: taiki-e/install-action@v2
with:
tool: nextest@0.9.47
if: ${{ matrix.target == 'x86_64' }}
- name: Download archive
uses: actions/download-artifact@v3
with:
name: nextest-archive-${{ matrix.target }}
- name: Download nextest (aarch64)
run: wget -O nextest-aarch64.tar.gz https://get.nexte.st/0.9.47/linux-arm
if: ${{ matrix.target == 'aarch64' }}
- name: Run tests
run: |
${CARGO_HOME:-~/.cargo}/bin/cargo-nextest nextest run --no-fail-fast --archive-file nextest-archive-${{ matrix.target }}.tar.zst \
--filter-expr 'not (test(system::test_get_temp) | test(net::tor::test) | test(system::test_get_disk_usage) | test(net::ssl::certificate_details_persist) | test(net::ssl::ca_details_persist))'
if: ${{ matrix.target == 'x86_64' }}
- name: Run tests
run: |
docker run --rm --platform linux/arm64/v8 \
-v "/home/runner/.cargo/registry":/usr/local/cargo/registry \
-v "$(pwd)":/home/rust/src \
-e CARGO_TERM_COLOR=${{ env.CARGO_TERM_COLOR }} \
-P ubuntu:20.04 \
sh -c '
apt-get update &&
apt-get install -y ca-certificates &&
apt-get install -y rsync &&
cd /home/rust/src &&
mkdir -p ~/.cargo/bin &&
tar -zxvf nextest-aarch64.tar.gz -C ${CARGO_HOME:-~/.cargo}/bin &&
${CARGO_HOME:-~/.cargo}/bin/cargo-nextest nextest run --archive-file nextest-archive-${{ matrix.target }}.tar.zst \
--filter-expr "not (test(system::test_get_temp) | test(net::tor::test) | test(system::test_get_disk_usage) | test(net::ssl::certificate_details_persist) | test(net::ssl::ca_details_persist))"'
if: ${{ matrix.target == 'aarch64' }}

View File

@@ -1,46 +0,0 @@
name: Frontend
on:
workflow_call:
workflow_dispatch:
env:
NODEJS_VERSION: '18.15.0'
ENVIRONMENT: "dev"
jobs:
frontend:
name: Build frontend
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
with:
submodules: recursive
- uses: actions/setup-node@v3
with:
node-version: ${{ env.NODEJS_VERSION }}
- name: Get npm cache directory
id: npm-cache-dir
run: |
echo "dir=$(npm config get cache)" >> $GITHUB_OUTPUT
- uses: actions/cache@v3
id: npm-cache
with:
path: ${{ steps.npm-cache-dir.outputs.dir }}
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Build frontends
run: make frontends
- name: 'Tar files to preserve file permissions'
run: tar -cvf frontend.tar ENVIRONMENT.txt GIT_HASH.txt VERSION.txt frontend/dist frontend/config.json
- uses: actions/upload-artifact@v3
with:
name: frontend
path: frontend.tar

View File

@@ -1,37 +0,0 @@
name: Reusable Workflow
on:
workflow_call:
inputs:
build_command:
required: true
type: string
artifact_name:
required: true
type: string
artifact_path:
required: true
type: string
env:
ENVIRONMENT: "dev"
jobs:
generic_build_job:
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
with:
submodules: recursive
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Build image
run: ${{ inputs.build_command }}
- uses: actions/upload-artifact@v3
with:
name: ${{ inputs.artifact_name }}
path: ${{ inputs.artifact_path }}

View File

@@ -8,10 +8,26 @@ on:
         type: choice
         description: Environment
         options:
-          - "<NONE>"
+          - NONE
           - dev
           - unstable
           - dev-unstable
+      runner:
+        type: choice
+        description: Runner
+        options:
+          - standard
+          - fast
+      platform:
+        type: choice
+        description: Platform
+        options:
+          - ALL
+          - x86_64
+          - x86_64-nonfree
+          - aarch64
+          - aarch64-nonfree
+          - raspberrypi
   push:
     branches:
       - master
@@ -23,31 +39,65 @@ on:
 env:
   NODEJS_VERSION: "18.15.0"
-  ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''<NONE>''] }}'
+  ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
 jobs:
-  dpkg:
-    name: Build dpkg
+  all:
+    name: Build
     strategy:
       fail-fast: false
       matrix:
-        platform:
-          [x86_64, x86_64-nonfree, aarch64, aarch64-nonfree, raspberrypi]
-    runs-on: ubuntu-22.04
+        platform: >-
+          ${{
+            fromJson(
+              format(
+                '[
+                  ["{0}"],
+                  ["x86_64", "x86_64-nonfree", "aarch64", "aarch64-nonfree", "raspberrypi"]
+                ]',
+                github.event.inputs.platform || 'ALL'
+              )
+            )[(github.event.inputs.platform || 'ALL') == 'ALL']
+          }}
+    runs-on: >-
+      ${{
+        fromJson(
+          format(
+            '["ubuntu-22.04", "{0}"]',
+            fromJson('{
+              "x86_64": ["buildjet-32vcpu-ubuntu-2204", "buildjet-32vcpu-ubuntu-2204"],
+              "x86_64-nonfree": ["buildjet-32vcpu-ubuntu-2204", "buildjet-32vcpu-ubuntu-2204"],
+              "aarch64": ["buildjet-16vcpu-ubuntu-2204-arm", "buildjet-32vcpu-ubuntu-2204-arm"],
+              "aarch64-nonfree": ["buildjet-16vcpu-ubuntu-2204-arm", "buildjet-32vcpu-ubuntu-2204-arm"],
+              "raspberrypi": ["buildjet-16vcpu-ubuntu-2204-arm", "buildjet-32vcpu-ubuntu-2204-arm"],
+            }')[matrix.platform][github.event.inputs.platform == matrix.platform]
+          )
+        )[github.event.inputs.runner == 'fast']
+      }}
     steps:
+      - name: Free space
+        run: df -h && rm -rf /opt/hostedtoolcache* && df -h
+        if: ${{ github.event.inputs.runner != 'fast' }}
+      - run: |
+          sudo mount -t tmpfs tmpfs .
+        if: ${{ github.event.inputs.runner == 'fast' && (matrix.platform == 'x86_64' || matrix.platform == 'x86_64-nonfree' || github.event.inputs.platform == matrix.platform) }}
       - uses: actions/checkout@v3
         with:
           repository: Start9Labs/embassy-os-deb
+          path: embassy-os-deb
       - uses: actions/checkout@v3
         with:
           submodules: recursive
-          path: embassyos-0.3.x
+          path: embassy-os-deb/embassyos-0.3.x
       - run: |
           cp -r debian embassyos-0.3.x/
           VERSION=0.3.x ./control.sh
-          cp embassyos-0.3.x/backend/embassyd.service embassyos-0.3.x/debian/embassyos.embassyd.service
-          cp embassyos-0.3.x/backend/embassy-init.service embassyos-0.3.x/debian/embassyos.embassy-init.service
+          cp embassyos-0.3.x/backend/startd.service embassyos-0.3.x/debian/embassyos.startd.service
+        working-directory: embassy-os-deb
       - uses: actions/setup-node@v3
         with:
@@ -79,29 +129,16 @@ jobs:
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v2
-      - name: Run build
+      - name: Run dpkg build
+        working-directory: embassy-os-deb
         run: "make VERSION=0.3.x TAG=${{ github.ref_name }}"
         env:
           OS_ARCH: ${{ matrix.platform }}
-      - uses: actions/upload-artifact@v3
-        with:
-          name: ${{ matrix.platform }}.deb
-          path: embassyos_0.3.x-1_*.deb
-  iso:
-    name: Build iso
-    strategy:
-      fail-fast: false
-      matrix:
-        platform:
-          [x86_64, x86_64-nonfree, aarch64, aarch64-nonfree, raspberrypi]
-    runs-on: ubuntu-22.04
-    needs: [dpkg]
-    steps:
       - uses: actions/checkout@v3
         with:
           repository: Start9Labs/startos-image-recipes
+          path: startos-image-recipes
       - name: Install dependencies
         run: |
@@ -115,56 +152,57 @@ jobs:
         run: |
           sudo mkdir -p /etc/debspawn/
           echo "AllowUnsafePermissions=true" | sudo tee /etc/debspawn/global.toml
+          sudo mkdir -p /var/tmp/debspawn
+      - run: sudo mount -t tmpfs tmpfs /var/tmp/debspawn
+        if: ${{ github.event.inputs.runner == 'fast' && (matrix.platform == 'x86_64' || matrix.platform == 'x86_64-nonfree') }}
       - uses: actions/cache@v3
         with:
           path: /var/lib/debspawn
-          key: ${{ runner.os }}-debspawn-init
-      - run: "mkdir -p overlays/deb"
-      - name: Download dpkg
-        uses: actions/download-artifact@v3
-        with:
-          name: ${{ matrix.platform }}.deb
-          path: overlays/deb
-      - name: Run build
+          key: ${{ runner.os }}-${{ matrix.platform }}-debspawn-init
+      - run: "mkdir -p startos-image-recipes/overlays/deb"
+      - run: "mv embassy-os-deb/embassyos_0.3.x-1_*.deb startos-image-recipes/overlays/deb/"
+      - run: "rm -rf embassy-os-deb ${{ steps.npm-cache-dir.outputs.dir }} $HOME/.cargo"
+      - name: Run iso build
+        working-directory: startos-image-recipes
         run: |
           ./run-local-build.sh ${{ matrix.platform }}
       - uses: actions/upload-artifact@v3
         with:
           name: ${{ matrix.platform }}.squashfs
-          path: results/*.squashfs
+          path: startos-image-recipes/results/*.squashfs
       - uses: actions/upload-artifact@v3
         with:
           name: ${{ matrix.platform }}.iso
-          path: results/*.iso
+          path: startos-image-recipes/results/*.iso
         if: ${{ matrix.platform != 'raspberrypi' }}
-  image:
-    name: Build image
-    runs-on: ubuntu-22.04
-    timeout-minutes: 60
-    needs: [iso]
-    steps:
       - uses: actions/checkout@v3
         with:
           submodules: recursive
-      - name: Download raspberrypi.squashfs artifact
-        uses: actions/download-artifact@v3
-        with:
-          name: raspberrypi.squashfs
-      - run: mv startos-*_raspberrypi.squashfs startos.raspberrypi.squashfs
+          path: start-os
+        if: ${{ matrix.platform == 'raspberrypi' }}
+      - run: "mv startos-image-recipes/results/startos-*_raspberrypi.squashfs start-os/startos.raspberrypi.squashfs"
         if: ${{ matrix.platform == 'raspberrypi' }}
+      - run: rm -rf startos-image-recipes
       - name: Build image
+        working-directory: start-os
         run: make startos_raspberrypi.img
+        if: ${{ matrix.platform == 'raspberrypi' }}
       - uses: actions/upload-artifact@v3
         with:
           name: raspberrypi.img
-          path: startos-*_raspberrypi.img
+          path: start-os/startos-*_raspberrypi.img
+        if: ${{ matrix.platform == 'raspberrypi' }}
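The `ENVIRONMENT` expression above uses a common GitHub Actions idiom: the expression language has no ternary operator, so a two-element array is built with `format` and indexed with a boolean comparison (false selects element 0, true selects element 1), mapping the `NONE` input to an empty string. A minimal POSIX-shell sketch of the same mapping (variable names here are illustrative, not from the workflow):

```shell
# Input "NONE" must yield an empty ENVIRONMENT; anything else passes through
# (defaulting to "dev" when unset), mirroring
# fromJson(format('["{0}", ""]', env || 'dev'))[env == 'NONE'].
environment="NONE"                  # stands in for github.event.inputs.environment
first="${environment:-dev}"         # element 0 of the two-element array
second=""                           # element 1 of the two-element array
if [ "$environment" = "NONE" ]; then ENVIRONMENT="$second"; else ENVIRONMENT="$first"; fi
echo "ENVIRONMENT='$ENVIRONMENT'"   # prints ENVIRONMENT='' for the NONE input
```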

View File

@@ -3,20 +3,20 @@ ARCH := $(shell if [ "$(OS_ARCH)" = "raspberrypi" ]; then echo aarch64; else ech
ENVIRONMENT_FILE = $(shell ./check-environment.sh) ENVIRONMENT_FILE = $(shell ./check-environment.sh)
GIT_HASH_FILE = $(shell ./check-git-hash.sh) GIT_HASH_FILE = $(shell ./check-git-hash.sh)
VERSION_FILE = $(shell ./check-version.sh) VERSION_FILE = $(shell ./check-version.sh)
EMBASSY_BINS := backend/target/$(ARCH)-unknown-linux-gnu/release/embassyd backend/target/$(ARCH)-unknown-linux-gnu/release/embassy-init backend/target/$(ARCH)-unknown-linux-gnu/release/embassy-cli backend/target/$(ARCH)-unknown-linux-gnu/release/embassy-sdk backend/target/$(ARCH)-unknown-linux-gnu/release/avahi-alias libs/target/aarch64-unknown-linux-musl/release/embassy_container_init libs/target/x86_64-unknown-linux-musl/release/embassy_container_init EMBASSY_BINS := backend/target/$(ARCH)-unknown-linux-gnu/release/startbox libs/target/aarch64-unknown-linux-musl/release/embassy_container_init libs/target/x86_64-unknown-linux-musl/release/embassy_container_init
EMBASSY_UIS := frontend/dist/ui frontend/dist/setup-wizard frontend/dist/diagnostic-ui frontend/dist/install-wizard EMBASSY_UIS := frontend/dist/raw/ui frontend/dist/raw/setup-wizard frontend/dist/raw/diagnostic-ui frontend/dist/raw/install-wizard
BUILD_SRC := $(shell find build) BUILD_SRC := $(shell find build)
EMBASSY_SRC := backend/embassyd.service backend/embassy-init.service $(EMBASSY_UIS) $(BUILD_SRC) EMBASSY_SRC := backend/startd.service $(BUILD_SRC)
COMPAT_SRC := $(shell find system-images/compat/ -not -path 'system-images/compat/target/*' -and -not -name *.tar -and -not -name target) COMPAT_SRC := $(shell find system-images/compat/ -not -path 'system-images/compat/target/*' -and -not -name *.tar -and -not -name target)
UTILS_SRC := $(shell find system-images/utils/ -not -name *.tar) UTILS_SRC := $(shell find system-images/utils/ -not -name *.tar)
BINFMT_SRC := $(shell find system-images/binfmt/ -not -name *.tar) BINFMT_SRC := $(shell find system-images/binfmt/ -not -name *.tar)
BACKEND_SRC := $(shell find backend/src) $(shell find backend/migrations) $(shell find patch-db/*/src) $(shell find libs/*/src) libs/*/Cargo.toml backend/Cargo.toml backend/Cargo.lock BACKEND_SRC := $(shell find backend/src) $(shell find backend/migrations) $(shell find patch-db/*/src) $(shell find libs/*/src) libs/*/Cargo.toml backend/Cargo.toml backend/Cargo.lock frontend/dist/static
FRONTEND_SHARED_SRC := $(shell find frontend/projects/shared) $(shell ls -p frontend/ | grep -v / | sed 's/^/frontend\//g') frontend/package.json frontend/node_modules frontend/config.json patch-db/client/dist frontend/patchdb-ui-seed.json FRONTEND_SHARED_SRC := $(shell find frontend/projects/shared) $(shell ls -p frontend/ | grep -v / | sed 's/^/frontend\//g') frontend/package.json frontend/node_modules frontend/config.json patch-db/client/dist frontend/patchdb-ui-seed.json
FRONTEND_UI_SRC := $(shell find frontend/projects/ui) FRONTEND_UI_SRC := $(shell find frontend/projects/ui)
FRONTEND_SETUP_WIZARD_SRC := $(shell find frontend/projects/setup-wizard) FRONTEND_SETUP_WIZARD_SRC := $(shell find frontend/projects/setup-wizard)
FRONTEND_DIAGNOSTIC_UI_SRC := $(shell find frontend/projects/diagnostic-ui) FRONTEND_DIAGNOSTIC_UI_SRC := $(shell find frontend/projects/diagnostic-ui)
FRONTEND_INSTALL_WIZARD_SRC := $(shell find frontend/projects/install-wizard) FRONTEND_INSTALL_WIZARD_SRC := $(shell find frontend/projects/install-wizard)
PATCH_DB_CLIENT_SRC := $(shell find patch-db/client -not -path patch-db/client/dist) PATCH_DB_CLIENT_SRC := $(shell find patch-db/client -not -path patch-db/client/dist -and -not -path patch-db/client/node_modules)
GZIP_BIN := $(shell which pigz || which gzip) GZIP_BIN := $(shell which pigz || which gzip)
ALL_TARGETS := $(EMBASSY_BINS) system-images/compat/docker-images/$(ARCH).tar system-images/utils/docker-images/$(ARCH).tar system-images/binfmt/docker-images/$(ARCH).tar $(EMBASSY_SRC) $(shell if [ "$(OS_ARCH)" = "raspberrypi" ]; then echo cargo-deps/aarch64-unknown-linux-gnu/release/pi-beep; fi) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE) ALL_TARGETS := $(EMBASSY_BINS) system-images/compat/docker-images/$(ARCH).tar system-images/utils/docker-images/$(ARCH).tar system-images/binfmt/docker-images/$(ARCH).tar $(EMBASSY_SRC) $(shell if [ "$(OS_ARCH)" = "raspberrypi" ]; then echo cargo-deps/aarch64-unknown-linux-gnu/release/pi-beep; fi) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE)
@@ -24,9 +24,11 @@ ifeq ($(REMOTE),)
mkdir = mkdir -p $1 mkdir = mkdir -p $1
rm = rm -rf $1 rm = rm -rf $1
cp = cp -r $1 $2 cp = cp -r $1 $2
ln = ln -sf $1 $2
else else
mkdir = ssh $(REMOTE) 'mkdir -p $1' mkdir = ssh $(REMOTE) 'mkdir -p $1'
rm = ssh $(REMOTE) 'sudo rm -rf $1' rm = ssh $(REMOTE) 'sudo rm -rf $1'
ln = ssh $(REMOTE) 'sudo ln -sf $1 $2'
define cp define cp
tar --transform "s|^$1|x|" -czv -f- $1 | ssh $(REMOTE) "sudo tar --transform 's|^x|$2|' -xzv -f- -C /" tar --transform "s|^$1|x|" -czv -f- $1 | ssh $(REMOTE) "sudo tar --transform 's|^x|$2|' -xzv -f- -C /"
endef endef
@@ -71,10 +73,12 @@ startos_raspberrypi.img: $(BUILD_SRC) startos.raspberrypi.squashfs $(VERSION_FIL
# For creating os images. DO NOT USE # For creating os images. DO NOT USE
install: $(ALL_TARGETS) install: $(ALL_TARGETS)
$(call mkdir,$(DESTDIR)/usr/bin) $(call mkdir,$(DESTDIR)/usr/bin)
$(call cp,backend/target/$(ARCH)-unknown-linux-gnu/release/embassy-init,$(DESTDIR)/usr/bin/embassy-init) $(call cp,backend/target/$(ARCH)-unknown-linux-gnu/release/startbox,$(DESTDIR)/usr/bin/startbox)
$(call cp,backend/target/$(ARCH)-unknown-linux-gnu/release/embassyd,$(DESTDIR)/usr/bin/embassyd) $(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/startd)
$(call cp,backend/target/$(ARCH)-unknown-linux-gnu/release/embassy-cli,$(DESTDIR)/usr/bin/embassy-cli) $(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/start-cli)
$(call cp,backend/target/$(ARCH)-unknown-linux-gnu/release/avahi-alias,$(DESTDIR)/usr/bin/avahi-alias) $(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/start-sdk)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/avahi-alias)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/embassy-cli)
if [ "$(OS_ARCH)" = "raspberrypi" ]; then $(call cp,cargo-deps/aarch64-unknown-linux-gnu/release/pi-beep,$(DESTDIR)/usr/bin/pi-beep); fi if [ "$(OS_ARCH)" = "raspberrypi" ]; then $(call cp,cargo-deps/aarch64-unknown-linux-gnu/release/pi-beep,$(DESTDIR)/usr/bin/pi-beep); fi
$(call mkdir,$(DESTDIR)/usr/lib) $(call mkdir,$(DESTDIR)/usr/lib)
@@ -94,22 +98,14 @@ install: $(ALL_TARGETS)
$(call cp,system-images/utils/docker-images/$(ARCH).tar,$(DESTDIR)/usr/lib/embassy/system-images/utils.tar) $(call cp,system-images/utils/docker-images/$(ARCH).tar,$(DESTDIR)/usr/lib/embassy/system-images/utils.tar)
$(call cp,system-images/binfmt/docker-images/$(ARCH).tar,$(DESTDIR)/usr/lib/embassy/system-images/binfmt.tar) $(call cp,system-images/binfmt/docker-images/$(ARCH).tar,$(DESTDIR)/usr/lib/embassy/system-images/binfmt.tar)
$(call mkdir,$(DESTDIR)/var/www/html)
$(call cp,frontend/dist/diagnostic-ui,$(DESTDIR)/var/www/html/diagnostic)
$(call cp,frontend/dist/setup-wizard,$(DESTDIR)/var/www/html/setup)
$(call cp,frontend/dist/install-wizard,$(DESTDIR)/var/www/html/install)
$(call cp,frontend/dist/ui,$(DESTDIR)/var/www/html/main)
$(call cp,index.html,$(DESTDIR)/var/www/html/index.html)
update-overlay:
@echo "\033[33m!!! THIS WILL ONLY REFLASH YOUR DEVICE IN MEMORY !!!\033[0m"
@echo "\033[33mALL CHANGES WILL BE REVERTED IF YOU RESTART THE DEVICE\033[0m"
@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
-@if [ "`ssh $(REMOTE) 'cat /usr/lib/embassy/VERSION.txt'`" != "`cat ./VERSION.txt`" ]; then >&2 echo "Embassy requires migrations: update-overlay is unavailable." && false; fi
+@if [ "`ssh $(REMOTE) 'cat /usr/lib/embassy/VERSION.txt'`" != "`cat ./VERSION.txt`" ]; then >&2 echo "StartOS requires migrations: update-overlay is unavailable." && false; fi
-@if ssh $(REMOTE) "pidof embassy-init"; then >&2 echo "Embassy in INIT: update-overlay is unavailable." && false; fi
-ssh $(REMOTE) "sudo systemctl stop embassyd"
+ssh $(REMOTE) "sudo systemctl stop startd"
$(MAKE) install REMOTE=$(REMOTE) OS_ARCH=$(OS_ARCH)
-ssh $(REMOTE) "sudo systemctl start embassyd"
+ssh $(REMOTE) "sudo systemctl start startd"
update:
@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
@@ -132,11 +128,6 @@ system-images/utils/docker-images/aarch64.tar system-images/utils/docker-images/
system-images/binfmt/docker-images/aarch64.tar system-images/binfmt/docker-images/x86_64.tar: $(BINFMT_SRC) system-images/binfmt/docker-images/aarch64.tar system-images/binfmt/docker-images/x86_64.tar: $(BINFMT_SRC)
cd system-images/binfmt && make
raspios.img:
wget --continue https://downloads.raspberrypi.org/raspios_lite_arm64/images/raspios_lite_arm64-2022-01-28/2022-01-28-raspios-bullseye-arm64-lite.zip
unzip 2022-01-28-raspios-bullseye-arm64-lite.zip
mv 2022-01-28-raspios-bullseye-arm64-lite.img raspios.img
snapshots: libs/snapshot_creator/Cargo.toml
cd libs/ && ./build-v8-snapshot.sh
cd libs/ && ./build-arm-v8-snapshot.sh
@@ -148,18 +139,21 @@ $(EMBASSY_BINS): $(BACKEND_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) frontend/pa
frontend/node_modules: frontend/package.json
npm --prefix frontend ci
-frontend/dist/ui: $(FRONTEND_UI_SRC) $(FRONTEND_SHARED_SRC) $(ENVIRONMENT_FILE)
+frontend/dist/raw/ui: $(FRONTEND_UI_SRC) $(FRONTEND_SHARED_SRC) $(ENVIRONMENT_FILE)
npm --prefix frontend run build:ui
-frontend/dist/setup-wizard: $(FRONTEND_SETUP_WIZARD_SRC) $(FRONTEND_SHARED_SRC) $(ENVIRONMENT_FILE)
+frontend/dist/raw/setup-wizard: $(FRONTEND_SETUP_WIZARD_SRC) $(FRONTEND_SHARED_SRC) $(ENVIRONMENT_FILE)
npm --prefix frontend run build:setup
-frontend/dist/diagnostic-ui: $(FRONTEND_DIAGNOSTIC_UI_SRC) $(FRONTEND_SHARED_SRC) $(ENVIRONMENT_FILE)
+frontend/dist/raw/diagnostic-ui: $(FRONTEND_DIAGNOSTIC_UI_SRC) $(FRONTEND_SHARED_SRC) $(ENVIRONMENT_FILE)
npm --prefix frontend run build:dui
-frontend/dist/install-wizard: $(FRONTEND_INSTALL_WIZARD_SRC) $(FRONTEND_SHARED_SRC) $(ENVIRONMENT_FILE)
+frontend/dist/raw/install-wizard: $(FRONTEND_INSTALL_WIZARD_SRC) $(FRONTEND_SHARED_SRC) $(ENVIRONMENT_FILE)
npm --prefix frontend run build:install-wiz
frontend/dist/static: $(EMBASSY_UIS)
./compress-uis.sh
frontend/config.json: $(GIT_HASH_FILE) frontend/config-sample.json
jq '.useMocks = false' frontend/config-sample.json > frontend/config.json
jq '.packageArch = "$(ARCH)"' frontend/config.json > frontend/config.json.tmp
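A note on the two jq invocations in this recipe: jq cannot edit a file in place, and `jq ... file > file` would truncate the input before jq reads it, which is why the recipe writes to `config.json.tmp` first. A minimal sketch of the pattern, assuming `jq` is installed (the file contents here are illustrative, not the real config):

```shell
# Demonstrates the temp-file pattern the Makefile uses: jq reads one
# file and writes another; redirecting back onto the same input would
# truncate it before jq reads a single byte.
set -e
tmp=$(mktemp -d)
printf '{"useMocks": true, "packageArch": ""}\n' > "$tmp/config-sample.json"

# First derivation: sample -> config.json (different input and output files)
jq '.useMocks = false' "$tmp/config-sample.json" > "$tmp/config.json"

# Further edit of the same file: go through a .tmp file, then move it into place
jq '.packageArch = "x86_64"' "$tmp/config.json" > "$tmp/config.json.tmp"
mv "$tmp/config.json.tmp" "$tmp/config.json"

mocks=$(jq -r '.useMocks' "$tmp/config.json")
arch=$(jq -r '.packageArch' "$tmp/config.json")
echo "useMocks=$mocks packageArch=$arch"
```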
@@ -186,13 +180,10 @@ backend-$(ARCH).tar: $(EMBASSY_BINS)
frontends: $(EMBASSY_UIS)
# this is a convenience step to build the UI
-ui: frontend/dist/ui
+ui: frontend/dist/raw/ui
# used by github actions
backend: $(EMBASSY_BINS)
cargo-deps/aarch64-unknown-linux-gnu/release/nc-broadcast:
./build-cargo-dep.sh nc-broadcast
cargo-deps/aarch64-unknown-linux-gnu/release/pi-beep:
-./build-cargo-dep.sh pi-beep
+ARCH=aarch64 ./build-cargo-dep.sh pi-beep

README.md

@@ -1,51 +1,81 @@
-# StartOS
-[![version](https://img.shields.io/github/v/tag/Start9Labs/start-os?color=success)](https://github.com/Start9Labs/start-os/releases)
-[![build](https://github.com/Start9Labs/start-os/actions/workflows/startos-iso.yaml/badge.svg)](https://github.com/Start9Labs/start-os/actions/workflows/startos-iso.yaml)
-[![community](https://img.shields.io/badge/community-matrix-yellow)](https://matrix.to/#/#community:matrix.start9labs.com)
-[![community](https://img.shields.io/badge/community-telegram-informational)](https://t.me/start9_labs)
-[![support](https://img.shields.io/badge/support-docs-important)](https://docs.start9.com)
-[![developer](https://img.shields.io/badge/developer-matrix-blueviolet)](https://matrix.to/#/#community-dev:matrix.start9labs.com)
-[![website](https://img.shields.io/website?down_color=lightgrey&down_message=offline&up_color=green&up_message=online&url=https%3A%2F%2Fstart9.com)](https://start9.com)
-[![mastodon](https://img.shields.io/mastodon/follow/000000001?domain=https%3A%2F%2Fmastodon.start9labs.com&label=Follow&style=social)](http://mastodon.start9labs.com)
-[![twitter](https://img.shields.io/twitter/follow/start9labs?label=Follow)](https://twitter.com/start9labs)
-
-### _Welcome to the era of Sovereign Computing_ ###
-
-StartOS is a browser-based, graphical operating system for a personal server. StartOS facilitates the discovery, installation, network configuration, service configuration, data backup, dependency management, and health monitoring of self-hosted software services. It is the most advanced, secure, reliable, and user friendly personal server OS in the world.
-
-## Running StartOS
-There are multiple ways to get your hands on StartOS.
-
-### :moneybag: Buy a Start9 server
-This is the most convenient option. Simply [buy a server](https://start9.com) from Start9 and plug it in. Depending on where you live, shipping costs and import duties will vary.
-
-### :construction_worker: Build your own server
-This option is easier than you might imagine, and there are 4 reasons why you might prefer it:
-1. You already have your own hardware.
-1. You want to save on shipping costs.
-1. You prefer not to divulge your physical address.
-1. You just like building things.
-
-To pursue this option, follow one of our [DIY guides](https://start9.com/latest/diy).
-
-### :hammer_and_wrench: Build StartOS from Source
-StartOS can be built from source, for personal use, for free. A detailed guide for doing so can be found [here](https://github.com/Start9Labs/start-os/blob/master/build/README.md).
-
-## :heart: Contributing
-There are multiple ways to contribute: work directly on StartOS, package a service for the marketplace, or help with documentation and guides. To learn more about contributing, see [here](https://docs.start9.com/latest/contribute/) or [here](https://github.com/Start9Labs/start-os/blob/master/CONTRIBUTING.md).
-
-## UI Screenshots
-<p align="center">
-  <img src="assets/StartOS.png" alt="StartOS" width="85%">
-</p>
-<p align="center">
-  <img src="assets/preferences.png" alt="StartOS Preferences" width="49%">
-  <img src="assets/ghost.png" alt="StartOS Ghost Service" width="49%">
-</p>
-<p align="center">
-  <img src="assets/synapse-health-check.png" alt="StartOS Synapse Health Checks" width="49%">
-  <img src="assets/sideload.png" alt="StartOS Sideload Service" width="49%">
-</p>
+<div align="center">
+  <img src="frontend/projects/shared/assets/img/icon_pwa.png" alt="StartOS Logo" width="16%" />
+  <h1 style="margin-top: 0;">StartOS</h1>
+  <a href="https://github.com/Start9Labs/start-os/releases">
+    <img src="https://img.shields.io/github/v/tag/Start9Labs/start-os?color=success" />
+  </a>
+  <a href="https://github.com/Start9Labs/start-os/actions/workflows/startos-iso.yaml">
+    <img src="https://github.com/Start9Labs/start-os/actions/workflows/startos-iso.yaml/badge.svg">
+  </a>
+  <a href="https://twitter.com/start9labs">
+    <img src="https://img.shields.io/twitter/follow/start9labs?label=Follow">
+  </a>
+  <a href="http://mastodon.start9labs.com">
+    <img src="https://img.shields.io/mastodon/follow/000000001?domain=https%3A%2F%2Fmastodon.start9labs.com&label=Follow&style=social">
+  </a>
+  <a href="https://matrix.to/#/#community:matrix.start9labs.com">
+    <img src="https://img.shields.io/badge/community-matrix-yellow">
+  </a>
+  <a href="https://t.me/start9_labs">
+    <img src="https://img.shields.io/badge/community-telegram-informational">
+  </a>
+  <a href="https://docs.start9.com">
+    <img src="https://img.shields.io/badge/support-docs-important">
+  </a>
+  <a href="https://matrix.to/#/#community-dev:matrix.start9labs.com">
+    <img src="https://img.shields.io/badge/developer-matrix-blueviolet">
+  </a>
+  <a href="https://start9.com">
+    <img src="https://img.shields.io/website?down_color=lightgrey&down_message=offline&up_color=green&up_message=online&url=https%3A%2F%2Fstart9.com">
+  </a>
+</div>
+<br />
+<div align="center">
+  <h3>
+    Welcome to the era of Sovereign Computing
+  </h3>
+  <p>
+    StartOS is a Debian-based Linux distro optimized for running a personal server. It facilitates the discovery, installation, network configuration, service configuration, data backup, dependency management, and health monitoring of self-hosted software services.
+  </p>
+</div>
+<br />
+<p align="center">
+  <img src="assets/StartOS.png" alt="StartOS" width="85%">
+</p>
+<br />
+
+## Running StartOS
+
+There are multiple ways to get started with StartOS:
+
+### 💰 Buy a Start9 server
+
+This is the most convenient option. Simply [buy a server](https://store.start9.com) from Start9 and plug it in.
+
+### 👷 Build your own server
+
+This option is easier than you might imagine, and there are 4 reasons why you might prefer it:
+
+1. You already have hardware
+1. You want to save on shipping costs
+1. You prefer not to divulge your physical address
+1. You just like building things
+
+To pursue this option, follow one of our [DIY guides](https://start9.com/latest/diy).
+
+## ❤️ Contributing
+
+There are multiple ways to contribute: work directly on StartOS, package a service for the marketplace, or help with documentation and guides. To learn more about contributing, see [here](https://start9.com/contribute/).
+
+To report security issues, please email our security team - security@start9.com.
+
+## 🌎 Marketplace
+
+There are dozens of services available for StartOS, and new ones are being added all the time. Check out the full list of available services [here](https://marketplace.start9.com/marketplace). To read more about the Marketplace ecosystem, check out this [blog post](https://blog.start9.com/start9-marketplace-strategy/).
+
+## 🖥️ User Interface Screenshots
+
+<p align="center">
+  <img src="assets/registry.png" alt="StartOS Marketplace" width="49%">
+  <img src="assets/community.png" alt="StartOS Community Registry" width="49%">
+  <img src="assets/c-lightning.png" alt="StartOS NextCloud Service" width="49%">
+  <img src="assets/btcpay.png" alt="StartOS BTCPay Service" width="49%">
+  <img src="assets/nextcloud.png" alt="StartOS System Settings" width="49%">
+  <img src="assets/system.png" alt="StartOS System Settings" width="49%">
+  <img src="assets/welcome.png" alt="StartOS System Settings" width="49%">
+  <img src="assets/logs.png" alt="StartOS System Settings" width="49%">
+</p>

[Binary assets not shown: one screenshot modified, several old screenshots removed, and new screenshots added — assets/btcpay.png, assets/c-lightning.png, assets/community.png, assets/logs.png, assets/nextcloud.png, assets/registry.png, assets/system.png, assets/welcome.png.]

backend/Cargo.lock (generated)

File diff suppressed because it is too large

backend/Cargo.toml

@@ -1,7 +1,7 @@
[package]
authors = ["Aiden McClelland <me@drbonez.dev>"]
description = "The core of StartOS"
-documentation = "https://docs.rs/embassy-os"
+documentation = "https://docs.rs/start-os"
edition = "2021"
keywords = [
"self-hosted",
@@ -11,40 +11,28 @@ keywords = [
"full-node", "full-node",
"lightning", "lightning",
] ]
name = "embassy-os" name = "start-os"
readme = "README.md" readme = "README.md"
repository = "https://github.com/Start9Labs/start-os" repository = "https://github.com/Start9Labs/start-os"
version = "0.3.4-rev.3" version = "0.3.4-rev.4"
[lib] [lib]
name = "embassy" name = "startos"
path = "src/lib.rs" path = "src/lib.rs"
[[bin]] [[bin]]
name = "embassyd" name = "startbox"
path = "src/bin/embassyd.rs" path = "src/main.rs"
[[bin]]
name = "embassy-init"
path = "src/bin/embassy-init.rs"
[[bin]]
name = "embassy-sdk"
path = "src/bin/embassy-sdk.rs"
[[bin]]
name = "embassy-cli"
path = "src/bin/embassy-cli.rs"
[[bin]]
name = "avahi-alias"
path = "src/bin/avahi-alias.rs"
[features]
avahi = ["avahi-sys"]
-default = ["avahi", "js_engine"]
+default = ["avahi-alias", "cli", "sdk", "daemon", "js_engine"]
dev = []
unstable = ["patch-db/unstable"]
avahi-alias = ["avahi"]
cli = []
sdk = []
daemon = []
[dependencies]
aes = { version = "0.7.5", features = ["ctr"] }
@@ -90,11 +78,14 @@ http = "0.2.8"
hyper = { version = "0.14.20", features = ["full"] }
hyper-ws-listener = "0.2.0"
imbl = "2.0.0"
include_dir = "0.7.3"
indexmap = { version = "1.9.1", features = ["serde"] }
ipnet = { version = "2.7.1", features = ["serde"] }
iprange = { version = "0.6.7", features = ["serde"] }
isocountry = "0.3.2"
itertools = "0.10.3"
jaq-core = "0.10.0"
jaq-std = "0.10.0"
josekit = "0.8.1"
js_engine = { path = '../libs/js_engine', optional = true }
jsonpath_lib = "0.3.0"
@@ -103,6 +94,7 @@ libc = "0.2.126"
log = "0.4.17"
mbrman = "0.5.0"
models = { version = "*", path = "../libs/models" }
new_mime_guess = "4"
nix = "0.25.0"
nom = "7.1.1"
num = "0.4.0"
@@ -162,6 +154,7 @@ tracing-subscriber = { version = "0.3.14", features = ["env-filter"] }
trust-dns-server = "0.22.0"
typed-builder = "0.10.0"
url = { version = "2.2.2", features = ["serde"] }
urlencoding = "2.1.2"
uuid = { version = "1.1.2", features = ["v4"] }
zeroize = "1.5.7"

backend/README.md

@@ -12,18 +12,17 @@
## Structure
-The StartOS backend is broken up into 4 different binaries:
-- embassyd: This is the main workhorse of StartOS - any new functionality you
-  want will likely go here
-- embassy-init: This is the component responsible for allowing you to set up
-  your device, and handles system initialization on startup
-- embassy-cli: This is a CLI tool that will allow you to issue commands to
-  embassyd and control it similarly to the UI
-- embassy-sdk: This is a CLI tool that aids in building and packaging services
-  you wish to deploy to StartOS
-Finally there is a library `embassy` that supports all four of these tools.
+The StartOS backend is packed into a single binary `startbox` that is symlinked under
+several different names for different behaviour:
+- startd: This is the main workhorse of StartOS - any new functionality you
+  want will likely go here
+- start-cli: This is a CLI tool that will allow you to issue commands to
+  startd and control it similarly to the UI
+- start-sdk: This is a CLI tool that aids in building and packaging services
+  you wish to deploy to StartOS
+Finally there is a library `startos` that supports all of these tools.
See [here](/backend/Cargo.toml) for details.
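The multi-call layout described above works because a symlinked program sees the link's name in `argv[0]`. A hypothetical shell sketch of the same dispatch (the toy script and names here are illustrative, not the real startbox):

```shell
# Build a toy multi-call program and invoke it through two symlinks,
# mimicking how startd/start-cli/start-sdk all resolve to startbox.
set -e
tmp=$(mktemp -d)
cat > "$tmp/startbox" <<'EOF'
#!/bin/sh
# Dispatch on the name we were invoked as (argv[0]).
case "$(basename "$0")" in
  startd)    echo "running daemon" ;;
  start-cli) echo "running cli" ;;
  *)         echo "unknown executable" >&2; exit 1 ;;
esac
EOF
chmod +x "$tmp/startbox"
ln -s "$tmp/startbox" "$tmp/startd"
ln -s "$tmp/startbox" "$tmp/start-cli"

daemon_out=$("$tmp/startd")
cli_out=$("$tmp/start-cli")
echo "$daemon_out"
echo "$cli_out"
```

The real dispatcher (`backend/src/bins/mod.rs` in this diff) does the same thing in Rust, with a fallback to the first command-line argument when `argv[0]` is `startbox` itself.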

backend/build-dev.sh (deleted)

@@ -1,24 +0,0 @@
#!/bin/bash
set -e
shopt -s expand_aliases
if [ "$0" != "./build-dev.sh" ]; then
>&2 echo "Must be run from backend directory"
exit 1
fi
USE_TTY=
if tty -s; then
USE_TTY="-it"
fi
alias 'rust-arm64-builder'='docker run $USE_TTY --rm -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)":/home/rust/src start9/rust-arm-cross:aarch64'
cd ..
rust-arm64-builder sh -c "(cd backend && cargo build --locked)"
cd backend
sudo chown -R $USER target
sudo chown -R $USER ~/.cargo
#rust-arm64-builder aarch64-linux-gnu-strip target/aarch64-unknown-linux-gnu/release/embassyd

backend/build-portable-dev.sh (deleted)

@@ -1,23 +0,0 @@
#!/bin/bash
set -e
shopt -s expand_aliases
if [ "$0" != "./build-portable-dev.sh" ]; then
>&2 echo "Must be run from backend directory"
exit 1
fi
USE_TTY=
if tty -s; then
USE_TTY="-it"
fi
alias 'rust-musl-builder'='docker run $USE_TTY --rm -v "$HOME"/.cargo/registry:/root/.cargo/registry -v "$(pwd)":/home/rust/src start9/rust-musl-cross:x86_64-musl'
cd ..
rust-musl-builder sh -c "(cd backend && cargo +beta build --target=x86_64-unknown-linux-musl --no-default-features --locked)"
cd backend
sudo chown -R $USER target
sudo chown -R $USER ~/.cargo

View File

@@ -22,7 +22,7 @@
if tty -s; then
USE_TTY="-it"
fi
-alias 'rust-gnu-builder'='docker run $USE_TTY --rm -e "OS_ARCH=$OS_ARCH" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)":/home/rust/src -P start9/rust-arm-cross:aarch64'
+alias 'rust-gnu-builder'='docker run $USE_TTY --rm -e "OS_ARCH=$OS_ARCH" -v "$HOME/.cargo/registry":/usr/local/cargo/registry -v "$(pwd)":/home/rust/src -w /home/rust/src -P start9/rust-arm-cross:aarch64'
alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "OS_ARCH=$OS_ARCH" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)":/home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
cd ..
@@ -37,26 +37,26 @@ fi
set +e
fail=
if [[ "$FLAGS" = "" ]]; then
-rust-gnu-builder sh -c "(git config --global --add safe.directory '*'; cd backend && cargo build --release --locked --target=$ARCH-unknown-linux-gnu)"
+rust-gnu-builder sh -c "(cd backend && cargo build --release --locked --target=$ARCH-unknown-linux-gnu)"
if test $? -ne 0; then
fail=true
fi
for ARCH in x86_64 aarch64
do
-rust-musl-builder sh -c "(git config --global --add safe.directory '*'; cd libs && cargo build --release --locked --bin embassy_container_init )"
+rust-musl-builder sh -c "(cd libs && cargo build --release --locked --bin embassy_container_init )"
if test $? -ne 0; then
fail=true
fi
done
else
echo "FLAGS=$FLAGS"
-rust-gnu-builder sh -c "(git config --global --add safe.directory '*'; cd backend && cargo build --release --features $FLAGS --locked --target=$ARCH-unknown-linux-gnu)"
+rust-gnu-builder sh -c "(cd backend && cargo build --release --features $FLAGS --locked --target=$ARCH-unknown-linux-gnu)"
if test $? -ne 0; then
fail=true
fi
for ARCH in x86_64 aarch64
do
-rust-musl-builder sh -c "(git config --global --add safe.directory '*'; cd libs && cargo build --release --features $FLAGS --locked --bin embassy_container_init)"
+rust-musl-builder sh -c "(cd libs && cargo build --release --features $FLAGS --locked --bin embassy_container_init)"
if test $? -ne 0; then
fail=true
fi
@@ -72,5 +72,3 @@ sudo chown -R $USER ../libs/target
if [ -n "$fail" ]; then
exit 1
fi
#rust-arm64-builder aarch64-linux-gnu-strip target/aarch64-unknown-linux-gnu/release/embassyd
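The script above uses a collect-and-report error pattern: `set +e` lets each builder invocation fail without aborting, a `fail` flag records any failure, and the script exits non-zero only once at the end so every target still gets built. A minimal sketch of that pattern (the `step` commands are placeholders):

```shell
# Run every step even if one fails; remember whether anything failed
# and report a single non-zero status at the end, as build-prod.sh does.
fail=
for step in true false true; do
    if ! "$step"; then
        fail=true
    fi
done
if [ -n "$fail" ]; then
    status=1
else
    status=0
fi
echo "overall status=$status"
```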

View File

@@ -1,15 +0,0 @@
[Unit]
Description=Embassy Init
After=network-online.target
Requires=network-online.target
Wants=avahi-daemon.service
[Service]
Type=oneshot
Environment=RUST_LOG=embassy_init=debug,embassy=debug,js_engine=debug,patch_db=warn
ExecStart=/usr/bin/embassy-init
RemainAfterExit=true
StandardOutput=append:/var/log/embassy-init.log
[Install]
WantedBy=embassyd.service

View File

@@ -1,17 +0,0 @@
[Unit]
Description=Embassy Daemon
After=embassy-init.service
Requires=embassy-init.service
[Service]
Type=simple
Environment=RUST_LOG=embassyd=debug,embassy=debug,js_engine=debug,patch_db=warn
ExecStart=/usr/bin/embassyd
Restart=always
RestartSec=3
ManagedOOMPreference=avoid
CPUAccounting=true
CPUWeight=1000
[Install]
WantedBy=multi-user.target

View File

@@ -8,4 +8,11 @@ if [ "$0" != "./install-sdk.sh" ]; then
exit 1
fi
-cargo install --bin=embassy-sdk --bin=embassy-cli --path=. --no-default-features --features=js_engine --locked
+if [ -z "$OS_ARCH" ]; then
export OS_ARCH=$(uname -m)
fi
cargo install --path=. --no-default-features --features=js_engine,sdk,cli --locked
startbox_loc=$(which startbox)
ln -sf $startbox_loc $(dirname $startbox_loc)/start-cli
ln -sf $startbox_loc $(dirname $startbox_loc)/start-sdk

View File

@@ -370,7 +370,7 @@ async fn perform_backup<Db: DbHandle>(
}
let luks_folder = Path::new("/media/embassy/config/luks");
if tokio::fs::metadata(&luks_folder).await.is_ok() {
-dir_copy(&luks_folder, &luks_folder_bak).await?;
+dir_copy(&luks_folder, &luks_folder_bak, None).await?;
}
let timestamp = Some(Utc::now());

View File

@@ -109,7 +109,7 @@ async fn approximate_progress(
if tokio::fs::metadata(&dir).await.is_err() {
*size = 0;
} else {
-*size = dir_size(&dir).await?;
+*size = dir_size(&dir, None).await?;
}
}
Ok(())
@@ -285,7 +285,7 @@ async fn restore_packages(
progress_info.package_installs.insert(id.clone(), progress);
progress_info
.src_volume_size
-.insert(id.clone(), dir_size(backup_dir(&id)).await?);
+.insert(id.clone(), dir_size(backup_dir(&id), None).await?);
progress_info.target_volume_size.insert(id.clone(), 0);
let package_id = id.clone();
tasks.push(

View File

@@ -14,7 +14,7 @@ fn log_str_error(action: &str, e: i32) {
}
}
-fn main() {
+pub fn main() {
let aliases: Vec<_> = std::env::args().skip(1).collect();
unsafe {
let simple_poll = avahi_sys::avahi_simple_poll_new();

View File

@@ -0,0 +1,9 @@
pub fn renamed(old: &str, new: &str) -> ! {
eprintln!("{old} has been renamed to {new}");
std::process::exit(1)
}
pub fn removed(name: &str) -> ! {
eprintln!("{name} has been removed");
std::process::exit(1)
}

backend/src/bins/mod.rs (new file)

@@ -0,0 +1,55 @@
use std::path::Path;
#[cfg(feature = "avahi-alias")]
pub mod avahi_alias;
pub mod deprecated;
#[cfg(feature = "cli")]
pub mod start_cli;
#[cfg(feature = "daemon")]
pub mod start_init;
#[cfg(feature = "sdk")]
pub mod start_sdk;
#[cfg(feature = "daemon")]
pub mod startd;
fn select_executable(name: &str) -> Option<fn()> {
match name {
#[cfg(feature = "avahi-alias")]
"avahi-alias" => Some(avahi_alias::main),
#[cfg(feature = "cli")]
"start-cli" => Some(start_cli::main),
#[cfg(feature = "sdk")]
"start-sdk" => Some(start_sdk::main),
#[cfg(feature = "daemon")]
"startd" => Some(startd::main),
"embassy-cli" => Some(|| deprecated::renamed("embassy-cli", "start-cli")),
"embassy-sdk" => Some(|| deprecated::renamed("embassy-sdk", "start-sdk")),
"embassyd" => Some(|| deprecated::renamed("embassyd", "startd")),
"embassy-init" => Some(|| deprecated::removed("embassy-init")),
_ => None,
}
}
pub fn startbox() {
let args = std::env::args().take(2).collect::<Vec<_>>();
if let Some(x) = args
.get(0)
.and_then(|s| Path::new(&*s).file_name())
.and_then(|s| s.to_str())
.and_then(|s| select_executable(&s))
{
x()
} else if let Some(x) = args.get(1).and_then(|s| select_executable(&s)) {
x()
} else {
eprintln!(
"unknown executable: {}",
args.get(0)
.filter(|x| &**x != "startbox")
.or_else(|| args.get(1))
.map(|s| s.as_str())
.unwrap_or("N/A")
);
std::process::exit(1);
}
}

View File

@@ -1,21 +1,22 @@
use clap::Arg;
use embassy::context::CliContext;
use embassy::util::logger::EmbassyLogger;
use embassy::version::{Current, VersionT};
use embassy::Error;
use rpc_toolkit::run_cli;
use rpc_toolkit::yajrc::RpcError;
use serde_json::Value;
use crate::context::CliContext;
use crate::util::logger::EmbassyLogger;
use crate::version::{Current, VersionT};
use crate::Error;
lazy_static::lazy_static! {
static ref VERSION_STRING: String = Current::new().semver().to_string();
}
fn inner_main() -> Result<(), Error> {
run_cli!({
-command: embassy::main_api,
+command: crate::main_api,
app: app => app
-.name("Embassy CLI")
+.name("StartOS CLI")
.version(&**VERSION_STRING)
.arg(
clap::Arg::with_name("config")
@@ -48,7 +49,7 @@ fn inner_main() -> Result<(), Error> {
Ok(())
}
-fn main() {
+pub fn main() {
match inner_main() {
Ok(_) => (),
Err(e) => {

View File

@@ -3,21 +3,22 @@ use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::time::Duration;
use embassy::context::rpc::RpcContextConfig;
use embassy::context::{DiagnosticContext, InstallContext, SetupContext};
use embassy::disk::fsck::RepairStrategy;
use embassy::disk::main::DEFAULT_PASSWORD;
use embassy::disk::REPAIR_DISK_PATH;
use embassy::init::STANDBY_MODE_PATH;
use embassy::net::web_server::WebServer;
use embassy::shutdown::Shutdown;
use embassy::sound::CHIME;
use embassy::util::logger::EmbassyLogger;
use embassy::util::Invoke;
use embassy::{Error, ErrorKind, ResultExt, OS_ARCH};
use tokio::process::Command;
use tracing::instrument;
use crate::context::rpc::RpcContextConfig;
use crate::context::{DiagnosticContext, InstallContext, SetupContext};
use crate::disk::fsck::RepairStrategy;
use crate::disk::main::DEFAULT_PASSWORD;
use crate::disk::REPAIR_DISK_PATH;
use crate::init::STANDBY_MODE_PATH;
use crate::net::web_server::WebServer;
use crate::shutdown::Shutdown;
use crate::sound::CHIME;
use crate::util::logger::EmbassyLogger;
use crate::util::Invoke;
use crate::{Error, ErrorKind, ResultExt, OS_ARCH};
#[instrument(skip_all)]
async fn setup_or_init(cfg_path: Option<PathBuf>) -> Result<(), Error> {
Command::new("ln")
@@ -78,7 +79,7 @@ async fn setup_or_init(cfg_path: Option<PathBuf>) -> Result<(), Error> {
server.shutdown().await;
Command::new("reboot")
-.invoke(embassy::ErrorKind::Unknown)
+.invoke(crate::ErrorKind::Unknown)
.await?;
} else if tokio::fs::metadata("/media/embassy/config/disk.guid")
.await
@@ -116,7 +117,7 @@ async fn setup_or_init(cfg_path: Option<PathBuf>) -> Result<(), Error> {
let guid_string = tokio::fs::read_to_string("/media/embassy/config/disk.guid") // unique identifier for volume group - keeps track of the disk that goes with your embassy
.await?;
let guid = guid_string.trim();
-let requires_reboot = embassy::disk::main::import(
+let requires_reboot = crate::disk::main::import(
guid,
cfg.datadir(),
if tokio::fs::metadata(REPAIR_DISK_PATH).await.is_ok() {
@@ -124,22 +125,26 @@ async fn setup_or_init(cfg_path: Option<PathBuf>) -> Result<(), Error> {
} else {
RepairStrategy::Preen
},
-DEFAULT_PASSWORD,
+if guid.ends_with("_UNENC") {
+None
+} else {
+Some(DEFAULT_PASSWORD)
+},
)
.await?;
if tokio::fs::metadata(REPAIR_DISK_PATH).await.is_ok() {
tokio::fs::remove_file(REPAIR_DISK_PATH)
.await
-.with_ctx(|_| (embassy::ErrorKind::Filesystem, REPAIR_DISK_PATH))?;
+.with_ctx(|_| (crate::ErrorKind::Filesystem, REPAIR_DISK_PATH))?;
}
if requires_reboot.0 {
-embassy::disk::main::export(guid, cfg.datadir()).await?;
+crate::disk::main::export(guid, cfg.datadir()).await?;
Command::new("reboot")
-.invoke(embassy::ErrorKind::Unknown)
+.invoke(crate::ErrorKind::Unknown)
.await?;
}
tracing::info!("Loaded Disk");
-embassy::init::init(&cfg).await?;
+crate::init::init(&cfg).await?;
}
Ok(())
@@ -168,11 +173,11 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
if OS_ARCH == "raspberrypi" && tokio::fs::metadata(STANDBY_MODE_PATH).await.is_ok() {
tokio::fs::remove_file(STANDBY_MODE_PATH).await?;
Command::new("sync").invoke(ErrorKind::Filesystem).await?;
-embassy::sound::SHUTDOWN.play().await?;
+crate::sound::SHUTDOWN.play().await?;
futures::future::pending::<()>().await;
}
-embassy::sound::BEP.play().await?;
+crate::sound::BEP.play().await?;
run_script_if_exists("/media/embassy/config/preinit.sh").await;
@@ -180,7 +185,7 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
async move {
tracing::error!("{}", e.source);
tracing::debug!("{}", e.source);
-embassy::sound::BEETHOVEN.play().await?;
+crate::sound::BEETHOVEN.play().await?;
let ctx = DiagnosticContext::init(
cfg_path,
@@ -223,8 +228,8 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
res res
} }
fn main() { pub fn main() {
let matches = clap::App::new("embassy-init") let matches = clap::App::new("start-init")
.arg( .arg(
clap::Arg::with_name("config") clap::Arg::with_name("config")
.short('c') .short('c')
@@ -233,8 +238,6 @@ fn main() {
) )
.get_matches(); .get_matches();
EmbassyLogger::init();
let cfg_path = matches.value_of("config").map(|p| Path::new(p).to_owned()); let cfg_path = matches.value_of("config").map(|p| Path::new(p).to_owned());
let res = { let res = {
let rt = tokio::runtime::Builder::new_multi_thread() let rt = tokio::runtime::Builder::new_multi_thread()


@@ -1,20 +1,21 @@
-use embassy::context::SdkContext;
-use embassy::util::logger::EmbassyLogger;
-use embassy::version::{Current, VersionT};
-use embassy::Error;
 use rpc_toolkit::run_cli;
 use rpc_toolkit::yajrc::RpcError;
 use serde_json::Value;
+use crate::context::SdkContext;
+use crate::util::logger::EmbassyLogger;
+use crate::version::{Current, VersionT};
+use crate::Error;
 lazy_static::lazy_static! {
     static ref VERSION_STRING: String = Current::new().semver().to_string();
 }
 fn inner_main() -> Result<(), Error> {
     run_cli!({
-        command: embassy::portable_api,
+        command: crate::portable_api,
         app: app => app
-            .name("Embassy SDK")
+            .name("StartOS SDK")
             .version(&**VERSION_STRING)
             .arg(
                 clap::Arg::with_name("config")
@@ -47,7 +48,7 @@ fn inner_main() -> Result<(), Error> {
     Ok(())
 }
-fn main() {
+pub fn main() {
     match inner_main() {
         Ok(_) => (),
         Err(e) => {


@@ -3,16 +3,17 @@ use std::path::{Path, PathBuf};
 use std::sync::Arc;
 use color_eyre::eyre::eyre;
-use embassy::context::{DiagnosticContext, RpcContext};
-use embassy::net::web_server::WebServer;
-use embassy::shutdown::Shutdown;
-use embassy::system::launch_metrics_task;
-use embassy::util::logger::EmbassyLogger;
-use embassy::{Error, ErrorKind, ResultExt};
 use futures::{FutureExt, TryFutureExt};
 use tokio::signal::unix::signal;
 use tracing::instrument;
+use crate::context::{DiagnosticContext, RpcContext};
+use crate::net::web_server::WebServer;
+use crate::shutdown::Shutdown;
+use crate::system::launch_metrics_task;
+use crate::util::logger::EmbassyLogger;
+use crate::{Error, ErrorKind, ResultExt};
 #[instrument(skip_all)]
 async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error> {
     let (rpc_ctx, server, shutdown) = {
@@ -26,7 +27,7 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
         ),
     )
     .await?;
-    embassy::hostname::sync_hostname(&rpc_ctx.account.read().await.hostname).await?;
+    crate::hostname::sync_hostname(&rpc_ctx.account.read().await.hostname).await?;
     let server = WebServer::main(
         SocketAddr::new(Ipv6Addr::UNSPECIFIED.into(), 80),
         rpc_ctx.clone(),
@@ -71,7 +72,7 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
         .await
     });
-    embassy::sound::CHIME.play().await?;
+    crate::sound::CHIME.play().await?;
     metrics_task
         .map_err(|e| {
@@ -100,8 +101,15 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
     Ok(shutdown)
 }
-fn main() {
-    let matches = clap::App::new("embassyd")
+pub fn main() {
+    EmbassyLogger::init();
+    if !Path::new("/run/embassy/initialized").exists() {
+        super::start_init::main();
+        std::fs::write("/run/embassy/initialized", "").unwrap();
+    }
+    let matches = clap::App::new("startd")
         .arg(
             clap::Arg::with_name("config")
                 .short('c')
@@ -110,8 +118,6 @@ fn main() {
         )
         .get_matches();
-    EmbassyLogger::init();
     let cfg_path = matches.value_of("config").map(|p| Path::new(p).to_owned());
     let res = {
@@ -126,7 +132,7 @@ fn main() {
     async {
         tracing::error!("{}", e.source);
         tracing::debug!("{:?}", e.source);
-        embassy::sound::BEETHOVEN.play().await?;
+        crate::sound::BEETHOVEN.play().await?;
         let ctx = DiagnosticContext::init(
             cfg_path,
             if tokio::fs::metadata("/media/embassy/config/disk.guid")


@@ -11,7 +11,7 @@ use helpers::to_tmp_path;
 use josekit::jwk::Jwk;
 use patch_db::json_ptr::JsonPointer;
 use patch_db::{DbHandle, LockReceipt, LockType, PatchDb};
-use reqwest::Url;
+use reqwest::{Client, Proxy, Url};
 use rpc_toolkit::Context;
 use serde::Deserialize;
 use sqlx::postgres::PgConnectOptions;
@@ -34,7 +34,9 @@ use crate::net::wifi::WpaCli;
 use crate::notifications::NotificationManager;
 use crate::shutdown::Shutdown;
 use crate::status::{MainStatus, Status};
+use crate::system::get_mem_info;
 use crate::util::config::load_config_from_paths;
+use crate::util::lshw::{lshw, LshwDevice};
 use crate::{Error, ErrorKind, ResultExt};
 #[derive(Debug, Default, Deserialize)]
@@ -120,6 +122,13 @@ pub struct RpcContextSeed {
     pub rpc_stream_continuations: Mutex<BTreeMap<RequestGuid, RpcContinuation>>,
     pub wifi_manager: Option<Arc<RwLock<WpaCli>>>,
     pub current_secret: Arc<Jwk>,
+    pub client: Client,
+    pub hardware: Hardware,
+}
+pub struct Hardware {
+    pub devices: Vec<LshwDevice>,
+    pub ram: u64,
 }
 pub struct RpcCleanReceipts {
@@ -203,6 +212,9 @@ impl RpcContext {
     let metrics_cache = RwLock::new(None);
     let notification_manager = NotificationManager::new(secret_store.clone());
     tracing::info!("Initialized Notification Manager");
+    let tor_proxy_url = format!("socks5h://{tor_proxy}");
+    let devices = lshw().await?;
+    let ram = get_mem_info().await?.total.0 as u64 * 1024 * 1024;
     let seed = Arc::new(RpcContextSeed {
         is_closed: AtomicBool::new(false),
         datadir: base.datadir().to_path_buf(),
@@ -235,6 +247,17 @@ impl RpcContext {
                 )
             })?,
         ),
+        client: Client::builder()
+            .proxy(Proxy::custom(move |url| {
+                if url.host_str().map_or(false, |h| h.ends_with(".onion")) {
+                    Some(tor_proxy_url.clone())
+                } else {
+                    None
+                }
+            }))
+            .build()
+            .with_kind(crate::ErrorKind::ParseUrl)?,
+        hardware: Hardware { devices, ram },
     });
     let res = Self(seed);
@@ -265,6 +288,45 @@ impl RpcContext {
 pub async fn cleanup(&self) -> Result<(), Error> {
     let mut db = self.db.handle();
     let receipts = RpcCleanReceipts::new(&mut db).await?;
+    let packages = receipts.packages.get(&mut db).await?.0;
+    let mut current_dependents = packages
+        .keys()
+        .map(|k| (k.clone(), BTreeMap::new()))
+        .collect::<BTreeMap<_, _>>();
+    for (package_id, package) in packages {
+        for (k, v) in package
+            .into_installed()
+            .into_iter()
+            .flat_map(|i| i.current_dependencies.0)
+        {
+            let mut entry: BTreeMap<_, _> = current_dependents.remove(&k).unwrap_or_default();
+            entry.insert(package_id.clone(), v);
+            current_dependents.insert(k, entry);
+        }
+    }
+    for (package_id, current_dependents) in current_dependents {
+        if let Some(deps) = crate::db::DatabaseModel::new()
+            .package_data()
+            .idx_model(&package_id)
+            .and_then(|pde| pde.installed())
+            .map::<_, CurrentDependents>(|i| i.current_dependents())
+            .check(&mut db)
+            .await?
+        {
+            deps.put(&mut db, &CurrentDependents(current_dependents))
+                .await?;
+        } else if let Some(deps) = crate::db::DatabaseModel::new()
+            .package_data()
+            .idx_model(&package_id)
+            .and_then(|pde| pde.removing())
+            .map::<_, CurrentDependents>(|i| i.current_dependents())
+            .check(&mut db)
+            .await?
+        {
+            deps.put(&mut db, &CurrentDependents(current_dependents))
+                .await?;
+        }
+    }
     for (package_id, package) in receipts.packages.get(&mut db).await?.0 {
         if let Err(e) = async {
             match package {
@@ -336,31 +398,6 @@ impl RpcContext {
             tracing::debug!("{:?}", e);
         }
     }
-    let mut current_dependents = BTreeMap::new();
-    for (package_id, package) in receipts.packages.get(&mut db).await?.0 {
-        for (k, v) in package
-            .into_installed()
-            .into_iter()
-            .flat_map(|i| i.current_dependencies.0)
-        {
-            let mut entry: BTreeMap<_, _> = current_dependents.remove(&k).unwrap_or_default();
-            entry.insert(package_id.clone(), v);
-            current_dependents.insert(k, entry);
-        }
-    }
-    for (package_id, current_dependents) in current_dependents {
-        if let Some(deps) = crate::db::DatabaseModel::new()
-            .package_data()
-            .idx_model(&package_id)
-            .and_then(|pde| pde.installed())
-            .map::<_, CurrentDependents>(|i| i.current_dependents())
-            .check(&mut db)
-            .await?
-        {
-            deps.put(&mut db, &CurrentDependents(current_dependents))
-                .await?;
-        }
-    }
     Ok(())
 }
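The transparent Tor routing in this hunk hinges on one predicate: send a request through the `socks5h://` proxy only when the target host is a `.onion` address, and connect directly otherwise. A minimal std-only sketch of that routing decision (the function name is ours, for illustration; the real code passes an equivalent closure to reqwest's `Proxy::custom`):

```rust
/// Decide whether a request host should be routed through the Tor SOCKS proxy.
/// Mirrors the `Proxy::custom` closure above: only `.onion` hosts go to Tor.
fn tor_proxy_for(host: Option<&str>, tor_proxy_url: &str) -> Option<String> {
    if host.map_or(false, |h| h.ends_with(".onion")) {
        Some(tor_proxy_url.to_string())
    } else {
        None // direct connection for clearnet hosts
    }
}

fn main() {
    let tor = "socks5h://127.0.0.1:9050";
    assert_eq!(tor_proxy_for(Some("abcdef.onion"), tor).as_deref(), Some(tor));
    assert_eq!(tor_proxy_for(Some("example.com"), tor), None);
    assert_eq!(tor_proxy_for(None, tor), None);
}
```

The `socks5h` scheme (rather than `socks5`) matters here: it makes the proxy resolve the hostname, which is required for `.onion` addresses.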


@@ -45,6 +45,8 @@ pub struct SetupContextConfig {
     pub migration_batch_rows: Option<usize>,
     pub migration_prefetch_rows: Option<usize>,
     pub datadir: Option<PathBuf>,
+    #[serde(default)]
+    pub disable_encryption: bool,
 }
 impl SetupContextConfig {
     #[instrument(skip_all)]
@@ -75,6 +77,7 @@ pub struct SetupContextSeed {
     pub config_path: Option<PathBuf>,
     pub migration_batch_rows: usize,
     pub migration_prefetch_rows: usize,
+    pub disable_encryption: bool,
     pub shutdown: Sender<()>,
     pub datadir: PathBuf,
     pub selected_v2_drive: RwLock<Option<PathBuf>>,
@@ -102,6 +105,7 @@ impl SetupContext {
     config_path: path.as_ref().map(|p| p.as_ref().to_owned()),
     migration_batch_rows: cfg.migration_batch_rows.unwrap_or(25000),
     migration_prefetch_rows: cfg.migration_prefetch_rows.unwrap_or(100_000),
+    disable_encryption: cfg.disable_encryption,
     shutdown,
     datadir,
     selected_v2_drive: RwLock::new(None),


@@ -4,9 +4,10 @@ pub mod package;
 use std::future::Future;
 use std::sync::Arc;
+use color_eyre::eyre::eyre;
 use futures::{FutureExt, SinkExt, StreamExt};
 use patch_db::json_ptr::JsonPointer;
-use patch_db::{Dump, Revision};
+use patch_db::{DbHandle, Dump, LockType, Revision};
 use rpc_toolkit::command;
 use rpc_toolkit::hyper::upgrade::Upgraded;
 use rpc_toolkit::hyper::{Body, Error as HyperError, Request, Response};
@@ -24,6 +25,7 @@ use tracing::instrument;
 pub use self::model::DatabaseModel;
 use crate::context::RpcContext;
 use crate::middleware::auth::{HasValidSession, HashSessionToken};
+use crate::util::display_none;
 use crate::util::serde::{display_serializable, IoFormat};
 use crate::{Error, ResultExt};
@@ -163,7 +165,7 @@ pub async fn subscribe(ctx: RpcContext, req: Request<Body>) -> Result<Response<B
     Ok(res)
 }
-#[command(subcommands(revisions, dump, put))]
+#[command(subcommands(revisions, dump, put, apply))]
 pub fn db() -> Result<(), RpcError> {
     Ok(())
 }
@@ -199,6 +201,85 @@ pub async fn dump(
     Ok(ctx.db.dump().await?)
 }
+fn apply_expr(input: jaq_core::Val, expr: &str) -> Result<jaq_core::Val, Error> {
+    let (expr, errs) = jaq_core::parse::parse(expr, jaq_core::parse::main());
+    let Some(expr) = expr else {
+        return Err(Error::new(
+            eyre!("Failed to parse expression: {:?}", errs),
+            crate::ErrorKind::InvalidRequest,
+        ));
+    };
+    let mut errs = Vec::new();
+    let mut defs = jaq_core::Definitions::core();
+    for def in jaq_std::std() {
+        defs.insert(def, &mut errs);
+    }
+    let filter = defs.finish(expr, Vec::new(), &mut errs);
+    if !errs.is_empty() {
+        return Err(Error::new(
+            eyre!("Failed to compile expression: {:?}", errs),
+            crate::ErrorKind::InvalidRequest,
+        ));
+    };
+    let inputs = jaq_core::RcIter::new(std::iter::empty());
+    let mut res_iter = filter.run(jaq_core::Ctx::new([], &inputs), input);
+    let Some(res) = res_iter
+        .next()
+        .transpose()
+        .map_err(|e| eyre!("{e}"))
+        .with_kind(crate::ErrorKind::Deserialization)?
+    else {
+        return Err(Error::new(
+            eyre!("expr returned no results"),
+            crate::ErrorKind::InvalidRequest,
+        ));
+    };
+    if res_iter.next().is_some() {
+        return Err(Error::new(
+            eyre!("expr returned too many results"),
+            crate::ErrorKind::InvalidRequest,
+        ));
+    }
+    Ok(res)
+}
+#[command(display(display_none))]
+pub async fn apply(#[context] ctx: RpcContext, #[arg] expr: String) -> Result<(), Error> {
+    let mut db = ctx.db.handle();
+    DatabaseModel::new().lock(&mut db, LockType::Write).await?;
+    let root_ptr = JsonPointer::<String>::default();
+    let input = db.get_value(&root_ptr, None).await?;
+    let res = (|| {
+        let res = apply_expr(input.into(), &expr)?;
+        serde_json::from_value::<model::Database>(res.clone().into()).with_ctx(|_| {
+            (
+                crate::ErrorKind::Deserialization,
+                "result does not match database model",
+            )
+        })?;
+        Ok::<serde_json::Value, Error>(res.into())
+    })()?;
+    db.put_value(&root_ptr, &res).await?;
+    Ok(())
+}
 #[command(subcommands(ui))]
 pub fn put() -> Result<(), RpcError> {
     Ok(())
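The new `db apply` command enforces a strict invariant before writing anything back: the jq expression must produce exactly one output value (and that value must still deserialize as the database model). The "exactly one" check over the filter's result iterator can be sketched with a std-only helper (this generic function is ours for illustration, not the `apply_expr` from the diff):

```rust
/// Enforce the `db apply` invariant sketched above: the jq filter must yield
/// exactly one output value. Zero results or more than one are rejected.
fn exactly_one<T, I: Iterator<Item = T>>(mut results: I) -> Result<T, &'static str> {
    let first = results.next().ok_or("expr returned no results")?;
    if results.next().is_some() {
        return Err("expr returned too many results");
    }
    Ok(first)
}

fn main() {
    assert_eq!(exactly_one([42].into_iter()), Ok(42));
    assert_eq!(
        exactly_one(std::iter::empty::<i32>()),
        Err("expr returned no results")
    );
    assert_eq!(
        exactly_one([1, 2].into_iter()),
        Err("expr returned too many results")
    );
}
```

Rejecting multi-result expressions matters because a filter like `.foo, .bar` silently yields a stream; applying only the first element would drop the rest of the user's intent.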


@@ -49,7 +49,7 @@ impl Database {
     last_wifi_region: None,
     eos_version_compat: Current::new().compat().clone(),
     lan_address,
-    tor_address: format!("http://{}", account.key.tor_address())
+    tor_address: format!("https://{}", account.key.tor_address())
         .parse()
         .unwrap(),
     ip_info: BTreeMap::new(),
@@ -107,6 +107,7 @@ pub struct ServerInfo {
     pub lan_address: Url,
     pub tor_address: Url,
     #[model]
+    #[serde(default)]
     pub ip_info: BTreeMap<String, IpInfo>,
     #[model]
     #[serde(default)]


@@ -9,11 +9,10 @@ use crate::disk::repair;
 use crate::init::SYSTEM_REBUILD_PATH;
 use crate::logs::{fetch_logs, LogResponse, LogSource};
 use crate::shutdown::Shutdown;
+use crate::system::SYSTEMD_UNIT;
 use crate::util::display_none;
 use crate::Error;
-pub const SYSTEMD_UNIT: &'static str = "embassy-init";
 #[command(subcommands(error, logs, exit, restart, forget_disk, disk, rebuild))]
 pub fn diagnostic() -> Result<(), Error> {
     Ok(())


@@ -13,7 +13,7 @@ use crate::disk::mount::util::unmount;
 use crate::util::Invoke;
 use crate::{Error, ErrorKind, ResultExt};
-pub const PASSWORD_PATH: &'static str = "/etc/embassy/password";
+pub const PASSWORD_PATH: &'static str = "/run/embassy/password";
 pub const DEFAULT_PASSWORD: &'static str = "password";
 pub const MAIN_FS_SIZE: FsSize = FsSize::Gigabytes(8);
@@ -22,13 +22,13 @@ pub async fn create<I, P>(
     disks: &I,
     pvscan: &BTreeMap<PathBuf, Option<String>>,
     datadir: impl AsRef<Path>,
-    password: &str,
+    password: Option<&str>,
 ) -> Result<String, Error>
 where
     for<'a> &'a I: IntoIterator<Item = &'a P>,
     P: AsRef<Path>,
 {
-    let guid = create_pool(disks, pvscan).await?;
+    let guid = create_pool(disks, pvscan, password.is_some()).await?;
     create_all_fs(&guid, &datadir, password).await?;
     export(&guid, datadir).await?;
     Ok(guid)
@@ -38,6 +38,7 @@ where
 pub async fn create_pool<I, P>(
     disks: &I,
     pvscan: &BTreeMap<PathBuf, Option<String>>,
+    encrypted: bool,
 ) -> Result<String, Error>
 where
     for<'a> &'a I: IntoIterator<Item = &'a P>,
@@ -62,13 +63,16 @@ where
         .invoke(crate::ErrorKind::DiskManagement)
         .await?;
 }
-let guid = format!(
+let mut guid = format!(
     "EMBASSY_{}",
     base32::encode(
         base32::Alphabet::RFC4648 { padding: false },
         &rand::random::<[u8; 32]>(),
     )
 );
+if !encrypted {
+    guid += "_UNENC";
+}
 let mut cmd = Command::new("vgcreate");
 cmd.arg("-y").arg(&guid);
 for disk in disks {
@@ -90,11 +94,8 @@ pub async fn create_fs<P: AsRef<Path>>(
     datadir: P,
     name: &str,
     size: FsSize,
-    password: &str,
+    password: Option<&str>,
 ) -> Result<(), Error> {
-    tokio::fs::write(PASSWORD_PATH, password)
-        .await
-        .with_ctx(|_| (crate::ErrorKind::Filesystem, PASSWORD_PATH))?;
     let mut cmd = Command::new("lvcreate");
     match size {
         FsSize::Gigabytes(a) => cmd.arg("-L").arg(format!("{}G", a)),
@@ -106,37 +107,41 @@ pub async fn create_fs<P: AsRef<Path>>(
     .arg(guid)
     .invoke(crate::ErrorKind::DiskManagement)
     .await?;
-let crypt_path = Path::new("/dev").join(guid).join(name);
-Command::new("cryptsetup")
-    .arg("-q")
-    .arg("luksFormat")
-    .arg(format!("--key-file={}", PASSWORD_PATH))
-    .arg(format!("--keyfile-size={}", password.len()))
-    .arg(&crypt_path)
-    .invoke(crate::ErrorKind::DiskManagement)
-    .await?;
-Command::new("cryptsetup")
-    .arg("-q")
-    .arg("luksOpen")
-    .arg(format!("--key-file={}", PASSWORD_PATH))
-    .arg(format!("--keyfile-size={}", password.len()))
-    .arg(&crypt_path)
-    .arg(format!("{}_{}", guid, name))
-    .invoke(crate::ErrorKind::DiskManagement)
-    .await?;
+let mut blockdev_path = Path::new("/dev").join(guid).join(name);
+if let Some(password) = password {
+    if let Some(parent) = Path::new(PASSWORD_PATH).parent() {
+        tokio::fs::create_dir_all(parent).await?;
+    }
+    tokio::fs::write(PASSWORD_PATH, password)
+        .await
+        .with_ctx(|_| (crate::ErrorKind::Filesystem, PASSWORD_PATH))?;
+    Command::new("cryptsetup")
+        .arg("-q")
+        .arg("luksFormat")
+        .arg(format!("--key-file={}", PASSWORD_PATH))
+        .arg(format!("--keyfile-size={}", password.len()))
+        .arg(&blockdev_path)
+        .invoke(crate::ErrorKind::DiskManagement)
+        .await?;
+    Command::new("cryptsetup")
+        .arg("-q")
+        .arg("luksOpen")
+        .arg(format!("--key-file={}", PASSWORD_PATH))
+        .arg(format!("--keyfile-size={}", password.len()))
+        .arg(&blockdev_path)
+        .arg(format!("{}_{}", guid, name))
+        .invoke(crate::ErrorKind::DiskManagement)
+        .await?;
+    tokio::fs::remove_file(PASSWORD_PATH)
+        .await
+        .with_ctx(|_| (crate::ErrorKind::Filesystem, PASSWORD_PATH))?;
+    blockdev_path = Path::new("/dev/mapper").join(format!("{}_{}", guid, name));
+}
 Command::new("mkfs.btrfs")
-    .arg(Path::new("/dev/mapper").join(format!("{}_{}", guid, name)))
+    .arg(&blockdev_path)
     .invoke(crate::ErrorKind::DiskManagement)
     .await?;
-mount(
-    Path::new("/dev/mapper").join(format!("{}_{}", guid, name)),
-    datadir.as_ref().join(name),
-    ReadWrite,
-)
-.await?;
-tokio::fs::remove_file(PASSWORD_PATH)
-    .await
-    .with_ctx(|_| (crate::ErrorKind::Filesystem, PASSWORD_PATH))?;
+mount(&blockdev_path, datadir.as_ref().join(name), ReadWrite).await?;
 Ok(())
 }
@@ -144,7 +149,7 @@ pub async fn create_fs<P: AsRef<Path>>(
 pub async fn create_all_fs<P: AsRef<Path>>(
     guid: &str,
     datadir: P,
-    password: &str,
+    password: Option<&str>,
 ) -> Result<(), Error> {
     create_fs(guid, &datadir, "main", MAIN_FS_SIZE, password).await?;
     create_fs(
@@ -161,12 +166,14 @@ pub async fn create_all_fs<P: AsRef<Path>>(
 #[instrument(skip_all)]
 pub async fn unmount_fs<P: AsRef<Path>>(guid: &str, datadir: P, name: &str) -> Result<(), Error> {
     unmount(datadir.as_ref().join(name)).await?;
-    Command::new("cryptsetup")
-        .arg("-q")
-        .arg("luksClose")
-        .arg(format!("{}_{}", guid, name))
-        .invoke(crate::ErrorKind::DiskManagement)
-        .await?;
+    if !guid.ends_with("_UNENC") {
+        Command::new("cryptsetup")
+            .arg("-q")
+            .arg("luksClose")
+            .arg(format!("{}_{}", guid, name))
+            .invoke(crate::ErrorKind::DiskManagement)
+            .await?;
+    }
     Ok(())
 }
@@ -203,7 +210,7 @@ pub async fn import<P: AsRef<Path>>(
     guid: &str,
     datadir: P,
     repair: RepairStrategy,
-    password: &str,
+    password: Option<&str>,
 ) -> Result<RequiresReboot, Error> {
     let scan = pvscan().await?;
     if scan
@@ -261,46 +268,56 @@ pub async fn mount_fs<P: AsRef<Path>>(
     datadir: P,
     name: &str,
     repair: RepairStrategy,
-    password: &str,
+    password: Option<&str>,
 ) -> Result<RequiresReboot, Error> {
-    tokio::fs::write(PASSWORD_PATH, password)
-        .await
-        .with_ctx(|_| (crate::ErrorKind::Filesystem, PASSWORD_PATH))?;
-    let crypt_path = Path::new("/dev").join(guid).join(name);
+    let orig_path = Path::new("/dev").join(guid).join(name);
+    let mut blockdev_path = orig_path.clone();
     let full_name = format!("{}_{}", guid, name);
-    Command::new("cryptsetup")
-        .arg("-q")
-        .arg("luksOpen")
-        .arg(format!("--key-file={}", PASSWORD_PATH))
-        .arg(format!("--keyfile-size={}", password.len()))
-        .arg(&crypt_path)
-        .arg(&full_name)
-        .invoke(crate::ErrorKind::DiskManagement)
-        .await?;
-    let mapper_path = Path::new("/dev/mapper").join(&full_name);
-    let reboot = repair.fsck(&mapper_path).await?;
-    // Backup LUKS header if e2fsck succeeded
-    let luks_folder = Path::new("/media/embassy/config/luks");
-    tokio::fs::create_dir_all(luks_folder).await?;
-    let tmp_luks_bak = luks_folder.join(format!(".{full_name}.luks.bak.tmp"));
-    if tokio::fs::metadata(&tmp_luks_bak).await.is_ok() {
-        tokio::fs::remove_file(&tmp_luks_bak).await?;
-    }
-    let luks_bak = luks_folder.join(format!("{full_name}.luks.bak"));
-    Command::new("cryptsetup")
-        .arg("-q")
-        .arg("luksHeaderBackup")
-        .arg("--header-backup-file")
-        .arg(&tmp_luks_bak)
-        .arg(&crypt_path)
-        .invoke(crate::ErrorKind::DiskManagement)
-        .await?;
-    tokio::fs::rename(&tmp_luks_bak, &luks_bak).await?;
-    mount(&mapper_path, datadir.as_ref().join(name), ReadWrite).await?;
-    tokio::fs::remove_file(PASSWORD_PATH)
-        .await
-        .with_ctx(|_| (crate::ErrorKind::Filesystem, PASSWORD_PATH))?;
+    if !guid.ends_with("_UNENC") {
+        let password = password.unwrap_or(DEFAULT_PASSWORD);
+        if let Some(parent) = Path::new(PASSWORD_PATH).parent() {
+            tokio::fs::create_dir_all(parent).await?;
+        }
+        tokio::fs::write(PASSWORD_PATH, password)
+            .await
+            .with_ctx(|_| (crate::ErrorKind::Filesystem, PASSWORD_PATH))?;
+        Command::new("cryptsetup")
+            .arg("-q")
+            .arg("luksOpen")
+            .arg(format!("--key-file={}", PASSWORD_PATH))
+            .arg(format!("--keyfile-size={}", password.len()))
+            .arg(&blockdev_path)
+            .arg(&full_name)
+            .invoke(crate::ErrorKind::DiskManagement)
+            .await?;
+        tokio::fs::remove_file(PASSWORD_PATH)
+            .await
+            .with_ctx(|_| (crate::ErrorKind::Filesystem, PASSWORD_PATH))?;
+        blockdev_path = Path::new("/dev/mapper").join(&full_name);
+    }
+    let reboot = repair.fsck(&blockdev_path).await?;
+    if !guid.ends_with("_UNENC") {
+        // Backup LUKS header if e2fsck succeeded
+        let luks_folder = Path::new("/media/embassy/config/luks");
+        tokio::fs::create_dir_all(luks_folder).await?;
+        let tmp_luks_bak = luks_folder.join(format!(".{full_name}.luks.bak.tmp"));
+        if tokio::fs::metadata(&tmp_luks_bak).await.is_ok() {
+            tokio::fs::remove_file(&tmp_luks_bak).await?;
+        }
+        let luks_bak = luks_folder.join(format!("{full_name}.luks.bak"));
+        Command::new("cryptsetup")
+            .arg("-q")
+            .arg("luksHeaderBackup")
+            .arg("--header-backup-file")
+            .arg(&tmp_luks_bak)
+            .arg(&orig_path)
+            .invoke(crate::ErrorKind::DiskManagement)
+            .await?;
+        tokio::fs::rename(&tmp_luks_bak, &luks_bak).await?;
    }
+    mount(&blockdev_path, datadir.as_ref().join(name), ReadWrite).await?;
     Ok(reboot)
 }
@@ -310,7 +327,7 @@ pub async fn mount_all_fs<P: AsRef<Path>>(
     guid: &str,
     datadir: P,
     repair: RepairStrategy,
-    password: &str,
+    password: Option<&str>,
 ) -> Result<RequiresReboot, Error> {
     let mut reboot = RequiresReboot(false);
     reboot |= mount_fs(guid, &datadir, "main", repair, password).await?;
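The design choice running through this whole file is that the encryption state is encoded in the volume-group GUID itself: pools created without a password get an `_UNENC` suffix, so every later code path (mount, unmount, repair) can branch on `guid.ends_with("_UNENC")` without carrying extra state. A std-only sketch of that convention (the helper names are ours, and the base32 suffix is stubbed with a fixed string for illustration):

```rust
/// Build a pool GUID following the convention in the diff: unencrypted
/// pools are tagged with an `_UNENC` suffix appended to the random ID.
fn make_guid(base: &str, encrypted: bool) -> String {
    let mut guid = format!("EMBASSY_{}", base);
    if !encrypted {
        guid += "_UNENC";
    }
    guid
}

/// Later code paths recover the encryption state from the GUID alone.
fn is_encrypted(guid: &str) -> bool {
    !guid.ends_with("_UNENC")
}

fn main() {
    let enc = make_guid("ABC123", true);
    let unenc = make_guid("ABC123", false);
    assert!(is_encrypted(&enc));
    assert!(!is_encrypted(&unenc));
}
```

Because the GUID is persisted on disk, the flag survives reboots and OS reinstalls for free, which is exactly what `mount_fs` and `unmount_fs` rely on above.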


@@ -16,6 +16,9 @@ use crate::util::Invoke;
 use crate::Error;
 async fn resolve_hostname(hostname: &str) -> Result<IpAddr, Error> {
+    if let Ok(addr) = hostname.parse() {
+        return Ok(addr);
+    }
     #[cfg(feature = "avahi")]
     if hostname.ends_with(".local") {
         return Ok(IpAddr::V4(crate::net::mdns::resolve_mdns(hostname).await?));
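The added fast path works because `str::parse::<IpAddr>` from the standard library accepts both IPv4 and IPv6 literals and fails cleanly on actual hostnames, so an IP literal never reaches mDNS or DNS resolution. A std-only sketch of that check (the function name is ours, for illustration):

```rust
use std::net::IpAddr;

/// Fast path added in the hunk above: if the "hostname" is already an IP
/// literal, parse it directly instead of falling through to mDNS or DNS.
fn parse_ip_literal(hostname: &str) -> Option<IpAddr> {
    hostname.parse().ok()
}

fn main() {
    assert!(parse_ip_literal("192.168.1.10").is_some());
    assert!(parse_ip_literal("::1").is_some()); // IPv6 literals work too
    assert!(parse_ip_literal("adjective-noun.local").is_none());
}
```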


@@ -324,11 +324,13 @@ pub async fn list(os: &OsPartitionInfo) -> Result<Vec<DiskInfo>, Error> {
 if index.internal {
     for part in index.parts {
         let mut disk_info = disk_info(disk.clone()).await;
-        disk_info.logicalname = part;
+        let part_info = part_info(part).await;
+        disk_info.logicalname = part_info.logicalname.clone();
+        disk_info.capacity = part_info.capacity;
         if let Some(g) = disk_guids.get(&disk_info.logicalname) {
             disk_info.guid = g.clone();
         } else {
-            disk_info.partitions = vec![part_info(disk_info.logicalname.clone()).await];
+            disk_info.partitions = vec![part_info];
         }
         res.push(disk_info);
     }


@@ -40,6 +40,7 @@ use crate::dependencies::{
 };
 use crate::install::cleanup::{cleanup, update_dependency_errors_of_dependents};
 use crate::install::progress::{InstallProgress, InstallProgressTracker};
+use crate::marketplace::with_query_params;
 use crate::notifications::NotificationLevel;
 use crate::s9pk::manifest::{Manifest, PackageId};
 use crate::s9pk::reader::S9pkReader;
@@ -136,35 +137,39 @@ pub async fn install(
 let marketplace_url =
     marketplace_url.unwrap_or_else(|| crate::DEFAULT_MARKETPLACE.parse().unwrap());
 let version_priority = version_priority.unwrap_or_default();
-let man: Manifest = reqwest::get(format!(
-    "{}/package/v0/manifest/{}?spec={}&version-priority={}&eos-version-compat={}&arch={}",
-    marketplace_url,
-    id,
-    version,
-    version_priority,
-    Current::new().compat(),
-    &*crate::ARCH,
-))
-.await
-.with_kind(crate::ErrorKind::Registry)?
-.error_for_status()
-.with_kind(crate::ErrorKind::Registry)?
-.json()
-.await
-.with_kind(crate::ErrorKind::Registry)?;
-let s9pk = reqwest::get(format!(
-    "{}/package/v0/{}.s9pk?spec=={}&version-priority={}&eos-version-compat={}&arch={}",
-    marketplace_url,
-    id,
-    man.version,
-    version_priority,
-    Current::new().compat(),
-    &*crate::ARCH,
-))
-.await
-.with_kind(crate::ErrorKind::Registry)?
-.error_for_status()
-.with_kind(crate::ErrorKind::Registry)?;
+let man: Manifest = ctx
+    .client
+    .get(with_query_params(
+        &ctx,
+        format!(
+            "{}/package/v0/manifest/{}?spec={}&version-priority={}",
+            marketplace_url, id, version, version_priority,
+        )
+        .parse()?,
+    ))
+    .send()
+    .await
+    .with_kind(crate::ErrorKind::Registry)?
+    .error_for_status()
+    .with_kind(crate::ErrorKind::Registry)?
+    .json()
+    .await
+    .with_kind(crate::ErrorKind::Registry)?;
+let s9pk = ctx
+    .client
+    .get(with_query_params(
+        &ctx,
+        format!(
+            "{}/package/v0/{}.s9pk?spec=={}&version-priority={}",
+            marketplace_url, id, man.version, version_priority,
+        )
+        .parse()?,
+    ))
+    .send()
+    .await
+    .with_kind(crate::ErrorKind::Registry)?
+    .error_for_status()
+    .with_kind(crate::ErrorKind::Registry)?;
 if man.id.as_str() != id || !man.version.satisfies(&version) {
     return Err(Error::new(
@@ -185,16 +190,18 @@ pub async fn install(
 async {
     tokio::io::copy(
         &mut response_to_reader(
-            reqwest::get(format!(
-                "{}/package/v0/license/{}?spec=={}&eos-version-compat={}&arch={}",
-                marketplace_url,
-                id,
-                man.version,
-                Current::new().compat(),
-                &*crate::ARCH,
-            ))
-            .await?
-            .error_for_status()?,
+            ctx.client
+                .get(with_query_params(
+                    &ctx,
+                    format!(
+                        "{}/package/v0/license/{}?spec=={}",
+                        marketplace_url, id, man.version,
+                    )
+                    .parse()?,
+                ))
+                .send()
+                .await?
+                .error_for_status()?,
         ),
         &mut File::create(public_dir_path.join("LICENSE.md")).await?,
     )
@@ -204,16 +211,18 @@ pub async fn install(
 async {
     tokio::io::copy(
         &mut response_to_reader(
-            reqwest::get(format!(
-                "{}/package/v0/instructions/{}?spec=={}&eos-version-compat={}&arch={}",
-                marketplace_url,
-                id,
-                man.version,
-                Current::new().compat(),
-                &*crate::ARCH,
-            ))
-            .await?
-            .error_for_status()?,
+            ctx.client
+                .get(with_query_params(
+                    &ctx,
+                    format!(
+                        "{}/package/v0/instructions/{}?spec=={}",
+                        marketplace_url, id, man.version,
+                    )
+                    .parse()?,
+                ))
+                .send()
+                .await?
+                .error_for_status()?,
         ),
         &mut File::create(public_dir_path.join("INSTRUCTIONS.md")).await?,
     )
@@ -223,16 +232,18 @@ pub async fn install(
 async {
     tokio::io::copy(
         &mut response_to_reader(
-            reqwest::get(format!(
-                "{}/package/v0/icon/{}?spec=={}&eos-version-compat={}&arch={}",
-                marketplace_url,
-                id,
-                man.version,
-                Current::new().compat(),
-                &*crate::ARCH,
-            ))
-            .await?
-            .error_for_status()?,
+            ctx.client
+                .get(with_query_params(
+                    &ctx,
+                    format!(
+                        "{}/package/v0/icon/{}?spec=={}",
+                        marketplace_url, id, man.version,
+                    )
+                    .parse()?,
+                ))
+                .send()
+                .await?
+                .error_for_status()?,
         ),
         &mut File::create(public_dir_path.join(format!("icon.{}", icon_type))).await?,
     )
@@ -928,17 +939,20 @@ pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin + Send + Sync>(
 {
     Some(local_man)
 } else if let Some(marketplace_url) = &marketplace_url {
-    match reqwest::get(format!(
-        "{}/package/v0/manifest/{}?spec={}&eos-version-compat={}&arch={}",
-        marketplace_url,
-        dep,
-        info.version,
-        Current::new().compat(),
-        &*crate::ARCH,
-    ))
-    .await
-    .with_kind(crate::ErrorKind::Registry)?
-    .error_for_status()
+    match ctx
+        .client
+        .get(with_query_params(
+            ctx,
+            format!(
+                "{}/package/v0/manifest/{}?spec={}",
+                marketplace_url, dep, info.version,
+            )
+            .parse()?,
+        ))
+        .send()
+        .await
+        .with_kind(crate::ErrorKind::Registry)?
.error_for_status()
{ {
Ok(a) => Ok(Some( Ok(a) => Ok(Some(
a.json() a.json()
@@ -963,16 +977,19 @@ pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin + Send + Sync>(
let icon_path = dir.join(format!("icon.{}", manifest.assets.icon_type())); let icon_path = dir.join(format!("icon.{}", manifest.assets.icon_type()));
if tokio::fs::metadata(&icon_path).await.is_err() { if tokio::fs::metadata(&icon_path).await.is_err() {
tokio::fs::create_dir_all(&dir).await?; tokio::fs::create_dir_all(&dir).await?;
let icon = reqwest::get(format!( let icon = ctx
"{}/package/v0/icon/{}?spec={}&eos-version-compat={}&arch={}", .client
marketplace_url, .get(with_query_params(
dep, ctx,
info.version, format!(
Current::new().compat(), "{}/package/v0/icon/{}?spec={}",
&*crate::ARCH, marketplace_url, dep, info.version,
)) )
.await .parse()?,
.with_kind(crate::ErrorKind::Registry)?; ))
.send()
.await
.with_kind(crate::ErrorKind::Registry)?;
let mut dst = File::create(&icon_path).await?; let mut dst = File::create(&icon_path).await?;
tokio::io::copy(&mut response_to_reader(icon), &mut dst).await?; tokio::io::copy(&mut response_to_reader(icon), &mut dst).await?;
dst.sync_all().await?; dst.sync_all().await?;


@@ -17,6 +17,7 @@ pub mod account;
 pub mod action;
 pub mod auth;
 pub mod backup;
+pub mod bins;
 pub mod config;
 pub mod context;
 pub mod control;

backend/src/main.rs (new file)

@@ -0,0 +1,3 @@
+fn main() {
+    startos::bins::startbox()
+}


@@ -21,6 +21,7 @@ use tracing::instrument;
 use crate::context::RpcContext;
 use crate::manager::sync::synchronizer;
 use crate::net::net_controller::NetService;
+use crate::net::vhost::AlpnInfo;
 use crate::procedure::docker::{DockerContainer, DockerProcedure, LongRunning};
 #[cfg(feature = "js_engine")]
 use crate::procedure::js_scripts::JsProcedure;
@@ -573,8 +574,14 @@ async fn add_network_for_main(
     let mut tx = secrets.begin().await?;
     for (id, interface) in &seed.manifest.interfaces.0 {
         for (external, internal) in interface.lan_config.iter().flatten() {
-            svc.add_lan(&mut tx, id.clone(), external.0, internal.internal, false)
-                .await?;
+            svc.add_lan(
+                &mut tx,
+                id.clone(),
+                external.0,
+                internal.internal,
+                Err(AlpnInfo::Specified(vec![])),
+            )
+            .await?;
         }
         for (external, internal) in interface.tor_config.iter().flat_map(|t| &t.port_mapping) {
             svc.add_tor(&mut tx, id.clone(), external.0, internal.0)


@@ -3,6 +3,8 @@ use reqwest::{StatusCode, Url};
 use rpc_toolkit::command;
 use serde_json::Value;

+use crate::context::RpcContext;
+use crate::version::VersionT;
 use crate::{Error, ResultExt};

 #[command(subcommands(get))]
@@ -10,9 +12,34 @@ pub fn marketplace() -> Result<(), Error> {
     Ok(())
 }

+pub fn with_query_params(ctx: &RpcContext, mut url: Url) -> Url {
+    url.query_pairs_mut()
+        .append_pair(
+            "os.version",
+            &crate::version::Current::new().semver().to_string(),
+        )
+        .append_pair(
+            "os.compat",
+            &crate::version::Current::new().compat().to_string(),
+        )
+        .append_pair("os.arch", crate::OS_ARCH)
+        .append_pair("hardware.arch", &*crate::ARCH)
+        .append_pair("hardware.ram", &ctx.hardware.ram.to_string());
+    for hw in &ctx.hardware.devices {
+        url.query_pairs_mut()
+            .append_pair(&format!("hardware.device.{}", hw.class()), hw.product());
+    }
+    url
+}
+
 #[command]
-pub async fn get(#[arg] url: Url) -> Result<Value, Error> {
-    let mut response = reqwest::get(url)
+pub async fn get(#[context] ctx: RpcContext, #[arg] url: Url) -> Result<Value, Error> {
+    let mut response = ctx
+        .client
+        .get(with_query_params(&ctx, url))
+        .send()
         .await
         .with_kind(crate::ErrorKind::Network)?;
     let status = response.status();
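The `with_query_params` helper above is what enables hardware-aware filtering: every registry request now carries OS and hardware facts as query pairs. A minimal std-only sketch of the same idea (the real helper uses `reqwest::Url::query_pairs_mut`, which also percent-encodes values; the hardware values below are illustrative placeholders):

```rust
// Sketch: append key/value query pairs to a URL string (no percent-encoding).
fn append_query_params(mut url: String, pairs: &[(&str, &str)]) -> String {
    for (k, v) in pairs {
        // First pair gets '?', subsequent pairs get '&'.
        url.push(if url.contains('?') { '&' } else { '?' });
        url.push_str(k);
        url.push('=');
        url.push_str(v);
    }
    url
}

fn main() {
    // Placeholder hardware values; the real code reads them from ctx.hardware.
    let url = append_query_params(
        "https://registry.example/package/v0/manifest/bitcoind".into(),
        &[("os.arch", "x86_64"), ("hardware.ram", "8589934592")],
    );
    assert_eq!(
        url,
        "https://registry.example/package/v0/manifest/bitcoind?os.arch=x86_64&hardware.ram=8589934592"
    );
}
```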


@@ -15,7 +15,7 @@ use crate::net::keys::Key;
 use crate::net::mdns::MdnsController;
 use crate::net::ssl::{export_cert, export_key, SslManager};
 use crate::net::tor::TorController;
-use crate::net::vhost::VHostController;
+use crate::net::vhost::{AlpnInfo, VHostController};
 use crate::s9pk::manifest::PackageId;
 use crate::volume::cert_dir;
 use crate::{Error, HOST_IP};
@@ -55,6 +55,8 @@ impl NetController {
     }

     async fn add_os_bindings(&mut self, hostname: &Hostname, key: &Key) -> Result<(), Error> {
+        let alpn = Err(AlpnInfo::Specified(vec!["http/1.1".into(), "h2".into()]));
+
         // Internal DNS
         self.vhost
             .add(
@@ -62,7 +64,7 @@ impl NetController {
                 Some("embassy".into()),
                 443,
                 ([127, 0, 0, 1], 80).into(),
-                false,
+                alpn.clone(),
             )
             .await?;
         self.os_bindings
@@ -71,7 +73,13 @@ impl NetController {
         // LAN IP
         self.os_bindings.push(
             self.vhost
-                .add(key.clone(), None, 443, ([127, 0, 0, 1], 80).into(), false)
+                .add(
+                    key.clone(),
+                    None,
+                    443,
+                    ([127, 0, 0, 1], 80).into(),
+                    alpn.clone(),
+                )
                 .await?,
         );
@@ -83,7 +91,7 @@ impl NetController {
                 Some("localhost".into()),
                 443,
                 ([127, 0, 0, 1], 80).into(),
-                false,
+                alpn.clone(),
             )
             .await?,
         );
@@ -94,7 +102,7 @@ impl NetController {
                 Some(hostname.no_dot_host_name()),
                 443,
                 ([127, 0, 0, 1], 80).into(),
-                false,
+                alpn.clone(),
             )
             .await?,
         );
@@ -107,7 +115,7 @@ impl NetController {
                 Some(hostname.local_domain_name()),
                 443,
                 ([127, 0, 0, 1], 80).into(),
-                false,
+                alpn.clone(),
             )
             .await?,
         );
@@ -127,7 +135,7 @@ impl NetController {
                 Some(key.tor_address().to_string()),
                 443,
                 ([127, 0, 0, 1], 80).into(),
-                false,
+                alpn.clone(),
             )
             .await?,
         );
@@ -179,7 +187,7 @@ impl NetController {
         key: Key,
         external: u16,
         target: SocketAddr,
-        connect_ssl: bool,
+        connect_ssl: Result<(), AlpnInfo>,
     ) -> Result<Vec<Arc<()>>, Error> {
         let mut rcs = Vec::with_capacity(2);
         rcs.push(
@@ -261,7 +269,7 @@ impl NetService {
         id: InterfaceId,
         external: u16,
         internal: u16,
-        connect_ssl: bool,
+        connect_ssl: Result<(), AlpnInfo>,
     ) -> Result<(), Error>
     where
         for<'a> &'a mut Ex: PgExecutor<'a>,


@@ -1,16 +1,19 @@
+use std::borrow::Cow;
 use std::fs::Metadata;
-use std::path::Path;
+use std::path::{Path, PathBuf};
 use std::sync::Arc;
 use std::time::UNIX_EPOCH;

-use async_compression::tokio::bufread::{BrotliEncoder, GzipEncoder};
+use async_compression::tokio::bufread::GzipEncoder;
 use color_eyre::eyre::eyre;
 use digest::Digest;
 use futures::FutureExt;
-use http::header::{ACCEPT_ENCODING, CONTENT_ENCODING};
+use http::header::ACCEPT_ENCODING;
 use http::request::Parts as RequestParts;
 use http::response::Builder;
 use hyper::{Body, Method, Request, Response, StatusCode};
+use include_dir::{include_dir, Dir};
+use new_mime_guess::MimeGuess;
 use openssl::hash::MessageDigest;
 use openssl::x509::X509;
 use rpc_toolkit::rpc_handler;
@@ -33,10 +36,9 @@ static NOT_FOUND: &[u8] = b"Not Found";
 static METHOD_NOT_ALLOWED: &[u8] = b"Method Not Allowed";
 static NOT_AUTHORIZED: &[u8] = b"Not Authorized";

-pub const MAIN_UI_WWW_DIR: &str = "/var/www/html/main";
-pub const SETUP_UI_WWW_DIR: &str = "/var/www/html/setup";
-pub const DIAG_UI_WWW_DIR: &str = "/var/www/html/diagnostic";
-pub const INSTALL_UI_WWW_DIR: &str = "/var/www/html/install";
+static EMBEDDED_UIS: Dir<'_> = include_dir!("$CARGO_MANIFEST_DIR/../frontend/dist/static");
+
+const PROXY_STRIP_HEADERS: &[&str] = &["cookie", "host", "origin", "referer", "user-agent"];

 fn status_fn(_: i32) -> StatusCode {
     StatusCode::OK
@@ -50,6 +52,17 @@ pub enum UiMode {
     Main,
 }

+impl UiMode {
+    fn path(&self, path: &str) -> PathBuf {
+        match self {
+            Self::Setup => Path::new("setup-wizard").join(path),
+            Self::Diag => Path::new("diagnostic-ui").join(path),
+            Self::Install => Path::new("install-wizard").join(path),
+            Self::Main => Path::new("ui").join(path),
+        }
+    }
+}
+
 pub async fn setup_ui_file_router(ctx: SetupContext) -> Result<HttpHandler, Error> {
     let handler: HttpHandler = Arc::new(move |req| {
         let ctx = ctx.clone();
@@ -224,65 +237,35 @@ pub async fn main_ui_server_router(ctx: RpcContext) -> Result<HttpHandler, Error
 }

 async fn alt_ui(req: Request<Body>, ui_mode: UiMode) -> Result<Response<Body>, Error> {
-    let selected_root_dir = match ui_mode {
-        UiMode::Setup => SETUP_UI_WWW_DIR,
-        UiMode::Diag => DIAG_UI_WWW_DIR,
-        UiMode::Install => INSTALL_UI_WWW_DIR,
-        UiMode::Main => MAIN_UI_WWW_DIR,
-    };
     let (request_parts, _body) = req.into_parts();
-    let accept_encoding = request_parts
-        .headers
-        .get_all(ACCEPT_ENCODING)
-        .into_iter()
-        .filter_map(|h| h.to_str().ok())
-        .flat_map(|s| s.split(","))
-        .filter_map(|s| s.split(";").next())
-        .map(|s| s.trim())
-        .collect::<Vec<_>>();
     match &request_parts.method {
         &Method::GET => {
-            let uri_path = request_parts
-                .uri
-                .path()
-                .strip_prefix('/')
-                .unwrap_or(request_parts.uri.path());
-
-            let full_path = Path::new(selected_root_dir).join(uri_path);
-            file_send(
-                &request_parts,
-                if tokio::fs::metadata(&full_path)
-                    .await
-                    .ok()
-                    .map(|f| f.is_file())
-                    .unwrap_or(false)
-                {
-                    full_path
-                } else {
-                    Path::new(selected_root_dir).join("index.html")
-                },
-                &accept_encoding,
-            )
-            .await
+            let uri_path = ui_mode.path(
+                request_parts
+                    .uri
+                    .path()
+                    .strip_prefix('/')
+                    .unwrap_or(request_parts.uri.path()),
+            );
+
+            let file = EMBEDDED_UIS
+                .get_file(&*uri_path)
+                .or_else(|| EMBEDDED_UIS.get_file(&*ui_mode.path("index.html")));
+
+            if let Some(file) = file {
+                FileData::from_embedded(&request_parts, file)
+                    .into_response(&request_parts)
+                    .await
+            } else {
+                Ok(not_found())
+            }
         }
         _ => Ok(method_not_allowed()),
     }
 }

 async fn main_embassy_ui(req: Request<Body>, ctx: RpcContext) -> Result<Response<Body>, Error> {
-    let selected_root_dir = MAIN_UI_WWW_DIR;
     let (request_parts, _body) = req.into_parts();
-    let accept_encoding = request_parts
-        .headers
-        .get_all(ACCEPT_ENCODING)
-        .into_iter()
-        .filter_map(|h| h.to_str().ok())
-        .flat_map(|s| s.split(","))
-        .filter_map(|s| s.split(";").next())
-        .map(|s| s.trim())
-        .collect::<Vec<_>>();
     match (
         &request_parts.method,
         request_parts
@@ -297,11 +280,12 @@ async fn main_embassy_ui(req: Request<Body>, ctx: RpcContext) -> Result<Response
             Ok(_) => {
                 let sub_path = Path::new(path);
                 if let Ok(rest) = sub_path.strip_prefix("package-data") {
-                    file_send(
+                    FileData::from_path(
                         &request_parts,
-                        ctx.datadir.join(PKG_PUBLIC_DIR).join(rest),
-                        &accept_encoding,
+                        &ctx.datadir.join(PKG_PUBLIC_DIR).join(rest),
                     )
+                    .await?
+                    .into_response(&request_parts)
                     .await
                 } else if let Ok(rest) = sub_path.strip_prefix("eos") {
                     match rest.to_str() {
@@ -316,6 +300,40 @@ async fn main_embassy_ui(req: Request<Body>, ctx: RpcContext) -> Result<Response
                 Err(e) => un_authorized(e, &format!("public/{path}")),
             }
         }
+        (&Method::GET, Some(("proxy", target))) => {
+            match HasValidSession::from_request_parts(&request_parts, &ctx).await {
+                Ok(_) => {
+                    let target = urlencoding::decode(target)?;
+                    let res = ctx
+                        .client
+                        .get(target.as_ref())
+                        .headers(
+                            request_parts
+                                .headers
+                                .iter()
+                                .filter(|(h, _)| {
+                                    !PROXY_STRIP_HEADERS
+                                        .iter()
+                                        .any(|bad| h.as_str().eq_ignore_ascii_case(bad))
+                                })
+                                .map(|(h, v)| (h.clone(), v.clone()))
+                                .collect(),
+                        )
+                        .send()
+                        .await
+                        .with_kind(crate::ErrorKind::Network)?;
+                    let mut hres = Response::builder().status(res.status());
+                    for (h, v) in res.headers().clone() {
+                        if let Some(h) = h {
+                            hres = hres.header(h, v);
+                        }
+                    }
+                    hres.body(Body::wrap_stream(res.bytes_stream()))
+                        .with_kind(crate::ErrorKind::Network)
+                }
+                Err(e) => un_authorized(e, &format!("proxy/{target}")),
+            }
+        }
         (&Method::GET, Some(("eos", "local.crt"))) => {
             match HasValidSession::from_request_parts(&request_parts, &ctx).await {
                 Ok(_) => cert_send(&ctx.account.read().await.root_ca_cert),
@@ -323,28 +341,25 @@ async fn main_embassy_ui(req: Request<Body>, ctx: RpcContext) -> Result<Response
             }
         }
         (&Method::GET, _) => {
-            let uri_path = request_parts
-                .uri
-                .path()
-                .strip_prefix('/')
-                .unwrap_or(request_parts.uri.path());
-
-            let full_path = Path::new(selected_root_dir).join(uri_path);
-            file_send(
-                &request_parts,
-                if tokio::fs::metadata(&full_path)
-                    .await
-                    .ok()
-                    .map(|f| f.is_file())
-                    .unwrap_or(false)
-                {
-                    full_path
-                } else {
-                    Path::new(selected_root_dir).join("index.html")
-                },
-                &accept_encoding,
-            )
-            .await
+            let uri_path = UiMode::Main.path(
+                request_parts
+                    .uri
+                    .path()
+                    .strip_prefix('/')
+                    .unwrap_or(request_parts.uri.path()),
+            );
+
+            let file = EMBEDDED_UIS
+                .get_file(&*uri_path)
+                .or_else(|| EMBEDDED_UIS.get_file(&*UiMode::Main.path("index.html")));
+
+            if let Some(file) = file {
+                FileData::from_embedded(&request_parts, file)
+                    .into_response(&request_parts)
+                    .await
+            } else {
+                Ok(not_found())
+            }
         }
         _ => Ok(method_not_allowed()),
     }
@@ -407,118 +422,158 @@ fn cert_send(cert: &X509) -> Result<Response<Body>, Error> {
         .with_kind(ErrorKind::Network)
 }

-async fn file_send(
-    req: &RequestParts,
-    path: impl AsRef<Path>,
-    accept_encoding: &[&str],
-) -> Result<Response<Body>, Error> {
-    // Serve a file by asynchronously reading it by chunks using tokio-util crate.
-    let path = path.as_ref();
-
-    let file = File::open(path)
-        .await
-        .with_ctx(|_| (ErrorKind::Filesystem, path.display().to_string()))?;
-    let metadata = file
-        .metadata()
-        .await
-        .with_ctx(|_| (ErrorKind::Filesystem, path.display().to_string()))?;
-    let e_tag = e_tag(path, &metadata)?;
-
-    let mut builder = Response::builder();
-    builder = with_content_type(path, builder);
-    builder = builder.header(http::header::ETAG, &e_tag);
-    builder = builder.header(
-        http::header::CACHE_CONTROL,
-        "public, max-age=21000000, immutable",
-    );
-    if req
-        .headers
-        .get_all(http::header::CONNECTION)
-        .iter()
-        .flat_map(|s| s.to_str().ok())
-        .flat_map(|s| s.split(","))
-        .any(|s| s.trim() == "keep-alive")
-    {
-        builder = builder.header(http::header::CONNECTION, "keep-alive");
-    }
-
-    if req
-        .headers
-        .get("if-none-match")
-        .and_then(|h| h.to_str().ok())
-        == Some(e_tag.as_str())
-    {
-        builder = builder.status(StatusCode::NOT_MODIFIED);
-        builder.body(Body::empty())
-    } else {
-        let body = if false && accept_encoding.contains(&"br") && metadata.len() > u16::MAX as u64 {
-            builder = builder.header(CONTENT_ENCODING, "br");
-            Body::wrap_stream(ReaderStream::new(BrotliEncoder::new(BufReader::new(file))))
-        } else if accept_encoding.contains(&"gzip") && metadata.len() > u16::MAX as u64 {
-            builder = builder.header(CONTENT_ENCODING, "gzip");
-            Body::wrap_stream(ReaderStream::new(GzipEncoder::new(BufReader::new(file))))
-        } else {
-            builder = with_content_length(&metadata, builder);
-            Body::wrap_stream(ReaderStream::new(file))
-        };
-        builder.body(body)
-    }
-    .with_kind(ErrorKind::Network)
-}
+struct FileData {
+    data: Body,
+    len: Option<u64>,
+    encoding: Option<&'static str>,
+    e_tag: String,
+    mime: Option<String>,
+}
+
+impl FileData {
+    fn from_embedded(req: &RequestParts, file: &'static include_dir::File<'static>) -> Self {
+        let path = file.path();
+        let (encoding, data) = req
+            .headers
+            .get_all(ACCEPT_ENCODING)
+            .into_iter()
+            .filter_map(|h| h.to_str().ok())
+            .flat_map(|s| s.split(","))
+            .filter_map(|s| s.split(";").next())
+            .map(|s| s.trim())
+            .fold((None, file.contents()), |acc, e| {
+                if let Some(file) = (e == "br")
+                    .then_some(())
+                    .and_then(|_| EMBEDDED_UIS.get_file(format!("{}.br", path.display())))
+                {
+                    (Some("br"), file.contents())
+                } else if let Some(file) = (e == "gzip" && acc.0 != Some("br"))
+                    .then_some(())
+                    .and_then(|_| EMBEDDED_UIS.get_file(format!("{}.gz", path.display())))
+                {
+                    (Some("gzip"), file.contents())
+                } else {
+                    acc
+                }
+            });
+
+        Self {
+            len: Some(data.len() as u64),
+            encoding,
+            data: data.into(),
+            e_tag: e_tag(path, None),
+            mime: MimeGuess::from_path(path)
+                .first()
+                .map(|m| m.essence_str().to_owned()),
+        }
+    }
+
+    async fn from_path(req: &RequestParts, path: &Path) -> Result<Self, Error> {
+        let encoding = req
+            .headers
+            .get_all(ACCEPT_ENCODING)
+            .into_iter()
+            .filter_map(|h| h.to_str().ok())
+            .flat_map(|s| s.split(","))
+            .filter_map(|s| s.split(";").next())
+            .map(|s| s.trim())
+            .any(|e| e == "gzip")
+            .then_some("gzip");
+
+        let file = File::open(path)
+            .await
+            .with_ctx(|_| (ErrorKind::Filesystem, path.display().to_string()))?;
+        let metadata = file
+            .metadata()
+            .await
+            .with_ctx(|_| (ErrorKind::Filesystem, path.display().to_string()))?;
+
+        let e_tag = e_tag(path, Some(&metadata));
+
+        let (len, data) = if encoding == Some("gzip") {
+            (
+                None,
+                Body::wrap_stream(ReaderStream::new(GzipEncoder::new(BufReader::new(file)))),
+            )
+        } else {
+            (
+                Some(metadata.len()),
+                Body::wrap_stream(ReaderStream::new(file)),
+            )
+        };
+
+        Ok(Self {
+            data,
+            len,
+            encoding,
+            e_tag,
+            mime: MimeGuess::from_path(path)
+                .first()
+                .map(|m| m.essence_str().to_owned()),
+        })
+    }
+
+    async fn into_response(self, req: &RequestParts) -> Result<Response<Body>, Error> {
+        let mut builder = Response::builder();
+        if let Some(mime) = self.mime {
+            builder = builder.header(http::header::CONTENT_TYPE, &*mime);
+        }
+        builder = builder.header(http::header::ETAG, &*self.e_tag);
+        builder = builder.header(
+            http::header::CACHE_CONTROL,
+            "public, max-age=21000000, immutable",
+        );
+        if req
+            .headers
+            .get_all(http::header::CONNECTION)
+            .iter()
+            .flat_map(|s| s.to_str().ok())
+            .flat_map(|s| s.split(","))
+            .any(|s| s.trim() == "keep-alive")
+        {
+            builder = builder.header(http::header::CONNECTION, "keep-alive");
+        }
+        if req
+            .headers
+            .get("if-none-match")
+            .and_then(|h| h.to_str().ok())
+            == Some(self.e_tag.as_ref())
+        {
+            builder = builder.status(StatusCode::NOT_MODIFIED);
+            builder.body(Body::empty())
+        } else {
+            if let Some(len) = self.len {
+                builder = builder.header(http::header::CONTENT_LENGTH, len);
+            }
+            if let Some(encoding) = self.encoding {
+                builder = builder.header(http::header::CONTENT_ENCODING, encoding);
+            }
+            builder.body(self.data)
+        }
+        .with_kind(ErrorKind::Network)
+    }
+}

-fn e_tag(path: &Path, metadata: &Metadata) -> Result<String, Error> {
-    let modified = metadata.modified().with_kind(ErrorKind::Filesystem)?;
+fn e_tag(path: &Path, metadata: Option<&Metadata>) -> String {
     let mut hasher = sha2::Sha256::new();
     hasher.update(format!("{:?}", path).as_bytes());
-    hasher.update(
-        format!(
-            "{}",
-            modified
-                .duration_since(UNIX_EPOCH)
-                .unwrap_or_default()
-                .as_secs()
-        )
-        .as_bytes(),
-    );
+    if let Some(modified) = metadata.and_then(|m| m.modified().ok()) {
+        hasher.update(
+            format!(
+                "{}",
+                modified
+                    .duration_since(UNIX_EPOCH)
+                    .unwrap_or_default()
+                    .as_secs()
+            )
+            .as_bytes(),
+        );
+    }
     let res = hasher.finalize();
-    Ok(format!(
+    format!(
         "\"{}\"",
         base32::encode(base32::Alphabet::RFC4648 { padding: false }, res.as_slice()).to_lowercase()
-    ))
-}
-
-///https://en.wikipedia.org/wiki/Media_type
-fn with_content_type(path: &Path, builder: Builder) -> Builder {
-    let content_type = match path.extension() {
-        Some(os_str) => match os_str.to_str() {
-            Some("apng") => "image/apng",
-            Some("avif") => "image/avif",
-            Some("flif") => "image/flif",
-            Some("gif") => "image/gif",
-            Some("jpg") | Some("jpeg") | Some("jfif") | Some("pjpeg") | Some("pjp") => "image/jpeg",
-            Some("jxl") => "image/jxl",
-            Some("png") => "image/png",
-            Some("svg") => "image/svg+xml",
-            Some("webp") => "image/webp",
-            Some("mng") | Some("x-mng") => "image/x-mng",
-            Some("css") => "text/css",
-            Some("csv") => "text/csv",
-            Some("html") => "text/html",
-            Some("php") => "text/php",
-            Some("plain") | Some("md") | Some("txt") => "text/plain",
-            Some("xml") => "text/xml",
-            Some("js") => "text/javascript",
-            Some("wasm") => "application/wasm",
-            None | Some(_) => "text/plain",
-        },
-        None => "text/plain",
-    };
-    builder.header(http::header::CONTENT_TYPE, content_type)
-}
-
-fn with_content_length(metadata: &Metadata, builder: Builder) -> Builder {
-    builder.header(http::header::CONTENT_LENGTH, metadata.len())
-}
+    )
+}
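The new `/proxy/<target>` route above forwards an authenticated request to an arbitrary URL, but first drops headers that would leak identity to the target. A std-only sketch of that filter, assuming headers are modeled as plain string pairs rather than the real `http::HeaderMap`:

```rust
const PROXY_STRIP_HEADERS: &[&str] = &["cookie", "host", "origin", "referer", "user-agent"];

// Keep only the request headers that are safe to forward to the proxied target.
fn forwardable<'a>(headers: &[(&'a str, &'a str)]) -> Vec<(&'a str, &'a str)> {
    headers
        .iter()
        .filter(|(name, _)| {
            !PROXY_STRIP_HEADERS
                .iter()
                .any(|bad| name.eq_ignore_ascii_case(bad))
        })
        .copied()
        .collect()
}

fn main() {
    let headers = [
        ("Cookie", "session=abc"),
        ("Accept", "image/png"),
        ("Host", "my-server.local"),
    ];
    // Header-name comparison is case-insensitive, so "Cookie" and "Host" are dropped.
    assert_eq!(forwardable(&headers), vec![("Accept", "image/png")]);
}
```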


@@ -3,6 +3,7 @@ use std::convert::Infallible;
 use std::net::{IpAddr, Ipv6Addr, SocketAddr};
 use std::str::FromStr;
 use std::sync::{Arc, Weak};
+use std::time::Duration;

 use color_eyre::eyre::eyre;
 use helpers::NonDetachingJoinHandle;
@@ -19,7 +20,7 @@ use tokio_rustls::{LazyConfigAcceptor, TlsConnector};
 use crate::net::keys::Key;
 use crate::net::ssl::SslManager;
 use crate::net::utils::SingleAccept;
-use crate::util::io::BackTrackingReader;
+use crate::util::io::{BackTrackingReader, TimeoutStream};
 use crate::Error;

 // not allowed: <=1024, >=32768, 5355, 5432, 9050, 6010, 9051, 5353
@@ -41,7 +42,7 @@ impl VHostController {
         hostname: Option<String>,
         external: u16,
         target: SocketAddr,
-        connect_ssl: bool,
+        connect_ssl: Result<(), AlpnInfo>,
     ) -> Result<Arc<()>, Error> {
         let mut writable = self.servers.lock().await;
         let server = if let Some(server) = writable.remove(&external) {
@@ -77,10 +78,16 @@ impl VHostController {
 #[derive(Clone, PartialEq, Eq, PartialOrd, Ord)]
 struct TargetInfo {
     addr: SocketAddr,
-    connect_ssl: bool,
+    connect_ssl: Result<(), AlpnInfo>,
     key: Key,
 }

+#[derive(Clone, PartialEq, Eq, PartialOrd, Ord)]
+pub enum AlpnInfo {
+    Reflect,
+    Specified(Vec<Vec<u8>>),
+}
+
 struct VHostServer {
     mapping: Weak<RwLock<BTreeMap<Option<String>, BTreeMap<TargetInfo, Weak<()>>>>>,
     _thread: NonDetachingJoinHandle<()>,
@@ -98,6 +105,8 @@ impl VHostServer {
             loop {
                 match listener.accept().await {
                     Ok((stream, _)) => {
+                        let stream =
+                            Box::pin(TimeoutStream::new(stream, Duration::from_secs(300)));
                         let mut stream = BackTrackingReader::new(stream);
                         stream.start_buffering();
                         let mapping = mapping.clone();
@@ -178,7 +187,7 @@ impl VHostServer {
                                 let cfg = ServerConfig::builder()
                                     .with_safe_defaults()
                                     .with_no_client_auth();
-                                let cfg =
+                                let mut cfg =
                                     if mid.client_hello().signature_schemes().contains(
                                         &tokio_rustls::rustls::SignatureScheme::ED25519,
                                     ) {
@@ -213,49 +222,94 @@ impl VHostServer {
                                             .private_key_to_der()?,
                                         ),
                                     )
-                                };
-                                let mut tls_stream = mid
-                                    .into_stream(Arc::new(
-                                        cfg.with_kind(crate::ErrorKind::OpenSsl)?,
-                                    ))
-                                    .await?;
-                                tls_stream.get_mut().0.stop_buffering();
-                                if target.connect_ssl {
-                                    tokio::io::copy_bidirectional(
-                                        &mut tls_stream,
-                                        &mut TlsConnector::from(Arc::new(
-                                            tokio_rustls::rustls::ClientConfig::builder()
-                                                .with_safe_defaults()
-                                                .with_root_certificates({
-                                                    let mut store = RootCertStore::empty();
-                                                    store.add(
-                                                        &tokio_rustls::rustls::Certificate(
-                                                            key.root_ca().to_der()?,
-                                                        ),
-                                                    ).with_kind(crate::ErrorKind::OpenSsl)?;
-                                                    store
-                                                })
-                                                .with_no_client_auth(),
-                                        ))
-                                        .connect(
-                                            key.key()
-                                                .internal_address()
-                                                .as_str()
-                                                .try_into()
-                                                .with_kind(crate::ErrorKind::OpenSsl)?,
-                                            tcp_stream,
-                                        )
-                                        .await
-                                        .with_kind(crate::ErrorKind::OpenSsl)?,
-                                    )
-                                    .await?;
-                                } else {
-                                    tokio::io::copy_bidirectional(
-                                        &mut tls_stream,
-                                        &mut tcp_stream,
-                                    )
-                                    .await?;
-                                }
+                                }
+                                .with_kind(crate::ErrorKind::OpenSsl)?;
+                                match target.connect_ssl {
+                                    Ok(()) => {
+                                        let mut client_cfg =
+                                            tokio_rustls::rustls::ClientConfig::builder()
+                                                .with_safe_defaults()
+                                                .with_root_certificates({
+                                                    let mut store = RootCertStore::empty();
+                                                    store.add(
+                                                        &tokio_rustls::rustls::Certificate(
+                                                            key.root_ca().to_der()?,
+                                                        ),
+                                                    ).with_kind(crate::ErrorKind::OpenSsl)?;
+                                                    store
+                                                })
+                                                .with_no_client_auth();
+                                        client_cfg.alpn_protocols = mid
+                                            .client_hello()
+                                            .alpn()
+                                            .into_iter()
+                                            .flatten()
+                                            .map(|x| x.to_vec())
+                                            .collect();
+                                        let mut target_stream =
+                                            TlsConnector::from(Arc::new(client_cfg))
+                                                .connect_with(
+                                                    key.key()
+                                                        .internal_address()
+                                                        .as_str()
+                                                        .try_into()
+                                                        .with_kind(
+                                                            crate::ErrorKind::OpenSsl,
+                                                        )?,
+                                                    tcp_stream,
+                                                    |conn| {
+                                                        cfg.alpn_protocols.extend(
+                                                            conn.alpn_protocol()
                                                                .into_iter()
                                                                .map(|p| p.to_vec()),
                                                        )
                                                    },
                                                )
                                                .await
                                                .with_kind(crate::ErrorKind::OpenSsl)?;
+                                        let mut tls_stream =
+                                            mid.into_stream(Arc::new(cfg)).await?;
+                                        tls_stream.get_mut().0.stop_buffering();
+                                        tokio::io::copy_bidirectional(
+                                            &mut tls_stream,
+                                            &mut target_stream,
+                                        )
+                                        .await
+                                    }
+                                    Err(AlpnInfo::Reflect) => {
+                                        for proto in
+                                            mid.client_hello().alpn().into_iter().flatten()
+                                        {
+                                            cfg.alpn_protocols.push(proto.into());
+                                        }
+                                        let mut tls_stream =
+                                            mid.into_stream(Arc::new(cfg)).await?;
+                                        tls_stream.get_mut().0.stop_buffering();
+                                        tokio::io::copy_bidirectional(
+                                            &mut tls_stream,
+                                            &mut tcp_stream,
+                                        )
+                                        .await
+                                    }
+                                    Err(AlpnInfo::Specified(alpn)) => {
+                                        cfg.alpn_protocols = alpn;
+                                        let mut tls_stream =
+                                            mid.into_stream(Arc::new(cfg)).await?;
+                                        tls_stream.get_mut().0.stop_buffering();
+                                        tokio::io::copy_bidirectional(
+                                            &mut tls_stream,
+                                            &mut tcp_stream,
+                                        )
+                                        .await
+                                    }
+                                }
+                                .map_or_else(
+                                    |e| match e.kind() {
+                                        std::io::ErrorKind::UnexpectedEof => Ok(()),
+                                        _ => Err(e),
+                                    },
+                                    |_| Ok(()),
+                                )?;
                             } else {
                                 // 503
                             }
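The `connect_ssl: Result<(), AlpnInfo>` change above gives each vhost target three behaviors: `Ok(())` re-encrypts to the backend, `Err(Reflect)` mirrors the client's ALPN offer, and `Err(Specified(..))` pins a fixed list. The protocol-selection half, reduced to a std-only sketch (the client hello is modeled here as a plain slice of protocol byte-strings, not the real rustls type):

```rust
#[derive(Clone, PartialEq, Eq, PartialOrd, Ord)]
pub enum AlpnInfo {
    Reflect,
    Specified(Vec<Vec<u8>>),
}

// Decide which ALPN protocols the terminating TLS server should advertise.
fn alpn_protocols(info: &AlpnInfo, client_hello_alpn: &[&[u8]]) -> Vec<Vec<u8>> {
    match info {
        // Mirror whatever the client offered.
        AlpnInfo::Reflect => client_hello_alpn.iter().map(|p| p.to_vec()).collect(),
        // Advertise a fixed list regardless of the client's offer.
        AlpnInfo::Specified(list) => list.clone(),
    }
}

fn main() {
    let offered: Vec<&[u8]> = vec![b"h2", b"http/1.1"];
    assert_eq!(
        alpn_protocols(&AlpnInfo::Reflect, &offered),
        vec![b"h2".to_vec(), b"http/1.1".to_vec()]
    );
    // An empty Specified list (as passed by add_lan) advertises no protocols.
    assert_eq!(
        alpn_protocols(&AlpnInfo::Specified(vec![]), &offered),
        Vec::<Vec<u8>>::new()
    );
}
```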


@@ -149,29 +149,36 @@ pub async fn execute(
     if !overwrite {
         if let Ok(guard) =
-            TmpMountGuard::mount(&BlockDev::new(part_info.root.clone()), MountType::ReadOnly).await
+            TmpMountGuard::mount(&BlockDev::new(part_info.root.clone()), MountType::ReadWrite).await
         {
             if let Err(e) = async {
                 // cp -r ${guard}/config /tmp/config
-                Command::new("cp")
-                    .arg("-r")
-                    .arg(guard.as_ref().join("config"))
-                    .arg("/tmp/config.bak")
-                    .invoke(crate::ErrorKind::Filesystem)
-                    .await?;
                 if tokio::fs::metadata(guard.as_ref().join("config/upgrade"))
                     .await
                     .is_ok()
                 {
                     tokio::fs::remove_file(guard.as_ref().join("config/upgrade")).await?;
                 }
-                guard.unmount().await
+                if tokio::fs::metadata(guard.as_ref().join("config/disk.guid"))
+                    .await
+                    .is_ok()
+                {
+                    tokio::fs::remove_file(guard.as_ref().join("config/disk.guid")).await?;
+                }
+                Command::new("cp")
+                    .arg("-r")
+                    .arg(guard.as_ref().join("config"))
+                    .arg("/tmp/config.bak")
+                    .invoke(crate::ErrorKind::Filesystem)
+                    .await?;
+                Ok::<_, Error>(())
             }
             .await
             {
                 tracing::error!("Error recovering previous config: {e}");
                 tracing::debug!("{e:?}");
             }
+            guard.unmount().await?;
         }
     }


@@ -1,3 +1,4 @@
+use std::collections::BTreeMap;
 use std::path::{Path, PathBuf};

 use color_eyre::eyre::eyre;
@@ -16,6 +17,7 @@ use crate::net::interface::Interfaces;
 use crate::procedure::docker::DockerContainers;
 use crate::procedure::PackageProcedure;
 use crate::status::health_check::HealthChecks;
+use crate::util::serde::Regex;
 use crate::util::Version;
 use crate::version::{Current, VersionT};
 use crate::volume::Volumes;
@@ -79,6 +81,9 @@ pub struct Manifest {
     #[serde(default)]
     pub replaces: Vec<String>,
+    #[serde(default)]
+    pub hardware_requirements: HardwareRequirements,
 }

 impl Manifest {
@@ -109,6 +114,15 @@ impl Manifest {
     }
 }

+#[derive(Clone, Debug, Default, Deserialize, Serialize)]
+#[serde(rename_all = "kebab-case")]
+pub struct HardwareRequirements {
+    #[serde(default)]
+    device: BTreeMap<String, Regex>,
+    ram: Option<u64>,
+    arch: Option<Vec<String>>,
+}
+
 #[derive(Clone, Debug, Default, Deserialize, Serialize)]
 #[serde(rename_all = "kebab-case")]
 pub struct Assets {

View File

@@ -1,9 +1,9 @@
-use std::path::PathBuf;
+use std::path::{Path, PathBuf};
 use std::sync::Arc;
+use std::time::Duration;

 use color_eyre::eyre::eyre;
 use futures::StreamExt;
-use helpers::{Rsync, RsyncOptions};
 use josekit::jwk::Jwk;
 use openssl::x509::X509;
 use patch_db::DbHandle;
@@ -13,6 +13,7 @@ use serde::{Deserialize, Serialize};
 use sqlx::Connection;
 use tokio::fs::File;
 use tokio::io::AsyncWriteExt;
+use tokio::try_join;
 use torut::onion::OnionAddressV3;
 use tracing::instrument;
@@ -32,6 +33,7 @@ use crate::disk::REPAIR_DISK_PATH;
 use crate::hostname::Hostname;
 use crate::init::{init, InitResult};
 use crate::middleware::encrypt::EncryptedWire;
+use crate::util::io::{dir_copy, dir_size, Counter};
 use crate::{Error, ErrorKind, ResultExt};

 #[command(subcommands(status, disk, attach, execute, cifs, complete, get_pubkey, exit))]
@@ -123,7 +125,7 @@ pub async fn attach(
         } else {
             RepairStrategy::Preen
         },
-        DEFAULT_PASSWORD,
+        if guid.ends_with("_UNENC") { None } else { Some(DEFAULT_PASSWORD) },
     )
     .await?;
     if tokio::fs::metadata(REPAIR_DISK_PATH).await.is_ok() {
@@ -142,7 +144,7 @@ pub async fn attach(
     }
     let (hostname, tor_addr, root_ca) = setup_init(&ctx, password).await?;
     *ctx.setup_result.write().await = Some((guid, SetupResult {
-        tor_address: format!("http://{}", tor_addr),
+        tor_address: format!("https://{}", tor_addr),
         lan_address: hostname.lan_address(),
         root_ca: String::from_utf8(root_ca.to_pem()?)?,
     }));
@@ -279,7 +281,7 @@ pub async fn execute(
     *ctx.setup_result.write().await = Some((
         guid,
         SetupResult {
-            tor_address: format!("http://{}", tor_addr),
+            tor_address: format!("https://{}", tor_addr),
             lan_address: hostname.lan_address(),
             root_ca: String::from_utf8(
                 root_ca.to_pem().expect("failed to serialize root ca"),
@@ -335,12 +337,17 @@ pub async fn execute_inner(
     recovery_source: Option<RecoverySource>,
     recovery_password: Option<String>,
 ) -> Result<(Arc<String>, Hostname, OnionAddressV3, X509), Error> {
+    let encryption_password = if ctx.disable_encryption {
+        None
+    } else {
+        Some(DEFAULT_PASSWORD)
+    };
     let guid = Arc::new(
         crate::disk::main::create(
             &[embassy_logicalname],
             &pvscan().await?,
             &ctx.datadir,
-            DEFAULT_PASSWORD,
+            encryption_password,
         )
         .await?,
     );
@@ -348,7 +355,7 @@ pub async fn execute_inner(
         &*guid,
         &ctx.datadir,
         RepairStrategy::Preen,
-        DEFAULT_PASSWORD,
+        encryption_password,
     )
     .await?;
@@ -416,74 +423,78 @@ async fn migrate(
         &old_guid,
         "/media/embassy/migrate",
         RepairStrategy::Preen,
-        DEFAULT_PASSWORD,
+        if guid.ends_with("_UNENC") {
+            None
+        } else {
+            Some(DEFAULT_PASSWORD)
+        },
     )
     .await?;

-    let mut main_transfer = Rsync::new(
-        "/media/embassy/migrate/main/",
-        "/embassy-data/main/",
-        RsyncOptions {
-            delete: true,
-            force: true,
-            ignore_existing: false,
-            exclude: Vec::new(),
-            no_permissions: false,
-            no_owner: false,
-        },
-    )
-    .await?;
-    let mut package_data_transfer = Rsync::new(
-        "/media/embassy/migrate/package-data/",
-        "/embassy-data/package-data/",
-        RsyncOptions {
-            delete: true,
-            force: true,
-            ignore_existing: false,
-            exclude: vec!["tmp".to_owned()],
-            no_permissions: false,
-            no_owner: false,
-        },
-    )
-    .await?;
-    let mut main_prog = 0.0;
-    let mut main_complete = false;
-    let mut pkg_prog = 0.0;
-    let mut pkg_complete = false;
-    loop {
-        tokio::select! {
-            p = main_transfer.progress.next() => {
-                if let Some(p) = p {
-                    main_prog = p;
-                } else {
-                    main_prog = 1.0;
-                    main_complete = true;
-                }
-            }
-            p = package_data_transfer.progress.next() => {
-                if let Some(p) = p {
-                    pkg_prog = p;
-                } else {
-                    pkg_prog = 1.0;
-                    pkg_complete = true;
-                }
-            }
-        }
-        if main_prog > 0.0 && pkg_prog > 0.0 {
-            *ctx.setup_status.write().await = Some(Ok(SetupStatus {
-                bytes_transferred: ((main_prog * 50.0) + (pkg_prog * 950.0)) as u64,
-                total_bytes: Some(1000),
-                complete: false,
-            }));
-        }
-        if main_complete && pkg_complete {
-            break;
-        }
-    }
-    main_transfer.wait().await?;
-    package_data_transfer.wait().await?;
+    let main_transfer_args = ("/media/embassy/migrate/main/", "/embassy-data/main/");
+    let package_data_transfer_args = (
+        "/media/embassy/migrate/package-data/",
+        "/embassy-data/package-data/",
+    );
+
+    let tmpdir = Path::new(package_data_transfer_args.0).join("tmp");
+    if tokio::fs::metadata(&tmpdir).await.is_ok() {
+        tokio::fs::remove_dir_all(&tmpdir).await?;
+    }
+
+    let ordering = std::sync::atomic::Ordering::Relaxed;
+    let main_transfer_size = Counter::new(0, ordering);
+    let package_data_transfer_size = Counter::new(0, ordering);
+    let size = tokio::select! {
+        res = async {
+            let (main_size, package_data_size) = try_join!(
+                dir_size(main_transfer_args.0, Some(&main_transfer_size)),
+                dir_size(package_data_transfer_args.0, Some(&package_data_transfer_size))
+            )?;
+            Ok::<_, Error>(main_size + package_data_size)
+        } => { res? },
+        res = async {
+            loop {
+                tokio::time::sleep(Duration::from_secs(1)).await;
+                *ctx.setup_status.write().await = Some(Ok(SetupStatus {
+                    bytes_transferred: 0,
+                    total_bytes: Some(main_transfer_size.load() + package_data_transfer_size.load()),
+                    complete: false,
+                }));
+            }
+        } => res,
+    };
+    *ctx.setup_status.write().await = Some(Ok(SetupStatus {
+        bytes_transferred: 0,
+        total_bytes: Some(size),
+        complete: false,
+    }));
+    let main_transfer_progress = Counter::new(0, ordering);
+    let package_data_transfer_progress = Counter::new(0, ordering);
+    tokio::select! {
+        res = async {
+            try_join!(
+                dir_copy(main_transfer_args.0, main_transfer_args.1, Some(&main_transfer_progress)),
+                dir_copy(package_data_transfer_args.0, package_data_transfer_args.1, Some(&package_data_transfer_progress))
+            )?;
+            Ok::<_, Error>(())
+        } => { res? },
+        res = async {
+            loop {
+                tokio::time::sleep(Duration::from_secs(1)).await;
+                *ctx.setup_status.write().await = Some(Ok(SetupStatus {
+                    bytes_transferred: main_transfer_progress.load() + package_data_transfer_progress.load(),
+                    total_bytes: Some(size),
+                    complete: false,
+                }));
+            }
+        } => res,
+    }

     let (hostname, tor_addr, root_ca) = setup_init(&ctx, Some(embassy_password)).await?;

View File

@@ -22,7 +22,7 @@ use crate::util::serde::{display_serializable, IoFormat};
 use crate::util::{display_none, Invoke};
 use crate::{Error, ErrorKind, ResultExt};

-pub const SYSTEMD_UNIT: &'static str = "embassyd";
+pub const SYSTEMD_UNIT: &'static str = "startd";

 #[command(subcommands(zram))]
 pub async fn experimental() -> Result<(), Error> {
@@ -251,7 +251,7 @@ impl<'de> Deserialize<'de> for Percentage {
 }

 #[derive(Clone, Debug)]
-pub struct MebiBytes(f64);
+pub struct MebiBytes(pub f64);

 impl Serialize for MebiBytes {
     fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
     where
@@ -310,19 +310,19 @@ pub struct MetricsGeneral {
 #[derive(Deserialize, Serialize, Clone, Debug)]
 pub struct MetricsMemory {
     #[serde(rename = "Percentage Used")]
-    percentage_used: Percentage,
+    pub percentage_used: Percentage,
     #[serde(rename = "Total")]
-    total: MebiBytes,
+    pub total: MebiBytes,
     #[serde(rename = "Available")]
-    available: MebiBytes,
+    pub available: MebiBytes,
     #[serde(rename = "Used")]
-    used: MebiBytes,
+    pub used: MebiBytes,
     #[serde(rename = "Swap Total")]
-    swap_total: MebiBytes,
+    pub swap_total: MebiBytes,
     #[serde(rename = "Swap Free")]
-    swap_free: MebiBytes,
+    pub swap_free: MebiBytes,
     #[serde(rename = "Swap Used")]
-    swap_used: MebiBytes,
+    pub swap_used: MebiBytes,
 }
 #[derive(Deserialize, Serialize, Clone, Debug)]
 pub struct MetricsCpu {
@@ -698,7 +698,7 @@ pub struct MemInfo {
     swap_free: Option<u64>,
 }
 #[instrument(skip_all)]
-async fn get_mem_info() -> Result<MetricsMemory, Error> {
+pub async fn get_mem_info() -> Result<MetricsMemory, Error> {
     let contents = tokio::fs::read_to_string("/proc/meminfo").await?;
     let mut mem_info = MemInfo {
         mem_total: None,

View File

@@ -19,6 +19,7 @@ use crate::db::model::UpdateProgress;
 use crate::disk::mount::filesystem::bind::Bind;
 use crate::disk::mount::filesystem::ReadWrite;
 use crate::disk::mount::guard::MountGuard;
+use crate::marketplace::with_query_params;
 use crate::notifications::NotificationLevel;
 use crate::sound::{
     CIRCLE_OF_5THS_SHORT, UPDATE_FAILED_1, UPDATE_FAILED_2, UPDATE_FAILED_3, UPDATE_FAILED_4,
@@ -81,18 +82,19 @@ async fn maybe_do_update(
     marketplace_url: Url,
 ) -> Result<Option<Arc<Revision>>, Error> {
     let mut db = ctx.db.handle();
-    let latest_version: Version = reqwest::get(format!(
-        "{}/eos/v0/latest?eos-version={}&arch={}",
-        marketplace_url,
-        Current::new().semver(),
-        OS_ARCH,
-    ))
-    .await
-    .with_kind(ErrorKind::Network)?
-    .json::<LatestInformation>()
-    .await
-    .with_kind(ErrorKind::Network)?
-    .version;
+    let latest_version: Version = ctx
+        .client
+        .get(with_query_params(
+            &ctx,
+            format!("{}/eos/v0/latest", marketplace_url,).parse()?,
+        ))
+        .send()
+        .await
+        .with_kind(ErrorKind::Network)?
+        .json::<LatestInformation>()
+        .await
+        .with_kind(ErrorKind::Network)?
+        .version;
     crate::db::DatabaseModel::new()
         .server_info()
         .lock(&mut db, LockType::Write)

View File

@@ -2,7 +2,9 @@ use std::future::Future;
 use std::io::Cursor;
 use std::os::unix::prelude::MetadataExt;
 use std::path::Path;
+use std::sync::atomic::AtomicU64;
 use std::task::Poll;
+use std::time::Duration;

 use futures::future::{BoxFuture, Fuse};
 use futures::{AsyncSeek, FutureExt, TryStreamExt};
@@ -11,6 +13,8 @@ use nix::unistd::{Gid, Uid};
 use tokio::io::{
     duplex, AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt, DuplexStream, ReadBuf, WriteHalf,
 };
+use tokio::net::TcpStream;
+use tokio::time::{Instant, Sleep};

 use crate::ResultExt;
@@ -224,6 +228,7 @@ pub async fn copy_and_shutdown<R: AsyncRead + Unpin, W: AsyncWrite + Unpin>(
 pub fn dir_size<'a, P: AsRef<Path> + 'a + Send + Sync>(
     path: P,
+    ctr: Option<&'a Counter>,
 ) -> BoxFuture<'a, Result<u64, std::io::Error>> {
     async move {
         tokio_stream::wrappers::ReadDirStream::new(tokio::fs::read_dir(path.as_ref()).await?)
@@ -231,9 +236,12 @@ pub fn dir_size<'a, P: AsRef<Path> + 'a + Send + Sync>(
                 let m = e.metadata().await?;
                 Ok(acc
                     + if m.is_file() {
+                        if let Some(ctr) = ctr {
+                            ctr.add(m.len());
+                        }
                         m.len()
                     } else if m.is_dir() {
-                        dir_size(e.path()).await?
+                        dir_size(e.path(), ctr).await?
                     } else {
                         0
                     })
@@ -419,9 +427,60 @@ impl<T: AsyncWrite> AsyncWrite for BackTrackingReader<T> {
     }
 }
pub struct Counter {
atomic: AtomicU64,
ordering: std::sync::atomic::Ordering,
}
impl Counter {
pub fn new(init: u64, ordering: std::sync::atomic::Ordering) -> Self {
Self {
atomic: AtomicU64::new(init),
ordering,
}
}
pub fn load(&self) -> u64 {
self.atomic.load(self.ordering)
}
pub fn add(&self, value: u64) {
self.atomic.fetch_add(value, self.ordering);
}
}
#[pin_project::pin_project]
pub struct CountingReader<'a, R> {
ctr: &'a Counter,
#[pin]
rdr: R,
}
impl<'a, R> CountingReader<'a, R> {
pub fn new(rdr: R, ctr: &'a Counter) -> Self {
Self { ctr, rdr }
}
pub fn into_inner(self) -> R {
self.rdr
}
}
impl<'a, R: AsyncRead> AsyncRead for CountingReader<'a, R> {
fn poll_read(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
buf: &mut ReadBuf<'_>,
) -> Poll<std::io::Result<()>> {
let this = self.project();
let start = buf.filled().len();
let res = this.rdr.poll_read(cx, buf);
let len = buf.filled().len() - start;
if len > 0 {
this.ctr.add(len as u64);
}
res
}
}
 pub fn dir_copy<'a, P0: AsRef<Path> + 'a + Send + Sync, P1: AsRef<Path> + 'a + Send + Sync>(
     src: P0,
     dst: P1,
+    ctr: Option<&'a Counter>,
 ) -> BoxFuture<'a, Result<(), crate::Error>> {
     async move {
         let m = tokio::fs::metadata(&src).await?;
@@ -464,23 +523,23 @@ pub fn dir_copy<'a, P0: AsRef<Path> + 'a + Send + Sync, P1: AsRef<Path> + 'a + S
             let dst_path = dst_path.join(e.file_name());
             if m.is_file() {
                 let len = m.len();
-                let mut dst_file =
-                    &mut tokio::fs::File::create(&dst_path).await.with_ctx(|_| {
-                        (
-                            crate::ErrorKind::Filesystem,
-                            format!("create {}", dst_path.display()),
-                        )
-                    })?;
-                tokio::io::copy(
-                    &mut tokio::fs::File::open(&src_path).await.with_ctx(|_| {
-                        (
-                            crate::ErrorKind::Filesystem,
-                            format!("open {}", src_path.display()),
-                        )
-                    })?,
-                    &mut dst_file,
-                )
-                .await
+                let mut dst_file = tokio::fs::File::create(&dst_path).await.with_ctx(|_| {
+                    (
+                        crate::ErrorKind::Filesystem,
+                        format!("create {}", dst_path.display()),
+                    )
+                })?;
+                let mut rdr = tokio::fs::File::open(&src_path).await.with_ctx(|_| {
+                    (
+                        crate::ErrorKind::Filesystem,
+                        format!("open {}", src_path.display()),
+                    )
+                })?;
+                if let Some(ctr) = ctr {
+                    tokio::io::copy(&mut CountingReader::new(rdr, ctr), &mut dst_file).await
+                } else {
+                    tokio::io::copy(&mut rdr, &mut dst_file).await
+                }
                 .with_ctx(|_| {
                     (
                         crate::ErrorKind::Filesystem,
@@ -508,7 +567,7 @@ pub fn dir_copy<'a, P0: AsRef<Path> + 'a + Send + Sync, P1: AsRef<Path> + 'a + S
                     )
                 })?;
             } else if m.is_dir() {
-                dir_copy(src_path, dst_path).await?;
+                dir_copy(src_path, dst_path, ctr).await?;
             } else if m.file_type().is_symlink() {
                 tokio::fs::symlink(
                     tokio::fs::read_link(&src_path).await.with_ctx(|_| {
@@ -535,3 +594,77 @@ pub fn dir_copy<'a, P0: AsRef<Path> + 'a + Send + Sync, P1: AsRef<Path> + 'a + S
     }
     .boxed()
 }
#[pin_project::pin_project]
pub struct TimeoutStream<S: AsyncRead + AsyncWrite = TcpStream> {
timeout: Duration,
#[pin]
sleep: Sleep,
#[pin]
stream: S,
}
impl<S: AsyncRead + AsyncWrite> TimeoutStream<S> {
pub fn new(stream: S, timeout: Duration) -> Self {
Self {
timeout,
sleep: tokio::time::sleep(timeout),
stream,
}
}
}
impl<S: AsyncRead + AsyncWrite> AsyncRead for TimeoutStream<S> {
fn poll_read(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
buf: &mut tokio::io::ReadBuf<'_>,
) -> std::task::Poll<std::io::Result<()>> {
let mut this = self.project();
if let std::task::Poll::Ready(_) = this.sleep.as_mut().poll(cx) {
return std::task::Poll::Ready(Err(std::io::Error::new(
std::io::ErrorKind::TimedOut,
"timed out",
)));
}
let res = this.stream.poll_read(cx, buf);
if res.is_ready() {
this.sleep.reset(Instant::now() + *this.timeout);
}
res
}
}
impl<S: AsyncRead + AsyncWrite> AsyncWrite for TimeoutStream<S> {
fn poll_write(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
buf: &[u8],
) -> std::task::Poll<Result<usize, std::io::Error>> {
let mut this = self.project();
let res = this.stream.poll_write(cx, buf);
if res.is_ready() {
this.sleep.reset(Instant::now() + *this.timeout);
}
res
}
fn poll_flush(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Result<(), std::io::Error>> {
let mut this = self.project();
let res = this.stream.poll_flush(cx);
if res.is_ready() {
this.sleep.reset(Instant::now() + *this.timeout);
}
res
}
fn poll_shutdown(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Result<(), std::io::Error>> {
let mut this = self.project();
let res = this.stream.poll_shutdown(cx);
if res.is_ready() {
this.sleep.reset(Instant::now() + *this.timeout);
}
res
}
}

63
backend/src/util/lshw.rs Normal file
View File

@@ -0,0 +1,63 @@
use models::{Error, ResultExt};
use serde::{Deserialize, Serialize};
use tokio::process::Command;
use crate::util::Invoke;
const KNOWN_CLASSES: &[&str] = &["processor", "display"];
#[derive(Debug, Deserialize, Serialize)]
#[serde(tag = "class")]
#[serde(rename_all = "kebab-case")]
pub enum LshwDevice {
Processor(LshwProcessor),
Display(LshwDisplay),
}
impl LshwDevice {
pub fn class(&self) -> &'static str {
match self {
Self::Processor(_) => "processor",
Self::Display(_) => "display",
}
}
pub fn product(&self) -> &str {
match self {
Self::Processor(hw) => hw.product.as_str(),
Self::Display(hw) => hw.product.as_str(),
}
}
}
#[derive(Debug, Deserialize, Serialize)]
pub struct LshwProcessor {
pub product: String,
}
#[derive(Debug, Deserialize, Serialize)]
pub struct LshwDisplay {
pub product: String,
}
pub async fn lshw() -> Result<Vec<LshwDevice>, Error> {
let mut cmd = Command::new("lshw");
cmd.arg("-json");
for class in KNOWN_CLASSES {
cmd.arg("-class").arg(*class);
}
Ok(
serde_json::from_slice::<Vec<serde_json::Value>>(
&cmd.invoke(crate::ErrorKind::Lshw).await?,
)
.with_kind(crate::ErrorKind::Deserialization)?
.into_iter()
.filter_map(|v| match serde_json::from_value(v) {
Ok(a) => Some(a),
Err(e) => {
tracing::error!("Failed to parse lshw output: {e}");
tracing::debug!("{e:?}");
None
}
})
.collect(),
)
}

View File

@@ -27,6 +27,7 @@ pub mod config;
 pub mod http_reader;
 pub mod io;
 pub mod logger;
+pub mod lshw;
 pub mod serde;

 #[derive(Clone, Copy, Debug)]

View File

@@ -793,3 +793,42 @@ impl<T: AsRef<[u8]>> Serialize for Base64<T> {
         serializer.serialize_str(&base64::encode(self.0.as_ref()))
     }
 }
#[derive(Clone, Debug)]
pub struct Regex(regex::Regex);
impl From<Regex> for regex::Regex {
fn from(value: Regex) -> Self {
value.0
}
}
impl From<regex::Regex> for Regex {
fn from(value: regex::Regex) -> Self {
Regex(value)
}
}
impl AsRef<regex::Regex> for Regex {
fn as_ref(&self) -> &regex::Regex {
&self.0
}
}
impl AsMut<regex::Regex> for Regex {
fn as_mut(&mut self) -> &mut regex::Regex {
&mut self.0
}
}
impl<'de> Deserialize<'de> for Regex {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
deserialize_from_str(deserializer).map(Self)
}
}
impl Serialize for Regex {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
serialize_display(&self.0, serializer)
}
}

View File

@@ -23,8 +23,9 @@ mod v0_3_4;
 mod v0_3_4_1;
 mod v0_3_4_2;
 mod v0_3_4_3;
+mod v0_3_4_4;

-pub type Current = v0_3_4_3::Version;
+pub type Current = v0_3_4_4::Version;

 #[derive(serde::Serialize, serde::Deserialize, Debug, Clone)]
 #[serde(untagged)]
@@ -43,6 +44,7 @@ enum Version {
     V0_3_4_1(Wrapper<v0_3_4_1::Version>),
     V0_3_4_2(Wrapper<v0_3_4_2::Version>),
     V0_3_4_3(Wrapper<v0_3_4_3::Version>),
+    V0_3_4_4(Wrapper<v0_3_4_4::Version>),
     Other(emver::Version),
 }
@@ -72,6 +74,7 @@ impl Version {
             Version::V0_3_4_1(Wrapper(x)) => x.semver(),
             Version::V0_3_4_2(Wrapper(x)) => x.semver(),
             Version::V0_3_4_3(Wrapper(x)) => x.semver(),
+            Version::V0_3_4_4(Wrapper(x)) => x.semver(),
             Version::Other(x) => x.clone(),
         }
     }
@@ -265,6 +268,10 @@ pub async fn init<Db: DbHandle>(
             v.0.migrate_to(&Current::new(), db, secrets, receipts)
                 .await?
         }
+        Version::V0_3_4_4(v) => {
+            v.0.migrate_to(&Current::new(), db, secrets, receipts)
+                .await?
+        }
         Version::Other(_) => {
             return Err(Error::new(
                 eyre!("Cannot downgrade"),

View File

@@ -0,0 +1,41 @@
use async_trait::async_trait;
use emver::VersionRange;
use models::ResultExt;
use super::v0_3_0::V0_3_0_COMPAT;
use super::*;
const V0_3_4_4: emver::Version = emver::Version::new(0, 3, 4, 4);
#[derive(Clone, Debug)]
pub struct Version;
#[async_trait]
impl VersionT for Version {
type Previous = v0_3_4_3::Version;
fn new() -> Self {
Version
}
fn semver(&self) -> emver::Version {
V0_3_4_4
}
fn compat(&self) -> &'static VersionRange {
&*V0_3_0_COMPAT
}
async fn up<Db: DbHandle>(&self, db: &mut Db, _secrets: &PgPool) -> Result<(), Error> {
let mut tor_addr = crate::db::DatabaseModel::new()
.server_info()
.tor_address()
.get_mut(db)
.await?;
tor_addr
.set_scheme("https")
.map_err(|_| eyre!("unable to update url scheme to https"))
.with_kind(crate::ErrorKind::ParseUrl)?;
tor_addr.save(db).await?;
Ok(())
}
async fn down<Db: DbHandle>(&self, _db: &mut Db, _secrets: &PgPool) -> Result<(), Error> {
Ok(())
}
}

19
backend/startd.service Normal file
View File

@@ -0,0 +1,19 @@
[Unit]
Description=StartOS Daemon
After=network-online.target
Requires=network-online.target
Wants=avahi-daemon.service
[Service]
Type=simple
Environment=RUST_LOG=startos=debug,js_engine=debug,patch_db=warn
ExecStart=/usr/bin/startd
Restart=always
RestartSec=3
ManagedOOMPreference=avoid
CPUAccounting=true
CPUWeight=1000
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

View File

@@ -1041,7 +1041,14 @@ export const action = {
   async "test-disk-usage"(effects, _input) {
     const usage = await effects.diskUsage()
-    return usage
+    return {
+      result: {
+        copyable: false,
+        message: `${usage.used} / ${usage.total}`,
+        version: "0",
+        qr: false,
+      },
+    };
   }
 };

View File

@@ -13,9 +13,13 @@ if tty -s; then
 	USE_TTY="-it"
 fi

+if [ -z "$ARCH" ]; then
+	ARCH=$(uname -m)
+fi
+
 mkdir -p cargo-deps
-alias 'rust-arm64-builder'='docker run $USE_TTY --rm -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)"/cargo-deps:/home/rust/src -P start9/rust-arm-cross:aarch64'
+alias 'rust-arm64-builder'='docker run $USE_TTY --rm -v "$HOME/.cargo/registry":/usr/local/cargo/registry -v "$(pwd)"/cargo-deps:/home/rust/src -P start9/rust-arm-cross:aarch64'

-rust-arm64-builder cargo install "$1" --target-dir /home/rust/src
+rust-arm64-builder cargo install "$1" --target-dir /home/rust/src --target=$ARCH-unknown-linux-gnu
 sudo chown -R $USER cargo-deps
 sudo chown -R $USER ~/.cargo

View File

@@ -24,6 +24,7 @@ iw
 jq
 libavahi-client3
 lm-sensors
+lshw
 lvm2
 magic-wormhole
 man-db

View File

@@ -10,7 +10,7 @@ cat << "ASCII"
 ╰━━━╯╰━┻╯╰┻╯╰━┻━━━┻━━━╯
 ASCII
 printf " %s (%s %s)\n" "$(uname -o)" "$(uname -r)" "$(uname -m)"
-printf " $(embassy-cli --version | sed 's/Embassy CLI /StartOS v/g') - $(embassy-cli git-info)"
+printf " $(start-cli --version | sed 's/StartOS CLI /StartOS v/g') - $(start-cli git-info)"
 if [ -n "$(cat /usr/lib/embassy/ENVIRONMENT.txt)" ]; then
 	printf " ~ $(cat /usr/lib/embassy/ENVIRONMENT.txt)\n"
 else

View File

@@ -1,5 +0,0 @@
-os-partitions:
-  boot: /dev/mmcblk0p1
-  root: /dev/mmcblk0p2
-ethernet-interface: end0
-wifi-interface: wlan0

View File

@@ -71,6 +71,7 @@ sudo losetup -d $OUTPUT_DEVICE
 if [ "$ALLOW_VERSION_MISMATCH" != 1 ]; then
 	if [ "$(cat GIT_HASH.txt)" != "$REAL_GIT_HASH" ]; then
 		>&2 echo "startos.raspberrypi.squashfs GIT_HASH.txt mismatch"
+		>&2 echo "expected $REAL_GIT_HASH (dpkg) found $(cat GIT_HASH.txt) (repo)"
 		exit 1
 	fi
 	if [ "$(cat VERSION.txt)" != "$REAL_VERSION" ]; then

View File

@@ -1,6 +1,6 @@
 #!/bin/bash

-FE_VERSION="$(cat frontend/package.json | grep -Po '"version":[ \t\n]*"\K[^"]*')"
+FE_VERSION="$(cat frontend/package.json | grep '"version"' | sed 's/[ \t]*"version":[ \t]*"\([^"]*\)",/\1/')"

 # TODO: Validate other version sources - backend/Cargo.toml, backend/src/version/mod.rs

22
compress-uis.sh Executable file
View File

@@ -0,0 +1,22 @@
#!/bin/bash
set -e
rm -rf frontend/dist/static
find frontend/dist/raw -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 gzip -kf
find frontend/dist/raw -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 brotli -kf
for file in $(find frontend/dist/raw -type f -not -name '*.gz' -and -not -name '*.br'); do
raw_size=$(du --bytes $file | awk '{print $1}')
gz_size=$(du --bytes $file.gz | awk '{print $1}')
br_size=$(du --bytes $file.br | awk '{print $1}')
if [ $((gz_size * 100 / raw_size)) -gt 70 ]; then
rm $file.gz
fi
if [ $((br_size * 100 / raw_size)) -gt 70 ]; then
rm $file.br
fi
done
cp -r frontend/dist/raw frontend/dist/static

View File

@@ -19,7 +19,7 @@ Check your versions
 ```sh
 node --version
-v16.10.0
+v18.15.0

 npm --version
 v8.0.0

View File

@@ -14,7 +14,7 @@
         "builder": "@angular-devkit/build-angular:browser",
         "options": {
           "preserveSymlinks": true,
-          "outputPath": "dist/ui",
+          "outputPath": "dist/raw/ui",
           "index": "projects/ui/src/index.html",
           "main": "projects/ui/src/main.ts",
           "polyfills": "projects/ui/src/polyfills.ts",
@@ -39,7 +39,7 @@
           "projects/ui/src/manifest.webmanifest",
           {
             "glob": "ngsw.json",
-            "input": "dist/ui",
+            "input": "dist/raw/ui",
             "output": "projects/ui/src"
           }
         ],
@@ -147,7 +147,7 @@
       "build": {
         "builder": "@angular-devkit/build-angular:browser",
         "options": {
-          "outputPath": "dist/install-wizard",
+          "outputPath": "dist/raw/install-wizard",
           "index": "projects/install-wizard/src/index.html",
           "main": "projects/install-wizard/src/main.ts",
           "polyfills": "projects/install-wizard/src/polyfills.ts",
@@ -277,7 +277,7 @@
       "build": {
         "builder": "@angular-devkit/build-angular:browser",
         "options": {
-          "outputPath": "dist/setup-wizard",
+          "outputPath": "dist/raw/setup-wizard",
           "index": "projects/setup-wizard/src/index.html",
           "main": "projects/setup-wizard/src/main.ts",
           "polyfills": "projects/setup-wizard/src/polyfills.ts",
@@ -397,7 +397,7 @@
       "build": {
         "builder": "@angular-devkit/build-angular:browser",
         "options": {
-          "outputPath": "dist/diagnostic-ui",
+          "outputPath": "dist/raw/diagnostic-ui",
           "index": "projects/diagnostic-ui/src/index.html",
           "main": "projects/diagnostic-ui/src/main.ts",
           "polyfills": "projects/diagnostic-ui/src/polyfills.ts",

View File

@@ -1,12 +1,12 @@
 {
   "name": "startos-ui",
-  "version": "0.3.4.3",
+  "version": "0.3.4.4",
   "lockfileVersion": 2,
   "requires": true,
   "packages": {
     "": {
       "name": "startos-ui",
-      "version": "0.3.4.3",
+      "version": "0.3.4.4",
       "dependencies": {
         "@angular/animations": "^14.1.0",
         "@angular/common": "^14.1.0",

View File

@@ -1,6 +1,6 @@
 {
   "name": "startos-ui",
-  "version": "0.3.4.3",
+  "version": "0.3.4.4",
   "author": "Start9 Labs, Inc",
   "homepage": "https://start9.com/",
   "scripts": {
@@ -22,7 +22,7 @@
     "build:all": "npm run build:deps && npm run build:dui && npm run build:setup && npm run build:ui && npm run build:install-wiz",
     "build:shared": "ng build shared",
     "build:marketplace": "npm run build:shared && ng build marketplace",
-    "analyze:ui": "webpack-bundle-analyzer dist/ui/stats.json",
+    "analyze:ui": "webpack-bundle-analyzer dist/raw/ui/stats.json",
     "publish:shared": "npm run build:shared && npm publish ./dist/shared --access public",
     "publish:marketplace": "npm run build:marketplace && npm publish ./dist/marketplace --access public",
     "start:dui": "npm run-script build-config && ionic serve --project diagnostic-ui --host 0.0.0.0",

View File

@@ -1,6 +1,6 @@
 {
   "name": null,
-  "ack-welcome": "0.3.4.3",
+  "ack-welcome": "0.3.4.4",
   "marketplace": {
     "selected-url": "https://registry.start9.com/",
     "known-hosts": {

View File

@@ -155,7 +155,6 @@ export class EmbassyPage {
       await this.navCtrl.navigateForward(`/loading`)
     } catch (e: any) {
       this.errorToastService.present(e)
-      console.error(e)
     } finally {
       loader.dismiss()
     }

View File

@@ -2,15 +2,12 @@
 <ion-grid>
   <ion-row class="ion-align-items-center">
     <ion-col class="ion-text-center">
-      <ion-card
-        *ngIf="{ decimal: progress$ | async } as progress"
-        color="dark"
-      >
+      <ion-card *ngIf="progress$ | async as progress" color="dark">
        <ion-card-header>
          <ion-card-title>Initializing StartOS</ion-card-title>
          <div class="center-wrapper">
-            <ion-card-subtitle *ngIf="progress.decimal as decimal">
-              Progress: {{ (decimal * 100).toFixed(0)}}%
+            <ion-card-subtitle>
+              {{ progress.transferred | toMessage }}
            </ion-card-subtitle>
          </div>
        </ion-card-header>
@@ -18,16 +15,22 @@
        <ion-card-content class="ion-margin">
          <ion-progress-bar
            color="tertiary"
-            style="
-              max-width: 700px;
-              margin: auto;
-              padding-bottom: 20px;
-              margin-bottom: 40px;
-            "
-            [type]="progress.decimal && progress.decimal < 1 ? 'determinate' : 'indeterminate'"
-            [value]="progress.decimal || 0"
+            style="max-width: 700px; margin: auto; margin-bottom: 36px"
+            [type]="progress.transferred && progress.transferred < 1 ? 'determinate' : 'indeterminate'"
+            [value]="progress.transferred || 0"
          ></ion-progress-bar>
-          <p>{{ progress.decimal | toMessage }}</p>
+          <p>
+            <ng-container *ngIf="progress.totalBytes as total">
+              <ng-container
+                *ngIf="progress.transferred as transferred; else calculating"
+              >
+                Progress: {{ (transferred * 100).toFixed() }}%
+              </ng-container>
+              <ng-template #calculating>
+                {{ (progress.totalBytes / 1073741824).toFixed(2) }} GB
+              </ng-template>
+            </ng-container>
+          </p>
        </ion-card-content>
      </ion-card>
    </ion-col>
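The `#calculating` template above converts a raw byte count to gigabytes by dividing by 1073741824, which is 1024³ (the number of bytes in 1 GiB). A minimal standalone sketch of that conversion (the `toGb` helper name is hypothetical, not part of the diff):

```typescript
// 1073741824 = 1024^3, the number of bytes in 1 GiB
const BYTES_PER_GB = 1073741824

// hypothetical helper mirroring the template expression
// `(progress.totalBytes / 1073741824).toFixed(2)`
const toGb = (bytes: number): string =>
  (bytes / BYTES_PER_GB).toFixed(2) + ' GB'

console.log(toGb(1073741824)) // "1.00 GB"
console.log(toGb(268435456)) // "0.25 GB"
```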

View File

@@ -2,6 +2,14 @@ import { Component } from '@angular/core'
 import { NavController } from '@ionic/angular'
 import { StateService } from 'src/app/services/state.service'
 import { Pipe, PipeTransform } from '@angular/core'
+import { BehaviorSubject } from 'rxjs'
+import { ApiService } from 'src/app/services/api/api.service'
+import { ErrorToastService, pauseFor } from '@start9labs/shared'
+
+type Progress = {
+  totalBytes: number | null
+  transferred: number
+}

 @Component({
   selector: 'app-loading',
@@ -9,23 +17,49 @@ import { Pipe, PipeTransform } from '@angular/core'
   styleUrls: ['loading.page.scss'],
 })
 export class LoadingPage {
-  readonly progress$ = this.stateService.dataProgress$
+  readonly progress$ = new BehaviorSubject<Progress>({
+    totalBytes: null,
+    transferred: 0,
+  })

   constructor(
-    private readonly stateService: StateService,
     private readonly navCtrl: NavController,
+    private readonly api: ApiService,
+    private readonly errorToastService: ErrorToastService,
   ) {}

   ngOnInit() {
-    this.stateService.pollDataTransferProgress()
-    const progSub = this.stateService.dataCompletionSubject$.subscribe(
-      async complete => {
-        if (complete) {
-          progSub.unsubscribe()
-          await this.navCtrl.navigateForward(`/success`)
-        }
-      },
-    )
+    this.poll()
+  }
+
+  async poll() {
+    try {
+      const progress = await this.api.getStatus()
+
+      if (!progress) return
+
+      const {
+        'total-bytes': totalBytes,
+        'bytes-transferred': bytesTransferred,
+      } = progress
+
+      this.progress$.next({
+        totalBytes,
+        transferred: totalBytes ? bytesTransferred / totalBytes : 0,
+      })
+
+      if (progress.complete) {
+        this.navCtrl.navigateForward(`/success`)
+        this.progress$.complete()
+        return
+      }
+
+      await pauseFor(250)
+      setTimeout(() => this.poll(), 0) // prevent call stack from growing
+    } catch (e: any) {
+      this.errorToastService.present(e)
+    }
   }
 }
@@ -41,7 +75,7 @@ export class ToMessagePipe implements PipeTransform {
   }
   if (!progress) {
-    return 'Preparing data. This can take a while'
+    return 'Calculating size'
   } else if (progress < 1) {
     return 'Copying data'
   } else {
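The poll loop in this file avoids unbounded recursion by scheduling each iteration through `setTimeout` rather than having `poll()` await itself directly, so the call stack stays flat however long the transfer runs. A minimal sketch of the same pattern outside Angular (all names here are hypothetical; `getStatus` stands in for the API call):

```typescript
type Progress = { complete: boolean; transferred: number }

// Polls getStatus until it reports completion. Each iteration is
// scheduled via setTimeout, so the call stack does not grow no
// matter how many polls run before the transfer completes.
function pollUntilComplete(
  getStatus: () => Promise<Progress>,
  intervalMs = 250,
): Promise<Progress> {
  return new Promise((resolve, reject) => {
    const tick = async () => {
      try {
        const progress = await getStatus()
        if (progress.complete) return resolve(progress)
        setTimeout(tick, intervalMs)
      } catch (e) {
        reject(e)
      }
    }
    tick()
  })
}
```

Returning a promise that resolves on completion is one way to replace the BehaviorSubject/subscription bookkeeping the old StateService needed.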

View File

@@ -27,31 +27,15 @@
       <section
         style="
           padding: 1rem 3rem 2rem 3rem;
+          border: solid #c4c4c5 3px;
           margin-bottom: 24px;
-          border: solid #c4c4c5 3px;
+          border-radius: 20px;
         "
       >
-        <h2 style="font-variant-caps: all-small-caps">
-          Access from home (LAN)
-        </h2>
-        <p>
-          Visit the address below when you are connected to the same WiFi or
-          Local Area Network (LAN) as your server:
-        </p>
-        <p
-          style="
-            padding: 16px;
-            font-weight: bold;
-            font-size: 1.1rem;
-            overflow: auto;
-          "
-        >
-          <code id="lan-addr"></code>
-        </p>
         <div>
           <h3 style="color: #f8546a; font-weight: bold">Important!</h3>
           <p>
-            Be sure to Download your server's Root CA and
             <a
               href="https://docs.start9.com/latest/user-manual/connecting/connecting-lan"
               target="_blank"
@@ -60,12 +44,10 @@
             >
               follow the instructions
             </a>
-            to establish a secure connection by installing your server's root
-            certificate authority.
+            to establish a secure connection with your server.
           </p>
         </div>
-        <div style="padding: 2rem; text-align: center">
+        <div style="text-align: center">
           <a
             id="cert"
             [download]="crtName"
@@ -88,12 +70,49 @@
           </a>
         </div>
       </section>
+      <section
+        style="
+          padding: 1rem 3rem 2rem 3rem;
+          border: solid #c4c4c5 3px;
+          border-radius: 20px;
+          margin-bottom: 24px;
+        "
+      >
+        <h2 style="font-variant-caps: all-small-caps">
+          Access from home (LAN)
+        </h2>
+        <p>
+          Visit the address below when you are connected to the same WiFi or
+          Local Area Network (LAN) as your server.
+        </p>
+        <p
+          style="
+            padding: 16px;
+            font-weight: bold;
+            font-size: 1.1rem;
+            overflow: auto;
+          "
+        >
+          <code id="lan-addr"></code>
+        </p>
       <section style="padding: 1rem 3rem 2rem 3rem; border: solid #c4c4c5 3px">
         <h2 style="font-variant-caps: all-small-caps">
           Access on the go (Tor)
         </h2>
-        <p>Visit the address below when you are away from home:</p>
+        <p>Visit the address below when you are away from home.</p>
+        <p>
+          <span style="font-weight: bold">Note:</span>
+          This address will only work from a Tor-enabled browser.
+          <a
+            href="https://docs.start9.com/latest/user-manual/connecting/connecting-tor"
+            target="_blank"
+            rel="noreferrer"
+            style="color: #6866cc; font-weight: bold; text-decoration: none"
+          >
+            Follow the instructions
+          </a>
+          to get setup.
+        </p>
         <p
           style="
             padding: 16px;
@@ -104,21 +123,6 @@
         >
           <code id="tor-addr"></code>
         </p>
-        <div>
-          <h3 style="color: #f8546a; font-weight: bold">Important!</h3>
-          <p>
-            This address will only work from a Tor-enabled browser.
-            <a
-              href="https://docs.start9.com/latest/user-manual/connecting/connecting-tor"
-              target="_blank"
-              rel="noreferrer"
-              style="color: #6866cc; font-weight: bold; text-decoration: none"
-            >
-              Follow the instructions
-            </a>
-            to get setup.
-          </p>
-        </div>
       </section>
     </div>
   </body>

View File

@@ -49,7 +49,7 @@ export class SuccessPage {
     const ret = await this.api.complete()
     if (!this.isKiosk) {
       this.torAddress = ret['tor-address']
-      this.lanAddress = ret['lan-address'].replace('https', 'http')
+      this.lanAddress = ret['lan-address'].replace(/^https:/, 'http:')
       this.cert = ret['root-ca']
       await this.api.exit()
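The hunk above swaps a plain-string `.replace('https', 'http')` for an anchored regex. The distinction matters because the string form rewrites the first occurrence of "https" anywhere in the address, not just the scheme. A small sketch (the `toHttp` name and the addresses are hypothetical examples):

```typescript
// Anchored: only a leading "https:" scheme is rewritten
const toHttp = (addr: string): string => addr.replace(/^https:/, 'http:')

console.log(toHttp('https://adjective-noun.local')) // "http://adjective-noun.local"
console.log(toHttp('http://my-https-mirror.local')) // unchanged

// The unanchored string replace corrupts a hostname containing "https"
console.log('http://my-https-mirror.local'.replace('https', 'http'))
// "http://my-http-mirror.local"
```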

View File

@@ -2,10 +2,10 @@ import { Injectable } from '@angular/core'
 import { encodeBase64, pauseFor } from '@start9labs/shared'
 import {
   ApiService,
-  CifsRecoverySource,
   AttachReq,
-  ExecuteReq,
+  CifsRecoverySource,
   CompleteRes,
+  ExecuteReq,
 } from './api.service'
 import * as jose from 'node-jose'
@@ -17,8 +17,6 @@ let tries: number
 export class MockApiService extends ApiService {
   async getStatus() {
     const restoreOrMigrate = true
-    const total = 4

     await pauseFor(1000)

     if (tries === undefined) {
@@ -27,7 +25,9 @@ export class MockApiService extends ApiService {
     }

     tries++
-    const progress = tries - 1
+
+    const total = tries <= 4 ? tries * 268435456 : 1073741824
+    const progress = tries > 4 ? (tries - 4) * 268435456 : 0

     return {
       'bytes-transferred': restoreOrMigrate ? progress : 0,
@@ -149,7 +149,7 @@
   async complete(): Promise<CompleteRes> {
     await pauseFor(1000)
     return {
-      'tor-address': 'http://asdafsadasdasasdasdfasdfasdf.onion',
+      'tor-address': 'https://asdafsadasdasasdasdfasdfasdf.onion',
       'lan-address': 'https://adjective-noun.local',
       'root-ca': encodeBase64(rootCA),
     }

View File

@@ -1,7 +1,5 @@
 import { Injectable } from '@angular/core'
-import { BehaviorSubject } from 'rxjs'
 import { ApiService, RecoverySource } from './api/api.service'
-import { pauseFor, ErrorToastService } from '@start9labs/shared'

 @Injectable({
   providedIn: 'root',
@@ -12,47 +10,7 @@
   recoverySource?: RecoverySource
   recoveryPassword?: string

-  dataTransferProgress?: {
-    bytesTransferred: number
-    totalBytes: number | null
-    complete: boolean
-  }
-  dataProgress$ = new BehaviorSubject<number>(0)
-  dataCompletionSubject$ = new BehaviorSubject(false)
-
-  constructor(
-    private readonly api: ApiService,
-    private readonly errorToastService: ErrorToastService,
-  ) {}
-
-  async pollDataTransferProgress() {
-    await pauseFor(500)
-
-    if (this.dataTransferProgress?.complete) {
-      this.dataCompletionSubject$.next(true)
-      return
-    }
-
-    try {
-      const progress = await this.api.getStatus()
-      if (!progress) return
-      this.dataTransferProgress = {
-        bytesTransferred: progress['bytes-transferred'],
-        totalBytes: progress['total-bytes'],
-        complete: progress.complete,
-      }
-      if (this.dataTransferProgress.totalBytes) {
-        this.dataProgress$.next(
-          this.dataTransferProgress.bytesTransferred /
-            this.dataTransferProgress.totalBytes,
-        )
-      }
-    } catch (e: any) {
-      this.errorToastService.present(e)
-    }
-
-    setTimeout(() => this.pollDataTransferProgress(), 0) // prevent call stack from growing
-  }
+  constructor(private readonly api: ApiService) {}

   async importDrive(guid: string, password: string): Promise<void> {
     await this.api.attach({

Binary file not shown (image, 24 KiB).

View File

@@ -38,7 +38,7 @@ export class AppComponent implements OnDestroy {
     readonly themeSwitcher: ThemeSwitcherService,
   ) {}

-  ngOnInit() {
+  async ngOnInit() {
     this.patch
       .watch$('ui', 'name')
       .subscribe(name => this.titleService.setTitle(name || 'StartOS'))

View File

@@ -40,13 +40,13 @@ const ICONS = [
   'file-tray-stacked-outline',
   'finger-print-outline',
   'flash-outline',
+  'flask-outline',
   'flash-off-outline',
   'folder-open-outline',
   'globe-outline',
   'grid-outline',
   'help-circle-outline',
   'hammer-outline',
-  'home-outline',
   'information-circle-outline',
   'key-outline',
   'list-outline',
@@ -74,6 +74,7 @@
   'remove-circle-outline',
   'remove-outline',
   'repeat-outline',
+  'ribbon-outline',
   'rocket-outline',
   'save-outline',
   'settings-outline',

View File

@@ -277,12 +277,10 @@ export class BackupDrivesStatusComponent {
 const CifsSpec: ConfigSpec = {
   hostname: {
     type: 'string',
-    name: 'Hostname',
+    name: 'Hostname/IP',
     description:
-      'The hostname of your target device on the Local Area Network.',
-    placeholder: `e.g. 'My Computer' OR 'my-computer.local'`,
-    pattern: '^[a-zA-Z0-9._-]+( [a-zA-Z0-9]+)*$',
-    'pattern-description': `Must be a valid hostname. e.g. 'My Computer' OR 'my-computer.local'`,
+      'The hostname or IP address of the target device on your Local Area Network.',
+    placeholder: `e.g. 'MyComputer.local' OR '192.168.1.4'`,
     nullable: false,
     masked: false,
     copyable: false,

View File

@@ -1,9 +1,28 @@
 <alert *ngIf="show$ | async" header="Refresh Needed" (dismiss)="onDismiss()">
-  Your user interface is cached and out of date. Hard refresh the page to get
-  the latest UI.
-  <ul>
-    <li><b>On Mac</b>: cmd + shift + R</li>
-    <li><b>On Linux/Windows</b>: ctrl + shift + R</li>
-  </ul>
-  <a alertButton class="enter-click" role="cancel">Ok</a>
+  <ng-container *ngIf="!onPwa; else pwa">
+    Your user interface is cached and out of date. Hard refresh the page to get
+    the latest UI.
+    <ul>
+      <li>
+        <b>On Mac</b>
+        : cmd + shift + R
+      </li>
+      <li>
+        <b>On Linux/Windows</b>
+        : ctrl + shift + R
+      </li>
+      <li>
+        <b>On Android/iOS</b>
+        : Browser specific, typically a refresh button in the browser menu.
+      </li>
+    </ul>
+  </ng-container>
+  <ng-template #pwa>
+    Your user interface is cached and out of date. Attempt to reload the PWA
+    using the button below. If you continue to see this message, uninstall and
+    reinstall the PWA.
+  </ng-template>
+  <!-- alertButton needs to be a direct child of alert element for ionic styling -->
+  <a *ngIf="!onPwa" alertButton class="enter-click" role="cancel">Ok</a>
+  <a *ngIf="onPwa" alertButton (click)="pwaReload()" role="cancel">Reload</a>
 </alert>

View File

@@ -1,7 +1,9 @@
 import { ChangeDetectionStrategy, Component, Inject } from '@angular/core'
-import { Observable, Subject, merge } from 'rxjs'
+import { merge, Observable, Subject } from 'rxjs'
 import { RefreshAlertService } from './refresh-alert.service'
+import { SwUpdate } from '@angular/service-worker'
+import { LoadingController } from '@ionic/angular'

 @Component({
   selector: 'refresh-alert',
@@ -10,13 +12,36 @@ import { RefreshAlertService } from './refresh-alert.service'
 })
 export class RefreshAlertComponent {
   private readonly dismiss$ = new Subject<boolean>()
   readonly show$ = merge(this.dismiss$, this.refresh$)
+  onPwa = false

   constructor(
     @Inject(RefreshAlertService) private readonly refresh$: Observable<boolean>,
+    private readonly updates: SwUpdate,
+    private readonly loadingCtrl: LoadingController,
   ) {}

+  ngOnInit() {
+    this.onPwa = window.matchMedia('(display-mode: standalone)').matches
+  }
+
+  async pwaReload() {
+    const loader = await this.loadingCtrl.create({
+      message: 'Reloading PWA...',
+    })
+    await loader.present()
+
+    try {
+      // attempt to update to the latest client version available
+      await this.updates.activateUpdate()
+    } catch (e) {
+      console.error('Error activating update from service worker: ', e)
+    } finally {
+      loader.dismiss()
+      // always reload, as this resolves most out of sync cases
+      window.location.reload()
+    }
+  }
+
   onDismiss() {
     this.dismiss$.next(false)
   }

View File

@@ -46,11 +46,11 @@ export class WidgetListComponent {
     qp: { back: 'true' },
   },
   {
-    title: 'Secure LAN',
-    icon: 'home-outline',
+    title: 'Root CA',
+    icon: 'ribbon-outline',
     color: 'var(--alt-orange)',
-    description: `Download and trust your server's certificate`,
-    link: '/system/lan',
+    description: `Download and trust your server's root certificate authority`,
+    link: '/system/root-ca',
   },
   {
     title: 'Create Backup',
@@ -78,7 +78,7 @@
     icon: 'chatbubbles-outline',
     color: 'var(--alt-red)',
     description: 'Get help from the Start9 team and community',
-    link: 'https://docs.start9.com/latest/support/contact',
+    link: 'https://start9.com/contact',
   },
 ]
} }

View File

@@ -12,6 +12,30 @@
 <ion-content class="ion-padding">
   <h2>This Release</h2>
+  <h4>0.3.4.4</h4>
+  <p class="note-padding">
+    View the complete
+    <a
+      href="https://github.com/Start9Labs/start-os/releases/tag/v0.3.4.4"
+      target="_blank"
+      noreferrer
+    >
+      release notes
+    </a>
+    for more details.
+  </p>
+  <h6>Highlights</h6>
+  <ul class="spaced-list">
+    <li>Https over Tor for faster UI loading times</li>
+    <li>Change password through UI</li>
+    <li>Use IP address for Network Folder backups</li>
+    <li>
+      Multiple bug fixes, performance enhancements, and other small features
+    </li>
+  </ul>
+
+  <h2>Previous Releases</h2>
   <h4>0.3.4.3</h4>
   <p class="note-padding">
     View the complete
@@ -28,12 +52,10 @@
   <ul class="spaced-list">
     <li>Improved Tor reliability</li>
     <li>Experimental features tab</li>
-    <li>multiple bugfixes and general performance enhancements</li>
+    <li>Multiple bugfixes and general performance enhancements</li>
     <li>Update branding</li>
   </ul>
-  <h2>Previous Releases</h2>
   <h4>0.3.4.2</h4>
   <p class="note-padding">
     View the complete

View File

@@ -1,7 +1,7 @@
 import { Component, Input } from '@angular/core'
 import { ActivatedRoute } from '@angular/router'
 import { ModalController, ToastController } from '@ionic/angular'
-import { getPkgId, copyToClipboard } from '@start9labs/shared'
+import { copyToClipboard, getPkgId } from '@start9labs/shared'
 import { getUiInterfaceKey } from 'src/app/services/config.service'
 import {
   DataModel,
@@ -51,6 +51,7 @@ export class AppInterfacesPage {
   'lan-address': uiAddresses['lan-address']
     ? 'https://' + uiAddresses['lan-address']
     : '',
+  // leave http for services
   'tor-address': uiAddresses['tor-address']
     ? 'http://' + uiAddresses['tor-address']
     : '',
@@ -69,7 +70,8 @@
     ? 'https://' + addresses['lan-address']
     : '',
   'tor-address': addresses['tor-address']
-    ? 'http://' + addresses['tor-address']
+    ? // leave http for services
+      'http://' + addresses['tor-address']
     : '',
   },
 }

View File

@@ -52,21 +52,17 @@
 <ion-row class="ion-align-items-center">
   <ion-col class="ion-text-center">
     <h2>
-      <ion-text color="warning">
-        You are using an unencrypted http connection
-      </ion-text>
+      <ion-text color="warning">Http detected</ion-text>
     </h2>
     <p class="ion-padding-bottom">
-      Click the button below to switch to https. Your browser may warn
-      you that the page is insecure. You can safely bypass this
-      warning. It will go away after you
+      Your connection is insecure.
       <a
-        [routerLink]="['/system', 'lan']"
+        [routerLink]="['/system', 'root-ca']"
         style="color: var(--ion-color-dark)"
       >
-        download and trust your server's certificate
+        Download and trust your server's Root CA
       </a>
-      .
+      , then switch to https.
     </p>
     <ion-button (click)="launchHttps()">
       Open https

View File

@@ -65,7 +65,9 @@ export class AppShowPage {
   }

   async launchHttps() {
-    const { 'lan-address': lanAddress } = await getServerInfo(this.patch)
-    window.open(lanAddress)
+    const onTor = this.config.isTor()
+    const { 'lan-address': lanAddress, 'tor-address': torAddress } =
+      await getServerInfo(this.patch)
+    onTor ? window.open(torAddress) : window.open(lanAddress)
   }
 }

View File

@@ -40,7 +40,7 @@ export class MarketplaceListPage {
 if (url === start9) {
   color = 'success'
   description =
-    'Services from this registry are packaged and maintained by the Start9 team. If you experience an issue or have a questions related to a service from this registry, one of our dedicated support staff will be happy to assist you.'
+    'Services from this registry are packaged and maintained by the Start9 team. If you experience an issue or have a question related to a service from this registry, one of our dedicated support staff will be happy to assist you.'
 } else if (url === community) {
   color = 'tertiary'
   description =

View File

@@ -43,9 +43,6 @@ export class ExperimentalFeaturesPage {
   label: 'Wipe state',
   type: 'checkbox',
   value: 'wipe',
-  handler: val => {
-    console.error(val)
-  },
 },
 ],
 buttons: [
@@ -56,8 +53,7 @@
 {
   text: 'Reset',
   handler: (value: string[]) => {
-    console.error(value)
-    this.resetTor(value.some(v => 'wipe'))
+    this.resetTor(value.some(v => v === 'wipe'))
   },
   cssClass: 'enter-click',
 },
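The fix in the last hunk is easy to miss: `value.some(v => 'wipe')` returns true for any non-empty array, because the predicate ignores `v` and returns the string literal `'wipe'`, which is always truthy. A minimal sketch of the bug and the fix (helper names hypothetical):

```typescript
// Buggy: the predicate returns the truthy string 'wipe' for every
// element, so any non-empty selection reads as "wipe requested"
const buggyHasWipe = (value: string[]) => value.some(v => 'wipe')

// Fixed: compare each element to 'wipe'
const hasWipe = (value: string[]) => value.some(v => v === 'wipe')

console.log(buggyHasWipe(['other'])) // true (wrong)
console.log(hasWipe(['other'])) // false
console.log(hasWipe(['wipe'])) // true
```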

Some files were not shown because too many files have changed in this diff.