Compare commits


32 Commits

Author SHA1 Message Date
gStart9
efc56c0a88 Add crda to build/lib/depends (#2283) 2023-05-24 15:54:33 -07:00
kn0wmad
321fca2c0a Replace some user-facing Embassy language (#2281) 2023-05-22 13:23:20 -06:00
Matt Hill
bbd66e9cb0 fix nav link (#2279) 2023-05-18 18:11:27 -06:00
Aiden McClelland
eb0277146c wait for tor (#2278) 2023-05-17 22:17:27 -06:00
Aiden McClelland
10ee32ec48 always generate snake-oil (#2277) 2023-05-17 15:09:27 -06:00
Aiden McClelland
bdb4be89ff Bugfix/pi config (#2276)
* move some install scripts to init

* fix pi config.txt

* move some image stuff to the squashfs build

* no need to clean up fake-apt

* use max temp
2023-05-16 16:06:25 -06:00
Aiden McClelland
61445e0b56 build fixes (#2275)
* move some install scripts to init

* handle fake-apt in init

* rename
2023-05-15 16:34:30 -06:00
Aiden McClelland
f15a010e0e Update build badge (#2274)
Update README.md
2023-05-14 00:01:58 -06:00
Lucy C
58747004fe Fix/misc frontend (#2273)
* update pwa icon to official latest

* fix bug if icon is null in assets

* dismiss modal when connecting to a new registry
2023-05-12 14:48:16 -06:00
Lucy C
e7ff1eb66b display icons based on mime type (#2271)
* display icons based on mime type

* Update frontend/projects/marketplace/src/pipes/mime-type.pipe.ts

Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>

* fixes

---------

Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2023-05-12 12:20:05 -06:00
Matt Hill
4a00bd4797 ensure lan address present before getting cert name (#2272) 2023-05-12 12:18:39 -06:00
Aiden McClelland
2e6fc7e4a0 v0.3.4.2 (#2269) 2023-05-12 00:35:50 -06:00
Aiden McClelland
4a8f323be7 external rename (#2265)
* backend rename

* rename embassy and closes #2179

* update root ca name on disk

* update MOTD

* update readmes

* your server typo

* another tiny typo

* fix png name

* Update backend/src/net/wifi.rs

Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>

* changes needed due to rebase

---------

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>
2023-05-11 16:48:52 -06:00
Aiden McClelland
c7d82102ed Bugfix/gpt reflash (#2266)
* debug entry

* update magic numbers

* remove dbg

* fix hostname

* fix reinstall logic
2023-05-11 14:16:19 -06:00
Aiden McClelland
068b861edc overhaul OS build (#2244)
* create init resize for pi

* wip

* defer to OS_ARCH env var

* enable password auth in live image

* use correct live image path

* reorder dependencies

* add grub-common as dependency

* add more depends

* reorder grub

* include systemd-resolved

* misc fixes

* remove grub from dependencies

* imports

* ssh and raspi builds

* fix resolvectl

* generate snake-oil on install

* update raspi build process

* script fixes

* fix resize and config

* add psmisc

* new workflows

* include img

* pass through OS_ARCH env var

* require OS_ARCH

* allow dispatching production builds

* configurable environment

* pass through OS_ARCH on compat build

* fix syntax error

* crossbuild dependencies

* include libavahi-client for cross builds

* reorder add-arch

* add ports

* switch existing repos to amd64

* explicitly install libc6

* add more bullshit

* fix some errors

* use ignored shlibs

* remove ubuntu ports

* platform deb

* Update depends

* Update startos-iso.yaml

* Update startos-iso.yaml

* require pi-beep

* add bios boot, fix environment

* Update startos-iso.yaml

* inline deb

* Update startos-iso.yaml

* allow ssh password auth in live build

* sync hostname on livecd

* require curl
2023-05-05 00:54:09 -06:00
kn0wmad
3c908c6a09 Update README.md (#2261)
Minor typo fix
2023-05-02 06:26:54 -06:00
Lucy C
ba3805786c Feature/pwa (#2246)
* setup ui project with pwa configurations

* enable service worker config to work with ionic livereload

* fix service worker key placement

* update webmanifest names

* cleanup

* shrink logo size

* fix package build

* build fix

* fix icon size in webmanifest
2023-04-11 10:36:25 -06:00
Aiden McClelland
70afb197f1 don't attempt docker load if s9pk corrupted (#2236) 2023-03-21 11:23:44 -06:00
Aiden McClelland
d966e35054 fix migration 2023-03-17 18:58:49 -06:00
Aiden McClelland
1675570291 fix test 2023-03-17 14:42:32 -06:00
Aiden McClelland
9b88de656e version bump (#2232)
* version bump

* welcome notes

* 0341 release notes

---------

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
2023-03-17 12:55:21 -06:00
Aiden McClelland
3d39b5653d don't blow up if s9pk fails to load (#2231) 2023-03-17 12:09:24 -06:00
J H
eb5f7f64ad feat: Default to no owner for rsync (#2230) 2023-03-17 12:09:13 -06:00
Aiden McClelland
9fc0164c4d better logging of health (#2228) 2023-03-17 12:09:01 -06:00
Aiden McClelland
65eb520cca disable apt and add script for persisting apt pkgs (#2225)
* disable apt and add script for persisting apt pkgs

* fix typo

* exit 1 on fake-apt

* readd fake-apt after upgrade

* fix typo

* remove finicky protection

* fix build
2023-03-17 12:08:49 -06:00
Aiden McClelland
f7f07932b4 update registry rsync script (#2227) 2023-03-17 10:05:58 -06:00
Aiden McClelland
de52494039 fix loading authcookie into cookie store on ssh (#2226) 2023-03-17 10:05:12 -06:00
Matt Hill
4d87ee2bb6 update display obj on union change (#2224)
* update display obj on union change

* deelete unnecessary changes

* more efficient

* fix: properly change height of form object

* more config examples

---------

Co-authored-by: waterplea <alexander@inkin.ru>
2023-03-17 11:57:26 -04:00
Matt Hill
d0ba0936ca remove taiga icons (#2222) 2023-03-15 12:29:24 -06:00
Matt Hill
b08556861f Fix/stupid updates (#2221)
one more thing
2023-03-15 12:23:25 -06:00
Aiden McClelland
c96628ad49 do not log parameters 2023-03-15 12:19:11 -06:00
Matt Hill
a615882b3f fix more bugs with updates tab... (#2219) 2023-03-15 11:33:54 -06:00
214 changed files with 2474 additions and 2306 deletions


@@ -1,6 +1,6 @@
 name: 🐛 Bug Report
-description: Create a report to help us improve embassyOS
+description: Create a report to help us improve StartOS
-title: '[bug]: '
+title: "[bug]: "
 labels: [Bug, Needs Triage]
 assignees:
 - MattDHill
@@ -10,19 +10,19 @@ body:
 label: Prerequisites
 description: Please confirm you have completed the following.
 options:
-- label: I have searched for [existing issues](https://github.com/start9labs/embassy-os/issues) that already report this problem.
+- label: I have searched for [existing issues](https://github.com/start9labs/start-os/issues) that already report this problem.
 required: true
 - type: input
 attributes:
-label: embassyOS Version
+label: StartOS Version
-description: What version of embassyOS are you running?
+description: What version of StartOS are you running?
-placeholder: e.g. 0.3.0
+placeholder: e.g. 0.3.4.2
 validations:
 required: true
 - type: dropdown
 attributes:
 label: Device
-description: What device are you using to connect to Embassy?
+description: What device are you using to connect to your server?
 options:
 - Phone/tablet
 - Laptop/Desktop
@@ -52,7 +52,7 @@ body:
 - type: dropdown
 attributes:
 label: Browser
-description: What browser are you using to connect to Embassy?
+description: What browser are you using to connect to your server?
 options:
 - Firefox
 - Brave


@@ -1,6 +1,6 @@
 name: 💡 Feature Request
-description: Suggest an idea for embassyOS
+description: Suggest an idea for StartOS
-title: '[feat]: '
+title: "[feat]: "
 labels: [Enhancement]
 assignees:
 - MattDHill
@@ -10,7 +10,7 @@ body:
 label: Prerequisites
 description: Please confirm you have completed the following.
 options:
-- label: I have searched for [existing issues](https://github.com/start9labs/embassy-os/issues) that already suggest this feature.
+- label: I have searched for [existing issues](https://github.com/start9labs/start-os/issues) that already suggest this feature.
 required: true
 - type: textarea
 attributes:
@@ -27,7 +27,7 @@ body:
 - type: textarea
 attributes:
 label: Describe Preferred Solution
-description: How you want this feature added to embassyOS?
+description: How you want this feature added to StartOS?
 - type: textarea
 attributes:
 label: Describe Alternatives


@@ -1,63 +0,0 @@
name: Debian Package
on:
workflow_call:
workflow_dispatch:
env:
NODEJS_VERSION: '16.11.0'
ENVIRONMENT: "dev"
jobs:
dpkg:
name: Build dpkg
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
repository: Start9Labs/embassy-os-deb
- uses: actions/checkout@v3
with:
submodules: recursive
path: embassyos-0.3.x
- run: |
cp -r debian embassyos-0.3.x/
VERSION=0.3.x ./control.sh
cp embassyos-0.3.x/backend/embassyd.service embassyos-0.3.x/debian/embassyos.embassyd.service
cp embassyos-0.3.x/backend/embassy-init.service embassyos-0.3.x/debian/embassyos.embassy-init.service
- uses: actions/setup-node@v3
with:
node-version: ${{ env.NODEJS_VERSION }}
- name: Get npm cache directory
id: npm-cache-dir
run: |
echo "dir=$(npm config get cache)" >> $GITHUB_OUTPUT
- uses: actions/cache@v3
id: npm-cache
with:
path: ${{ steps.npm-cache-dir.outputs.dir }}
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Install dependencies
run: |
sudo apt-get update
sudo apt-get install debmake debhelper-compat
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Run build
run: "make VERSION=0.3.x TAG=${{ github.ref_name }}"
- uses: actions/upload-artifact@v3
with:
name: deb
path: embassyos_0.3.x-1_amd64.deb
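The "Get npm cache directory" step in the workflow above shares a value with later steps by appending a `key=value` line to the file that `$GITHUB_OUTPUT` points at. A minimal sketch of that mechanism outside of Actions, using a hypothetical hard-coded path in place of `npm config get cache`:

```shell
# Simulate the GitHub Actions step-output mechanism: the runner points
# GITHUB_OUTPUT at a file, the step appends key=value lines, and the
# runner reads them back to populate steps.<id>.outputs.<key>.
GITHUB_OUTPUT=$(mktemp)

# What the step does (hypothetical path instead of `npm config get cache`):
echo "dir=/home/runner/.npm" >> "$GITHUB_OUTPUT"

# What the runner does to expose it as steps.npm-cache-dir.outputs.dir:
dir=$(grep '^dir=' "$GITHUB_OUTPUT" | cut -d= -f2-)
echo "$dir"   # prints: /home/runner/.npm
```

Later steps in the real workflow then consume it as `${{ steps.npm-cache-dir.outputs.dir }}`, as the `actions/cache` step above does.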


@@ -5,7 +5,7 @@ on:
 workflow_dispatch:
 env:
-NODEJS_VERSION: '16.11.0'
+NODEJS_VERSION: '18.15.0'
 ENVIRONMENT: "dev"
 jobs:


@@ -1,129 +0,0 @@
name: Build Pipeline
on:
workflow_dispatch:
push:
branches:
- master
- next
pull_request:
branches:
- master
- next
env:
ENVIRONMENT: "dev"
jobs:
compat:
uses: ./.github/workflows/reusable-workflow.yaml
with:
build_command: make system-images/compat/docker-images/aarch64.tar
artifact_name: compat.tar
artifact_path: system-images/compat/docker-images/aarch64.tar
utils:
uses: ./.github/workflows/reusable-workflow.yaml
with:
build_command: make system-images/utils/docker-images/aarch64.tar
artifact_name: utils.tar
artifact_path: system-images/utils/docker-images/aarch64.tar
binfmt:
uses: ./.github/workflows/reusable-workflow.yaml
with:
build_command: make system-images/binfmt/docker-images/aarch64.tar
artifact_name: binfmt.tar
artifact_path: system-images/binfmt/docker-images/aarch64.tar
backend:
uses: ./.github/workflows/backend.yaml
frontend:
uses: ./.github/workflows/frontend.yaml
image:
name: Build image
runs-on: ubuntu-latest
timeout-minutes: 60
needs: [compat,utils,binfmt,backend,frontend]
steps:
- uses: actions/checkout@v3
with:
submodules: recursive
- name: Download compat.tar artifact
uses: actions/download-artifact@v3
with:
name: compat.tar
path: system-images/compat/docker-images/
- name: Download utils.tar artifact
uses: actions/download-artifact@v3
with:
name: utils.tar
path: system-images/utils/docker-images/
- name: Download binfmt.tar artifact
uses: actions/download-artifact@v3
with:
name: binfmt.tar
path: system-images/binfmt/docker-images/
- name: Download js_snapshot artifact
uses: actions/download-artifact@v3
with:
name: js_snapshot
path: libs/js_engine/src/artifacts/
- name: Download arm_js_snapshot artifact
uses: actions/download-artifact@v3
with:
name: arm_js_snapshot
path: libs/js_engine/src/artifacts/
- name: Download backend artifact
uses: actions/download-artifact@v3
with:
name: backend-aarch64
- name: 'Extract backend'
run:
tar -mxvf backend-aarch64.tar
- name: Download frontend artifact
uses: actions/download-artifact@v3
with:
name: frontend
- name: Skip frontend build
run: |
mkdir frontend/node_modules
mkdir frontend/dist
mkdir patch-db/client/node_modules
mkdir patch-db/client/dist
- name: 'Extract frontend'
run: |
tar -mxvf frontend.tar frontend/config.json
tar -mxvf frontend.tar frontend/dist
tar -xvf frontend.tar GIT_HASH.txt
tar -xvf frontend.tar ENVIRONMENT.txt
tar -xvf frontend.tar VERSION.txt
rm frontend.tar
- name: Cache raspiOS
id: cache-raspios
uses: actions/cache@v3
with:
path: raspios.img
key: cache-raspios
- name: Build image
run: |
make V=1 eos_raspberrypi-uninit.img --debug
- uses: actions/upload-artifact@v3
with:
name: image
path: eos_raspberrypi-uninit.img


@@ -1,70 +0,0 @@
name: PureOS Based ISO
on:
workflow_call:
workflow_dispatch:
push:
branches:
- master
- next
pull_request:
branches:
- master
- next
env:
ENVIRONMENT: "dev"
jobs:
dpkg:
uses: ./.github/workflows/debian.yaml
iso:
name: Build iso
runs-on: ubuntu-22.04
needs: [dpkg]
steps:
- uses: actions/checkout@v3
with:
repository: Start9Labs/eos-image-recipes
- name: Install dependencies
run: |
sudo apt update
wget http://ftp.us.debian.org/debian/pool/main/d/debspawn/debspawn_0.6.1-1_all.deb
sha256sum ./debspawn_0.6.1-1_all.deb | grep fb8a3f588438ff9ef51e713ec1d83306db893f0aa97447565e28bbba9c6e90c6
sudo apt-get install -y ./debspawn_0.6.1-1_all.deb
wget https://repo.pureos.net/pureos/pool/main/d/debootstrap/debootstrap_1.0.125pureos1_all.deb
sudo apt-get install -y --allow-downgrades ./debootstrap_1.0.125pureos1_all.deb
wget https://repo.pureos.net/pureos/pool/main/p/pureos-archive-keyring/pureos-archive-keyring_2021.11.0_all.deb
sudo apt-get install -y ./pureos-archive-keyring_2021.11.0_all.deb
- name: Configure debspawn
run: |
sudo mkdir -p /etc/debspawn/
echo "AllowUnsafePermissions=true" | sudo tee /etc/debspawn/global.toml
- uses: actions/cache@v3
with:
path: /var/lib/debspawn
key: ${{ runner.os }}-debspawn-init-byzantium
- name: Make build container
run: "debspawn list | grep byzantium || debspawn create --with-init byzantium"
- run: "mkdir -p overlays/vendor/root"
- name: Download dpkg
uses: actions/download-artifact@v3
with:
name: deb
path: overlays/vendor/root
- name: Run build
run: |
./run-local-build.sh --no-fakemachine byzantium none custom "" true
- uses: actions/upload-artifact@v3
with:
name: iso
path: results/*.iso
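Both the removed workflow above and its replacement pin the downloaded debspawn package by checking its SHA-256 hash before installing: `sha256sum` prints `<hash>  <file>`, and piping that through `grep` for the expected digest makes the step fail on any mismatch. A sketch of the pattern with a stand-in file instead of the real download:

```shell
# Hash-pinning a downloaded artifact: install only if the digest matches.
expected=fb8a3f588438ff9ef51e713ec1d83306db893f0aa97447565e28bbba9c6e90c6

f=$(mktemp)
printf 'hello' > "$f"   # stand-in for the downloaded .deb

# grep -q exits nonzero when the hash differs, aborting a `set -e` script.
if sha256sum "$f" | grep -q "$expected"; then
    echo "hash ok, safe to install"
else
    echo "hash mismatch, aborting"   # prints this: 'hello' hashes differently
fi
```

In a CI step this is usually written as a bare `sha256sum file | grep <hash>` so the step itself fails, which is exactly what the workflows above do.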

.github/workflows/startos-iso.yaml (new file, 173 lines)

@@ -0,0 +1,173 @@
name: Debian-based ISO and SquashFS
on:
workflow_call:
workflow_dispatch:
inputs:
environment:
type: choice
description: Environment
options:
- "<NONE>"
- dev
- unstable
- dev-unstable
push:
branches:
- master
- next
pull_request:
branches:
- master
- next
env:
NODEJS_VERSION: "18.15.0"
ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''<NONE>''] }}'
jobs:
dpkg:
name: Build dpkg
strategy:
fail-fast: false
matrix:
platform:
[x86_64, x86_64-nonfree, aarch64, aarch64-nonfree, raspberrypi]
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v3
with:
repository: Start9Labs/embassy-os-deb
- uses: actions/checkout@v3
with:
submodules: recursive
path: embassyos-0.3.x
- run: |
cp -r debian embassyos-0.3.x/
VERSION=0.3.x ./control.sh
cp embassyos-0.3.x/backend/embassyd.service embassyos-0.3.x/debian/embassyos.embassyd.service
cp embassyos-0.3.x/backend/embassy-init.service embassyos-0.3.x/debian/embassyos.embassy-init.service
- uses: actions/setup-node@v3
with:
node-version: ${{ env.NODEJS_VERSION }}
- name: Get npm cache directory
id: npm-cache-dir
run: |
echo "dir=$(npm config get cache)" >> $GITHUB_OUTPUT
- uses: actions/cache@v3
id: npm-cache
with:
path: ${{ steps.npm-cache-dir.outputs.dir }}
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Install dependencies
run: |
sudo apt-get update
sudo apt-get install \
debmake \
debhelper-compat \
crossbuild-essential-arm64
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Run build
run: "make VERSION=0.3.x TAG=${{ github.ref_name }}"
env:
OS_ARCH: ${{ matrix.platform }}
- uses: actions/upload-artifact@v3
with:
name: ${{ matrix.platform }}.deb
path: embassyos_0.3.x-1_*.deb
iso:
name: Build iso
strategy:
fail-fast: false
matrix:
platform:
[x86_64, x86_64-nonfree, aarch64, aarch64-nonfree, raspberrypi]
runs-on: ubuntu-22.04
needs: [dpkg]
steps:
- uses: actions/checkout@v3
with:
repository: Start9Labs/startos-image-recipes
- name: Install dependencies
run: |
sudo apt-get update
sudo apt-get install -y qemu-user-static
wget http://ftp.us.debian.org/debian/pool/main/d/debspawn/debspawn_0.6.1-1_all.deb
sha256sum ./debspawn_0.6.1-1_all.deb | grep fb8a3f588438ff9ef51e713ec1d83306db893f0aa97447565e28bbba9c6e90c6
sudo apt-get install -y ./debspawn_0.6.1-1_all.deb
- name: Configure debspawn
run: |
sudo mkdir -p /etc/debspawn/
echo "AllowUnsafePermissions=true" | sudo tee /etc/debspawn/global.toml
- uses: actions/cache@v3
with:
path: /var/lib/debspawn
key: ${{ runner.os }}-debspawn-init-bullseye
- name: Make build container
run: "debspawn list | grep bullseye || debspawn create bullseye"
- run: "mkdir -p overlays/deb"
- name: Download dpkg
uses: actions/download-artifact@v3
with:
name: ${{ matrix.platform }}.deb
path: overlays/deb
- name: Run build
run: |
./run-local-build.sh ${{ matrix.platform }}
- uses: actions/upload-artifact@v3
with:
name: ${{ matrix.platform }}.squashfs
path: results/*.squashfs
- uses: actions/upload-artifact@v3
with:
name: ${{ matrix.platform }}.iso
path: results/*.iso
if: ${{ matrix.platform != 'raspberrypi' }}
image:
name: Build image
runs-on: ubuntu-22.04
timeout-minutes: 60
needs: [iso]
steps:
- uses: actions/checkout@v3
with:
submodules: recursive
- name: Download raspberrypi.squashfs artifact
uses: actions/download-artifact@v3
with:
name: raspberrypi.squashfs
- run: mv startos-*_raspberrypi.squashfs startos.raspberrypi.squashfs
- name: Build image
run: make startos_raspberrypi.img
- uses: actions/upload-artifact@v3
with:
name: raspberrypi.img
path: startos-*_raspberrypi.img
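The `ENVIRONMENT` expression in the new workflow above is dense: `fromJson(format('["{0}", ""]', …))` builds a two-element array `["<value>", ""]` and indexes it with the boolean `inputs.environment == '<NONE>'` (false selects element 0, true selects element 1), so `<NONE>` resolves to an empty string while anything else resolves to the chosen environment, defaulting to `dev`. A sketch of the same logic in plain shell (the function name is mine, not part of the workflow):

```shell
# Mirror the workflow's ENVIRONMENT ternary-via-array-index trick:
# ENVIRONMENT = (input == '<NONE>') ? "" : (input || "dev")
resolve_environment() {
    input="$1"
    chosen="${input:-dev}"            # github.event.inputs.environment || 'dev'
    if [ "$input" = "<NONE>" ]; then
        echo ""                       # '<NONE>' means: build with no environment
    else
        echo "$chosen"
    fi
}

resolve_environment ""          # push/PR triggers have no input -> dev
resolve_environment "unstable"  # manual dispatch with a chosen environment
resolve_environment "<NONE>"    # manual dispatch, explicitly empty
```

Actions expressions have no ternary operator, which is why the workflow encodes the choice as an array index instead.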


@@ -1,6 +1,6 @@
 <!-- omit in toc -->
-# Contributing to Embassy OS
+# Contributing to StartOS
 First off, thanks for taking the time to contribute! ❤️
@@ -19,7 +19,7 @@ forward to your contributions. 🎉
 > - Tweet about it
 > - Refer this project in your project's readme
 > - Mention the project at local meetups and tell your friends/colleagues
-> - Buy an [Embassy](https://start9labs.com)
+> - Buy a [Start9 server](https://start9.com)
 <!-- omit in toc -->
@@ -49,7 +49,7 @@ forward to your contributions. 🎉
 > [Documentation](https://docs.start9labs.com).
 Before you ask a question, it is best to search for existing
-[Issues](https://github.com/Start9Labs/embassy-os/issues) that might help you.
+[Issues](https://github.com/Start9Labs/start-os/issues) that might help you.
 In case you have found a suitable issue and still need clarification, you can
 write your question in this issue. It is also advisable to search the internet
 for answers first.
@@ -57,7 +57,7 @@ for answers first.
 If you then still feel the need to ask a question and need clarification, we
 recommend the following:
-- Open an [Issue](https://github.com/Start9Labs/embassy-os/issues/new).
+- Open an [Issue](https://github.com/Start9Labs/start-os/issues/new).
 - Provide as much context as you can about what you're running into.
 - Provide project and platform versions, depending on what seems relevant.
@@ -105,7 +105,7 @@ steps in advance to help us fix any potential bug as fast as possible.
 - To see if other users have experienced (and potentially already solved) the
 same issue you are having, check if there is not already a bug report existing
 for your bug or error in the
-[bug tracker](https://github.com/Start9Labs/embassy-os/issues?q=label%3Abug).
+[bug tracker](https://github.com/Start9Labs/start-os/issues?q=label%3Abug).
 - Also make sure to search the internet (including Stack Overflow) to see if
 users outside of the GitHub community have discussed the issue.
 - Collect information about the bug:
@@ -131,7 +131,7 @@ steps in advance to help us fix any potential bug as fast as possible.
 We use GitHub issues to track bugs and errors. If you run into an issue with the
 project:
-- Open an [Issue](https://github.com/Start9Labs/embassy-os/issues/new/choose)
+- Open an [Issue](https://github.com/Start9Labs/start-os/issues/new/choose)
 selecting the appropriate type.
 - Explain the behavior you would expect and the actual behavior.
 - Please provide as much context as possible and describe the _reproduction
@@ -155,8 +155,7 @@ Once it's filed:
 ### Suggesting Enhancements
-This section guides you through submitting an enhancement suggestion for Embassy
-OS, **including completely new features and minor improvements to existing
+This section guides you through submitting an enhancement suggestion for StartOS, **including completely new features and minor improvements to existing
 functionality**. Following these guidelines will help maintainers and the
 community to understand your suggestion and find related suggestions.
@@ -168,7 +167,7 @@ community to understand your suggestion and find related suggestions.
 - Read the [documentation](https://start9.com/latest/user-manual) carefully and
 find out if the functionality is already covered, maybe by an individual
 configuration.
-- Perform a [search](https://github.com/Start9Labs/embassy-os/issues) to see if
+- Perform a [search](https://github.com/Start9Labs/start-os/issues) to see if
 the enhancement has already been suggested. If it has, add a comment to the
 existing issue instead of opening a new one.
 - Find out whether your idea fits with the scope and aims of the project. It's
@@ -182,7 +181,7 @@ community to understand your suggestion and find related suggestions.
 #### How Do I Submit a Good Enhancement Suggestion?
 Enhancement suggestions are tracked as
-[GitHub issues](https://github.com/Start9Labs/embassy-os/issues).
+[GitHub issues](https://github.com/Start9Labs/start-os/issues).
 - Use a **clear and descriptive title** for the issue to identify the
 suggestion.
@@ -197,7 +196,7 @@ Enhancement suggestions are tracked as
 macOS and Windows, and [this tool](https://github.com/colinkeenan/silentcast)
 or [this tool](https://github.com/GNOME/byzanz) on Linux.
 <!-- this should only be included if the project has a GUI -->
-- **Explain why this enhancement would be useful** to most Embassy OS users. You
+- **Explain why this enhancement would be useful** to most StartOS users. You
 may also want to point out the other projects that solved it better and which
 could serve as inspiration.
@@ -205,24 +204,24 @@ Enhancement suggestions are tracked as
 ### Project Structure
-embassyOS is composed of the following components. Please visit the README for
+StartOS is composed of the following components. Please visit the README for
 each component to understand the dependency requirements and installation
 instructions.
 - [`backend`](backend/README.md) (Rust) is a command line utility, daemon, and
 software development kit that sets up and manages services and their
 environments, provides the interface for the ui, manages system state, and
-provides utilities for packaging services for embassyOS.
+provides utilities for packaging services for StartOS.
 - [`build`](build/README.md) contains scripts and necessary for deploying
-embassyOS to a debian/raspbian system.
+StartOS to a debian/raspbian system.
 - [`frontend`](frontend/README.md) (Typescript Ionic Angular) is the code that
-is deployed to the browser to provide the user interface for embassyOS.
+is deployed to the browser to provide the user interface for StartOS.
-- `projects/ui` - Code for the user interface that is displayed when embassyOS
+- `projects/ui` - Code for the user interface that is displayed when StartOS
 is running normally.
 - `projects/setup-wizard`(frontend/README.md) - Code for the user interface
-that is displayed during the setup and recovery process for embassyOS.
+that is displayed during the setup and recovery process for StartOS.
 - `projects/diagnostic-ui` - Code for the user interface that is displayed
-when something has gone wrong with starting up embassyOS, which provides
+when something has gone wrong with starting up StartOS, which provides
 helpful debugging tools.
 - `libs` (Rust) is a set of standalone crates that were separated out of
 `backend` for the purpose of portability
@@ -232,18 +231,18 @@ instructions.
 [client](https://github.com/Start9Labs/patch-db/tree/master/client) with its
 own dependency and installation requirements.
 - `system-images` - (Docker, Rust) A suite of utility Docker images that are
-preloaded with embassyOS to assist with functions relating to services (eg.
+preloaded with StartOS to assist with functions relating to services (eg.
 configuration, backups, health checks).
 ### Your First Code Contribution
 #### Setting Up Your Development Environment
-First, clone the embassyOS repository and from the project root, pull in the
+First, clone the StartOS repository and from the project root, pull in the
 submodules for dependent libraries.
 ```sh
-git clone https://github.com/Start9Labs/embassy-os.git
+git clone https://github.com/Start9Labs/start-os.git
 git submodule update --init --recursive
 ```
@@ -254,7 +253,7 @@ to, follow the installation requirements listed in that component's README
 #### Building The Raspberry Pi Image
 This step is for setting up an environment in which to test your code changes if
-you do not yet have a embassyOS.
+you do not yet have a StartOS.
 - Requirements
 - `ext4fs` (available if running on the Linux kernel)
@@ -262,7 +261,7 @@ you do not yet have a embassyOS.
 - GNU Make
 - Building
 - see setup instructions [here](build/README.md)
-- run `make embassyos-raspi.img ARCH=aarch64` from the project root
+- run `make startos-raspi.img ARCH=aarch64` from the project root
 ### Improving The Documentation
@@ -286,7 +285,7 @@ seamless and intuitive experience.
 ### Formatting
-Each component of embassyOS contains its own style guide. Code must be formatted
+Each component of StartOS contains its own style guide. Code must be formatted
 with the formatter designated for each component. These are outlined within each
 component folder's README.
@@ -306,7 +305,7 @@ component. i.e. `backend: update to tokio v0.3`.
 The body of a pull request should contain sufficient description of what the
 changes do, as well as a justification. You should include references to any
-relevant [issues](https://github.com/Start9Labs/embassy-os/issues).
+relevant [issues](https://github.com/Start9Labs/start-os/issues).
 ### Rebasing Changes


@@ -1,6 +1,5 @@
RASPI_TARGETS := eos_raspberrypi-uninit.img eos_raspberrypi-uninit.tar.gz OS_ARCH := $(shell echo "${OS_ARCH}")
OS_ARCH := $(shell if echo $(RASPI_TARGETS) | grep -qw "$(MAKECMDGOALS)"; then echo raspberrypi; else uname -m; fi) ARCH := $(shell if [ "$(OS_ARCH)" = "raspberrypi" ]; then echo aarch64; else echo $(OS_ARCH) | sed 's/-nonfree$$//g'; fi)
ARCH := $(shell if [ "$(OS_ARCH)" = "raspberrypi" ]; then echo aarch64; else echo $(OS_ARCH); fi)
ENVIRONMENT_FILE = $(shell ./check-environment.sh) ENVIRONMENT_FILE = $(shell ./check-environment.sh)
GIT_HASH_FILE = $(shell ./check-git-hash.sh) GIT_HASH_FILE = $(shell ./check-git-hash.sh)
VERSION_FILE = $(shell ./check-version.sh) VERSION_FILE = $(shell ./check-version.sh)
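The Makefile hunk above derives `ARCH` from the new `OS_ARCH` variable: `raspberrypi` builds for `aarch64`, and a `-nonfree` suffix is stripped to recover the base architecture. A minimal Rust sketch of that mapping (`arch_for` is a hypothetical helper for illustration, not code from the repo):

```rust
// Mirrors the Makefile's shell conditional: raspberrypi targets aarch64;
// any "-nonfree" OS_ARCH variant maps to its base architecture.
fn arch_for(os_arch: &str) -> &str {
    if os_arch == "raspberrypi" {
        "aarch64"
    } else {
        os_arch.strip_suffix("-nonfree").unwrap_or(os_arch)
    }
}

fn main() {
    println!("{}", arch_for("raspberrypi"));
}
```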
@@ -19,7 +18,7 @@ FRONTEND_DIAGNOSTIC_UI_SRC := $(shell find frontend/projects/diagnostic-ui)
FRONTEND_INSTALL_WIZARD_SRC := $(shell find frontend/projects/install-wizard) FRONTEND_INSTALL_WIZARD_SRC := $(shell find frontend/projects/install-wizard)
PATCH_DB_CLIENT_SRC := $(shell find patch-db/client -not -path patch-db/client/dist) PATCH_DB_CLIENT_SRC := $(shell find patch-db/client -not -path patch-db/client/dist)
GZIP_BIN := $(shell which pigz || which gzip) GZIP_BIN := $(shell which pigz || which gzip)
ALL_TARGETS := $(EMBASSY_BINS) system-images/compat/docker-images/$(ARCH).tar system-images/utils/docker-images/$(ARCH).tar system-images/binfmt/docker-images/$(ARCH).tar $(EMBASSY_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE) ALL_TARGETS := $(EMBASSY_BINS) system-images/compat/docker-images/$(ARCH).tar system-images/utils/docker-images/$(ARCH).tar system-images/binfmt/docker-images/$(ARCH).tar $(EMBASSY_SRC) $(shell if [ "$(OS_ARCH)" = "raspberrypi" ]; then echo cargo-deps/aarch64-unknown-linux-gnu/release/pi-beep; fi) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE)
ifeq ($(REMOTE),) ifeq ($(REMOTE),)
mkdir = mkdir -p $1 mkdir = mkdir -p $1
@@ -35,7 +34,7 @@ endif
.DELETE_ON_ERROR: .DELETE_ON_ERROR:
.PHONY: all gzip install clean format sdk snapshots frontends ui backend reflash eos_raspberrypi.img sudo .PHONY: all gzip install clean format sdk snapshots frontends ui backend reflash startos_raspberrypi.img sudo
all: $(ALL_TARGETS) all: $(ALL_TARGETS)
@@ -43,12 +42,6 @@ sudo:
sudo true sudo true
clean: clean:
rm -f 2022-01-28-raspios-bullseye-arm64-lite.zip
rm -f raspios.img
rm -f eos_raspberrypi-uninit.img
rm -f eos_raspberrypi-uninit.tar.gz
rm -f ubuntu.img
rm -f product_key.txt
rm -f system-images/**/*.tar rm -f system-images/**/*.tar
rm -rf system-images/compat/target rm -rf system-images/compat/target
rm -rf backend/target rm -rf backend/target
@@ -72,17 +65,8 @@ format:
sdk: sdk:
cd backend/ && ./install-sdk.sh cd backend/ && ./install-sdk.sh
eos_raspberrypi-uninit.img: $(ALL_TARGETS) raspios.img cargo-deps/aarch64-unknown-linux-gnu/release/nc-broadcast cargo-deps/aarch64-unknown-linux-gnu/release/pi-beep | sudo startos_raspberrypi.img: $(BUILD_SRC) startos.raspberrypi.squashfs $(VERSION_FILE) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) cargo-deps/aarch64-unknown-linux-gnu/release/pi-beep | sudo
! test -f eos_raspberrypi-uninit.img || rm eos_raspberrypi-uninit.img ./build/raspberrypi/make-image.sh
./build/raspberry-pi/make-image.sh
lite-upgrade.img: raspios.img cargo-deps/aarch64-unknown-linux-gnu/release/nc-broadcast cargo-deps/aarch64-unknown-linux-gnu/release/pi-beep $(BUILD_SRC) eos.raspberrypi.squashfs
! test -f lite-upgrade.img || rm lite-upgrade.img
./build/raspberry-pi/make-upgrade-image.sh
eos_raspberrypi.img: raspios.img $(BUILD_SRC) eos.raspberrypi.squashfs $(VERSION_FILE) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) | sudo
! test -f eos_raspberrypi.img || rm eos_raspberrypi.img
./build/raspberry-pi/make-initialized-image.sh
# For creating os images. DO NOT USE # For creating os images. DO NOT USE
install: $(ALL_TARGETS) install: $(ALL_TARGETS)
@@ -91,6 +75,7 @@ install: $(ALL_TARGETS)
$(call cp,backend/target/$(ARCH)-unknown-linux-gnu/release/embassyd,$(DESTDIR)/usr/bin/embassyd) $(call cp,backend/target/$(ARCH)-unknown-linux-gnu/release/embassyd,$(DESTDIR)/usr/bin/embassyd)
$(call cp,backend/target/$(ARCH)-unknown-linux-gnu/release/embassy-cli,$(DESTDIR)/usr/bin/embassy-cli) $(call cp,backend/target/$(ARCH)-unknown-linux-gnu/release/embassy-cli,$(DESTDIR)/usr/bin/embassy-cli)
$(call cp,backend/target/$(ARCH)-unknown-linux-gnu/release/avahi-alias,$(DESTDIR)/usr/bin/avahi-alias) $(call cp,backend/target/$(ARCH)-unknown-linux-gnu/release/avahi-alias,$(DESTDIR)/usr/bin/avahi-alias)
if [ "$(OS_ARCH)" = "raspberrypi" ]; then $(call cp,cargo-deps/aarch64-unknown-linux-gnu/release/pi-beep,$(DESTDIR)/usr/bin/pi-beep); fi
$(call mkdir,$(DESTDIR)/usr/lib) $(call mkdir,$(DESTDIR)/usr/lib)
$(call rm,$(DESTDIR)/usr/lib/embassy) $(call rm,$(DESTDIR)/usr/lib/embassy)
@@ -183,7 +168,7 @@ frontend/config.json: $(GIT_HASH_FILE) frontend/config-sample.json
npm --prefix frontend run-script build-config npm --prefix frontend run-script build-config
frontend/patchdb-ui-seed.json: frontend/package.json frontend/patchdb-ui-seed.json: frontend/package.json
jq '."ack-welcome" = "$(shell yq '.version' frontend/package.json)"' frontend/patchdb-ui-seed.json > ui-seed.tmp jq '."ack-welcome" = $(shell yq '.version' frontend/package.json)' frontend/patchdb-ui-seed.json > ui-seed.tmp
mv ui-seed.tmp frontend/patchdb-ui-seed.json mv ui-seed.tmp frontend/patchdb-ui-seed.json
patch-db/client/node_modules: patch-db/client/package.json patch-db/client/node_modules: patch-db/client/package.json

View File

@@ -1,6 +1,6 @@
# embassyOS # StartOS
[![version](https://img.shields.io/github/v/tag/Start9Labs/embassy-os?color=success)](https://github.com/Start9Labs/embassy-os/releases) [![version](https://img.shields.io/github/v/tag/Start9Labs/start-os?color=success)](https://github.com/Start9Labs/start-os/releases)
[![build](https://github.com/Start9Labs/embassy-os/actions/workflows/product.yaml/badge.svg)](https://github.com/Start9Labs/embassy-os/actions/workflows/product.yaml) [![build](https://github.com/Start9Labs/start-os/actions/workflows/startos-iso.yaml/badge.svg)](https://github.com/Start9Labs/start-os/actions/workflows/startos-iso.yaml)
[![community](https://img.shields.io/badge/community-matrix-yellow)](https://matrix.to/#/#community:matrix.start9labs.com) [![community](https://img.shields.io/badge/community-matrix-yellow)](https://matrix.to/#/#community:matrix.start9labs.com)
[![community](https://img.shields.io/badge/community-telegram-informational)](https://t.me/start9_labs) [![community](https://img.shields.io/badge/community-telegram-informational)](https://t.me/start9_labs)
[![support](https://img.shields.io/badge/support-docs-important)](https://docs.start9.com) [![support](https://img.shields.io/badge/support-docs-important)](https://docs.start9.com)
@@ -12,16 +12,16 @@
### _Welcome to the era of Sovereign Computing_ ### ### _Welcome to the era of Sovereign Computing_ ###
embassyOS is a browser-based, graphical operating system for a personal server. embassyOS facilitates the discovery, installation, network configuration, service configuration, data backup, dependency management, and health monitoring of self-hosted software services. It is the most advanced, secure, reliable, and user friendly personal server OS in the world. StartOS is a browser-based, graphical operating system for a personal server. StartOS facilitates the discovery, installation, network configuration, service configuration, data backup, dependency management, and health monitoring of self-hosted software services. It is the most advanced, secure, reliable, and user friendly personal server OS in the world.
## Running embassyOS ## Running StartOS
There are multiple ways to get your hands on embassyOS. There are multiple ways to get your hands on StartOS.
### :moneybag: Buy an Embassy ### :moneybag: Buy a Start9 server
This is the most convenient option. Simply [buy an Embassy](https://start9.com) from Start9 and plug it in. Depending on where you live, shipping costs and import duties will vary. This is the most convenient option. Simply [buy a server](https://start9.com) from Start9 and plug it in. Depending on where you live, shipping costs and import duties will vary.
### :construction_worker: Build your own Embassy ### :construction_worker: Build your own server
While not as convenient as buying an Embassy, this option is easier than you might imagine, and there are 4 reasons why you might prefer it: This option is easier than you might imagine, and there are 4 reasons why you might prefer it:
1. You already have your own hardware. 1. You already have your own hardware.
1. You want to save on shipping costs. 1. You want to save on shipping costs.
1. You prefer not to divulge your physical address. 1. You prefer not to divulge your physical address.
@@ -29,23 +29,23 @@ While not as convenient as buying an Embassy, this option is easier than you mig
To pursue this option, follow one of our [DIY guides](https://start9.com/latest/diy). To pursue this option, follow one of our [DIY guides](https://start9.com/latest/diy).
### :hammer_and_wrench: Build embassyOS from Source ### :hammer_and_wrench: Build StartOS from Source
embassyOS can be built from source, for personal use, for free. StartOS can be built from source, for personal use, for free.
A detailed guide for doing so can be found [here](https://github.com/Start9Labs/embassy-os/blob/master/build/README.md). A detailed guide for doing so can be found [here](https://github.com/Start9Labs/start-os/blob/master/build/README.md).
## :heart: Contributing ## :heart: Contributing
There are multiple ways to contribute: work directly on embassyOS, package a service for the marketplace, or help with documentation and guides. To learn more about contributing, see [here](https://docs.start9.com/latest/contribute/) or [here](https://github.com/Start9Labs/embassy-os/blob/master/CONTRIBUTING.md). There are multiple ways to contribute: work directly on StartOS, package a service for the marketplace, or help with documentation and guides. To learn more about contributing, see [here](https://docs.start9.com/latest/contribute/) or [here](https://github.com/Start9Labs/start-os/blob/master/CONTRIBUTING.md).
## UI Screenshots ## UI Screenshots
<p align="center"> <p align="center">
<img src="assets/embassyOS.png" alt="embassyOS" width="85%"> <img src="assets/StartOS.png" alt="StartOS" width="85%">
</p> </p>
<p align="center"> <p align="center">
<img src="assets/eOS-preferences.png" alt="Embassy Preferences" width="49%"> <img src="assets/preferences.png" alt="StartOS Preferences" width="49%">
<img src="assets/eOS-ghost.png" alt="Embassy Ghost Service" width="49%"> <img src="assets/ghost.png" alt="StartOS Ghost Service" width="49%">
</p> </p>
<p align="center"> <p align="center">
<img src="assets/eOS-synapse-health-check.png" alt="Embassy Synapse Health Checks" width="49%"> <img src="assets/synapse-health-check.png" alt="StartOS Synapse Health Checks" width="49%">
<img src="assets/eOS-sideload.png" alt="Embassy Sideload Service" width="49%"> <img src="assets/sideload.png" alt="StartOS Sideload Service" width="49%">
</p> </p>

View File

Binary image file (191 KiB before and after)

View File

Binary image file (281 KiB before and after)

View File

Binary image file (266 KiB before and after)

View File

Binary image file (154 KiB before and after)

View File

Binary image file (213 KiB before and after)

backend/Cargo.lock generated
View File

@@ -1354,7 +1354,7 @@ dependencies = [
[[package]] [[package]]
name = "embassy-os" name = "embassy-os"
version = "0.3.4" version = "0.3.4-rev.2"
dependencies = [ dependencies = [
"aes", "aes",
"async-compression", "async-compression",

View File

@@ -1,6 +1,6 @@
[package] [package]
authors = ["Aiden McClelland <me@drbonez.dev>"] authors = ["Aiden McClelland <me@drbonez.dev>"]
description = "The core of the Start9 Embassy Operating System" description = "The core of StartOS"
documentation = "https://docs.rs/embassy-os" documentation = "https://docs.rs/embassy-os"
edition = "2021" edition = "2021"
keywords = [ keywords = [
@@ -13,8 +13,8 @@ keywords = [
] ]
name = "embassy-os" name = "embassy-os"
readme = "README.md" readme = "README.md"
repository = "https://github.com/Start9Labs/embassy-os" repository = "https://github.com/Start9Labs/start-os"
version = "0.3.4" version = "0.3.4-rev.2"
[lib] [lib]
name = "embassy" name = "embassy"

View File

@@ -1,27 +1,27 @@
# embassyOS Backend # StartOS Backend
- Requirements: - Requirements:
- [Install Rust](https://rustup.rs) - [Install Rust](https://rustup.rs)
- Recommended: [rust-analyzer](https://rust-analyzer.github.io/) - Recommended: [rust-analyzer](https://rust-analyzer.github.io/)
- [Docker](https://docs.docker.com/get-docker/) - [Docker](https://docs.docker.com/get-docker/)
- [Rust ARM64 Build Container](https://github.com/Start9Labs/rust-arm-builder) - [Rust ARM64 Build Container](https://github.com/Start9Labs/rust-arm-builder)
- Scripts (run withing the `./backend` directory) - Scripts (run within the `./backend` directory)
- `build-prod.sh` - compiles a release build of the artifacts for running on - `build-prod.sh` - compiles a release build of the artifacts for running on
ARM64 ARM64
- A Linux computer or VM - A Linux computer or VM
## Structure ## Structure
The embassyOS backend is broken up into 4 different binaries: The StartOS backend is broken up into 4 different binaries:
- embassyd: This is the main workhorse of embassyOS - any new functionality you - embassyd: This is the main workhorse of StartOS - any new functionality you
want will likely go here want will likely go here
- embassy-init: This is the component responsible for allowing you to set up - embassy-init: This is the component responsible for allowing you to set up
your device, and handles system initialization on startup your device, and handles system initialization on startup
- embassy-cli: This is a CLI tool that will allow you to issue commands to - embassy-cli: This is a CLI tool that will allow you to issue commands to
embassyd and control it similarly to the UI embassyd and control it similarly to the UI
- embassy-sdk: This is a CLI tool that aids in building and packaging services - embassy-sdk: This is a CLI tool that aids in building and packaging services
you wish to deploy to the Embassy you wish to deploy to StartOS
Finally there is a library `embassy` that supports all four of these tools. Finally there is a library `embassy` that supports all four of these tools.
@@ -30,7 +30,7 @@ See [here](/backend/Cargo.toml) for details.
## Building ## Building
You can build the entire operating system image using `make` from the root of You can build the entire operating system image using `make` from the root of
the embassyOS project. This will subsequently invoke the build scripts above to the StartOS project. This will subsequently invoke the build scripts above to
actually create the requisite binaries and put them onto the final operating actually create the requisite binaries and put them onto the final operating
system image. system image.

View File

@@ -3,6 +3,11 @@
set -e set -e
shopt -s expand_aliases shopt -s expand_aliases
if [ -z "$OS_ARCH" ]; then
>&2 echo '$OS_ARCH is required'
exit 1
fi
if [ -z "$ARCH" ]; then if [ -z "$ARCH" ]; then
ARCH=$(uname -m) ARCH=$(uname -m)
fi fi
@@ -17,8 +22,8 @@ if tty -s; then
USE_TTY="-it" USE_TTY="-it"
fi fi
alias 'rust-gnu-builder'='docker run $USE_TTY --rm -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)":/home/rust/src -P start9/rust-arm-cross:aarch64' alias 'rust-gnu-builder'='docker run $USE_TTY --rm -e "OS_ARCH=$OS_ARCH" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)":/home/rust/src -P start9/rust-arm-cross:aarch64'
alias 'rust-musl-builder'='docker run $USE_TTY --rm -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)":/home/rust/src -P messense/rust-musl-cross:$ARCH-musl' alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "OS_ARCH=$OS_ARCH" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)":/home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
cd .. cd ..
FLAGS="" FLAGS=""
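The build script now fails fast when `OS_ARCH` is unset, instead of failing obscurely later inside a build container, and forwards the value into both builder containers via `-e`. A conceptual stdlib sketch of that fail-fast guard (`require` is a hypothetical helper, not part of the repo):

```rust
// Analogous to the script's `[ -z "$OS_ARCH" ]` guard: validate a
// required variable up front and surface a clear error message.
fn require(name: &str, value: Option<&str>) -> Result<String, String> {
    match value {
        Some(v) if !v.is_empty() => Ok(v.to_string()),
        _ => Err(format!("{} is required", name)),
    }
}

fn main() {
    // An unset variable aborts before any build work starts.
    println!("{:?}", require("OS_ARCH", None));
}
```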

View File

@@ -56,7 +56,7 @@ pub struct Action {
pub input_spec: ConfigSpec, pub input_spec: ConfigSpec,
} }
impl Action { impl Action {
#[instrument] #[instrument(skip_all)]
pub fn validate( pub fn validate(
&self, &self,
container: &Option<DockerContainers>, container: &Option<DockerContainers>,
@@ -74,7 +74,7 @@ impl Action {
}) })
} }
#[instrument(skip(ctx))] #[instrument(skip_all)]
pub async fn execute( pub async fn execute(
&self, &self,
ctx: &RpcContext, ctx: &RpcContext,
@@ -120,7 +120,7 @@ fn display_action_result(action_result: ActionResult, matches: &ArgMatches) {
} }
#[command(about = "Executes an action", display(display_action_result))] #[command(about = "Executes an action", display(display_action_result))]
#[instrument(skip(ctx))] #[instrument(skip_all)]
pub async fn action( pub async fn action(
#[context] ctx: RpcContext, #[context] ctx: RpcContext,
#[arg(rename = "id")] pkg_id: PackageId, #[arg(rename = "id")] pkg_id: PackageId,
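The change from `#[instrument(skip(ctx))]` to `#[instrument(skip_all)]`, repeated across these files, tells the `tracing` attribute macro to record no function arguments in the span rather than skipping a named list — so newly added parameters (including secrets like passwords, or types without a `Debug` impl) can never leak into logs by omission. A stdlib-only sketch of the idea (the real code uses the `tracing` crate; `span` and `login` here are illustrative):

```rust
// Conceptual stand-in for #[instrument(skip_all)]: the span carries only
// the function name, so arguments need not implement Debug and sensitive
// values are never formatted into logs.
#[allow(dead_code)]
struct Password(String); // deliberately has no Debug impl

fn span(name: &str) -> String {
    format!("entered span: {}", name)
}

fn login(_password: &Password) -> String {
    // With a skip(ctx)-style list, adding a new parameter would silently
    // record it; skip_all records nothing unless explicitly opted in.
    span("login")
}

fn main() {
    println!("{}", login(&Password("hunter2".into())));
}
```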

View File

@@ -90,7 +90,7 @@ fn gen_pwd() {
) )
} }
#[instrument(skip(ctx, password))] #[instrument(skip_all)]
async fn cli_login( async fn cli_login(
ctx: CliContext, ctx: CliContext,
password: Option<PasswordType>, password: Option<PasswordType>,
@@ -145,7 +145,7 @@ where
display(display_none), display(display_none),
metadata(authenticated = false) metadata(authenticated = false)
)] )]
#[instrument(skip(ctx, password))] #[instrument(skip_all)]
pub async fn login( pub async fn login(
#[context] ctx: RpcContext, #[context] ctx: RpcContext,
#[request] req: &RequestParts, #[request] req: &RequestParts,
@@ -183,7 +183,7 @@ pub async fn login(
} }
#[command(display(display_none), metadata(authenticated = false))] #[command(display(display_none), metadata(authenticated = false))]
#[instrument(skip(ctx))] #[instrument(skip_all)]
pub async fn logout( pub async fn logout(
#[context] ctx: RpcContext, #[context] ctx: RpcContext,
#[request] req: &RequestParts, #[request] req: &RequestParts,
@@ -250,7 +250,7 @@ fn display_sessions(arg: SessionList, matches: &ArgMatches) {
} }
#[command(display(display_sessions))] #[command(display(display_sessions))]
#[instrument(skip(ctx))] #[instrument(skip_all)]
pub async fn list( pub async fn list(
#[context] ctx: RpcContext, #[context] ctx: RpcContext,
#[request] req: &RequestParts, #[request] req: &RequestParts,
@@ -296,7 +296,7 @@ impl AsLogoutSessionId for KillSessionId {
} }
#[command(display(display_none))] #[command(display(display_none))]
#[instrument(skip(ctx))] #[instrument(skip_all)]
pub async fn kill( pub async fn kill(
#[context] ctx: RpcContext, #[context] ctx: RpcContext,
#[arg(parse(parse_comma_separated))] ids: Vec<String>, #[arg(parse(parse_comma_separated))] ids: Vec<String>,
@@ -305,7 +305,7 @@ pub async fn kill(
Ok(()) Ok(())
} }
#[instrument(skip(ctx, old_password, new_password))] #[instrument(skip_all)]
async fn cli_reset_password( async fn cli_reset_password(
ctx: CliContext, ctx: CliContext,
old_password: Option<PasswordType>, old_password: Option<PasswordType>,
@@ -369,7 +369,7 @@ impl SetPasswordReceipt {
custom_cli(cli_reset_password(async, context(CliContext))), custom_cli(cli_reset_password(async, context(CliContext))),
display(display_none) display(display_none)
)] )]
#[instrument(skip(ctx, old_password, new_password))] #[instrument(skip_all)]
pub async fn reset_password( pub async fn reset_password(
#[context] ctx: RpcContext, #[context] ctx: RpcContext,
#[arg(rename = "old-password")] old_password: Option<PasswordType>, #[arg(rename = "old-password")] old_password: Option<PasswordType>,
@@ -403,7 +403,7 @@ pub async fn reset_password(
display(display_none), display(display_none),
metadata(authenticated = false) metadata(authenticated = false)
)] )]
#[instrument(skip(ctx))] #[instrument(skip_all)]
pub async fn get_pubkey(#[context] ctx: RpcContext) -> Result<Jwk, RpcError> { pub async fn get_pubkey(#[context] ctx: RpcContext) -> Result<Jwk, RpcError> {
let secret = ctx.as_ref().clone(); let secret = ctx.as_ref().clone();
let pub_key = secret.to_public_key()?; let pub_key = secret.to_public_key()?;

View File

@@ -35,7 +35,7 @@ fn parse_comma_separated(arg: &str, _: &ArgMatches) -> Result<BTreeSet<PackageId
} }
#[command(rename = "create", display(display_none))] #[command(rename = "create", display(display_none))]
#[instrument(skip(ctx, old_password, password))] #[instrument(skip_all)]
pub async fn backup_all( pub async fn backup_all(
#[context] ctx: RpcContext, #[context] ctx: RpcContext,
#[arg(rename = "target-id")] target_id: BackupTargetId, #[arg(rename = "target-id")] target_id: BackupTargetId,
@@ -161,7 +161,7 @@ pub async fn backup_all(
Ok(()) Ok(())
} }
#[instrument(skip(db, packages))] #[instrument(skip_all)]
async fn assure_backing_up( async fn assure_backing_up(
db: &mut PatchDbHandle, db: &mut PatchDbHandle,
packages: impl IntoIterator<Item = &PackageId>, packages: impl IntoIterator<Item = &PackageId>,
@@ -200,7 +200,7 @@ async fn assure_backing_up(
Ok(()) Ok(())
} }
#[instrument(skip(ctx, db, backup_guard))] #[instrument(skip_all)]
async fn perform_backup<Db: DbHandle>( async fn perform_backup<Db: DbHandle>(
ctx: &RpcContext, ctx: &RpcContext,
mut db: Db, mut db: Db,

View File

@@ -92,7 +92,7 @@ impl BackupActions {
Ok(()) Ok(())
} }
#[instrument(skip(ctx, db))] #[instrument(skip_all)]
pub async fn create<Db: DbHandle>( pub async fn create<Db: DbHandle>(
&self, &self,
ctx: &RpcContext, ctx: &RpcContext,
@@ -189,7 +189,7 @@ impl BackupActions {
}) })
} }
#[instrument(skip(ctx, db))] #[instrument(skip_all)]
pub async fn restore<Db: DbHandle>( pub async fn restore<Db: DbHandle>(
&self, &self,
ctx: &RpcContext, ctx: &RpcContext,

View File

@@ -46,7 +46,7 @@ fn parse_comma_separated(arg: &str, _: &ArgMatches) -> Result<Vec<PackageId>, Er
} }
#[command(rename = "restore", display(display_none))] #[command(rename = "restore", display(display_none))]
#[instrument(skip(ctx, password))] #[instrument(skip_all)]
pub async fn restore_packages_rpc( pub async fn restore_packages_rpc(
#[context] ctx: RpcContext, #[context] ctx: RpcContext,
#[arg(parse(parse_comma_separated))] ids: Vec<PackageId>, #[arg(parse(parse_comma_separated))] ids: Vec<PackageId>,
@@ -169,7 +169,7 @@ impl ProgressInfo {
} }
} }
#[instrument(skip(ctx))] #[instrument(skip_all)]
pub async fn recover_full_embassy( pub async fn recover_full_embassy(
ctx: SetupContext, ctx: SetupContext,
disk_guid: Arc<String>, disk_guid: Arc<String>,
@@ -306,7 +306,7 @@ async fn restore_packages(
Ok((backup_guard, tasks, progress_info)) Ok((backup_guard, tasks, progress_info))
} }
#[instrument(skip(ctx, db, backup_guard))] #[instrument(skip_all)]
async fn assure_restoring( async fn assure_restoring(
ctx: &RpcContext, ctx: &RpcContext,
db: &mut PatchDbHandle, db: &mut PatchDbHandle,
@@ -376,7 +376,7 @@ async fn assure_restoring(
Ok(guards) Ok(guards)
} }
#[instrument(skip(ctx, guard))] #[instrument(skip_all)]
async fn restore_package<'a>( async fn restore_package<'a>(
ctx: RpcContext, ctx: RpcContext,
manifest: Manifest, manifest: Manifest,

View File

@@ -223,7 +223,7 @@ fn display_backup_info(info: BackupInfo, matches: &ArgMatches) {
} }
#[command(display(display_backup_info))] #[command(display(display_backup_info))]
#[instrument(skip(ctx, password))] #[instrument(skip_all)]
pub async fn info( pub async fn info(
#[context] ctx: RpcContext, #[context] ctx: RpcContext,
#[arg(rename = "target-id")] target_id: BackupTargetId, #[arg(rename = "target-id")] target_id: BackupTargetId,

View File

@@ -13,13 +13,50 @@ use embassy::shutdown::Shutdown;
use embassy::sound::CHIME; use embassy::sound::CHIME;
use embassy::util::logger::EmbassyLogger; use embassy::util::logger::EmbassyLogger;
use embassy::util::Invoke; use embassy::util::Invoke;
use embassy::{Error, ErrorKind, ResultExt, IS_RASPBERRY_PI}; use embassy::{Error, ErrorKind, ResultExt, OS_ARCH};
use tokio::process::Command; use tokio::process::Command;
use tracing::instrument; use tracing::instrument;
#[instrument] #[instrument(skip_all)]
async fn setup_or_init(cfg_path: Option<PathBuf>) -> Result<(), Error> { async fn setup_or_init(cfg_path: Option<PathBuf>) -> Result<(), Error> {
if tokio::fs::metadata("/cdrom").await.is_ok() { Command::new("ln")
.arg("-sf")
.arg("/usr/lib/embassy/scripts/fake-apt")
.arg("/usr/local/bin/apt")
.invoke(crate::ErrorKind::OpenSsh)
.await?;
Command::new("ln")
.arg("-sf")
.arg("/usr/lib/embassy/scripts/fake-apt")
.arg("/usr/local/bin/apt-get")
.invoke(crate::ErrorKind::OpenSsh)
.await?;
Command::new("ln")
.arg("-sf")
.arg("/usr/lib/embassy/scripts/fake-apt")
.arg("/usr/local/bin/aptitude")
.invoke(crate::ErrorKind::OpenSsh)
.await?;
Command::new("make-ssl-cert")
.arg("generate-default-snakeoil")
.arg("--force-overwrite")
.invoke(crate::ErrorKind::OpenSsl)
.await?;
if tokio::fs::metadata("/run/live/medium").await.is_ok() {
Command::new("sed")
.arg("-i")
.arg("s/PasswordAuthentication no/PasswordAuthentication yes/g")
.arg("/etc/ssh/sshd_config")
.invoke(crate::ErrorKind::Filesystem)
.await?;
Command::new("systemctl")
.arg("reload")
.arg("ssh")
.invoke(crate::ErrorKind::OpenSsh)
.await?;
let ctx = InstallContext::init(cfg_path).await?; let ctx = InstallContext::init(cfg_path).await?;
let server = WebServer::install(([0, 0, 0, 0], 80).into(), ctx.clone()).await?; let server = WebServer::install(([0, 0, 0, 0], 80).into(), ctx.clone()).await?;
@@ -117,9 +154,9 @@ async fn run_script_if_exists<P: AsRef<Path>>(path: P) {
} }
} }
#[instrument] #[instrument(skip_all)]
async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error> { async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error> {
if *IS_RASPBERRY_PI && tokio::fs::metadata(STANDBY_MODE_PATH).await.is_ok() { if OS_ARCH == "raspberrypi" && tokio::fs::metadata(STANDBY_MODE_PATH).await.is_ok() {
tokio::fs::remove_file(STANDBY_MODE_PATH).await?; tokio::fs::remove_file(STANDBY_MODE_PATH).await?;
Command::new("sync").invoke(ErrorKind::Filesystem).await?; Command::new("sync").invoke(ErrorKind::Filesystem).await?;
embassy::sound::SHUTDOWN.play().await?; embassy::sound::SHUTDOWN.play().await?;
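The init change above creates the `fake-apt` shims unconditionally with three near-identical `ln -sf` invocations. The same effect can be sketched as a loop over link names using the standard library (Unix-only; the target path is illustrative, and `link_fake_apt` is a hypothetical helper, not code from the repo):

```rust
use std::os::unix::fs::symlink;
use std::path::Path;

// Link apt, apt-get, and aptitude to a single fake-apt shim, as the
// init code does with three `ln -sf` invocations.
fn link_fake_apt(target: &Path, bin_dir: &Path) -> std::io::Result<()> {
    for name in ["apt", "apt-get", "aptitude"] {
        let link = bin_dir.join(name);
        let _ = std::fs::remove_file(&link); // `-f`: replace any existing link
        symlink(target, &link)?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("fake-apt-demo");
    std::fs::create_dir_all(&dir)?;
    link_fake_apt(Path::new("/usr/lib/embassy/scripts/fake-apt"), &dir)?;
    println!("linked");
    Ok(())
}
```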

View File

@@ -12,7 +12,7 @@ use futures::{FutureExt, TryFutureExt};
use tokio::signal::unix::signal; use tokio::signal::unix::signal;
use tracing::instrument; use tracing::instrument;
#[instrument] #[instrument(skip_all)]
async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error> { async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error> {
let (rpc_ctx, server, shutdown) = { let (rpc_ctx, server, shutdown) = {
let rpc_ctx = RpcContext::init( let rpc_ctx = RpcContext::init(
@@ -25,7 +25,7 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
), ),
) )
.await?; .await?;
embassy::hostname::sync_hostname(&*rpc_ctx.account.read().await).await?; embassy::hostname::sync_hostname(&rpc_ctx.account.read().await.hostname).await?;
let server = WebServer::main(([0, 0, 0, 0], 80).into(), rpc_ctx.clone()).await?; let server = WebServer::main(([0, 0, 0, 0], 80).into(), rpc_ctx.clone()).await?;
let mut shutdown_recv = rpc_ctx.shutdown.subscribe(); let mut shutdown_recv = rpc_ctx.shutdown.subscribe();

View File

@@ -31,7 +31,7 @@ pub struct ConfigActions {
pub set: PackageProcedure, pub set: PackageProcedure,
} }
impl ConfigActions { impl ConfigActions {
#[instrument] #[instrument(skip_all)]
pub fn validate( pub fn validate(
&self, &self,
container: &Option<DockerContainers>, container: &Option<DockerContainers>,
@@ -47,7 +47,7 @@ impl ConfigActions {
.with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Config Set"))?; .with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Config Set"))?;
Ok(()) Ok(())
} }
#[instrument(skip(ctx))] #[instrument(skip_all)]
pub async fn get( pub async fn get(
&self, &self,
ctx: &RpcContext, ctx: &RpcContext,
@@ -71,7 +71,7 @@ impl ConfigActions {
}) })
} }
#[instrument(skip(ctx))] #[instrument(skip_all)]
pub async fn set( pub async fn set(
&self, &self,
ctx: &RpcContext, ctx: &RpcContext,

View File

@@ -214,7 +214,7 @@ impl ConfigGetReceipts {
} }
#[command(display(display_serializable))] #[command(display(display_serializable))]
#[instrument(skip(ctx))] #[instrument(skip_all)]
pub async fn get( pub async fn get(
#[context] ctx: RpcContext, #[context] ctx: RpcContext,
#[parent_data] id: PackageId, #[parent_data] id: PackageId,
@@ -240,7 +240,7 @@ pub async fn get(
display(display_none), display(display_none),
metadata(sync_db = true) metadata(sync_db = true)
)] )]
#[instrument] #[instrument(skip_all)]
pub fn set( pub fn set(
#[parent_data] id: PackageId, #[parent_data] id: PackageId,
#[allow(unused_variables)] #[allow(unused_variables)]
@@ -413,7 +413,7 @@ impl ConfigReceipts {
} }
#[command(rename = "dry", display(display_serializable))] #[command(rename = "dry", display(display_serializable))]
#[instrument(skip(ctx))] #[instrument(skip_all)]
pub async fn set_dry( pub async fn set_dry(
#[context] ctx: RpcContext, #[context] ctx: RpcContext,
#[parent_data] (id, config, timeout): (PackageId, Option<Config>, Option<Duration>), #[parent_data] (id, config, timeout): (PackageId, Option<Config>, Option<Duration>),
@@ -440,7 +440,7 @@ pub async fn set_dry(
Ok(BreakageRes(breakages)) Ok(BreakageRes(breakages))
} }
#[instrument(skip(ctx))] #[instrument(skip_all)]
pub async fn set_impl( pub async fn set_impl(
ctx: RpcContext, ctx: RpcContext,
(id, config, timeout): (PackageId, Option<Config>, Option<Duration>), (id, config, timeout): (PackageId, Option<Config>, Option<Duration>),
@@ -465,7 +465,7 @@ pub async fn set_impl(
Ok(()) Ok(())
} }
#[instrument(skip(ctx, db, receipts))] #[instrument(skip_all)]
pub async fn configure<'a, Db: DbHandle>( pub async fn configure<'a, Db: DbHandle>(
ctx: &RpcContext, ctx: &RpcContext,
db: &'a mut Db, db: &'a mut Db,
@@ -485,7 +485,7 @@ pub async fn configure<'a, Db: DbHandle>(
Ok(()) Ok(())
} }
#[instrument(skip(ctx, db, receipts))] #[instrument(skip_all)]
pub fn configure_rec<'a, Db: DbHandle>( pub fn configure_rec<'a, Db: DbHandle>(
ctx: &'a RpcContext, ctx: &'a RpcContext,
db: &'a mut Db, db: &'a mut Db,
@@ -771,7 +771,7 @@ pub fn configure_rec<'a, Db: DbHandle>(
} }
.boxed() .boxed()
} }
#[instrument] #[instrument(skip_all)]
pub fn not_found() -> Error { pub fn not_found() -> Error {
Error::new(eyre!("Could not find"), crate::ErrorKind::Incoherent) Error::new(eyre!("Could not find"), crate::ErrorKind::Incoherent)
} }

View File

@@ -54,7 +54,8 @@ impl Drop for CliContextSeed {
             true,
         )
         .unwrap();
-        let store = self.cookie_store.lock().unwrap();
+        let mut store = self.cookie_store.lock().unwrap();
+        store.remove("localhost", "", "local");
         store.save_json(&mut *writer).unwrap();
         writer.sync_all().unwrap();
         std::fs::rename(tmp, &self.cookie_path).unwrap();
@@ -68,7 +69,7 @@ const DEFAULT_PORT: u16 = 5959;
 pub struct CliContext(Arc<CliContextSeed>);
 impl CliContext {
     /// BLOCKING
-    #[instrument(skip(matches))]
+    #[instrument(skip_all)]
     pub fn init(matches: &ArgMatches) -> Result<Self, crate::Error> {
         let local_config_path = local_config_path();
         let base: CliContextConfig = load_config_from_paths(
@@ -101,19 +102,22 @@ impl CliContext {
                 .unwrap_or(Path::new("/"))
                 .join(".cookies.json")
         });
-        let cookie_store = Arc::new(CookieStoreMutex::new(if cookie_path.exists() {
-            let mut store = CookieStore::load_json(BufReader::new(File::open(&cookie_path)?))
-                .map_err(|e| eyre!("{}", e))
-                .with_kind(crate::ErrorKind::Deserialization)?;
+        let cookie_store = Arc::new(CookieStoreMutex::new({
+            let mut store = if cookie_path.exists() {
+                CookieStore::load_json(BufReader::new(File::open(&cookie_path)?))
+                    .map_err(|e| eyre!("{}", e))
+                    .with_kind(crate::ErrorKind::Deserialization)?
+            } else {
+                CookieStore::default()
+            };
             if let Ok(local) = std::fs::read_to_string(LOCAL_AUTH_COOKIE_PATH) {
                 store
                     .insert_raw(&Cookie::new("local", local), &"http://localhost".parse()?)
                     .with_kind(crate::ErrorKind::Network)?;
             }
             store
-        } else {
-            CookieStore::default()
         }));
         Ok(CliContext(Arc::new(CliContextSeed {
             base_url: url.clone(),
             rpc_url: {

View File
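The CliContext hunk above restructures cookie loading so the local auth cookie is inserted whether the store was loaded from disk or freshly defaulted — previously the override only ran in the load branch. A minimal std-only sketch of the same load-or-default-then-override shape (the file path, token value, and `Vec<String>` store are hypothetical stand-ins, not the real `CookieStore` API):

```rust
use std::fs;
use std::path::Path;

// Load persisted state if the file exists, else start from a default,
// then apply the local override in either case -- mirroring how
// `CookieStore::default()` moved into the same block as the insertion.
fn load_store(path: &Path, local_override: Option<&str>) -> Vec<String> {
    let mut store: Vec<String> = if path.exists() {
        fs::read_to_string(path)
            .map(|s| s.lines().map(str::to_owned).collect())
            .unwrap_or_default()
    } else {
        Vec::new() // the "CookieStore::default()" arm
    };
    // The override applies regardless of which arm ran above.
    if let Some(local) = local_override {
        store.push(format!("local={local}"));
    }
    store
}

fn main() {
    // No file at this (hypothetical) path: default arm, then override.
    let store = load_store(Path::new("/nonexistent/.cookies.json"), Some("token"));
    assert_eq!(store, vec!["local=token".to_string()]);
}
```

The design point is that the override no longer silently disappears on first run, when no cookie file exists yet.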

@@ -18,7 +18,7 @@ pub struct DiagnosticContextConfig {
     pub datadir: Option<PathBuf>,
 }
 impl DiagnosticContextConfig {
-    #[instrument(skip(path))]
+    #[instrument(skip_all)]
     pub async fn load<P: AsRef<Path> + Send + 'static>(path: Option<P>) -> Result<Self, Error> {
         tokio::task::spawn_blocking(move || {
             load_config_from_paths(
@@ -52,7 +52,7 @@ pub struct DiagnosticContextSeed {
 #[derive(Clone)]
 pub struct DiagnosticContext(Arc<DiagnosticContextSeed>);
 impl DiagnosticContext {
-    #[instrument(skip(path))]
+    #[instrument(skip_all)]
     pub async fn init<P: AsRef<Path> + Send + 'static>(
         path: Option<P>,
         disk_guid: Option<Arc<String>>,

View File

@@ -15,7 +15,7 @@ use crate::Error;
 #[serde(rename_all = "kebab-case")]
 pub struct InstallContextConfig {}
 impl InstallContextConfig {
-    #[instrument(skip(path))]
+    #[instrument(skip_all)]
     pub async fn load<P: AsRef<Path> + Send + 'static>(path: Option<P>) -> Result<Self, Error> {
         tokio::task::spawn_blocking(move || {
             load_config_from_paths(
@@ -38,7 +38,7 @@ pub struct InstallContextSeed {
 #[derive(Clone)]
 pub struct InstallContext(Arc<InstallContextSeed>);
 impl InstallContext {
-    #[instrument(skip(path))]
+    #[instrument(skip_all)]
     pub async fn init<P: AsRef<Path> + Send + 'static>(path: Option<P>) -> Result<Self, Error> {
         let _cfg = InstallContextConfig::load(path.as_ref().map(|p| p.as_ref().to_owned())).await?;
         let (shutdown, _) = tokio::sync::broadcast::channel(1);

View File

@@ -86,7 +86,7 @@ impl RpcContextConfig {
         }
         Ok(db)
     }
-    #[instrument]
+    #[instrument(skip_all)]
     pub async fn secret_store(&self) -> Result<PgPool, Error> {
         init_postgres(self.datadir()).await?;
         let secret_store =
@@ -173,7 +173,7 @@ impl RpcCleanReceipts {
 #[derive(Clone)]
 pub struct RpcContext(Arc<RpcContextSeed>);
 impl RpcContext {
-    #[instrument(skip(cfg_path))]
+    #[instrument(skip_all)]
     pub async fn init<P: AsRef<Path> + Send + 'static>(
         cfg_path: Option<P>,
         disk_guid: Arc<String>,
@@ -260,7 +260,7 @@ impl RpcContext {
         Ok(res)
     }
-    #[instrument(skip(self))]
+    #[instrument(skip_all)]
     pub async fn shutdown(self) -> Result<(), Error> {
         self.managers.empty().await?;
         self.secret_store.close().await;
@@ -270,7 +270,7 @@ impl RpcContext {
         Ok(())
     }
-    #[instrument(skip(self))]
+    #[instrument(skip_all)]
     pub async fn cleanup(&self) -> Result<(), Error> {
         let mut db = self.db.handle();
         let receipts = RpcCleanReceipts::new(&mut db).await?;
@@ -348,7 +348,7 @@ impl RpcContext {
         Ok(())
     }
-    #[instrument(skip(self))]
+    #[instrument(skip_all)]
     pub async fn clean_continuations(&self) {
         let mut continuations = self.rpc_stream_continuations.lock().await;
         let mut to_remove = Vec::new();
@@ -362,7 +362,7 @@ impl RpcContext {
         }
     }
-    #[instrument(skip(self, handler))]
+    #[instrument(skip_all)]
     pub async fn add_continuation(&self, guid: RequestGuid, handler: RpcContinuation) {
         self.clean_continuations().await;
         self.rpc_stream_continuations

View File

@@ -25,7 +25,7 @@ pub struct SdkContextSeed {
 pub struct SdkContext(Arc<SdkContextSeed>);
 impl SdkContext {
     /// BLOCKING
-    #[instrument(skip(matches))]
+    #[instrument(skip_all)]
     pub fn init(matches: &ArgMatches) -> Result<Self, crate::Error> {
         let local_config_path = local_config_path();
         let base: SdkContextConfig = load_config_from_paths(
@@ -49,7 +49,7 @@ impl SdkContext {
         })))
     }
     /// BLOCKING
-    #[instrument]
+    #[instrument(skip_all)]
     pub fn developer_key(&self) -> Result<ed25519_dalek::Keypair, Error> {
         if !self.developer_key_path.exists() {
             return Err(Error::new(eyre!("Developer Key does not exist! Please run `embassy-sdk init` before running this command."), crate::ErrorKind::Uninitialized));

View File

@@ -47,7 +47,7 @@ pub struct SetupContextConfig {
     pub datadir: Option<PathBuf>,
 }
 impl SetupContextConfig {
-    #[instrument(skip(path))]
+    #[instrument(skip_all)]
     pub async fn load<P: AsRef<Path> + Send + 'static>(path: Option<P>) -> Result<Self, Error> {
         tokio::task::spawn_blocking(move || {
             load_config_from_paths(
@@ -92,7 +92,7 @@ impl AsRef<Jwk> for SetupContextSeed {
 #[derive(Clone)]
 pub struct SetupContext(Arc<SetupContextSeed>);
 impl SetupContext {
-    #[instrument(skip(path))]
+    #[instrument(skip_all)]
     pub async fn init<P: AsRef<Path> + Send + 'static>(path: Option<P>) -> Result<Self, Error> {
         let cfg = SetupContextConfig::load(path.as_ref().map(|p| p.as_ref().to_owned())).await?;
         let (shutdown, _) = tokio::sync::broadcast::channel(1);
@@ -110,7 +110,7 @@ impl SetupContext {
             setup_result: RwLock::new(None),
         })))
     }
-    #[instrument(skip(self))]
+    #[instrument(skip_all)]
     pub async fn db(&self, account: &AccountInfo) -> Result<PatchDb, Error> {
         let db_path = self.datadir.join("main").join("embassy.db");
         let db = PatchDb::open(&db_path)
@@ -122,7 +122,7 @@ impl SetupContext {
         }
         Ok(db)
     }
-    #[instrument(skip(self))]
+    #[instrument(skip_all)]
     pub async fn secret_store(&self) -> Result<PgPool, Error> {
         init_postgres(&self.datadir).await?;
         let secret_store =

View File

@@ -61,7 +61,7 @@ impl StartReceipts {
 }
 #[command(display(display_none), metadata(sync_db = true))]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn start(#[context] ctx: RpcContext, #[arg] id: PackageId) -> Result<(), Error> {
     let mut db = ctx.db.handle();
     let mut tx = db.begin().await?;
@@ -120,7 +120,7 @@ impl StopReceipts {
     }
 }
-#[instrument(skip(db))]
+#[instrument(skip_all)]
 pub async fn stop_common<Db: DbHandle>(
     db: &mut Db,
     id: &PackageId,
@@ -154,7 +154,7 @@ pub fn stop(#[arg] id: PackageId) -> Result<PackageId, Error> {
 }
 #[command(rename = "dry", display(display_serializable))]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn stop_dry(
     #[context] ctx: RpcContext,
     #[parent_data] id: PackageId,
@@ -170,7 +170,7 @@ pub async fn stop_dry(
     Ok(BreakageRes(breakages))
 }
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn stop_impl(ctx: RpcContext, id: PackageId) -> Result<MainStatus, Error> {
     let mut db = ctx.db.handle();
     let mut tx = db.begin().await?;

View File

@@ -27,7 +27,7 @@ use crate::middleware::auth::{HasValidSession, HashSessionToken};
 use crate::util::serde::{display_serializable, IoFormat};
 use crate::{Error, ResultExt};
-#[instrument(skip(ctx, session, ws_fut))]
+#[instrument(skip_all)]
 async fn ws_handler<
     WSFut: Future<Output = Result<Result<WebSocketStream<Upgraded>, HyperError>, JoinError>>,
 >(
@@ -73,7 +73,7 @@ async fn subscribe_to_session_kill(
     recv
 }
-#[instrument(skip(_has_valid_authentication, kill, sub, stream))]
+#[instrument(skip_all)]
 async fn deal_with_messages(
     _has_valid_authentication: HasValidSession,
     mut kill: oneshot::Receiver<()>,
@@ -205,7 +205,7 @@ pub fn put() -> Result<(), RpcError> {
 }
 #[command(display(display_serializable))]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn ui(
     #[context] ctx: RpcContext,
     #[arg] pointer: JsonPointer,

View File

@@ -191,7 +191,7 @@ impl DependencyError {
             (DependencyError::Transitive, _) => DependencyError::Transitive,
         }
     }
-    #[instrument(skip(ctx, db, receipts))]
+    #[instrument(skip_all)]
     pub fn try_heal<'a, Db: DbHandle>(
         self,
         ctx: &'a RpcContext,
@@ -693,7 +693,7 @@ pub struct ConfigDryRes {
 }
 #[command(rename = "dry", display(display_serializable))]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn configure_dry(
     #[context] ctx: RpcContext,
     #[parent_data] (pkg_id, dependency_id): (PackageId, PackageId),
@@ -784,7 +784,7 @@ pub async fn configure_logic(
         spec,
     })
 }
-#[instrument(skip(db, current_dependencies, current_dependent_receipt))]
+#[instrument(skip_all)]
 pub async fn add_dependent_to_current_dependents_lists<'a, Db: DbHandle>(
     db: &mut Db,
     dependent_id: &PackageId,
@@ -919,7 +919,7 @@ impl BreakTransitiveReceipts {
     }
 }
-#[instrument(skip(db, receipts))]
+#[instrument(skip_all)]
 pub fn break_transitive<'a, Db: DbHandle>(
     db: &'a mut Db,
     id: &'a PackageId,
@@ -986,7 +986,7 @@ pub fn break_transitive<'a, Db: DbHandle>(
         .boxed()
 }
-#[instrument(skip(ctx, db, locks))]
+#[instrument(skip_all)]
 pub async fn heal_all_dependents_transitive<'a, Db: DbHandle>(
     ctx: &'a RpcContext,
     db: &'a mut Db,
@@ -1004,7 +1004,7 @@ pub async fn heal_all_dependents_transitive<'a, Db: DbHandle>(
     Ok(())
 }
-#[instrument(skip(ctx, db, receipts))]
+#[instrument(skip_all)]
 pub fn heal_transitive<'a, Db: DbHandle>(
     ctx: &'a RpcContext,
     db: &'a mut Db,

View File

@@ -12,7 +12,7 @@ use crate::util::display_none;
 use crate::{Error, ResultExt};
 #[command(cli_only, blocking, display(display_none))]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub fn init(#[context] ctx: SdkContext) -> Result<(), Error> {
     if !ctx.developer_key_path.exists() {
         let parent = ctx.developer_key_path.parent().unwrap_or(Path::new("/"));

View File

@@ -35,7 +35,7 @@ impl RepairStrategy {
     }
 }
-#[instrument]
+#[instrument(skip_all)]
 pub async fn e2fsck_preen(
     logicalname: impl AsRef<Path> + std::fmt::Debug,
 ) -> Result<RequiresReboot, Error> {
@@ -59,7 +59,7 @@ fn backup_existing_undo_file<'a>(path: &'a Path) -> BoxFuture<'a, Result<(), Err
     .boxed()
 }
-#[instrument]
+#[instrument(skip_all)]
 pub async fn e2fsck_aggressive(
     logicalname: impl AsRef<Path> + std::fmt::Debug,
 ) -> Result<RequiresReboot, Error> {

View File

@@ -17,7 +17,7 @@ pub const PASSWORD_PATH: &'static str = "/etc/embassy/password";
 pub const DEFAULT_PASSWORD: &'static str = "password";
 pub const MAIN_FS_SIZE: FsSize = FsSize::Gigabytes(8);
-#[instrument(skip(disks, datadir, password))]
+#[instrument(skip_all)]
 pub async fn create<I, P>(
     disks: &I,
     pvscan: &BTreeMap<PathBuf, Option<String>>,
@@ -34,7 +34,7 @@ where
     Ok(guid)
 }
-#[instrument(skip(disks))]
+#[instrument(skip_all)]
 pub async fn create_pool<I, P>(
     disks: &I,
     pvscan: &BTreeMap<PathBuf, Option<String>>,
@@ -84,7 +84,7 @@ pub enum FsSize {
     FreePercentage(usize),
 }
-#[instrument(skip(datadir, password))]
+#[instrument(skip_all)]
 pub async fn create_fs<P: AsRef<Path>>(
     guid: &str,
     datadir: P,
@@ -139,7 +139,7 @@ pub async fn create_fs<P: AsRef<Path>>(
     Ok(())
 }
-#[instrument(skip(datadir, password))]
+#[instrument(skip_all)]
 pub async fn create_all_fs<P: AsRef<Path>>(
     guid: &str,
     datadir: P,
@@ -157,7 +157,7 @@ pub async fn create_all_fs<P: AsRef<Path>>(
     Ok(())
 }
-#[instrument(skip(datadir))]
+#[instrument(skip_all)]
 pub async fn unmount_fs<P: AsRef<Path>>(guid: &str, datadir: P, name: &str) -> Result<(), Error> {
     unmount(datadir.as_ref().join(name)).await?;
     Command::new("cryptsetup")
@@ -170,7 +170,7 @@ pub async fn unmount_fs<P: AsRef<Path>>(guid: &str, datadir: P, name: &str) -> R
     Ok(())
 }
-#[instrument(skip(datadir))]
+#[instrument(skip_all)]
 pub async fn unmount_all_fs<P: AsRef<Path>>(guid: &str, datadir: P) -> Result<(), Error> {
     unmount_fs(guid, &datadir, "main").await?;
     unmount_fs(guid, &datadir, "package-data").await?;
@@ -181,7 +181,7 @@ pub async fn unmount_all_fs<P: AsRef<Path>>(guid: &str, datadir: P) -> Result<()
     Ok(())
 }
-#[instrument(skip(datadir))]
+#[instrument(skip_all)]
 pub async fn export<P: AsRef<Path>>(guid: &str, datadir: P) -> Result<(), Error> {
     Command::new("sync").invoke(ErrorKind::Filesystem).await?;
     unmount_all_fs(guid, datadir).await?;
@@ -197,7 +197,7 @@ pub async fn export<P: AsRef<Path>>(guid: &str, datadir: P) -> Result<(), Error>
     Ok(())
 }
-#[instrument(skip(datadir, password))]
+#[instrument(skip_all)]
 pub async fn import<P: AsRef<Path>>(
     guid: &str,
     datadir: P,
@@ -213,7 +213,7 @@ pub async fn import<P: AsRef<Path>>(
         .is_none()
     {
         return Err(Error::new(
-            eyre!("Embassy disk not found."),
+            eyre!("StartOS disk not found."),
             crate::ErrorKind::DiskNotAvailable,
         ));
     }
@@ -223,7 +223,7 @@ pub async fn import<P: AsRef<Path>>(
         .any(|id| id == guid)
     {
         return Err(Error::new(
-            eyre!("An Embassy disk was found, but it is not the correct disk for this device."),
+            eyre!("A StartOS disk was found, but it is not the correct disk for this device."),
             crate::ErrorKind::IncorrectDisk,
         ));
     }
@@ -254,7 +254,7 @@ pub async fn import<P: AsRef<Path>>(
     mount_all_fs(guid, datadir, repair, password).await
 }
-#[instrument(skip(datadir, password))]
+#[instrument(skip_all)]
 pub async fn mount_fs<P: AsRef<Path>>(
     guid: &str,
     datadir: P,
@@ -285,7 +285,7 @@ pub async fn mount_fs<P: AsRef<Path>>(
     Ok(reboot)
 }
-#[instrument(skip(datadir, password))]
+#[instrument(skip_all)]
 pub async fn mount_all_fs<P: AsRef<Path>>(
     guid: &str,
     datadir: P,

View File

@@ -22,6 +22,7 @@ pub const REPAIR_DISK_PATH: &str = "/media/embassy/config/repair-disk";
 #[serde(rename_all = "kebab-case")]
 pub struct OsPartitionInfo {
     pub efi: Option<PathBuf>,
+    pub bios: Option<PathBuf>,
     pub boot: PathBuf,
     pub root: PathBuf,
 }
@@ -31,6 +32,11 @@ impl OsPartitionInfo {
             .as_ref()
             .map(|p| p == logicalname.as_ref())
             .unwrap_or(false)
+            || self
+                .bios
+                .as_ref()
+                .map(|p| p == logicalname.as_ref())
+                .unwrap_or(false)
             || &*self.boot == logicalname.as_ref()
             || &*self.root == logicalname.as_ref()
     }

View File
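The hunk above adds an optional `bios` partition to `OsPartitionInfo` and extends `contains` to match against it. A compact sketch of the same membership check with stand-in types (`as_deref` comparison replaces the diff's `map`/`unwrap_or` chain; behavior is the same):

```rust
use std::path::{Path, PathBuf};

// A logicalname belongs to the OS if it equals any known OS partition;
// `efi` and the newly added `bios` partitions may be absent.
struct OsPartitionInfo {
    efi: Option<PathBuf>,
    bios: Option<PathBuf>,
    boot: PathBuf,
    root: PathBuf,
}

impl OsPartitionInfo {
    fn contains(&self, logicalname: impl AsRef<Path>) -> bool {
        let l = logicalname.as_ref();
        self.efi.as_deref() == Some(l)
            || self.bios.as_deref() == Some(l) // new: BIOS-boot partition
            || self.boot.as_path() == l
            || self.root.as_path() == l
    }
}

fn main() {
    let os = OsPartitionInfo {
        efi: None,
        bios: Some(PathBuf::from("/dev/vda1")), // hypothetical device names
        boot: PathBuf::from("/dev/vda2"),
        root: PathBuf::from("/dev/vda3"),
    };
    assert!(os.contains("/dev/vda1")); // matched via the new bios field
    assert!(!os.contains("/dev/vda9"));
}
```

Keeping `bios` optional lets the same struct describe both UEFI-only and legacy-BIOS partition layouts.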

@@ -35,7 +35,7 @@ impl<G: GenericMountGuard> BackupMountGuard<G> {
         }
     }
-    #[instrument(skip(password))]
+    #[instrument(skip_all)]
     pub async fn mount(backup_disk_mount_guard: G, password: &str) -> Result<Self, Error> {
         let backup_disk_path = backup_disk_mount_guard.as_ref();
         let unencrypted_metadata_path =
@@ -145,7 +145,7 @@ impl<G: GenericMountGuard> BackupMountGuard<G> {
         Ok(())
     }
-    #[instrument(skip(self))]
+    #[instrument(skip_all)]
     pub async fn mount_package_backup(
         &self,
         id: &PackageId,
@@ -159,7 +159,7 @@ impl<G: GenericMountGuard> BackupMountGuard<G> {
         })
     }
-    #[instrument(skip(self))]
+    #[instrument(skip_all)]
     pub async fn save(&self) -> Result<(), Error> {
         let metadata_path = self.as_ref().join("metadata.cbor");
         let backup_disk_path = self.backup_disk_path();
@@ -180,7 +180,7 @@ impl<G: GenericMountGuard> BackupMountGuard<G> {
         Ok(())
     }
-    #[instrument(skip(self))]
+    #[instrument(skip_all)]
     pub async fn unmount(mut self) -> Result<(), Error> {
         if let Some(guard) = self.encrypted_guard.take() {
             guard.unmount().await?;
@@ -191,7 +191,7 @@ impl<G: GenericMountGuard> BackupMountGuard<G> {
         Ok(())
     }
-    #[instrument(skip(self))]
+    #[instrument(skip_all)]
     pub async fn save_and_unmount(self) -> Result<(), Error> {
         self.save().await?;
         self.unmount().await?;

View File

@@ -33,7 +33,7 @@ async fn resolve_hostname(hostname: &str) -> Result<IpAddr, Error> {
         .parse()?)
 }
-#[instrument(skip(path, password, mountpoint))]
+#[instrument(skip_all)]
 pub async fn mount_cifs(
     hostname: &str,
     path: impl AsRef<Path>,

View File

@@ -95,7 +95,7 @@ pub struct TmpMountGuard {
 }
 impl TmpMountGuard {
     /// DRAGONS: if you try to mount something as ro and rw at the same time, the ro mount will be upgraded to rw.
-    #[instrument(skip(filesystem))]
+    #[instrument(skip_all)]
     pub async fn mount(filesystem: &impl FileSystem, mount_type: MountType) -> Result<Self, Error> {
         let mountpoint = tmp_mountpoint(filesystem).await?;
         let mut tmp_mounts = TMP_MOUNTS.lock().await;

View File

@@ -5,7 +5,7 @@ use tracing::instrument;
 use crate::util::Invoke;
 use crate::{Error, ResultExt};
-#[instrument(skip(src, dst))]
+#[instrument(skip_all)]
 pub async fn bind<P0: AsRef<Path>, P1: AsRef<Path>>(
     src: P0,
     dst: P1,
@@ -40,7 +40,7 @@ pub async fn bind<P0: AsRef<Path>, P1: AsRef<Path>>(
     Ok(())
 }
-#[instrument(skip(mountpoint))]
+#[instrument(skip_all)]
 pub async fn unmount<P: AsRef<Path>>(mountpoint: P) -> Result<(), Error> {
     tracing::debug!("Unmounting {}.", mountpoint.as_ref().display());
     tokio::process::Command::new("umount")

View File

@@ -69,7 +69,7 @@ lazy_static::lazy_static! {
     static ref PARTITION_REGEX: Regex = Regex::new("-part[0-9]+$").unwrap();
 }
-#[instrument(skip(path))]
+#[instrument(skip_all)]
 pub async fn get_partition_table<P: AsRef<Path>>(path: P) -> Result<Option<PartitionTable>, Error> {
     Ok(String::from_utf8(
         Command::new("fdisk")
@@ -87,7 +87,7 @@ pub async fn get_partition_table<P: AsRef<Path>>(path: P) -> Result<Option<Parti
     }))
 }
-#[instrument(skip(path))]
+#[instrument(skip_all)]
 pub async fn get_vendor<P: AsRef<Path>>(path: P) -> Result<Option<String>, Error> {
     let vendor = tokio::fs::read_to_string(
         Path::new(SYS_BLOCK_PATH)
@@ -110,7 +110,7 @@ pub async fn get_vendor<P: AsRef<Path>>(path: P) -> Result<Option<String>, Error
     })
 }
-#[instrument(skip(path))]
+#[instrument(skip_all)]
 pub async fn get_model<P: AsRef<Path>>(path: P) -> Result<Option<String>, Error> {
     let model = tokio::fs::read_to_string(
         Path::new(SYS_BLOCK_PATH)
@@ -129,7 +129,7 @@ pub async fn get_model<P: AsRef<Path>>(path: P) -> Result<Option<String>, Error>
     Ok(if model.is_empty() { None } else { Some(model) })
 }
-#[instrument(skip(path))]
+#[instrument(skip_all)]
 pub async fn get_capacity<P: AsRef<Path>>(path: P) -> Result<u64, Error> {
     Ok(String::from_utf8(
         Command::new("blockdev")
@@ -142,7 +142,7 @@ pub async fn get_capacity<P: AsRef<Path>>(path: P) -> Result<u64, Error> {
         .parse::<u64>()?)
 }
-#[instrument(skip(path))]
+#[instrument(skip_all)]
 pub async fn get_label<P: AsRef<Path>>(path: P) -> Result<Option<String>, Error> {
     let label = String::from_utf8(
         Command::new("lsblk")
@@ -157,7 +157,7 @@ pub async fn get_label<P: AsRef<Path>>(path: P) -> Result<Option<String>, Error>
     Ok(if label.is_empty() { None } else { Some(label) })
 }
-#[instrument(skip(path))]
+#[instrument(skip_all)]
 pub async fn get_used<P: AsRef<Path>>(path: P) -> Result<u64, Error> {
     Ok(String::from_utf8(
         Command::new("df")
@@ -175,7 +175,7 @@ pub async fn get_used<P: AsRef<Path>>(path: P) -> Result<u64, Error> {
         .parse::<u64>()?)
 }
-#[instrument(skip(path))]
+#[instrument(skip_all)]
 pub async fn get_available<P: AsRef<Path>>(path: P) -> Result<u64, Error> {
     Ok(String::from_utf8(
         Command::new("df")
@@ -193,7 +193,7 @@ pub async fn get_available<P: AsRef<Path>>(path: P) -> Result<u64, Error> {
         .parse::<u64>()?)
 }
-#[instrument(skip(path))]
+#[instrument(skip_all)]
 pub async fn get_percentage<P: AsRef<Path>>(path: P) -> Result<u64, Error> {
     Ok(String::from_utf8(
         Command::new("df")
@@ -212,7 +212,7 @@ pub async fn get_percentage<P: AsRef<Path>>(path: P) -> Result<u64, Error> {
         .parse::<u64>()?)
 }
-#[instrument]
+#[instrument(skip_all)]
 pub async fn pvscan() -> Result<BTreeMap<PathBuf, Option<String>>, Error> {
     let pvscan_out = Command::new("pvscan")
         .invoke(crate::ErrorKind::DiskManagement)
@@ -248,7 +248,7 @@ pub async fn recovery_info(
     Ok(None)
 }
-#[instrument]
+#[instrument(skip_all)]
 pub async fn list(os: &OsPartitionInfo) -> Result<Vec<DiskInfo>, Error> {
     struct DiskIndex {
         parts: IndexSet<PathBuf>,

View File
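The `get_used`, `get_available`, and `get_percentage` helpers above shell out to `df` and parse a numeric value out of its output. The exact `df` flags are not shown in this diff, so the following is only a sketch of the parsing step on captured output, assuming a header row followed by a single value row:

```rust
// Parse the first whitespace-separated field of the second line of
// df-style output (header row, then the value row) as a u64.
fn parse_df_column(output: &str) -> Option<u64> {
    output
        .lines()
        .nth(1)? // skip the header row
        .split_whitespace()
        .next()?
        .parse::<u64>()
        .ok()
}

fn main() {
    // Hypothetical captured output, e.g. from `df --output=used <path>`.
    let fake_df = "Used\n123456\n";
    assert_eq!(parse_df_column(fake_df), Some(123456));
    assert_eq!(parse_df_column("Used\n"), None); // no value row
}
```

Returning `Option` keeps malformed output distinguishable from a legitimate zero, which the real helpers surface as a parse `Error` instead.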

@@ -44,7 +44,7 @@ pub fn generate_id() -> String {
     id.to_string()
 }
-#[instrument]
+#[instrument(skip_all)]
 pub async fn get_current_hostname() -> Result<Hostname, Error> {
     let out = Command::new("hostname")
         .invoke(ErrorKind::ParseSysInfo)
@@ -53,7 +53,7 @@ pub async fn get_current_hostname() -> Result<Hostname, Error> {
     Ok(Hostname(out_string.trim().to_owned()))
 }
-#[instrument]
+#[instrument(skip_all)]
 pub async fn set_hostname(hostname: &Hostname) -> Result<(), Error> {
     let hostname: &String = &hostname.0;
     let _out = Command::new("hostnamectl")
@@ -64,9 +64,9 @@ pub async fn set_hostname(hostname: &Hostname) -> Result<(), Error> {
     Ok(())
 }
-#[instrument]
-pub async fn sync_hostname(account: &AccountInfo) -> Result<(), Error> {
-    set_hostname(&account.hostname).await?;
+#[instrument(skip_all)]
+pub async fn sync_hostname(hostname: &Hostname) -> Result<(), Error> {
+    set_hostname(hostname).await?;
     Command::new("systemctl")
         .arg("restart")
         .arg("avahi-daemon")

View File
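The hunk above narrows `sync_hostname` from taking a whole `AccountInfo` to just the `Hostname` it actually uses. A small sketch of the same refactor with stand-in types (the `AccountInfo` shape here is hypothetical; only its `hostname` field is taken from the diff):

```rust
#[derive(Debug, PartialEq)]
struct Hostname(String);

struct AccountInfo {
    hostname: Hostname,
    // ...other fields the function never touched
}

// After the change: the function asks for only what it needs, so call
// sites that hold a Hostname without a full AccountInfo can use it too.
fn sync_hostname(hostname: &Hostname) -> String {
    // Stand-in for invoking `hostnamectl set-hostname <name>`.
    format!("hostnamectl set-hostname {}", hostname.0)
}

fn main() {
    let account = AccountInfo {
        hostname: Hostname("start".into()),
    };
    // Old call shape `sync_hostname(&account)` becomes:
    assert_eq!(
        sync_hostname(&account.hostname),
        "hostnamectl set-hostname start"
    );
}
```

Accepting the narrowest type also shrinks what `#[instrument]` would otherwise capture, which pairs naturally with the switch to `skip_all` in the same hunk.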

@@ -1,5 +1,6 @@
 use std::collections::{BTreeMap, HashMap};
 use std::fs::Permissions;
+use std::net::SocketAddr;
 use std::os::unix::fs::PermissionsExt;
 use std::path::Path;
 use std::process::Stdio;
@@ -39,6 +40,10 @@ pub async fn check_time_is_synchronized() -> Result<bool, Error> {
         == "NTPSynchronized=yes")
 }
+pub async fn check_tor_is_ready(tor_control: SocketAddr) -> bool {
+    tokio::net::TcpStream::connect(tor_control).await.is_ok()
+}
 pub struct InitReceipts {
     pub server_version: LockReceipt<crate::util::Version, ()>,
     pub version_range: LockReceipt<emver::VersionRange, ()>,
@@ -258,7 +263,7 @@ pub async fn init(cfg: &RpcContextConfig) -> Result<InitResult, Error> {
     // write to ca cert store
     tokio::fs::write(
-        "/usr/local/share/ca-certificates/embassy-root-ca.crt",
+        "/usr/local/share/ca-certificates/startos-root-ca.crt",
         account.root_ca_cert.to_pem()?,
     )
     .await?;
@@ -404,6 +409,20 @@ pub async fn init(cfg: &RpcContextConfig) -> Result<InitResult, Error> {
         .invoke(crate::ErrorKind::Tor)
         .await?;
+    let mut warn_tor_not_ready = true;
+    for _ in 0..60 {
+        if check_tor_is_ready(cfg.tor_control.unwrap_or(([127, 0, 0, 1], 9051).into())).await {
+            warn_tor_not_ready = false;
+            break;
+        }
+        tokio::time::sleep(Duration::from_secs(1)).await;
+    }
+    if warn_tor_not_ready {
+        tracing::warn!("Timed out waiting for tor to start");
+    } else {
+        tracing::info!("Tor is started");
+    }
     receipts
         .ip_info
         .set(&mut handle, crate::net::dhcp::init_ips().await?)


@@ -62,7 +62,7 @@ impl UpdateDependencyReceipts {
     }
 }
-#[instrument(skip(ctx, db, deps, receipts))]
+#[instrument(skip_all)]
 pub async fn update_dependency_errors_of_dependents<'a, Db: DbHandle>(
     ctx: &RpcContext,
     db: &mut Db,
@@ -99,7 +99,7 @@ pub async fn update_dependency_errors_of_dependents<'a, Db: DbHandle>(
     Ok(())
 }
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn cleanup(ctx: &RpcContext, id: &PackageId, version: &Version) -> Result<(), Error> {
     let mut errors = ErrorCollection::new();
     ctx.managers.remove(&(id.clone(), version.clone())).await;
@@ -204,7 +204,7 @@ impl CleanupFailedReceipts {
     }
 }
-#[instrument(skip(ctx, db, receipts))]
+#[instrument(skip_all)]
 pub async fn cleanup_failed<Db: DbHandle>(
     ctx: &RpcContext,
     db: &mut Db,
@@ -272,7 +272,7 @@ pub async fn cleanup_failed<Db: DbHandle>(
     Ok(())
 }
-#[instrument(skip(db, current_dependencies, current_dependent_receipt))]
+#[instrument(skip_all)]
 pub async fn remove_from_current_dependents_lists<'a, Db: DbHandle>(
     db: &mut Db,
     id: &'a PackageId,
@@ -340,7 +340,7 @@ impl UninstallReceipts {
         }
     }
 }
-#[instrument(skip(ctx, secrets, db))]
+#[instrument(skip_all)]
 pub async fn uninstall<Ex>(
     ctx: &RpcContext,
     db: &mut PatchDbHandle,
@@ -404,7 +404,7 @@ where
     Ok(())
 }
-#[instrument(skip(secrets))]
+#[instrument(skip_all)]
 pub async fn remove_tor_keys<Ex>(secrets: &mut Ex, id: &PackageId) -> Result<(), Error>
 where
     for<'a> &'a mut Ex: Executor<'a, Database = Postgres>,


@@ -116,7 +116,7 @@ impl std::fmt::Display for MinMax {
     display(display_none),
     metadata(sync_db = true)
 )]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn install(
     #[context] ctx: RpcContext,
     #[arg] id: String,
@@ -326,7 +326,7 @@ pub async fn install(
 }
 #[command(rpc_only, display(display_none))]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn sideload(
     #[context] ctx: RpcContext,
     #[arg] manifest: Manifest,
@@ -482,7 +482,7 @@ pub async fn sideload(
     Ok(guid)
 }
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 async fn cli_install(
     ctx: CliContext,
     target: String,
@@ -574,7 +574,7 @@ pub async fn uninstall(#[arg] id: PackageId) -> Result<PackageId, Error> {
 }
 #[command(rename = "dry", display(display_serializable))]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn uninstall_dry(
     #[context] ctx: RpcContext,
     #[parent_data] id: PackageId,
@@ -597,7 +597,7 @@ pub async fn uninstall_dry(
     Ok(BreakageRes(breakages))
 }
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn uninstall_impl(ctx: RpcContext, id: PackageId) -> Result<(), Error> {
     let mut handle = ctx.db.handle();
     let mut tx = handle.begin().await?;
@@ -700,7 +700,7 @@ impl DownloadInstallReceipts {
     }
 }
-#[instrument(skip(ctx, temp_manifest, s9pk))]
+#[instrument(skip_all)]
 pub async fn download_install_s9pk(
     ctx: &RpcContext,
     temp_manifest: &Manifest,
@@ -873,7 +873,7 @@ impl InstallS9Receipts {
     }
 }
-#[instrument(skip(ctx, rdr))]
+#[instrument(skip_all)]
 pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin + Send + Sync>(
     ctx: &RpcContext,
     pkg_id: &PackageId,
@@ -1402,7 +1402,7 @@ pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin + Send + Sync>(
     Ok(())
 }
-#[instrument(skip(datadir))]
+#[instrument(skip_all)]
 pub fn load_images<'a, P: AsRef<Path> + 'a + Send + Sync>(
     datadir: P,
 ) -> BoxFuture<'a, Result<(), Error>> {
@@ -1435,16 +1435,23 @@ pub fn load_images<'a, P: AsRef<Path> + 'a + Send + Sync>(
                     copy_and_shutdown(&mut File::open(&path).await?, load_in)
                         .await?
                 }
-                Some("s9pk") => {
-                    copy_and_shutdown(
-                        &mut S9pkReader::open(&path, false)
-                            .await?
-                            .docker_images()
-                            .await?,
-                        load_in,
-                    )
-                    .await?
-                }
+                Some("s9pk") => match async {
+                    let mut reader = S9pkReader::open(&path, true).await?;
+                    copy_and_shutdown(&mut reader.docker_images().await?, load_in)
+                        .await?;
+                    Ok::<_, Error>(())
+                }
+                .await
+                {
+                    Ok(()) => (),
+                    Err(e) => {
+                        tracing::error!(
+                            "Error loading docker images from s9pk: {e}"
+                        );
+                        tracing::debug!("{e:?}");
+                        return Ok(());
+                    }
+                },
                 _ => unreachable!(),
            };


@@ -60,7 +60,7 @@ pub async fn update() -> Result<(), Error> {
     Ok(())
 }
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 #[command(display(display_serializable))]
 pub async fn dry(
     #[context] ctx: RpcContext,


@@ -5,14 +5,12 @@ pub const DEFAULT_MARKETPLACE: &str = "https://registry.start9.com";
 pub const BUFFER_SIZE: usize = 1024;
 pub const HOST_IP: [u8; 4] = [172, 18, 0, 1];
 pub const TARGET: &str = current_platform::CURRENT_PLATFORM;
+pub const OS_ARCH: &str = env!("OS_ARCH");
 lazy_static::lazy_static! {
     pub static ref ARCH: &'static str = {
         let (arch, _) = TARGET.split_once("-").unwrap();
         arch
     };
-    pub static ref IS_RASPBERRY_PI: bool = {
-        *ARCH == "aarch64"
-    };
 }
 pub mod account;


@@ -64,7 +64,7 @@ impl Stream for LogStream {
     }
 }
-#[instrument(skip(logs, ws_fut))]
+#[instrument(skip_all)]
 async fn ws_handler<
     WSFut: Future<Output = Result<Result<WebSocketStream<Upgraded>, HyperError>, JoinError>>,
 >(
@@ -409,7 +409,7 @@ async fn journalctl(
     })
 }
-#[instrument]
+#[instrument(skip_all)]
 pub async fn fetch_logs(
     id: LogSource,
     limit: Option<usize>,
@@ -456,7 +456,7 @@ pub async fn fetch_logs(
     })
 }
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn follow_logs(
     ctx: RpcContext,
     id: LogSource,


@@ -1,6 +1,7 @@
 use std::collections::BTreeMap;
 use std::sync::atomic::{AtomicBool, Ordering};
+use itertools::Itertools;
 use patch_db::{DbHandle, LockReceipt, LockType};
 use tracing::instrument;
@@ -90,7 +91,7 @@ impl HealthCheckStatusReceipt {
     }
 }
-#[instrument(skip(ctx, db))]
+#[instrument(skip_all)]
 pub async fn check<Db: DbHandle>(
     ctx: &RpcContext,
     db: &mut Db,
@@ -111,6 +112,7 @@ pub async fn check<Db: DbHandle>(
     };
     let health_results = if let Some(started) = started {
+        tracing::debug!("Checking health of {}", id);
         manifest
             .health_checks
             .check_all(
@@ -129,6 +131,24 @@ pub async fn check<Db: DbHandle>(
     if !should_commit.load(Ordering::SeqCst) {
         return Ok(());
     }
+    if !health_results
+        .iter()
+        .any(|(_, res)| matches!(res, HealthCheckResult::Failure { .. }))
+    {
+        tracing::debug!("All health checks succeeded for {}", id);
+    } else {
+        tracing::debug!(
+            "Some health checks failed for {}: {}",
+            id,
+            health_results
+                .iter()
+                .filter(|(_, res)| matches!(res, HealthCheckResult::Failure { .. }))
+                .map(|(id, _)| &*id)
+                .join(", ")
+        );
+    }
     let current_dependents = {
         let mut checkpoint = tx.begin().await?;
         let receipts = HealthCheckStatusReceipt::new(&mut checkpoint, id).await?;
@@ -153,9 +173,7 @@ pub async fn check<Db: DbHandle>(
         current_dependents
     };
-    tracing::debug!("Checking health of {}", id);
     let receipts = crate::dependencies::BreakTransitiveReceipts::new(&mut tx).await?;
-    tracing::debug!("Got receipts {}", id);
     for (dependent, info) in (current_dependents).0.iter() {
         let failures: BTreeMap<HealthCheckId, HealthCheckResult> = health_results


@@ -39,7 +39,7 @@ pub const HEALTH_CHECK_GRACE_PERIOD_SECONDS: u64 = 5;
 #[derive(Default)]
 pub struct ManagerMap(RwLock<BTreeMap<(PackageId, Version), Arc<Manager>>>);
 impl ManagerMap {
-    #[instrument(skip(self, ctx, db, secrets))]
+    #[instrument(skip_all)]
     pub async fn init<Db: DbHandle, Ex>(
         &self,
         ctx: &RpcContext,
@@ -78,7 +78,7 @@ impl ManagerMap {
         Ok(())
     }
-    #[instrument(skip(self, ctx))]
+    #[instrument(skip_all)]
     pub async fn add(&self, ctx: RpcContext, manifest: Manifest) -> Result<(), Error> {
         let mut lock = self.0.write().await;
         let id = (manifest.id.clone(), manifest.version.clone());
@@ -91,7 +91,7 @@ impl ManagerMap {
         Ok(())
     }
-    #[instrument(skip(self))]
+    #[instrument(skip_all)]
     pub async fn remove(&self, id: &(PackageId, Version)) {
         if let Some(man) = self.0.write().await.remove(id) {
             if let Err(e) = man.exit().await {
@@ -101,7 +101,7 @@ impl ManagerMap {
         }
     }
-    #[instrument(skip(self))]
+    #[instrument(skip_all)]
     pub async fn empty(&self) -> Result<(), Error> {
         let res =
             futures::future::join_all(std::mem::take(&mut *self.0.write().await).into_iter().map(
@@ -128,7 +128,7 @@ impl ManagerMap {
         })
     }
-    #[instrument(skip(self))]
+    #[instrument(skip_all)]
     pub async fn get(&self, id: &(PackageId, Version)) -> Option<Arc<Manager>> {
         self.0.read().await.get(id).cloned()
     }
@@ -174,7 +174,7 @@ pub enum OnStop {
     Exit,
 }
-#[instrument(skip(state))]
+#[instrument(skip_all)]
 async fn run_main(
     state: &Arc<ManagerSharedState>,
 ) -> Result<Result<NoOutput, (i32, String)>, Error> {
@@ -232,7 +232,7 @@ async fn start_up_image(
 }
 impl Manager {
-    #[instrument(skip(ctx))]
+    #[instrument(skip_all)]
     async fn create(ctx: RpcContext, manifest: Manifest) -> Result<Self, Error> {
         let (on_stop, recv) = channel(OnStop::Sleep);
         let seed = Arc::new(ManagerSeed {
@@ -271,7 +271,7 @@ impl Manager {
         send_signal(&self.shared, signal).await
     }
-    #[instrument(skip(self))]
+    #[instrument(skip_all)]
     async fn exit(&self) -> Result<(), Error> {
         self.shared
             .commit_health_check_results
@@ -433,7 +433,7 @@ pub struct PersistentContainer {
 }
 impl PersistentContainer {
-    #[instrument(skip(seed))]
+    #[instrument(skip_all)]
     async fn init(seed: &Arc<ManagerSeed>) -> Result<Option<Self>, Error> {
         Ok(if let Some(containers) = &seed.manifest.containers {
             let (running_docker, rpc_client) =
@@ -722,7 +722,7 @@ fn sigterm_timeout(manifest: &Manifest) -> Option<Duration> {
     }
 }
-#[instrument(skip(shared))]
+#[instrument(skip_all)]
 async fn stop(shared: &ManagerSharedState) -> Result<(), Error> {
     shared
         .commit_health_check_results
@@ -746,7 +746,7 @@ async fn stop(shared: &ManagerSharedState) -> Result<(), Error> {
     Ok(())
 }
-#[instrument(skip(shared))]
+#[instrument(skip_all)]
 async fn start(shared: &ManagerSharedState) -> Result<(), Error> {
     shared.on_stop.send_modify(|status| {
         if matches!(*status, OnStop::Sleep) {
@@ -761,7 +761,7 @@ async fn start(shared: &ManagerSharedState) -> Result<(), Error> {
     Ok(())
 }
-#[instrument(skip(shared))]
+#[instrument(skip_all)]
 async fn pause(shared: &ManagerSharedState) -> Result<(), Error> {
     if let Err(e) = shared
         .seed
@@ -778,7 +778,7 @@ async fn pause(shared: &ManagerSharedState) -> Result<(), Error> {
     Ok(())
 }
-#[instrument(skip(shared))]
+#[instrument(skip_all)]
 async fn resume(shared: &ManagerSharedState) -> Result<(), Error> {
     shared
         .seed


@@ -47,7 +47,7 @@ pub struct EncryptedWire {
     encrypted: serde_json::Value,
 }
 impl EncryptedWire {
-    #[instrument(skip(current_secret))]
+    #[instrument(skip_all)]
     pub fn decrypt(self, current_secret: impl AsRef<Jwk>) -> Option<String> {
         let current_secret = current_secret.as_ref();

@@ -24,7 +24,7 @@ pub struct Migrations {
     pub to: IndexMap<VersionRange, PackageProcedure>,
 }
 impl Migrations {
-    #[instrument]
+    #[instrument(skip_all)]
     pub fn validate(
         &self,
         container: &Option<DockerContainers>,
@@ -55,7 +55,7 @@ impl Migrations {
     Ok(())
     }
-    #[instrument(skip(ctx))]
+    #[instrument(skip_all)]
     pub fn from<'a>(
         &'a self,
         container: &'a Option<DockerContainers>,
@@ -95,7 +95,7 @@ impl Migrations {
         }
     }
-    #[instrument(skip(ctx))]
+    #[instrument(skip_all)]
     pub fn to<'a>(
         &'a self,
         ctx: &'a RpcContext,


@@ -7,4 +7,4 @@ prompt = no
 [req_distinguished_name]
 CN = {hostname}.local
 O = Start9 Labs
-OU = Embassy
+OU = StartOS


@@ -11,6 +11,7 @@ use models::PackageId;
 use tokio::net::{TcpListener, UdpSocket};
 use tokio::process::Command;
 use tokio::sync::RwLock;
+use tracing::instrument;
 use trust_dns_server::authority::MessageResponseBuilder;
 use trust_dns_server::client::op::{Header, ResponseCode};
 use trust_dns_server::client::rr::{Name, Record, RecordType};
@@ -147,6 +148,7 @@ impl RequestHandler for Resolver {
 }
 impl DnsController {
+    #[instrument(skip_all)]
     pub async fn init(bind: &[SocketAddr]) -> Result<Self, Error> {
         let services = Arc::new(RwLock::new(BTreeMap::new()));
@@ -161,10 +163,16 @@ impl DnsController {
         );
         server.register_socket(UdpSocket::bind(bind).await.with_kind(ErrorKind::Network)?);
-        Command::new("systemd-resolve")
-            .arg("--set-dns=127.0.0.1")
-            .arg("--interface=br-start9")
-            .arg("--set-domain=embassy")
+        Command::new("resolvectl")
+            .arg("dns")
+            .arg("br-start9")
+            .arg("127.0.0.1")
+            .invoke(ErrorKind::Network)
+            .await?;
+        Command::new("resolvectl")
+            .arg("domain")
+            .arg("br-start9")
+            .arg("embassy")
             .invoke(ErrorKind::Network)
             .await?;


@@ -16,7 +16,7 @@ use crate::{Error, ResultExt};
 #[serde(rename_all = "kebab-case")]
 pub struct Interfaces(pub BTreeMap<InterfaceId, Interface>); // TODO
 impl Interfaces {
-    #[instrument]
+    #[instrument(skip_all)]
     pub fn validate(&self) -> Result<(), Error> {
         for (_, interface) in &self.0 {
             interface.validate().with_ctx(|_| {
@@ -28,7 +28,7 @@ impl Interfaces {
         }
         Ok(())
     }
-    #[instrument(skip(secrets))]
+    #[instrument(skip_all)]
     pub async fn install<Ex>(
         &self,
         secrets: &mut Ex,
@@ -90,7 +90,7 @@ pub struct Interface {
     pub protocols: IndexSet<String>,
 }
 impl Interface {
-    #[instrument]
+    #[instrument(skip_all)]
     pub fn validate(&self) -> Result<(), color_eyre::eyre::Report> {
         if self.tor_config.is_some() && !self.protocols.contains("tcp") {
             color_eyre::eyre::bail!("must support tcp to set up a tor hidden service");


@@ -5,6 +5,7 @@ use std::sync::{Arc, Weak};
 use color_eyre::eyre::eyre;
 use tokio::process::{Child, Command};
 use tokio::sync::Mutex;
+use tracing::instrument;
 use crate::util::Invoke;
 use crate::{Error, ResultExt};
@@ -51,6 +52,7 @@ pub struct MdnsControllerInner {
 }
 impl MdnsControllerInner {
+    #[instrument(skip_all)]
     async fn init() -> Result<Self, Error> {
         let mut res = MdnsControllerInner {
             alias_cmd: None,
@@ -59,6 +61,7 @@ impl MdnsControllerInner {
         res.sync().await?;
         Ok(res)
     }
+    #[instrument(skip_all)]
     async fn sync(&mut self) -> Result<(), Error> {
         if let Some(mut cmd) = self.alias_cmd.take() {
             cmd.kill().await.with_kind(crate::ErrorKind::Network)?;


@@ -31,7 +31,7 @@ pub struct NetController {
 }
 impl NetController {
-    #[instrument]
+    #[instrument(skip_all)]
     pub async fn init(
         tor_control: SocketAddr,
         dns_bind: &[SocketAddr],
@@ -139,7 +139,7 @@ impl NetController {
         Ok(())
     }
-    #[instrument(skip(self))]
+    #[instrument(skip_all)]
     pub async fn create_service(
         self: &Arc<Self>,
         package: PackageId,


@@ -143,21 +143,21 @@ pub async fn export_cert(chain: &[&X509], target: &Path) -> Result<(), Error> {
     Ok(())
 }
-#[instrument]
+#[instrument(skip_all)]
 fn rand_serial() -> Result<Asn1Integer, Error> {
     let mut bn = BigNum::new()?;
     bn.rand(64, MsbOption::MAYBE_ZERO, false)?;
     let asn1 = Asn1Integer::from_bn(&bn)?;
     Ok(asn1)
 }
-#[instrument]
+#[instrument(skip_all)]
 pub fn generate_key() -> Result<PKey<Private>, Error> {
     let new_key = EcKey::generate(EC_GROUP.as_ref())?;
     let key = PKey::from_ec_key(new_key)?;
     Ok(key)
 }
-#[instrument]
+#[instrument(skip_all)]
 pub fn make_root_cert(root_key: &PKey<Private>, hostname: &Hostname) -> Result<X509, Error> {
     let mut builder = X509Builder::new()?;
     builder.set_version(CERTIFICATE_VERSION)?;
@@ -173,7 +173,7 @@ pub fn make_root_cert(root_key: &PKey<Private>, hostname: &Hostname) -> Result<X
     let mut subject_name_builder = X509NameBuilder::new()?;
     subject_name_builder.append_entry_by_text("CN", &format!("{} Local Root CA", &*hostname.0))?;
     subject_name_builder.append_entry_by_text("O", "Start9")?;
-    subject_name_builder.append_entry_by_text("OU", "Embassy")?;
+    subject_name_builder.append_entry_by_text("OU", "StartOS")?;
     let subject_name = subject_name_builder.build();
     builder.set_subject_name(&subject_name)?;
@@ -208,7 +208,7 @@ pub fn make_root_cert(root_key: &PKey<Private>, hostname: &Hostname) -> Result<X
     let cert = builder.build();
     Ok(cert)
 }
-#[instrument]
+#[instrument(skip_all)]
 pub fn make_int_cert(
     signer: (&PKey<Private>, &X509),
     applicant: &PKey<Private>,
@@ -225,9 +225,9 @@ pub fn make_int_cert(
     builder.set_serial_number(&*rand_serial()?)?;
     let mut subject_name_builder = X509NameBuilder::new()?;
-    subject_name_builder.append_entry_by_text("CN", "Embassy Local Intermediate CA")?;
+    subject_name_builder.append_entry_by_text("CN", "StartOS Local Intermediate CA")?;
     subject_name_builder.append_entry_by_text("O", "Start9")?;
-    subject_name_builder.append_entry_by_text("OU", "Embassy")?;
+    subject_name_builder.append_entry_by_text("OU", "StartOS")?;
     let subject_name = subject_name_builder.build();
     builder.set_subject_name(&subject_name)?;
@@ -334,7 +334,7 @@ impl std::fmt::Display for SANInfo {
     }
 }
-#[instrument]
+#[instrument(skip_all)]
 pub fn make_leaf_cert(
     signer: (&PKey<Private>, &X509),
     applicant: (&PKey<Private>, &SANInfo),
@@ -370,7 +370,7 @@ pub fn make_leaf_cert(
             .unwrap_or("localhost"),
     )?;
     subject_name_builder.append_entry_by_text("O", "Start9")?;
-    subject_name_builder.append_entry_by_text("OU", "Embassy")?;
+    subject_name_builder.append_entry_by_text("OU", "StartOS")?;
     let subject_name = subject_name_builder.build();
     builder.set_subject_name(&subject_name)?;


@@ -93,7 +93,7 @@ pub struct TorControllerInner {
     services: BTreeMap<String, BTreeMap<u16, BTreeMap<SocketAddr, Weak<()>>>>,
 }
 impl TorControllerInner {
-    #[instrument(skip(self))]
+    #[instrument(skip_all)]
     async fn add(
         &mut self,
         key: &TorSecretKeyV3,
@@ -135,7 +135,7 @@ impl TorControllerInner {
         Ok(rc)
     }
-    #[instrument(skip(self))]
+    #[instrument(skip_all)]
     async fn gc(&mut self, key: &TorSecretKeyV3, external: u16) -> Result<(), Error> {
         let onion_base = key
             .public()
@@ -174,7 +174,7 @@ impl TorControllerInner {
         Ok(())
     }
-    #[instrument]
+    #[instrument(skip_all)]
     async fn init(tor_control: SocketAddr) -> Result<Self, Error> {
         let mut conn = torut::control::UnauthenticatedConn::new(
             TcpStream::connect(tor_control).await?, // TODO
@@ -196,7 +196,7 @@ impl TorControllerInner {
         })
     }
-    #[instrument(skip(self))]
+    #[instrument(skip_all)]
     async fn list_services(&mut self) -> Result<Vec<OnionAddressV3>, Error> {
         self.connection
             .get_info("onions/current")


@@ -47,7 +47,7 @@ pub async fn country() -> Result<(), Error> {
 }
 #[command(display(display_none))]
-#[instrument(skip(ctx, password))]
+#[instrument(skip_all)]
 pub async fn add(
     #[context] ctx: RpcContext,
     #[arg] ssid: String,
@@ -103,7 +103,7 @@ pub async fn add(
 }
 #[command(display(display_none))]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn connect(#[context] ctx: RpcContext, #[arg] ssid: String) -> Result<(), Error> {
     let wifi_manager = wifi_manager(&ctx)?;
     if !ssid.is_ascii() {
@@ -155,7 +155,7 @@ pub async fn connect(#[context] ctx: RpcContext, #[arg] ssid: String) -> Result<
 }
 #[command(display(display_none))]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn delete(#[context] ctx: RpcContext, #[arg] ssid: String) -> Result<(), Error> {
     let wifi_manager = wifi_manager(&ctx)?;
     if !ssid.is_ascii() {
@@ -173,7 +173,7 @@ pub async fn delete(#[context] ctx: RpcContext, #[arg] ssid: String) -> Result<(
     let is_current_removed_and_no_hardwire =
         is_current_being_removed && !interface_connected(&ctx.ethernet_interface).await?;
     if is_current_removed_and_no_hardwire {
-        return Err(Error::new(color_eyre::eyre::eyre!("Forbidden: Deleting this Network would make your Embassy Unreachable. Either connect to ethernet or connect to a different WiFi network to remedy this."), ErrorKind::Wifi));
+        return Err(Error::new(color_eyre::eyre::eyre!("Forbidden: Deleting this network would make your server unreachable. Either connect to ethernet or connect to a different WiFi network to remedy this."), ErrorKind::Wifi));
     }
     wpa_supplicant
@@ -293,7 +293,7 @@ fn display_wifi_list(info: Vec<WifiListOut>, matches: &ArgMatches) {
 }
 #[command(display(display_wifi_info))]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn get(
     #[context] ctx: RpcContext,
     #[allow(unused_variables)]
@@ -347,7 +347,7 @@ pub async fn get(
 }
 #[command(rename = "get", display(display_wifi_list))]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn get_available(
     #[context] ctx: RpcContext,
     #[allow(unused_variables)]
@@ -457,7 +457,7 @@ impl WpaCli {
         WpaCli { interface }
     }
-    #[instrument(skip(self, psk))]
+    #[instrument(skip_all)]
     pub async fn set_add_network_low(&mut self, ssid: &Ssid, psk: &Psk) -> Result<(), Error> {
         let _ = Command::new("nmcli")
             .arg("-a")
@@ -473,7 +473,7 @@ impl WpaCli {
             .await?;
         Ok(())
     }
-    #[instrument(skip(self, psk))]
+    #[instrument(skip_all)]
     pub async fn add_network_low(&mut self, ssid: &Ssid, psk: &Psk) -> Result<(), Error> {
         if self.find_networks(ssid).await?.is_empty() {
             Command::new("nmcli")
@@ -567,7 +567,7 @@ impl WpaCli {
             .await?;
         Ok(())
     }
-    #[instrument]
+    #[instrument(skip_all)]
     pub async fn list_networks_low(&self) -> Result<BTreeMap<NetworkId, WifiInfo>, Error> {
         let r = Command::new("nmcli")
             .arg("-t")
@@ -596,7 +596,7 @@ impl WpaCli {
             .collect::<BTreeMap<NetworkId, WifiInfo>>())
     }
-    #[instrument]
+    #[instrument(skip_all)]
     pub async fn list_wifi_low(&self) -> Result<WifiList, Error> {
         let r = Command::new("nmcli")
             .arg("-g")
@@ -681,7 +681,7 @@ impl WpaCli {
             })
             .collect())
     }
-    #[instrument(skip(db))]
+    #[instrument(skip_all)]
     pub async fn select_network(&mut self, db: impl DbHandle, ssid: &Ssid) -> Result<bool, Error> {
         let m_id = self.check_active_network(ssid).await?;
         match m_id {
@@ -717,7 +717,7 @@ impl WpaCli {
             }
         }
     }
-    #[instrument]
+    #[instrument(skip_all)]
     pub async fn get_current_network(&self) -> Result<Option<Ssid>, Error> {
         let r = Command::new("iwgetid")
             .arg(&self.interface)
@@ -733,7 +733,7 @@ impl WpaCli {
             Ok(Some(Ssid(network.to_owned())))
         }
     }
-    #[instrument(skip(db))]
+    #[instrument(skip_all)]
     pub async fn remove_network(&mut self, db: impl DbHandle, ssid: &Ssid) -> Result<bool, Error> {
         let found_networks = self.find_networks(ssid).await?;
         if found_networks.is_empty() {
@@ -745,7 +745,7 @@ impl WpaCli {
         self.save_config(db).await?;
         Ok(true)
     }
-    #[instrument(skip(psk, db))]
+    #[instrument(skip_all)]
     pub async fn set_add_network(
         &mut self,
         db: impl DbHandle,
@@ -757,7 +757,7 @@ impl WpaCli {
         self.save_config(db).await?;
         Ok(())
     }
-    #[instrument(skip(psk, db))]
+    #[instrument(skip_all)]
     pub async fn add_network(
         &mut self,
         db: impl DbHandle,
@@ -771,7 +771,7 @@ impl WpaCli {
     }
 }
-#[instrument]
+#[instrument(skip_all)]
 pub async fn interface_connected(interface: &str) -> Result<bool, Error> {
     let out = Command::new("ifconfig")
         .arg(interface)
@@ -792,7 +792,7 @@ pub fn country_code_parse(code: &str, _matches: &ArgMatches) -> Result<CountryCo
     })
 }
-#[instrument(skip(main_datadir))]
+#[instrument(skip_all)]
 pub async fn synchronize_wpa_supplicant_conf<P: AsRef<Path>>(
     main_datadir: P,
     wifi_iface: &str,

View File

@@ -23,7 +23,7 @@ pub async fn notification() -> Result<(), Error> {
 }
 #[command(display(display_serializable))]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn list(
 #[context] ctx: RpcContext,
 #[arg] before: Option<i32>,
@@ -232,7 +232,7 @@ impl NotificationManager {
 cache: Mutex::new(HashMap::new()),
 }
 }
-#[instrument(skip(self, db))]
+#[instrument(skip_all)]
 pub async fn notify<Db: DbHandle, T: NotificationType>(
 &self,
 db: &mut Db,

View File

@@ -1,3 +1,5 @@
+use std::path::Path;
 use color_eyre::eyre::eyre;
 use gpt::disk::LogicalBlockSize;
 use gpt::GptConfig;
@@ -8,9 +8,10 @@ use crate::os_install::partition_for;
 use crate::Error;
 pub async fn partition(disk: &DiskInfo, overwrite: bool) -> Result<OsPartitionInfo, Error> {
-{
+let efi = {
 let disk = disk.clone();
 tokio::task::spawn_blocking(move || {
+let use_efi = Path::new("/sys/firmware/efi").exists();
 let mut device = Box::new(
 std::fs::File::options()
 .read(true)
@@ -44,17 +47,15 @@ pub async fn partition(disk: &DiskInfo, overwrite: bool) -> Result<OsPartitionIn
 .map(|(idx, x)| (idx + 1, x))
 {
 if let Some(entry) = gpt.partitions().get(&(idx as u32)) {
-if entry.first_lba >= 33556480 {
-if idx < 3 {
-guid_part = Some(entry.clone())
-}
-break;
-}
 if part_info.guid.is_some() {
-return Err(Error::new(
-eyre!("Not enough space before embassy data"),
-crate::ErrorKind::InvalidRequest,
-));
+if entry.first_lba < if use_efi { 33759266 } else { 33570850 } {
+return Err(Error::new(
+eyre!("Not enough space before embassy data"),
+crate::ErrorKind::InvalidRequest,
+));
+}
+guid_part = Some(entry.clone());
+break;
 }
 }
 }
@@ -63,7 +64,19 @@ pub async fn partition(disk: &DiskInfo, overwrite: bool) -> Result<OsPartitionIn
 gpt.update_partitions(Default::default())?;
-gpt.add_partition("efi", 100 * 1024 * 1024, gpt::partition_types::EFI, 0, None)?;
+let efi = if use_efi {
+gpt.add_partition("efi", 100 * 1024 * 1024, gpt::partition_types::EFI, 0, None)?;
+true
+} else {
+gpt.add_partition(
+"bios-grub",
+8 * 1024 * 1024,
+gpt::partition_types::BIOS,
+0,
+None,
+)?;
+false
+};
 gpt.add_partition(
 "boot",
 1024 * 1024 * 1024,
@@ -108,14 +121,15 @@ pub async fn partition(disk: &DiskInfo, overwrite: bool) -> Result<OsPartitionIn
 gpt.write()?;
-Ok(())
+Ok(efi)
 })
 .await
-.unwrap()?;
-}
+.unwrap()?
+};
 Ok(OsPartitionInfo {
-efi: Some(partition_for(&disk.logicalname, 1)),
+efi: efi.then(|| partition_for(&disk.logicalname, 1)),
+bios: (!efi).then(|| partition_for(&disk.logicalname, 1)),
 boot: partition_for(&disk.logicalname, 2),
 root: partition_for(&disk.logicalname, 3),
 })

View File
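The `efi.then(...)` / `(!efi).then(...)` pair at the end of the GPT hunk above uses `bool::then` so that exactly one of the two optional fields is populated, both pointing at the same first partition. A minimal stdlib-only sketch of that pattern (the simplified `partition_for` here is hypothetical; the real helper also handles device names that need a separator):

```rust
// Hypothetical, simplified stand-in for the real partition_for helper.
fn partition_for(disk: &str, idx: usize) -> String {
    format!("{disk}{idx}")
}

/// Mirrors the diff's pattern: exactly one of (efi, bios) is Some,
/// and both candidates refer to partition 1 of the same disk.
fn first_partition(disk: &str, efi: bool) -> (Option<String>, Option<String>) {
    (
        efi.then(|| partition_for(disk, 1)),
        (!efi).then(|| partition_for(disk, 1)),
    )
}

fn main() {
    // EFI firmware: the first partition is the ESP.
    println!("{:?}", first_partition("/dev/sda", true));
    // Legacy BIOS: the same slot is the bios-grub partition instead.
    println!("{:?}", first_partition("/dev/sda", false));
}
```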

@@ -27,18 +27,14 @@ pub async fn partition(disk: &DiskInfo, overwrite: bool) -> Result<OsPartitionIn
 .map(|(idx, x)| (idx + 1, x))
 {
 if let Some(entry) = mbr.get_mut(idx) {
-if entry.starting_lba >= 33556480 {
-if idx < 3 {
-guid_part =
-Some(std::mem::replace(entry, MBRPartitionEntry::empty()))
-}
-break;
-}
 if part_info.guid.is_some() {
-return Err(Error::new(
-eyre!("Not enough space before embassy data"),
-crate::ErrorKind::InvalidRequest,
-));
+if entry.starting_lba < 33556480 {
+return Err(Error::new(
+eyre!("Not enough space before embassy data"),
+crate::ErrorKind::InvalidRequest,
+));
+}
+guid_part = Some(std::mem::replace(entry, MBRPartitionEntry::empty()));
 }
 *entry = MBRPartitionEntry::empty();
 }
@@ -85,6 +81,7 @@ pub async fn partition(disk: &DiskInfo, overwrite: bool) -> Result<OsPartitionIn
 Ok(OsPartitionInfo {
 efi: None,
+bios: None,
 boot: partition_for(&disk.logicalname, 1),
 root: partition_for(&disk.logicalname, 2),
 })

View File
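The MBR hunk above inverts the old guard: instead of silently skipping entries past the boundary LBA, the new code only treats an early-starting entry as fatal when a preserved data partition (guid) is expected, and otherwise records it as the preserved entry. A stdlib-only sketch of that decision, with a hypothetical `check_entry` standing in for the loop body:

```rust
// Reserved boundary from the diff: the preserved data partition must
// start at or after this LBA (512-byte sectors).
const DATA_START_LBA: u32 = 33_556_480;

/// Sketch of the reordered check: only when existing data is being
/// preserved (has_guid) does an entry starting before the boundary
/// become a hard error; otherwise the entry is simply not preserved.
fn check_entry(starting_lba: u32, has_guid: bool) -> Result<Option<u32>, &'static str> {
    if has_guid {
        if starting_lba < DATA_START_LBA {
            return Err("Not enough space before embassy data");
        }
        return Ok(Some(starting_lba));
    }
    Ok(None)
}

fn main() {
    // An entry well past the boundary is kept as the preserved partition.
    println!("{:?}", check_entry(40_000_000, true));
    // An early entry is only an error if data preservation was requested.
    println!("{:?}", check_entry(1_000_000, false));
}
```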

@@ -49,7 +49,7 @@ pub async fn list() -> Result<Vec<DiskInfo>, Error> {
 Command::new("grub-probe-default")
 .arg("-t")
 .arg("disk")
-.arg("/cdrom")
+.arg("/run/live/medium")
 .invoke(crate::ErrorKind::Grub)
 .await?,
 )?
@@ -93,13 +93,7 @@ pub fn partition_for(disk: impl AsRef<Path>, idx: usize) -> PathBuf {
 async fn partition(disk: &mut DiskInfo, overwrite: bool) -> Result<OsPartitionInfo, Error> {
 let partition_type = match (overwrite, disk.partition_table) {
-(true, _) | (_, None) => {
-if tokio::fs::metadata("/sys/firmware/efi").await.is_ok() {
-PartitionTable::Gpt
-} else {
-PartitionTable::Mbr
-}
-}
+(true, _) | (_, None) => PartitionTable::Gpt,
 (_, Some(t)) => t,
 };
 disk.partition_table = Some(partition_type);
@@ -188,7 +182,7 @@ pub async fn execute(
 .arg("-f")
 .arg("-d")
 .arg(&current)
-.arg("/cdrom/casper/filesystem.squashfs")
+.arg("/run/live/medium/live/filesystem.squashfs")
 .invoke(crate::ErrorKind::Filesystem)
 .await?;
@@ -233,7 +227,7 @@ pub async fn execute(
 let dev = MountGuard::mount(&Bind::new("/dev"), current.join("dev"), ReadWrite).await?;
 let proc = MountGuard::mount(&Bind::new("/proc"), current.join("proc"), ReadWrite).await?;
 let sys = MountGuard::mount(&Bind::new("/sys"), current.join("sys"), ReadWrite).await?;
-let efivarfs = if let Some(efi) = &part_info.efi {
+let efivarfs = if tokio::fs::metadata("/sys/firmware/efi").await.is_ok() {
 Some(
 MountGuard::mount(
 &EfiVarFs,
@@ -246,14 +240,9 @@ pub async fn execute(
 None
 };
-Command::new("chroot")
-.arg(&current)
-.arg("update-grub")
-.invoke(crate::ErrorKind::Grub)
-.await?;
 let mut install = Command::new("chroot");
 install.arg(&current).arg("grub-install");
-if part_info.efi.is_none() {
+if tokio::fs::metadata("/sys/firmware/efi").await.is_err() {
 install.arg("--target=i386-pc");
 } else {
 match *ARCH {
@@ -267,6 +256,12 @@ pub async fn execute(
 .invoke(crate::ErrorKind::Grub)
 .await?;
+Command::new("chroot")
+.arg(&current)
+.arg("update-grub2")
+.invoke(crate::ErrorKind::Grub)
+.await?;
 dev.unmount(false).await?;
 if let Some(efivarfs) = efivarfs {
 efivarfs.unmount(false).await?;

View File

@@ -75,7 +75,7 @@ impl DockerContainer {
 /// Idea is that we are going to send it command and get the inputs be filtered back from the manager.
 /// Then we could in theory run commands without the cost of running the docker exec which is known to have
 /// a dely of > 200ms which is not acceptable.
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn long_running_execute(
 &self,
 ctx: &RpcContext,
@@ -212,7 +212,7 @@ impl DockerProcedure {
 Ok(())
 }
-#[instrument(skip(ctx, input))]
+#[instrument(skip_all)]
 pub async fn execute<I: Serialize, O: DeserializeOwned>(
 &self,
 ctx: &RpcContext,
@@ -226,7 +226,6 @@ impl DockerProcedure {
 let name = name.docker_name();
 let name: Option<&str> = name.as_ref().map(|x| &**x);
 let mut cmd = tokio::process::Command::new("docker");
-tracing::debug!("{:?} is run", name);
 let container_name = Self::container_name(pkg_id, name);
 cmd.arg("run")
 .arg("--rm")
@@ -393,7 +392,7 @@ impl DockerProcedure {
 )
 }
-#[instrument(skip(_ctx, input))]
+#[instrument(skip_all)]
 pub async fn inject<I: Serialize, O: DeserializeOwned>(
 &self,
 _ctx: &RpcContext,
@@ -408,7 +407,6 @@ impl DockerProcedure {
 let name: Option<&str> = name.as_deref();
 let mut cmd = tokio::process::Command::new("docker");
-tracing::debug!("{:?} is exec", name);
 cmd.arg("exec");
 cmd.args(self.docker_args_inject(pkg_id).await?);
@@ -548,7 +546,7 @@ impl DockerProcedure {
 )
 }
-#[instrument(skip(ctx, input))]
+#[instrument(skip_all)]
 pub async fn sandboxed<I: Serialize, O: DeserializeOwned>(
 &self,
 ctx: &RpcContext,

View File

@@ -57,7 +57,7 @@ impl JsProcedure {
 Ok(())
 }
-#[instrument(skip(directory, input, rpc_client))]
+#[instrument(skip_all)]
 pub async fn execute<I: Serialize, O: DeserializeOwned>(
 &self,
 directory: &PathBuf,
@@ -111,7 +111,7 @@ impl JsProcedure {
 Ok(res)
 }
-#[instrument(skip(ctx, input))]
+#[instrument(skip_all)]
 pub async fn sandboxed<I: Serialize, O: DeserializeOwned>(
 &self,
 ctx: &RpcContext,

View File

@@ -40,7 +40,7 @@ impl PackageProcedure {
 _ => false,
 }
 }
-#[instrument]
+#[instrument(skip_all)]
 pub fn validate(
 &self,
 container: &Option<DockerContainers>,
@@ -58,7 +58,7 @@
 }
 }
 }
-#[instrument(skip(ctx, input))]
+#[instrument(skip_all)]
 pub async fn execute<I: Serialize, O: DeserializeOwned + 'static>(
 &self,
 ctx: &RpcContext,
@@ -121,7 +121,7 @@
 }
 }
 }
-#[instrument(skip(ctx, input))]
+#[instrument(skip_all)]
 pub async fn sandboxed<I: Serialize, O: DeserializeOwned>(
 &self,
 container: &Option<DockerContainers>,

View File

@@ -18,7 +18,7 @@ pub async fn properties(#[context] ctx: RpcContext, #[arg] id: PackageId) -> Res
 Ok(fetch_properties(ctx, id).await?)
 }
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn fetch_properties(ctx: RpcContext, id: PackageId) -> Result<Value, Error> {
 let mut db = ctx.db.handle();

View File

@@ -42,7 +42,7 @@ impl<
 > S9pkPacker<'a, W, RLicense, RInstructions, RIcon, RDockerImages, RAssets, RScripts>
 {
 /// BLOCKING
-#[instrument(skip(self))]
+#[instrument(skip_all)]
 pub async fn pack(mut self, key: &ed25519_dalek::Keypair) -> Result<(), Error> {
 let header_pos = self.writer.stream_position().await?;
 if header_pos != 0 {

View File

@@ -31,7 +31,7 @@ pub mod reader;
 pub const SIG_CONTEXT: &'static [u8] = b"s9pk";
 #[command(cli_only, display(display_none))]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn pack(#[context] ctx: SdkContext, #[arg] path: Option<PathBuf>) -> Result<(), Error> {
 use tokio::fs::File;

View File

@@ -91,7 +91,7 @@ pub struct ImageTag {
 pub version: Version,
 }
 impl ImageTag {
-#[instrument]
+#[instrument(skip_all)]
 pub fn validate(&self, id: &PackageId, version: &Version) -> Result<(), Error> {
 if id != &self.package_id {
 return Err(Error::new(
@@ -168,7 +168,7 @@ impl<R: AsyncRead + AsyncSeek + Unpin + Send + Sync> S9pkReader<InstallProgressT
 }
 }
 impl<R: AsyncRead + AsyncSeek + Unpin + Send + Sync> S9pkReader<R> {
-#[instrument(skip(self))]
+#[instrument(skip_all)]
 pub async fn validate(&mut self) -> Result<(), Error> {
 if self.toc.icon.length > 102_400 {
 // 100 KiB
@@ -286,7 +286,7 @@ impl<R: AsyncRead + AsyncSeek + Unpin + Send + Sync> S9pkReader<R> {
 Ok(())
 }
-#[instrument(skip(self))]
+#[instrument(skip_all)]
 pub async fn image_tags(&mut self) -> Result<Vec<ImageTag>, Error> {
 let mut tar = tokio_tar::Archive::new(self.docker_images().await?);
 let mut entries = tar.entries()?;
@@ -314,7 +314,7 @@ impl<R: AsyncRead + AsyncSeek + Unpin + Send + Sync> S9pkReader<R> {
 crate::ErrorKind::ParseS9pk,
 ))
 }
-#[instrument(skip(rdr))]
+#[instrument(skip_all)]
 pub async fn from_reader(mut rdr: R, check_sig: bool) -> Result<Self, Error> {
 let header = Header::deserialize(&mut rdr).await?;

View File

@@ -135,7 +135,7 @@ pub async fn attach(
 crate::disk::main::export(&*guid, &ctx.datadir).await?;
 return Err(Error::new(
 eyre!(
-"Errors were corrected with your disk, but the Embassy must be restarted in order to proceed"
+"Errors were corrected with your disk, but the server must be restarted in order to proceed"
 ),
 ErrorKind::DiskManagement,
 ));
@@ -294,7 +294,7 @@ pub async fn execute(
 }));
 }
 Err(e) => {
-tracing::error!("Error Setting Up Embassy: {}", e);
+tracing::error!("Error Setting Up Server: {}", e);
 tracing::debug!("{:?}", e);
 *ctx.setup_status.write().await = Some(Err(e.into()));
 }
@@ -303,7 +303,7 @@ pub async fn execute(
 Ok(())
 }
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 #[command(rpc_only)]
 pub async fn complete(#[context] ctx: SetupContext) -> Result<SetupResult, Error> {
 let (guid, setup_result) = if let Some((guid, setup_result)) = &*ctx.setup_result.read().await {
@@ -320,14 +320,14 @@ pub async fn complete(#[context] ctx: SetupContext) -> Result<SetupResult, Error
 Ok(setup_result)
 }
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 #[command(rpc_only)]
 pub async fn exit(#[context] ctx: SetupContext) -> Result<(), Error> {
 ctx.shutdown.send(()).expect("failed to shutdown");
 Ok(())
 }
-#[instrument(skip(ctx, embassy_password, recovery_password))]
+#[instrument(skip_all)]
 pub async fn execute_inner(
 ctx: SetupContext,
 embassy_logicalname: PathBuf,
@@ -380,7 +380,7 @@ async fn fresh_setup(
 ))
 }
-#[instrument(skip(ctx, embassy_password, recovery_password))]
+#[instrument(skip_all)]
 async fn recover(
 ctx: SetupContext,
 guid: Arc<String>,
@@ -399,7 +399,7 @@ async fn recover(
 .await
 }
-#[instrument(skip(ctx, embassy_password))]
+#[instrument(skip_all)]
 async fn migrate(
 ctx: SetupContext,
 guid: Arc<String>,
@@ -429,6 +429,7 @@ async fn migrate(
 ignore_existing: false,
 exclude: Vec::new(),
 no_permissions: false,
+no_owner: false,
 },
 )
 .await?;
@@ -441,6 +442,7 @@ async fn migrate(
 ignore_existing: false,
 exclude: vec!["tmp".to_owned()],
 no_permissions: false,
+no_owner: false,
 },
 )
 .await?;

View File

@@ -8,7 +8,7 @@ use crate::disk::main::export;
 use crate::init::{STANDBY_MODE_PATH, SYSTEM_REBUILD_PATH};
 use crate::sound::SHUTDOWN;
 use crate::util::{display_none, Invoke};
-use crate::{Error, ErrorKind, IS_RASPBERRY_PI};
+use crate::{Error, ErrorKind, OS_ARCH};
 #[derive(Debug, Clone)]
 pub struct Shutdown {
@@ -58,7 +58,7 @@ impl Shutdown {
 tracing::debug!("{:?}", e);
 }
 }
-if !*IS_RASPBERRY_PI || self.restart {
+if OS_ARCH != "raspberrypi" || self.restart {
 if let Err(e) = SHUTDOWN.play().await {
 tracing::error!("Error Playing Shutdown Song: {}", e);
 tracing::debug!("{:?}", e);
@@ -66,7 +66,7 @@ impl Shutdown {
 }
 });
 drop(rt);
-if *IS_RASPBERRY_PI {
+if OS_ARCH == "raspberrypi" {
 if !self.restart {
 std::fs::write(STANDBY_MODE_PATH, "").unwrap();
 Command::new("sync").spawn().unwrap().wait().unwrap();

View File

@@ -21,19 +21,19 @@ struct SoundInterface {
 guard: Option<FileLock>,
 }
 impl SoundInterface {
-#[instrument]
+#[instrument(skip_all)]
 pub async fn lease() -> Result<Self, Error> {
 let guard = FileLock::new(SOUND_LOCK_FILE, true).await?;
 Ok(SoundInterface { guard: Some(guard) })
 }
-#[instrument(skip(self))]
+#[instrument(skip_all)]
 pub async fn close(mut self) -> Result<(), Error> {
 if let Some(lock) = self.guard.take() {
 lock.unlock().await?;
 }
 Ok(())
 }
-#[instrument(skip(self))]
+#[instrument(skip_all)]
 pub async fn play_for_time_slice(
 &mut self,
 tempo_qpm: u16,
@@ -59,7 +59,7 @@ impl<'a, T> Song<T>
 where
 T: IntoIterator<Item = (Option<Note>, TimeSlice)> + Clone,
 {
-#[instrument(skip(self))]
+#[instrument(skip_all)]
 pub async fn play(&self) -> Result<(), Error> {
 let mut sound = SoundInterface::lease().await?;
 for (note, slice) in self.note_sequence.clone() {

View File

@@ -57,7 +57,7 @@ pub fn ssh() -> Result<(), Error> {
 }
 #[command(display(display_none))]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn add(#[context] ctx: RpcContext, #[arg] key: PubKey) -> Result<SshKeyResponse, Error> {
 let pool = &ctx.secret_store;
 // check fingerprint for duplicates
@@ -92,7 +92,7 @@ pub async fn add(#[context] ctx: RpcContext, #[arg] key: PubKey) -> Result<SshKe
 }
 }
 #[command(display(display_none))]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn delete(#[context] ctx: RpcContext, #[arg] fingerprint: String) -> Result<(), Error> {
 let pool = &ctx.secret_store;
 // check if fingerprint is in DB
@@ -142,7 +142,7 @@ fn display_all_ssh_keys(all: Vec<SshKeyResponse>, matches: &ArgMatches) {
 }
 #[command(display(display_all_ssh_keys))]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn list(
 #[context] ctx: RpcContext,
 #[allow(unused_variables)]
@@ -172,7 +172,7 @@ pub async fn list(
 .collect())
 }
-#[instrument(skip(pool, dest))]
+#[instrument(skip_all)]
 pub async fn sync_keys_from_db<P: AsRef<Path>>(
 pool: &Pool<Postgres>,
 dest: P,

View File

@@ -18,7 +18,7 @@ use crate::{Error, ResultExt};
 #[derive(Clone, Debug, Deserialize, Serialize)]
 pub struct HealthChecks(pub BTreeMap<HealthCheckId, HealthCheck>);
 impl HealthChecks {
-#[instrument]
+#[instrument(skip_all)]
 pub fn validate(
 &self,
 container: &Option<DockerContainers>,
@@ -71,7 +71,7 @@ pub struct HealthCheck {
 pub timeout: Option<Duration>,
 }
 impl HealthCheck {
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn check(
 &self,
 ctx: &RpcContext,

View File

@@ -6,6 +6,7 @@ use futures::FutureExt;
 use rpc_toolkit::command;
 use rpc_toolkit::yajrc::RpcError;
 use serde::{Deserialize, Deserializer, Serialize, Serializer};
+use tokio::process::Command;
 use tokio::sync::broadcast::Receiver;
 use tokio::sync::RwLock;
 use tracing::instrument;
@@ -17,8 +18,8 @@ use crate::logs::{
 LogResponse, LogSource,
 };
 use crate::shutdown::Shutdown;
-use crate::util::display_none;
 use crate::util::serde::{display_serializable, IoFormat};
+use crate::util::{display_none, Invoke};
 use crate::{Error, ErrorKind, ResultExt};
 pub const SYSTEMD_UNIT: &'static str = "embassyd";
@@ -510,15 +511,32 @@ async fn launch_disk_task(
 }
 }
-#[instrument]
+#[instrument(skip_all)]
 async fn get_temp() -> Result<Celsius, Error> {
-let temp_file = "/sys/class/thermal/thermal_zone0/temp";
-let milli = tokio::fs::read_to_string(temp_file)
-.await
-.with_ctx(|_| (crate::ErrorKind::Filesystem, temp_file))?
-.trim()
-.parse::<f64>()?;
-Ok(Celsius(milli / 1000.0))
+let temp = serde_json::from_slice::<serde_json::Value>(
+&Command::new("sensors")
+.arg("-j")
+.invoke(ErrorKind::Filesystem)
+.await?,
+)
+.with_kind(ErrorKind::Deserialization)?
+.as_object()
+.into_iter()
+.flatten()
+.flat_map(|(_, v)| v.as_object())
+.flatten()
+.flat_map(|(_, v)| v.as_object())
+.flatten()
+.filter_map(|(k, v)| {
+if k.ends_with("_input") {
+v.as_f64()
+} else {
+None
+}
+})
+.reduce(f64::max)
+.ok_or_else(|| Error::new(eyre!("No temperatures available"), ErrorKind::Filesystem))?;
+Ok(Celsius(temp))
 }
 #[derive(Debug, Clone)]
@@ -550,7 +568,7 @@ impl ProcStat {
 }
 }
-#[instrument]
+#[instrument(skip_all)]
 async fn get_proc_stat() -> Result<ProcStat, Error> {
 use tokio::io::AsyncBufReadExt;
 let mut cpu_line = String::new();
@@ -592,7 +610,7 @@ async fn get_proc_stat() -> Result<ProcStat, Error> {
 }
 }
-#[instrument]
+#[instrument(skip_all)]
 async fn get_cpu_info(last: &mut ProcStat) -> Result<MetricsCpu, Error> {
 let new = get_proc_stat().await?;
 let total_old = last.total();
@@ -619,7 +637,7 @@ pub struct MemInfo {
 swap_total: Option<u64>,
 swap_free: Option<u64>,
 }
-#[instrument]
+#[instrument(skip_all)]
 async fn get_mem_info() -> Result<MetricsMemory, Error> {
 let contents = tokio::fs::read_to_string("/proc/meminfo").await?;
 let mut mem_info = MemInfo {
@@ -693,7 +711,7 @@ async fn get_mem_info() -> Result<MetricsMemory, Error> {
 })
 }
-#[instrument]
+#[instrument(skip_all)]
 async fn get_disk_info() -> Result<MetricsDisk, Error> {
 let package_used_task = get_used("/embassy-data/package-data");
 let package_available_task = get_available("/embassy-data/package-data");

View File
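The rewritten `get_temp` above stops reading a single thermal zone and instead takes the maximum of every `*_input` reading reported by `sensors -j`. A stdlib-only sketch of that selection step, with hypothetical sample data standing in for the parsed JSON (the real code uses `serde_json::Value`):

```rust
use std::collections::BTreeMap;

// chip -> sensor -> field -> value: a hypothetical stand-in for the
// nested JSON that `sensors -j` emits.
type Chips = BTreeMap<String, BTreeMap<String, BTreeMap<String, f64>>>;

/// Same selection logic as the new get_temp: flatten two levels of
/// nesting, keep only `*_input` readings, and take the hottest one.
fn max_temp(chips: &Chips) -> Option<f64> {
    chips
        .values()
        .flat_map(|sensors| sensors.values())
        .flat_map(|fields| fields.iter())
        .filter_map(|(k, v)| k.ends_with("_input").then(|| *v))
        .reduce(f64::max)
}

fn sample_chips() -> Chips {
    let mut core0 = BTreeMap::new();
    core0.insert("temp1_input".to_string(), 48.0);
    core0.insert("temp1_max".to_string(), 90.0); // a limit, not a reading
    let mut core1 = BTreeMap::new();
    core1.insert("temp2_input".to_string(), 61.5);
    let mut sensors = BTreeMap::new();
    sensors.insert("Core 0".to_string(), core0);
    sensors.insert("Core 1".to_string(), core1);
    let mut chips = BTreeMap::new();
    chips.insert("coretemp-isa-0000".to_string(), sensors);
    chips
}

fn main() {
    // 90.0 is filtered out because only `*_input` keys are readings.
    println!("{:?}", max_temp(&sample_chips()));
}
```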

@@ -26,7 +26,7 @@ use crate::sound::{
 use crate::update::latest_information::LatestInformation;
 use crate::util::Invoke;
 use crate::version::{Current, VersionT};
-use crate::{Error, ErrorKind, ResultExt, IS_RASPBERRY_PI};
+use crate::{Error, ErrorKind, ResultExt, OS_ARCH};
 mod latest_information;
@@ -41,7 +41,7 @@ lazy_static! {
 display(display_update_result),
 metadata(sync_db = true)
 )]
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 pub async fn update_system(
 #[context] ctx: RpcContext,
 #[arg(rename = "marketplace-url")] marketplace_url: Url,
@@ -75,22 +75,17 @@ fn display_update_result(status: UpdateResult, _: &ArgMatches) {
 }
 }
-#[instrument(skip(ctx))]
+#[instrument(skip_all)]
 async fn maybe_do_update(
 ctx: RpcContext,
 marketplace_url: Url,
 ) -> Result<Option<Arc<Revision>>, Error> {
 let mut db = ctx.db.handle();
-let arch = if *IS_RASPBERRY_PI {
-"raspberrypi"
-} else {
-*crate::ARCH
-};
 let latest_version: Version = reqwest::get(format!(
 "{}/eos/v0/latest?eos-version={}&arch={}",
 marketplace_url,
 Current::new().semver(),
-arch,
+OS_ARCH,
 ))
 .await
 .with_kind(ErrorKind::Network)?
@@ -194,7 +189,7 @@ async fn maybe_do_update(
 Ok(rev)
 }
-#[instrument(skip(ctx, eos_url))]
+#[instrument(skip_all)]
 async fn do_update(ctx: RpcContext, eos_url: EosUrl) -> Result<(), Error> {
 let mut rsync = Rsync::new(
 eos_url.rsync_path()?,
@@ -241,12 +236,7 @@ impl EosUrl {
 .host_str()
 .ok_or_else(|| Error::new(eyre!("Could not get host of base"), ErrorKind::ParseUrl))?;
 let version: &Version = &self.version;
-let arch = if *IS_RASPBERRY_PI {
-"raspberrypi"
-} else {
-*crate::ARCH
-};
-Ok(format!("{host}::{version}/{arch}/")
+Ok(format!("{host}::{version}/{OS_ARCH}/")
 .parse()
 .map_err(|_| Error::new(eyre!("Could not parse path"), ErrorKind::ParseUrl))?)
 }
@@ -306,12 +296,13 @@ async fn sync_boot() -> Result<(), Error> {
 ignore_existing: false,
 exclude: Vec::new(),
 no_permissions: false,
+no_owner: false,
 },
 )
 .await?
 .wait()
 .await?;
-if !*IS_RASPBERRY_PI {
+if OS_ARCH != "raspberrypi" {
 let dev_mnt =
 MountGuard::mount(&Bind::new("/dev"), "/media/embassy/next/dev", ReadWrite).await?;
 let sys_mnt =
@@ -322,7 +313,7 @@ async fn sync_boot() -> Result<(), Error> {
 MountGuard::mount(&Bind::new("/boot"), "/media/embassy/next/boot", ReadWrite).await?;
 Command::new("chroot")
 .arg("/media/embassy/next")
-.arg("update-grub")
+.arg("update-grub2")
 .invoke(ErrorKind::MigrationFailed)
 .await?;
 boot_mnt.unmount(false).await?;
@@ -333,7 +324,7 @@ async fn sync_boot() -> Result<(), Error> {
 Ok(())
 }
-#[instrument]
+#[instrument(skip_all)]
 async fn swap_boot_label() -> Result<(), Error> {
 tokio::fs::write("/media/embassy/config/upgrade", b"").await?;
 Ok(())


@@ -282,7 +282,7 @@ impl Drop for FileLock {
     }
 }

 impl FileLock {
-    #[instrument(skip(path))]
+    #[instrument(skip_all)]
     pub async fn new(path: impl AsRef<Path> + Send + Sync, blocking: bool) -> Result<Self, Error> {
         lazy_static! {
             static ref INTERNAL_LOCKS: Mutex<BTreeMap<PathBuf, Arc<Mutex<()>>>> =


@@ -20,8 +20,10 @@ mod v0_3_2;
 mod v0_3_2_1;
 mod v0_3_3;
 mod v0_3_4;
+mod v0_3_4_1;
+mod v0_3_4_2;

-pub type Current = v0_3_4::Version;
+pub type Current = v0_3_4_2::Version;

 #[derive(serde::Serialize, serde::Deserialize, Debug, Clone)]
 #[serde(untagged)]
@@ -37,6 +39,8 @@ enum Version {
     V0_3_2_1(Wrapper<v0_3_2_1::Version>),
     V0_3_3(Wrapper<v0_3_3::Version>),
     V0_3_4(Wrapper<v0_3_4::Version>),
+    V0_3_4_1(Wrapper<v0_3_4_1::Version>),
+    V0_3_4_2(Wrapper<v0_3_4_2::Version>),
     Other(emver::Version),
 }
@@ -63,6 +67,8 @@ impl Version {
             Version::V0_3_2_1(Wrapper(x)) => x.semver(),
             Version::V0_3_3(Wrapper(x)) => x.semver(),
             Version::V0_3_4(Wrapper(x)) => x.semver(),
+            Version::V0_3_4_1(Wrapper(x)) => x.semver(),
+            Version::V0_3_4_2(Wrapper(x)) => x.semver(),
             Version::Other(x) => x.clone(),
         }
     }
@@ -244,6 +250,14 @@ pub async fn init<Db: DbHandle>(
             v.0.migrate_to(&Current::new(), db, secrets, receipts)
                 .await?
         }
+        Version::V0_3_4_1(v) => {
+            v.0.migrate_to(&Current::new(), db, secrets, receipts)
+                .await?
+        }
+        Version::V0_3_4_2(v) => {
+            v.0.migrate_to(&Current::new(), db, secrets, receipts)
+                .await?
+        }
         Version::Other(_) => {
             return Err(Error::new(
                 eyre!("Cannot downgrade"),
@@ -287,6 +301,8 @@ mod tests {
             Just(Version::V0_3_2_1(Wrapper(v0_3_2_1::Version::new()))),
             Just(Version::V0_3_3(Wrapper(v0_3_3::Version::new()))),
             Just(Version::V0_3_4(Wrapper(v0_3_4::Version::new()))),
+            Just(Version::V0_3_4_1(Wrapper(v0_3_4_1::Version::new()))),
+            Just(Version::V0_3_4_2(Wrapper(v0_3_4_2::Version::new()))),
             em_version().prop_map(Version::Other),
         ]
     }


@@ -95,7 +95,7 @@ mod legacy {
         id.to_string()
     }

-    #[instrument]
+    #[instrument(skip_all)]
     pub async fn get_current_hostname() -> Result<Hostname, Error> {
         let out = Command::new("hostname")
             .invoke(ErrorKind::ParseSysInfo)
@@ -104,7 +104,7 @@ mod legacy {
         Ok(Hostname(out_string.trim().to_owned()))
     }

-    #[instrument]
+    #[instrument(skip_all)]
     pub async fn set_hostname(hostname: &Hostname) -> Result<(), Error> {
         let hostname: &String = &hostname.0;
         let _out = Command::new("hostnamectl")
@@ -115,7 +115,7 @@ mod legacy {
         Ok(())
     }

-    #[instrument(skip(handle))]
+    #[instrument(skip_all)]
     pub async fn get_id<Db: DbHandle>(handle: &mut Db) -> Result<String, Error> {
         let id = crate::db::DatabaseModel::new()
             .server_info()
@@ -142,7 +142,7 @@ mod legacy {
     }
         return Ok(Hostname(format!("embassy-{}", id)));
     }

-    #[instrument(skip(handle))]
+    #[instrument(skip_all)]
     pub async fn sync_hostname<Db: DbHandle>(handle: &mut Db) -> Result<(), Error> {
         set_hostname(&get_hostname(handle).await?).await?;
         Command::new("systemctl")


@@ -79,7 +79,7 @@ impl VersionT for Version {
             .unwrap_or_else(generate_hostname);
         account.server_id = server_info.id;
         account.save(secrets).await?;
-        sync_hostname(&account).await?;
+        sync_hostname(&account.hostname).await?;
         let parsed_url = Some(COMMUNITY_URL.parse().unwrap());
         let mut ui = crate::db::DatabaseModel::new().ui().get_mut(db).await?;


@@ -0,0 +1,30 @@
use async_trait::async_trait;
use emver::VersionRange;
use super::v0_3_0::V0_3_0_COMPAT;
use super::*;
const V0_3_4_1: emver::Version = emver::Version::new(0, 3, 4, 1);
#[derive(Clone, Debug)]
pub struct Version;
#[async_trait]
impl VersionT for Version {
type Previous = v0_3_4::Version;
fn new() -> Self {
Version
}
fn semver(&self) -> emver::Version {
V0_3_4_1
}
fn compat(&self) -> &'static VersionRange {
&*V0_3_0_COMPAT
}
async fn up<Db: DbHandle>(&self, _db: &mut Db, _secrets: &PgPool) -> Result<(), Error> {
Ok(())
}
async fn down<Db: DbHandle>(&self, _db: &mut Db, _secrets: &PgPool) -> Result<(), Error> {
Ok(())
}
}


@@ -0,0 +1,30 @@
use async_trait::async_trait;
use emver::VersionRange;
use super::v0_3_0::V0_3_0_COMPAT;
use super::*;
const V0_3_4_2: emver::Version = emver::Version::new(0, 3, 4, 2);
#[derive(Clone, Debug)]
pub struct Version;
#[async_trait]
impl VersionT for Version {
type Previous = v0_3_4_1::Version;
fn new() -> Self {
Version
}
fn semver(&self) -> emver::Version {
V0_3_4_2
}
fn compat(&self) -> &'static VersionRange {
&*V0_3_0_COMPAT
}
async fn up<Db: DbHandle>(&self, _db: &mut Db, _secrets: &PgPool) -> Result<(), Error> {
Ok(())
}
async fn down<Db: DbHandle>(&self, _db: &mut Db, _secrets: &PgPool) -> Result<(), Error> {
Ok(())
}
}


@@ -21,7 +21,7 @@ pub const BACKUP_DIR: &str = "/media/embassy/backups";
 #[derive(Clone, Debug, Default, Deserialize, Serialize)]
 pub struct Volumes(BTreeMap<VolumeId, Volume>);

 impl Volumes {
-    #[instrument]
+    #[instrument(skip_all)]
     pub fn validate(&self, interfaces: &Interfaces) -> Result<(), Error> {
         for (id, volume) in &self.0 {
             volume
@@ -30,7 +30,7 @@ impl Volumes {
         }
         Ok(())
     }

-    #[instrument(skip(ctx))]
+    #[instrument(skip_all)]
     pub async fn install(
         &self,
         ctx: &RpcContext,
@@ -142,7 +142,7 @@ pub enum Volume {
     Backup { readonly: bool },
 }

 impl Volume {
-    #[instrument]
+    #[instrument(skip_all)]
     pub fn validate(&self, interfaces: &Interfaces) -> Result<(), color_eyre::eyre::Report> {
         match self {
             Volume::Certificate { interface_id } => {


@@ -1,4 +1,4 @@
-# Building Embassy OS
+# Building StartOS

 ⚠️ The commands given assume a Debian or Ubuntu-based environment. _Building in
 a VM is NOT yet supported_ ⚠️
@@ -42,15 +42,15 @@ a VM is NOT yet supported_ ⚠️
 2. Clone the latest repo with required submodules
    > :information_source: You chan check latest available version
-   > [here](https://github.com/Start9Labs/embassy-os/releases)
+   > [here](https://github.com/Start9Labs/start-os/releases)

 ```
-git clone --recursive https://github.com/Start9Labs/embassy-os.git --branch latest
+git clone --recursive https://github.com/Start9Labs/start-os.git --branch latest
 ```

 ## Build Raspberry Pi Image

 ```
-cd embassy-os
+cd start-os
 make embassyos-raspi.img ARCH=aarch64
 ```
@@ -62,7 +62,7 @@ We recommend [Balena Etcher](https://www.balena.io/etcher/)
 ## Setup

-Visit http://embassy.local from any web browser - We recommend
+Visit http://start.local from any web browser - We recommend
 [Firefox](https://www.mozilla.org/firefox/browsers)

 Enter your product key. This is generated during the build process and can be
@@ -70,11 +70,11 @@ found in `product_key.txt`, located in the root directory.
 ## Troubleshooting

-1. I just flashed my SD card, fired up my Embassy, bootup sounds and all, but my
-   browser is saying "Unable to connect" with embassy.local.
+1. I just flashed my SD card, fired up StartOS, bootup sounds and all, but my
+   browser is saying "Unable to connect" with start.local.
    - Try doing a hard refresh on your browser, or opening the url in a
-     private/incognito window. If you've ran an instance of Embassy before,
+     private/incognito window. If you've ran an instance of StartOS before,
      sometimes you can have a stale cache that will block you from navigating to
      the page.
@@ -91,14 +91,14 @@ found in `product_key.txt`, located in the root directory.
    - Find the IP of your device
    - Run `nc <ip> 8080` and it will print the logs
-4. I need to ssh into my Embassy to fix something, but I cannot get to the
+4. I need to ssh into my server to fix something, but I cannot get to the
    console to add ssh keys normally.
    - During the Build step, instead of running just
      `make embassyos-raspi.img ARCH=aarch64` run
      `ENVIRONMENT=dev make embassyos-raspi.img ARCH=aarch64`. Flash like normal,
-     and insert into your Embassy. Boot up your Embassy, and on another computer on
-     the same network, ssh into the Embassy with the username `start9` password
+     and insert into your server. Boot up StartOS, then on another computer on
+     the same network, ssh into the the server with the username `start9` password
      `embassy`.
 4. I need to reset my password, how can I do that?


@@ -1,34 +1,46 @@
-tor
 avahi-daemon
 avahi-utils
-iotop
+bash-completion
+beep
 bmon
-lvm2
+ca-certificates
-htop
-cryptsetup
-exfat-utils
-sqlite3
-wireless-tools
-net-tools
-ecryptfs-utils
 cifs-utils
-samba-common-bin
+containerd.io
-network-manager
+curl
-vim
+crda
-jq
+cryptsetup
-ncdu
-postgresql
-pgloader
-openssh-server
 docker-ce
 docker-ce-cli
-containerd.io
 docker-compose-plugin
-beep
+dosfstools
+e2fsprogs
+ecryptfs-utils
+exfat-utils
+htop
 httpdirfs
+iotop
 iw
-squashfs-tools
+jq
-rsync
+libavahi-client3
-systemd-timesyncd
+lm-sensors
+lvm2
 magic-wormhole
+ncdu
+net-tools
+network-manager
 nyx
+openssh-server
+pgloader
+postgresql
+psmisc
+rsync
+samba-common-bin
+sqlite3
+squashfs-tools
+systemd
+systemd-resolved
+systemd-sysv
+systemd-timesyncd
+tor
+vim
+wireless-tools


@@ -2,14 +2,20 @@
 printf "\n"
 printf "Welcome to\n"
 cat << "ASCII"
-            |              ,---.,---.
-,---.,-.-.|---.,---.,---.,---., .| |`---.
-|---'| | || |,---|`---.`---.| || | |
-`---'` ' '`---'`---^`---'`---'`---|`---'`---'
-                              `---'
+╭ ━ ━ ━ ╮ ╭ ╮ ╭ ╮ ╭ ━ ━ ━ ┳ ━ ━ ━ ╮
+┃ ╭ ━ ╮ ┣ ╯ ╰ ╮ ╭ ╯ ╰ ┫ ╭ ━ ╮ ┃ ╭ ━ ╮ ┃
+┃ ╰ ━ ━ ╋ ╮ ╭ ╋ ━ ━ ┳ ┻ ╮ ╭ ┫ ┃ ┃ ┃ ╰ ━ ━ ╮
+╰ ━ ━ ╮ ┃ ┃ ┃ ┃ ╭ ╮ ┃ ╭ ┫ ┃ ┃ ┃ ┃ ┣ ━ ━ ╮ ┃
+┃ ╰ ━ ╯ ┃ ┃ ╰ ┫ ╭ ╮ ┃ ┃ ┃ ╰ ┫ ╰ ━ ╯ ┃ ╰ ━ ╯ ┃
+╰ ━ ━ ━ ╯ ╰ ━ ┻ ╯ ╰ ┻ ╯ ╰ ━ ┻ ━ ━ ━ ┻ ━ ━ ━ ╯
 ASCII
 printf " %s (%s %s)\n" "$(uname -o)" "$(uname -r)" "$(uname -m)"
-printf " $(embassy-cli --version | sed 's/Embassy CLI /embassyOS v/g') - $(embassy-cli git-info)\n"
+printf " $(embassy-cli --version | sed 's/Embassy CLI /StartOS v/g') - $(embassy-cli git-info)"
+if [ -n "$(cat /usr/lib/embassy/ENVIRONMENT.txt)" ]; then
+    printf " ~ $(cat /usr/lib/embassy/ENVIRONMENT.txt)\n"
+else
+    printf "\n"
+fi
 printf "\n"
 printf " * Documentation: https://start9.com\n"


@@ -13,17 +13,20 @@ mkdir -p /media/embassy/next/run
 mkdir -p /media/embassy/next/dev
 mkdir -p /media/embassy/next/sys
 mkdir -p /media/embassy/next/proc
+mkdir -p /media/embassy/next/boot

 mount --bind /run /media/embassy/next/run
 mount --bind /dev /media/embassy/next/dev
 mount --bind /sys /media/embassy/next/sys
 mount --bind /proc /media/embassy/next/proc
+mount --bind /boot /media/embassy/next/boot

-chroot /media/embassy/next
+chroot /media/embassy/next $@

 umount /media/embassy/next/run
 umount /media/embassy/next/dev
 umount /media/embassy/next/sys
 umount /media/embassy/next/proc
+umount /media/embassy/next/boot

 echo 'Upgrading...'
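The notable change above is `chroot /media/embassy/next $@`: the script now forwards its own arguments as the command to run inside the chroot (with no arguments it still drops into an interactive shell). A stand-in function sketches the forwarding behavior — `run_in_chroot` is illustrative only, not part of the script:

```shell
#!/bin/bash
# Stand-in for: chroot /media/embassy/next "$@"
# It just echoes the command it would run inside the chroot.
run_in_chroot() {
    echo "chroot target: $@"
}

run_in_chroot apt-get update  # -> chroot target: apt-get update
```

This is what lets helpers like `dep-install` feed a scripted `apt-get` invocation through `chroot-and-upgrade` instead of requiring an interactive session.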


@@ -3,15 +3,15 @@
 set -e

 # install dependencies
-apt update
-apt install --no-install-recommends -y xserver-xorg x11-xserver-utils xinit firefox-esr matchbox-window-manager libnss3-tools
+/usr/bin/apt update
+/usr/bin/apt install --no-install-recommends -y xserver-xorg x11-xserver-utils xinit firefox-esr matchbox-window-manager libnss3-tools

 # create kiosk script
 cat > /home/start9/kiosk.sh << 'EOF'
 #!/bin/sh

 PROFILE=$(mktemp -d)
-if [ -f /usr/local/share/ca-certificates/embassy-root-ca.crt ]; then
-    certutil -A -n "Embassy Local Root CA" -t "TCu,Cuw,Tuw" -i /usr/local/share/ca-certificates/embassy-root-ca.crt -d $PROFILE
+if [ -f /usr/local/share/ca-certificates/startos-root-ca.crt ]; then
+    certutil -A -n "StartOS Local Root CA" -t "TCu,Cuw,Tuw" -i /usr/local/share/ca-certificates/startos-root-ca.crt -d $PROFILE
 fi

 cat >> $PROFILE/prefs.js << EOT
 user_pref("network.proxy.autoconfig_url", "file:///usr/lib/embassy/proxy.pac");

build/lib/scripts/fake-apt Executable file

@@ -0,0 +1,21 @@
#!/bin/bash
>&2 echo 'THIS IS NOT A STANDARD DEBIAN SYSTEM'
>&2 echo 'USING apt COULD CAUSE IRREPARABLE DAMAGE TO YOUR START9 SERVER'
>&2 echo 'PLEASE TURN BACK NOW!!!'
if [ "$1" == "upgrade" ] && [ "$(whoami)" == "root" ]; then
>&2 echo 'IF YOU THINK RUNNING "sudo apt upgrade" IS A REASONABLE THING TO DO ON THIS SYSTEM, YOU PROBABLY SHOULDN'"'"'T BE ON THE COMMAND LINE.'
>&2 echo 'YOU ARE BEING REMOVED FROM THIS SESSION FOR YOUR OWN SAFETY.'
pkill -9 -t $(tty | sed 's|^/dev/||g')
fi
>&2 echo
>&2 echo 'If you are SURE you know what you are doing, and are willing to accept the DIRE CONSEQUENCES of doing so, you can run the following command to disable this protection:'
>&2 echo ' sudo rm /usr/local/bin/apt'
>&2 echo
>&2 echo 'Otherwise, what you probably want to do is run:'
>&2 echo ' sudo /usr/lib/embassy/scripts/chroot-and-upgrade'
>&2 echo 'You can run apt in this context to add packages to your system.'
>&2 echo 'When you are done with your changes, type "exit" and the device will reboot into a system with the changes applied.'
>&2 echo 'This is still NOT RECOMMENDED if you don'"'"'t know what you are doing, but at least isn'"'"'t guaranteed to break things.'
exit 1
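`fake-apt` ejects the offending user with `pkill -9 -t`, which expects a terminal name without the `/dev/` prefix; the `sed` expression performs exactly that stripping. A tiny sketch of just that transformation, using a hypothetical tty path instead of the live `tty` output:

```shell
#!/bin/bash
# Strip the /dev/ prefix from a terminal path, as fake-apt does before pkill.
# /dev/pts/3 is a hypothetical example path.
tty_name=$(echo /dev/pts/3 | sed 's|^/dev/||g')
echo "$tty_name"  # -> pts/3
```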


@@ -3,6 +3,6 @@
 for mozilladir in $(find /home -name ".mozilla"); do
     for certDB in $(find ${mozilladir} -name "cert9.db"); do
         certDir=$(dirname ${certDB});
-        certutil -A -n "Embassy Local Root CA" -t "TCu,Cuw,Tuw" -i /usr/local/share/ca-certificates/embassy-root-ca.crt -d ${certDir}
+        certutil -A -n "StartOS Local Root CA" -t "TCu,Cuw,Tuw" -i /usr/local/share/ca-certificates/startos-root-ca.crt -d ${certDir}
     done
 done


@@ -1,120 +0,0 @@
#!/bin/bash
set -e
function partition_for () {
if [[ "$1" =~ [0-9]+$ ]]; then
echo "$1p$2"
else
echo "$1$2"
fi
}
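`partition_for` handles the kernel's two partition-naming schemes: disks whose names end in a digit (`nvme0n1`, `mmcblk0`) take a `p` separator before the partition number, while others (`sda`) do not. A standalone copy of the helper exercises both cases:

```shell
#!/bin/bash
# Same logic as the removed installer's partition_for: insert "p" only when
# the disk name ends in a digit.
partition_for () {
    if [[ "$1" =~ [0-9]+$ ]]; then
        echo "$1p$2"
    else
        echo "$1$2"
    fi
}

partition_for /dev/sda 1      # -> /dev/sda1
partition_for /dev/nvme0n1 2  # -> /dev/nvme0n1p2
```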
OSDISK=$1
if [ -z "$OSDISK" ]; then
>&2 echo "usage: $0 <TARGET DISK>"
exit 1
fi
WIFI_IFACE=
for IFACE in $(ls /sys/class/net); do
if [ -d /sys/class/net/$IFACE/wireless ]; then
WIFI_IFACE=$IFACE
break
fi
done
ETH_IFACE=
for IFACE in $(ls /sys/class/net); do
if ! [ -d /sys/class/net/$IFACE/wireless ] && [ -d /sys/class/net/$IFACE/device ]; then
ETH_IFACE=$IFACE
break
fi
done
if [ -z "$ETH_IFACE" ]; then
>&2 echo 'Could not detect ethernet interface'
exit 1
fi
(
echo o # MBR
echo n # New Partition
echo p # Primary
echo 1 # Index #1
echo # Default Starting Position
echo '+1G' # 1GB
echo t # Change Type
echo 0b # W95 FAT32
echo a # Set Bootable
echo n # New Partition
echo p # Primary
echo 2 # Index #2
echo # Default Starting Position
echo '+15G' # 15GB
echo n # New Partition
echo p # Primary
echo 3 # Index #3
echo # Default Starting Position
echo # Use Full Remaining
echo t # Change Type
echo 3 # (Still Index #3)
echo 8e # Linux LVM
echo w # Write Changes
) | fdisk $OSDISK
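The parenthesized `echo` block above scripts fdisk's interactive prompts: each `echo` answers one prompt in order, and an empty `echo` accepts the default. The same subshell trick, piped to `cat` instead of `fdisk`, shows the exact byte stream fdisk would read — here a trimmed answer set for a single 1 GiB partition:

```shell
#!/bin/bash
# Pipe scripted answers where fdisk would normally read the keyboard.
# Piping to cat (instead of fdisk) just makes the answer stream visible.
answers=$(
    (
        echo o      # create a new MBR label
        echo n      # new partition
        echo p      # primary
        echo 1      # partition number 1
        echo        # accept default starting sector
        echo '+1G'  # 1 GiB in size
        echo w      # write changes
    ) | cat
)
echo "$answers"
```

This works because fdisk reads answers from stdin; it is fragile, though, since the answer sequence must match fdisk's prompt order exactly.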
BOOTPART=`partition_for $OSDISK 1`
ROOTPART=`partition_for $OSDISK 2`
mkfs.vfat $BOOTPART
fatlabel $BOOTPART boot
mkfs.ext4 $ROOTPART
e2label $ROOTPART rootfs
mount $ROOTPART /mnt
mkdir /mnt/config
mkdir /mnt/current
mkdir /mnt/next
mkdir /mnt/current/boot
mount $BOOTPART /mnt/current/boot
unsquashfs -f -d /mnt/current /cdrom/casper/filesystem.squashfs
cat > /mnt/config/config.yaml << EOF
os-partitions:
boot: $BOOTPART
root: $ROOTPART
ethernet-interface: $ETH_IFACE
EOF
if [ -n "$WIFI_IFACE" ]; then
echo "wifi-interface: $WIFI_IFACE" >> /mnt/config/config.yaml
fi
# gen fstab
cat > /mnt/current/etc/fstab << EOF
$BOOTPART /boot vfat defaults 0 2
$ROOTPART / ext4 defaults 0 1
EOF
# gen machine-id
chroot /mnt/current systemd-machine-id-setup
# gen ssh host keys
chroot /mnt/current ssh-keygen -A
mount --bind /dev /mnt/current/dev
mount --bind /sys /mnt/current/sys
mount --bind /proc /mnt/current/proc
chroot /mnt/current update-grub
chroot /mnt/current grub-install $OSDISK
umount /mnt/current/dev
umount /mnt/current/sys
umount /mnt/current/proc
umount /mnt/current/boot
umount /mnt


@@ -0,0 +1,20 @@
#!/bin/bash
if [ -z "$1" ]; then
>&2 echo "usage: $0 <PACKAGE_NAME>"
exit 1
fi
TO_INSTALL=()
while [ -n "$1" ]; do
if ! dpkg -s "$1"; then
TO_INSTALL+=("$1")
fi
shift
done
if [ ${#TO_INSTALL[@]} -ne 0 ]; then
/usr/lib/embassy/scripts/chroot-and-upgrade << EOF
apt-get update && apt-get install -y ${TO_INSTALL[@]}
EOF
fi
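The script above collects only the packages that `dpkg -s` does not already report as installed, then hands the whole batch to `chroot-and-upgrade` in one go. The collection pattern can be sketched standalone — `is_installed` here is a stub standing in for `dpkg -s` so the sketch runs anywhere:

```shell
#!/bin/bash
# Stub for `dpkg -s "$1"`: pretend curl and vim are already installed.
is_installed() {
    case "$1" in
        curl|vim) return 0 ;;
        *) return 1 ;;
    esac
}

# Accumulate only the packages that are missing.
TO_INSTALL=()
for pkg in curl htop vim jq; do
    if ! is_installed "$pkg"; then
        TO_INSTALL+=("$pkg")
    fi
done

echo "${TO_INSTALL[@]}"  # -> htop jq
```

Skipping already-installed packages keeps the expensive `chroot-and-upgrade` reboot cycle from running when there is nothing to do.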

Some files were not shown because too many files have changed in this diff.