Merge branch 'next/minor' of github.com:Start9Labs/start-os into chore/removing-non-long-running

Aiden McClelland
2023-11-13 15:26:04 -07:00
990 changed files with 3583 additions and 6679 deletions


@@ -41,11 +41,11 @@ on:
   push:
     branches:
       - master
-      - next
+      - next/*
   pull_request:
     branches:
       - master
-      - next
+      - next/*
 env:
   NODEJS_VERSION: "18.15.0"
@@ -171,7 +171,7 @@ jobs:
       - name: Prevent rebuild of compiled artifacts
         run: |
-          mkdir -p frontend/dist/raw
+          mkdir -p web/dist/raw
           PLATFORM=${{ matrix.platform }} make -t compiled-${{ env.ARCH }}.tar
       - name: Run iso build

.github/workflows/test.yaml (new file, 31 lines)

@@ -0,0 +1,31 @@
name: Automated Tests
on:
  push:
    branches:
      - master
      - next/*
  pull_request:
    branches:
      - master
      - next/*
env:
  NODEJS_VERSION: "18.15.0"
  ENVIRONMENT: dev-unstable
jobs:
  test:
    name: Run Automated Tests
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: recursive
      - uses: actions/setup-node@v3
        with:
          node-version: ${{ env.NODEJS_VERSION }}
      - name: Build And Run Tests
        run: make test


@@ -1,429 +0,0 @@
# v0.3.3
## Highlights
- x86_64 architecture compatibility
- Kiosk mode - use your Embassy with a monitor, keyboard, and mouse (available on x86 builds only, disabled on Raspberry Pi)
- "Updates" tab - view all service updates from all registries in one place
- Various UI/UX improvements
- Various bugfixes and optimizations
## What's Changed
- Minor typo fixes by @kn0wmad in #1887
- Update build pipeline by @moerketh in #1896
- Feature/setup migrate by @elvece in #1841
- Feat/patch migration by @Blu-J in #1890
- make js cancellable by @dr-bonez in #1901
- wip: Making Injectable exec by @Blu-J in #1897
- Fix/debug by @Blu-J in #1909
- chore: Fix on the rsync not having stdout. by @Blu-J in #1911
- install wizard project by @MattDHill in #1893
- chore: Remove the duplicate loggging information that is making usele… by @Blu-J in #1912
- Http proxy by @redragonx in #1772
- fix(marketplace): loosen type in categories component by @waterplea in #1918
- set custom meta title by @MattDHill in #1915
- Feature/git hash by @dr-bonez in #1919
- closes #1900 by @dr-bonez in #1920
- feature/marketplace icons by @dr-bonez in #1921
- Bugfix/0.3.3 migration by @dr-bonez in #1922
- feat: Exposing the rsync that we have to the js by @Blu-J in #1907
- Feature/install wizard disk info by @dr-bonez in #1923
- bump shared and marketplace npm versions by @dr-bonez in #1924
- fix error handling when store unreachable by @dr-bonez in #1925
- wait for network online before launching init by @dr-bonez in #1930
- silence service crash notifications by @dr-bonez in #1929
- disable efi by @dr-bonez in #1931
- Tor daemon fix by @redragonx in #1934
- wait for url to be available before launching kiosk by @dr-bonez in #1933
- fix migration to support portable fatties by @dr-bonez in #1935
- Add guid to partition type by @MattDHill in #1932
- add localhost support to the http server by @redragonx in #1939
- refactor setup wizard by @dr-bonez in #1937
- feat(shared): Ticker add new component and use it in marketplace by @waterplea in #1940
- feat: For ota update using rsyncd by @Blu-J in #1938
- Feat/update progress by @MattDHill in #1944
- Fix/app show hidden by @MattDHill in #1948
- create dpkg and iso workflows by @dr-bonez in #1941
- changing ip addr type by @redragonx in #1950
- Create mountpoints first by @k0gen in #1949
- Hard code registry icons by @MattDHill in #1951
- fix: Cleanup by sending a command and kill when dropped by @Blu-J in #1945
- Update setup wizard styling by @elvece in #1954
- Feature/homepage by @elvece in #1956
- Fix millis by @Blu-J in #1960
- fix accessing dev tools by @MattDHill in #1966
- Update/misc UI fixes by @elvece in #1961
- Embassy-init typo by @redragonx in #1959
- feature: 0.3.2 -> 0.3.3 upgrade by @dr-bonez in #1958
- Fix/migrate by @Blu-J in #1962
- chore: Make validation reject containers by @Blu-J in #1970
- get pubkey and encrypt password on login by @elvece in #1965
- Multiple bugs and styling by @MattDHill in #1975
- filter out usb stick during install by @dr-bonez in #1974
- fix http upgrades by @dr-bonez in #1980
- restore interfaces before creating manager by @dr-bonez in #1982
- fuckit: no patch db locks by @dr-bonez in #1969
- fix websocket hangup error by @dr-bonez in #1981
- revert app show to use header and fix back button by @MattDHill in #1984
- Update/marketplace info by @elvece in #1983
- force docker image removal by @dr-bonez in #1985
- do not error if cannot determine live usb device by @dr-bonez in #1986
- remove community registry from FE defaults by @MattDHill in #1988
- check environment by @dr-bonez in #1990
- fix marketplace search and better category disabling by @MattDHill in #1991
- better migration progress bar by @dr-bonez in #1993
- bump cargo version by @dr-bonez in #1995
- preload icons and pause on setup complete for kiosk mode by @MattDHill in #1997
- use squashfs for rpi updates by @dr-bonez in #1998
- do not start progress at 0 before diff complete by @dr-bonez in #1999
- user must click continue in kiosk on success page by @MattDHill in #2001
- fix regex in image rip script by @dr-bonez in #2002
- fix bug with showing embassy drives and center error text by @MattDHill in #2006
- fix partition type by @dr-bonez in #2007
- lowercase service for alphabetic sorting by @MattDHill in #2008
- dont add updates cat by @MattDHill in #2009
- make downloaded page a full html doc by @MattDHill in #2011
- wait for monitor to be attached before launching firefox by @chrisguida in #2005
- UI fixes by @elvece in #2014
- fix: Stop service before by @Blu-J in #2019
- shield links update by @k0gen in #2018
- fix: Undoing the breaking introduced by trying to stopp by @Blu-J in #2023
- update link rename from embassy -> system by @elvece in #2027
- initialize embassy before restoring packages by @dr-bonez in #2029
- make procfs an optional dependency so sdk can build on macos by @elvece in #2028
- take(1) for recover select by @MattDHill in #2030
- take one from server info to prevent multiple reqs to registries by @MattDHill in #2032
- remove write lock during backup by @MattDHill in #2033
- fix: Ensure that during migration we make the urls have a trailing slash by @Blu-J in #2036
- fix: Make the restores limited # restore at a time by @Blu-J in #2037
- fix error and display of unknown font weight on success page by @elvece in #2038
## Checksums
```
8602e759d3ece7cf503b9ca43e8419109f14e424617c2703b3771c8801483d7e embassyos_amd64.deb
b5c0d8d1af760881a1b5cf32bd7c5b1d1cf6468f6da594a1b4895a866d03a58c embassyos_amd64.iso
fe518453a7e1a8d8c2be43223a1a12adff054468f8082df0560e1ec50df3dbfd embassyos_raspberrypi.img
7b1ff0ada27b6714062aa991ec31c2d95ac4edf254cd464a4fa251905aa47ebd embassyos_raspberrypi.tar.gz
```
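A download can be checked against the published hashes above with `sha256sum -c`. The snippet below is a self-contained sketch in a scratch directory: the file and its checksum are stand-ins generated on the spot, whereas with a real download you would paste the published checksum line into `CHECKSUMS` instead.

```shell
# Hypothetical verification walkthrough; the .iso here is a stand-in file,
# not the real download.
cd "$(mktemp -d)"
printf 'example image contents\n' > embassyos_amd64.iso
sha256sum embassyos_amd64.iso > CHECKSUMS   # in practice: paste the published line
sha256sum -c CHECKSUMS                      # reports "OK" when the hash matches
```

If the file was corrupted or tampered with, `sha256sum -c` reports `FAILED` and exits nonzero.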
# v0.3.2.1
## What's Changed
- Update index.html copy and styling by @elvece in #1855
- increase maximum avahi entry group size by @dr-bonez in #1869
- bump version by @dr-bonez in #1871
### Linux and Mac
Download the `eos.tar.gz` file, then extract and flash the resulting `eos.img` to your SD card.
### Windows
Download the `eos.zip` file, then extract and flash the resulting `eos.img` to your SD card.
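The extract-and-flash steps can be sketched as below. This is a hedged, self-contained illustration: it fabricates a tiny `eos.tar.gz` in a scratch directory and "flashes" to a regular file, whereas a real flash would target the SD card's block device (e.g. `/dev/sdX`, a placeholder that varies per system).

```shell
# Self-contained sketch of the Linux/macOS flow; sdcard.img stands in for the
# real SD card device (/dev/sdX), and the archive contents are fabricated.
cd "$(mktemp -d)"
printf 'fake image' > eos.img && tar -czf eos.tar.gz eos.img && rm eos.img

tar -xzf eos.tar.gz                                       # extract eos.img
dd if=eos.img of=sdcard.img bs=4M conv=fsync 2>/dev/null  # flash (real target: /dev/sdX)
cmp eos.img sdcard.img                                    # verify the copy is byte-identical
```

With a real device, double-check the target path first (`lsblk`), since `dd` will overwrite it without confirmation.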
## SHA-256 Checksums
```
c4b17658910dd10c37df134d5d5fdd6478f962ba1b803d24477d563d44430f96 eos.tar.gz
3a8b29878fe222a9d7cbf645c975b12805704b0f39c7daa46033d22380f9828c eos.zip
dedff3eb408ea411812b8f46e6c6ed32bfbd97f61ec2b85a6be40373c0528256 eos.img
```
# v0.3.2
## Highlights
- Autoscrolling for logs
- Improved connectivity between browser and Embassy
- Switch to Postgres for EOS database for better performance
- Multiple bug fixes and under-the-hood improvements
- Various UI/UX enhancements
- Removal of product keys
Update Hash (SHA256): `d8ce908b06baee6420b45be1119e5eb9341ba8df920d1e255f94d1ffb7cc4de9`
Image Hash (SHA256): `e035cd764e5ad9eb1c60e2f7bc3b9bd7248f42a91c69015c8a978a0f94b90bbb`
Note: This image was uploaded as a gzipped POSIX sparse TAR file. The recommended command for unpacking it on systems that support sparse files is `tar --format=posix --sparse -zxvf eos.tar.gz`
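To see why the sparse flags matter, the sketch below creates a sparse stand-in for `eos.img`, packs it, and unpacks it with the exact command recommended above; with `--sparse`, the holes in the image are preserved instead of being written out as real zero blocks.

```shell
# Sketch using a sparse stand-in image in a scratch directory.
cd "$(mktemp -d)"
truncate -s 100M eos.img                         # sparse: 100M apparent size, ~0 on disk
tar --format=posix --sparse -czf eos.tar.gz eos.img
rm eos.img

tar --format=posix --sparse -zxvf eos.tar.gz     # the recommended unpack command
du --apparent-size -h eos.img                    # apparent size: 100M
du -h eos.img                                    # actual disk usage: near zero
```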
## What's Changed
- formatting by @dr-bonez in #1698
- Update README.md by @kn0wmad in #1705
- Update README.md by @dr-bonez in #1703
- feat: migrate to Angular 14 and RxJS 7 by @waterplea in #1681
- 0312 multiple FE by @MattDHill in #1712
- Fix http requests by @MattDHill in #1717
- Add build-essential to README.md by @chrisguida in #1716
- write image to sparse-aware archive format by @dr-bonez in #1709
- fix: Add modification to the max_user_watches by @Blu-J in #1695
- [Feat] follow logs by @chrisguida in #1714
- Update README.md by @dr-bonez in #1728
- fix build for patch-db client for consistency by @elvece in #1722
- fix cli install by @chrisguida in #1720
- highlight instructions if not viewed by @MattDHill in #1731
- Feat: HttpReader by @redragonx in #1733
- Bugfix/dns by @dr-bonez in #1741
- add x86 build and run unittests to backend pipeline by @moerketh in #1682
- [Fix] websocket connecting and patchDB connection monitoring by @MattDHill in #1738
- Set pipeline job timeouts and add ca-certificates to test container by @moerketh in #1753
- Disable bluetooth properly #862 by @redragonx in #1745
- [feat]: resumable downloads by @dr-bonez in #1746
- Fix/empty properties by @elvece in #1764
- use hostname from patchDB as default server name by @MattDHill in #1758
- switch to postgresql by @dr-bonez in #1763
- remove product key from setup flow by @MattDHill in #1750
- pinning cargo dep versions for CLI by @redragonx in #1775
- fix: Js deep dir by @Blu-J in #1784
- 0.3.2 final cleanup by @dr-bonez in #1782
- expect ui marketplace to be undefined by @MattDHill in #1787
- fix init to exit on failure by @dr-bonez in #1788
- fix search to return more accurate results by @MattDHill in #1792
- update backend dependencies by @dr-bonez in #1796
- use base64 for HTTP headers by @dr-bonez in #1795
- fix: Bad cert of *.local.local is now fixed to correct. by @Blu-J in #1798
- fix duplicate patch updates, add scroll button to setup success by @MattDHill in #1800
- level_slider reclaiming that precious RAM memory by @k0gen in #1799
- stop leaking avahi clients by @dr-bonez in #1802
- fix: Deep is_parent was wrong and could be escapped by @Blu-J in #1801
- prevent cfg str generation from running forever by @dr-bonez in #1804
- better RPC error message by @MattDHill in #1803
- Bugfix/marketplace add by @elvece in #1805
- fix mrketplace swtiching by @MattDHill in #1810
- clean up code and logs by @MattDHill in #1809
- fix: Minor fix that matt wanted by @Blu-J in #1808
- onion replace instead of adding tor repository by @k0gen in #1813
- bank Start as embassy hostname from the begining by @k0gen in #1814
- add descriptions to marketplace list page by @elvece in #1812
- Fix/encryption by @elvece in #1811
- restructure initialization by @dr-bonez in #1816
- update license by @MattDHill in #1819
- perform system rebuild after updating by @dr-bonez in #1820
- ignore file not found error for delete by @dr-bonez in #1822
- Multiple by @MattDHill in #1823
- Bugfix/correctly package backend job by @moerketh in #1826
- update patch-db by @dr-bonez in #1831
- give name to logs file by @MattDHill in #1833
- play song during update by @dr-bonez in #1832
- Seed patchdb UI data by @elvece in #1835
- update patch db and enable logging by @dr-bonez in #1837
- reduce patch-db log level to warn by @dr-bonez in #1840
- update ts matches to fix properties ordering bug by @elvece in #1843
- handle multiple image tags having the same hash and increase timeout by @dr-bonez in #1844
- retry pgloader up to 5x by @dr-bonez in #1845
- show connection bar right away by @MattDHill in #1849
- dizzy Rebranding to embassyOS by @k0gen in #1851
- update patch db by @MattDHill in #1852
- camera_flash screenshots update by @k0gen in #1853
- disable concurrency and delete tmpdir before retry by @dr-bonez in #1846
## New Contributors
- @redragonx made their first contribution in #1733
# v0.3.1.1
## What's Changed
- whale2 docker stats fix by @k0gen in #1630
- update backend dependencies by @dr-bonez in #1637
- Fix/receipts health by @Blu-J in #1616
- return correct error on failed os download by @dr-bonez in #1636
- fix build by @dr-bonez in #1639
- Update product.yaml by @dr-bonez in #1638
- handle case where selected union enum is invalid after migration by @MattDHill in #1658
- fix: Resolve fighting with NM by @Blu-J in #1660
- sdk: don't allow mounts in inject actions by @chrisguida in #1653
- feat: Variable args by @Blu-J in #1667
- add readme to system-images folder by @elvece in #1665
- Mask chars beyond 16 by @MattDHill in #1666
- chore: Update to have the new version 0.3.1.1 by @Blu-J in #1668
- feat: Make the rename effect by @Blu-J in #1669
- fix migration, add logging by @dr-bonez in #1674
- run build checks only when relevant FE changes by @elvece in #1664
- trust local ca by @dr-bonez in #1670
- lower log level for docker deser fallback message by @dr-bonez in #1672
- refactor build process by @dr-bonez in #1675
- chore: enable strict mode by @waterplea in #1569
- draft releases notes for 0311 by @MattDHill in #1677
- add standby mode by @dr-bonez in #1671
- feat: atomic writing by @Blu-J in #1673
- allow server.update to update to current version by @dr-bonez in #1679
- allow falsey rpc response by @dr-bonez in #1680
- issue notification when individual package restore fails by @dr-bonez in #1685
- replace bang with question mark in html by @MattDHill in #1683
- only validate mounts for inject if eos >=0.3.1.1 by @dr-bonez in #1686
- add marketplace_url to backup metadata for service by @dr-bonez in #1688
- marketplace published at for service by @MattDHill in #1689
- sync data to fs before shutdown by @dr-bonez in #1690
- messaging for restart, shutdown, rebuild by @MattDHill in #1691
- honor shutdown from diagnostic ui by @dr-bonez in #1692
- ask for sudo password immediately during make by @dr-bonez in #1693
- sync blockdev after update by @dr-bonez in #1694
- set Matt as default assignee by @MattDHill in #1697
- NO_KEY for CI images by @dr-bonez in #1700
- fix typo by @dr-bonez in #1702
# v0.3.1
## What's Changed
- Feat bulk locking by @Blu-J in #1422
- Switching SSH keys to start9 user by @k0gen in #1321
- chore: Convert from ajv to ts-matches by @Blu-J in #1415
- Fix/id params by @elvece in #1414
- make nicer update sound by @ProofOfKeags in #1438
- adds product key to error message in setup flow when there is mismatch by @dr-bonez in #1436
- Update README.md to include yq by @cryptodread in #1385
- yin_yang For the peace of mind yin_yang by @k0gen in #1444
- Feature/update sound by @ProofOfKeags in #1439
- Feature/script packing by @ProofOfKeags in #1435
- rename ActionImplementation to PackageProcedure by @dr-bonez in #1448
- Chore/warning cleanse by @ProofOfKeags in #1447
- refactor packing to async by @ProofOfKeags in #1453
- Add nginx config for proxy redirect by @yzernik in #1421
- Proxy local frontend to remote backend by @elvece in #1452
- Feat/js action by @Blu-J in #1437
- Fix/making js work by @Blu-J in #1456
- fix: Dependency vs dependents by @Blu-J in #1462
- refactor: isolate network toast and login redirect to separate services by @waterplea in #1412
- Fix links in CONTRIBUTING.md, update ToC by @BBlackwo in #1463
- Feature/require script consistency by @ProofOfKeags in #1451
- Chore/version 0 3 1 0 by @Blu-J in #1475
- remove interactive TTY requirement from scripts by @moerketh in #1469
- Disable view in marketplace button when side-loaded by @BBlackwo in #1471
- Link to tor address on LAN setup page (#1277) by @BBlackwo in #1466
- UI version updates and welcome message for 0.3.1 by @elvece in #1479
- Update contribution and frontend readme by @BBlackwo in #1467
- Clean up config by @MattDHill in #1484
- Enable Control Groups for Docker containers by @k0gen in #1468
- Fix/patch db unwrap remove by @Blu-J in #1481
- handles spaces in working dir in make-image.sh by @moerketh in #1487
- UI cosmetic improvements by @MattDHill in #1486
- chore: fix the master by @Blu-J in #1495
- generate unique ca names based off of server id by @ProofOfKeags in #1500
- allow embassy-cli not as root by @dr-bonez in #1501
- fix: potential fix for the docker leaking the errors and such by @Blu-J in #1496
- Fix/memory leak docker by @Blu-J in #1505
- fixes serialization of regex pattern + description by @ProofOfKeags in #1509
- allow interactive TTY if available by @dr-bonez in #1508
- fix "missing proxy" error in embassy-cli by @dr-bonez in #1516
- Feat/js known errors by @Blu-J in #1514
- fixes a bug where nginx will crash if eos goes into diagnostic mode a… by @dr-bonez in #1506
- fix: restart/ uninstall sometimes didn't work by @Blu-J in #1527
- add "error_for_status" to static file downloads by @dr-bonez in #1532
- fixes #1169 by @dr-bonez in #1533
- disable unnecessary services by @dr-bonez in #1535
- chore: Update types to match embassyd by @Blu-J in #1539
- fix: found a unsaturaded args fix by @Blu-J in #1540
- chore: Update the lite types to include the union and enum by @Blu-J in #1542
- Feat: Make the js check for health by @Blu-J in #1543
- fix incorrect error message for deserialization in ValueSpecString by @dr-bonez in #1547
- fix dependency/dependent id issue by @dr-bonez in #1546
- add textarea to ValueSpecString by @dr-bonez in #1534
- Feat/js metadata by @Blu-J in #1548
- feat: uid/gid/mode added to metadata by @Blu-J in #1551
- Strict null checks by @waterplea in #1464
- fix backend builds for safe git config by @elvece in #1549
- update should send version not version spec by @elvece in #1559
- chore: Add tracing for debuging the js procedure slowness by @Blu-J in #1552
- Reset password through setup wizard by @MattDHill in #1490
- feat: Make sdk by @Blu-J in #1564
- fix: Missing a feature flat cfg by @Blu-J in #1563
- fixed sentence that didn't make sense by @BitcoinMechanic in #1565
- refactor(patch-db): use PatchDB class declaratively by @waterplea in #1562
- fix bugs with config and clean up dev options by @MattDHill in #1558
- fix: Make it so we only need the password on the backup by @Blu-J in #1566
- kill all sessions and remove ripple effect by @MattDHill in #1567
- adjust service marketplace button for installation source relevance by @elvece in #1571
- fix connection failure display monitoring and other style changes by @MattDHill in #1573
- add dns server to embassy-os by @dr-bonez in #1572
- Fix/mask generic inputs by @elvece in #1570
- Fix/sideload icon type by @elvece in #1577
- add avahi conditional compilation flags to dns by @dr-bonez in #1579
- selective backups and better drive selection interface by @MattDHill in #1576
- Feat/use modern tor by @kn0wmad in #1575
- update welcome notes for 031 by @MattDHill in #1580
- fix: Properties had a null description by @Blu-J in #1581
- fix backup lock ordering by @dr-bonez in #1582
- Bugfix/backup lock order by @dr-bonez in #1583
- preload redacted and visibility hidden by @MattDHill in #1584
- turn chevron red in config if error by @MattDHill in #1586
- switch to utc by @dr-bonez in #1587
- update patchdb for array patch fix by @elvece in #1588
- filter package ids when backing up by @dr-bonez in #1589
- add select/deselect all to backups and enum lists by @elvece in #1590
- fix: Stop the buffer from dropped pre-maturly by @Blu-J in #1591
- chore: commit the snapshots by @Blu-J in #1592
- nest new entries and message updates better by @MattDHill in #1595
- fix html parsing in logs by @elvece in #1598
- don't crash service if io-format is set for main by @dr-bonez in #1599
- strip html from colors from logs by @elvece in #1604
- feat: fetch effect by @Blu-J in #1605
- Fix/UI misc by @elvece in #1606
- display bottom item in backup list and refactor for cleanliness by @MattDHill in #1609
# v0.3.0.3
## What's Changed
- refactor: decompose app component by @waterplea in #1359
- Update Makefile by @kn0wmad in #1400
- ⬐ smarter wget by @k0gen in #1401
- prevent the kernel from OOMKilling embassyd by @dr-bonez in #1402
- attempt to heal when health check passes by @dr-bonez in #1420
- Feat new locking by @Blu-J in #1384
- version bump by @dr-bonez in #1423
- Update server-show.page.ts by @chrisguida in #1424
- Bump async from 2.6.3 to 2.6.4 in /frontend by @dependabot in #1426
- Update index.html by @mirkoRainer in #1419
## New Contributors
- @dependabot made their first contribution in #1426
- @mirkoRainer made their first contribution in #1419
# v0.3.0.2
- Minor compatibility fixes
- #1392
- #1390
- #1388
# v0.3.0.1
Minor bugfixes and performance improvements
# v0.3.0
- Websockets
- Real-time sync
- Patch DB
- Closely mirrors FE and BE state. Most operating systems are connected directly to their GUI; here the GUI is served over the web, and Patch DB with websockets closes the perceptual gap inherent in that challenge.
- Switch kernel from Raspbian to Ubuntu
- 64 bit
- Possibility for alternative hardware
- Merging of lifeline, agent, and appmgr into embassyd
- Elimination of Haskell in favor of pure Rust
- Unified API for interacting with the OS
- Easier to build from source
- OS (quarantined from OS and service data)
- Kernel/boot
- Persistent metadata (disk guid, product key)
- Rootfs (the OS)
- Reserved (for updates) - swaps with rootfs
- Revamped OS updates
- Progress indicators
- Non-blocking
- Simple swap on reboot
- Revamped setup flow
- Elimination of Setup App (Apple/Google dependencies gone)
- Setup Wizard on http://embassy.local
- Revamped service config
- Dynamic, validated forms
- Diagnostic UI
- Missing disk, wrong disk, corrupt disk
- Turing complete API for actions, backup/restore, config, properties, notifications, health checks, and dependency requirements
- Optional, arbitrary inputs for actions
- Install, update, recover progress for apps
- Multiple interfaces
- E.g. rpc, p2p, ui
- Health checks
- Developer defined
- Internal, dependencies, and/or external
- Full Embassy backup (diff-based)
- External drive support/requirement
- Single at first
- Groundwork for extension and mirror drives
- Disk encryption
- Random key encrypted with static value
- Groundwork for swapping static value with chosen password
- Session Management
- List all active sessions
- Option to kill
- More robust and extensive logs
- Donations


@@ -1,339 +1,119 @@
<!-- omit in toc -->
# Contributing to StartOS
First off, thanks for taking the time to contribute! ❤️
This guide is for contributing to the StartOS. If you are interested in packaging a service for StartOS, visit the [service packaging guide](https://docs.start9.com/latest/developer-docs/). If you are interested in promoting, providing technical support, creating tutorials, or helping in other ways, please visit the [Start9 website](https://start9.com/contribute).
All types of contributions are encouraged and valued. See the
[Table of Contents](#table-of-contents) for different ways to help and details
about how this project handles them. Please make sure to read the relevant
section before making your contribution. It will make it a lot easier for us
maintainers and smooth out the experience for all involved. The community looks
forward to your contributions. 🎉
> And if you like the project, but just don't have time to contribute, that's
> fine. There are other easy ways to support the project and show your
> appreciation, which we would also be very happy about:
>
> - Star the project
> - Tweet about it
> - Refer this project in your project's readme
> - Mention the project at local meetups and tell your friends/colleagues
> - Buy a [Start9 server](https://start9.com)
## Collaboration
<!-- omit in toc -->
- [Matrix](https://matrix.to/#/#community-dev:matrix.start9labs.com)
- [Telegram](https://t.me/start9_labs/47471)
## Table of Contents
- [I Have a Question](#i-have-a-question)
- [I Want To Contribute](#i-want-to-contribute)
- [Reporting Bugs](#reporting-bugs)
- [Suggesting Enhancements](#suggesting-enhancements)
- [Project Structure](#project-structure)
- [Your First Code Contribution](#your-first-code-contribution)
- [Setting Up Your Development Environment](#setting-up-your-development-environment)
- [Building The Image](#building-the-image)
- [Improving The Documentation](#improving-the-documentation)
- [Styleguides](#styleguides)
- [Formatting](#formatting)
- [Atomic Commits](#atomic-commits)
- [Commit Messages](#commit-messages)
- [Pull Requests](#pull-requests)
- [Rebasing Changes](#rebasing-changes)
- [Join The Discussion](#join-the-discussion)
- [Join The Project Team](#join-the-project-team)
## Project Structure
```bash
/
├── assets/
├── core/
├── build/
├── debian/
├── web/
├── image-recipe/
├── patch-db
└── system-images/
```
#### assets
Screenshots for the StartOS README
#### core
An API, daemon (startd), CLI (start-cli), and SDK (start-sdk) that together provide the core functionality of StartOS.
#### build
Auxiliary files and scripts to include in deployed StartOS images
#### debian
Maintainer scripts for the StartOS Debian package
#### web
Web UIs served under various conditions and used to interact with StartOS APIs.
#### image-recipe
Scripts for building StartOS images
#### patch-db (submodule)
A diff-based data store used to synchronize data between the web interfaces and the server.
#### system-images
Docker images that assist with creating backups.
## I Have a Question
> If you want to ask a question, we assume that you have read the available
> [Documentation](https://docs.start9labs.com).
Before you ask a question, it is best to search for existing
[Issues](https://github.com/Start9Labs/start-os/issues) that might help you.
In case you have found a suitable issue and still need clarification, you can
write your question in that issue. It is also advisable to search the internet
for answers first.
If you still feel the need to ask a question and need clarification, we
recommend the following:
- Open an [Issue](https://github.com/Start9Labs/start-os/issues/new).
- Provide as much context as you can about what you're running into.
- Provide project and platform versions, depending on what seems relevant.
We will then take care of the issue as soon as possible.
<!--
You might want to create a separate issue tag for questions and include it in this description. People should then tag their issues accordingly.
Depending on how large the project is, you may want to outsource the questioning, e.g. to Stack Overflow or Gitter. You may add additional contact and information possibilities:
- IRC
- Slack
- Gitter
- Stack Overflow tag
- Blog
- FAQ
- Roadmap
- E-Mail List
- Forum
-->
## I Want To Contribute
> ### Legal Notice <!-- omit in toc -->
>
> When contributing to this project, you must agree that you have authored 100%
> of the content, that you have the necessary rights to the content and that the
> content you contribute may be provided under the project license.
### Reporting Bugs
<!-- omit in toc -->
#### Before Submitting a Bug Report
A good bug report shouldn't leave others needing to chase you up for more
information. Therefore, we ask you to investigate carefully, collect information
and describe the issue in detail in your report. Please complete the following
steps in advance to help us fix any potential bug as fast as possible.
- Make sure that you are using the latest version.
- Determine if your bug is really a bug and not an error on your side, e.g.
  using incompatible environment components/versions. (Make sure that you have
  read the [documentation](https://start9.com/latest/user-manual). If you are
  looking for support, you might want to check [this section](#i-have-a-question).)
- To see if other users have experienced (and potentially already solved) the
  same issue you are having, check whether a report for your bug or error
  already exists in the
  [bug tracker](https://github.com/Start9Labs/start-os/issues?q=label%3Abug).
- Also make sure to search the internet (including Stack Overflow) to see if
users outside of the GitHub community have discussed the issue.
- Collect information about the bug:
- Stack trace (Traceback)
- Client OS, Platform and Version (Windows/Linux/macOS/iOS/Android,
Firefox/Tor Browser/Consulate)
- Version of the interpreter, compiler, SDK, runtime environment, package
manager, depending on what seems relevant.
- Possibly your input and the output
- Can you reliably reproduce the issue? And can you also reproduce it with
older versions?
<!-- omit in toc -->
#### How Do I Submit a Good Bug Report?
> You must never report security-related issues, vulnerabilities, or bugs to the
> issue tracker, or elsewhere in public. Instead, sensitive bugs must be sent by
> email to <security@start9labs.com>.
<!-- You may add a PGP key to allow the messages to be sent encrypted as well. -->
We use GitHub issues to track bugs and errors. If you run into an issue with the
project:
- Open an [Issue](https://github.com/Start9Labs/start-os/issues/new/choose)
selecting the appropriate type.
- Explain the behavior you would expect and the actual behavior.
- Please provide as much context as possible and describe the _reproduction
steps_ that someone else can follow to recreate the issue on their own. This
usually includes your code. For good bug reports you should isolate the
problem and create a reduced test case.
- Provide the information you collected in the previous section.
Once it's filed:
- The project team will label the issue accordingly.
- A team member will try to reproduce the issue with your provided steps. If
there are no reproduction steps or no obvious way to reproduce the issue, the
team will ask you for those steps and mark the issue as `Question`. Bugs with
the `Question` tag will not be addressed until they are answered.
- If the team is able to reproduce the issue, it will be marked with a
  scoping-level tag, as well as possibly other tags (such as `Security`), and
  the issue will be left to be
  [implemented by someone](#your-first-code-contribution).
<!-- You might want to create an issue template for bugs and errors that can be used as a guide and that defines the structure of the information to be included. If you do so, reference it here in the description. -->
### Suggesting Enhancements
This section guides you through submitting an enhancement suggestion for StartOS, **including completely new features and minor improvements to existing
functionality**. Following these guidelines will help maintainers and the
community to understand your suggestion and find related suggestions.
<!-- omit in toc -->
#### Before Submitting an Enhancement
- Make sure that you are using the latest version.
- Read the [documentation](https://start9.com/latest/user-manual) carefully and
find out if the functionality is already covered, maybe by an individual
configuration.
- Perform a [search](https://github.com/Start9Labs/start-os/issues) to see if
the enhancement has already been suggested. If it has, add a comment to the
existing issue instead of opening a new one.
- Find out whether your idea fits with the scope and aims of the project. It's
up to you to make a strong case to convince the project's developers of the
merits of this feature. Keep in mind that we want features that will be useful
to the majority of our users and not just a small subset. If you're just
targeting a minority of users, consider writing an add-on/plugin library.
<!-- omit in toc -->
#### How Do I Submit a Good Enhancement Suggestion?
Enhancement suggestions are tracked as
[GitHub issues](https://github.com/Start9Labs/start-os/issues).
- Use a **clear and descriptive title** for the issue to identify the
suggestion.
- Provide a **step-by-step description of the suggested enhancement** in as many
details as possible.
- **Describe the current behavior** and **explain which behavior you expected to
see instead** and why. At this point you can also tell which alternatives do
not work for you.
- You may want to **include screenshots and animated GIFs** which help you
demonstrate the steps or point out the part which the suggestion is related
to. You can use [this tool](https://www.cockos.com/licecap/) to record GIFs on
macOS and Windows, and [this tool](https://github.com/colinkeenan/silentcast)
or [this tool](https://github.com/GNOME/byzanz) on Linux.
<!-- this should only be included if the project has a GUI -->
- **Explain why this enhancement would be useful** to most StartOS users. You
  may also want to point out other projects that solved it better, which could
  serve as inspiration.
<!-- You might want to create an issue template for enhancement suggestions that can be used as a guide and that defines the structure of the information to be included. If you do so, reference it here in the description. -->
### Project Structure
StartOS is composed of the following components. Please visit the README for
each component to understand the dependency requirements and installation
instructions.
- [`backend`](backend/README.md) (Rust) is a command line utility, daemon, and
software development kit that sets up and manages services and their
environments, provides the interface for the ui, manages system state, and
provides utilities for packaging services for StartOS.
- [`build`](build/README.md) contains scripts and files necessary for deploying
  StartOS to a Debian/Raspbian system.
- [`frontend`](frontend/README.md) (TypeScript, Ionic, Angular) is the code that
  is deployed to the browser to provide the user interface for StartOS.
- `projects/ui` - Code for the user interface that is displayed when StartOS
is running normally.
- `projects/setup-wizard` - Code for the user interface
  that is displayed during the setup and recovery process for StartOS.
- `projects/diagnostic-ui` - Code for the user interface that is displayed
when something has gone wrong with starting up StartOS, which provides
helpful debugging tools.
- `libs` (Rust) is a set of standalone crates that were separated out of
  `backend` for the purpose of portability.
- `patch-db` - A diff based data store that is used to synchronize data between
the front and backend.
- Notably, `patch-db` has a
[client](https://github.com/Start9Labs/patch-db/tree/master/client) with its
own dependency and installation requirements.
- `system-images` - (Docker, Rust) A suite of utility Docker images that are
  preloaded with StartOS to assist with functions relating to services (e.g.
  configuration, backups, health checks).
### Your First Code Contribution
#### Setting Up Your Development Environment
First, clone the StartOS repository and, from the project root, pull in the
submodules for dependent libraries.
#### Clone the StartOS repository
```sh
git clone https://github.com/Start9Labs/start-os.git
cd start-os
```
#### Load the PatchDB submodule
```sh
git submodule update --init --recursive
```
Depending on which component of the ecosystem you are interested in
contributing to, follow the installation requirements listed in that
component's README (linked [above](#project-structure)).
#### Continue to your project of interest for additional instructions:
- [`core`](core/README.md)
- [`web-interfaces`](web-interfaces/README.md)
- [`build`](build/README.md)
- [`patch-db`](https://github.com/Start9Labs/patch-db)
#### Building
This project uses [GNU Make](https://www.gnu.org/software/make/) to build its
components. To build any specific component, run `make <TARGET>`, replacing
`<TARGET>` with the name of the target you'd like to build.
This step sets up an environment in which to test your code changes if you do
not yet have a StartOS device.
### Requirements
- [GNU Make](https://www.gnu.org/software/make/)
- [Docker](https://docs.docker.com/get-docker/)
- [NodeJS v18.15.0](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)
- [sed](https://www.gnu.org/software/sed/)
- [grep](https://www.gnu.org/software/grep/)
- [awk](https://www.gnu.org/software/gawk/)
- [jq](https://jqlang.github.io/jq/)
- [gzip](https://www.gnu.org/software/gzip/)
- [brotli](https://github.com/google/brotli)
- Building the Raspberry Pi image:
  - Requirements:
    - `ext4fs` (available if running on the Linux kernel)
    - [Docker](https://docs.docker.com/get-docker/)
    - GNU Make
  - Steps:
    - See setup instructions [here](build/README.md)
    - Run `make startos-raspi.img ARCH=aarch64` from the project root
### Environment variables
- `PLATFORM`: which platform you would like to build for. Must be one of `x86_64`, `x86_64-nonfree`, `aarch64`, `aarch64-nonfree`, `raspberrypi`
- NOTE: `nonfree` images are for including `nonfree` firmware packages in the built ISO
- `ENVIRONMENT`: a hyphen-separated set of feature flags to enable
  - `dev`: enables password SSH (INSECURE!) and does not compress frontends
  - `unstable`: enables assertions that raise errors on unexpected inconsistencies; these are undesirable in production for performance or reliability reasons
  - `docker`: use `docker` instead of `podman`
- `GIT_BRANCH_AS_HASH`: set to `1` to use the current git branch name as the git hash, so that the project does not need to be rebuilt on each commit
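The `ENVIRONMENT` flags are matched with a regex of the form `(^|-)flag($|-)` against the hyphen-separated string (this is the same pattern the Makefile uses for its `unstable` check). A minimal bash sketch of that test; the `has_flag` helper is illustrative, not part of the repo:

```shell
#!/usr/bin/env bash
# Illustrative helper: succeeds if a feature flag is present in a
# hyphen-separated ENVIRONMENT string such as "dev-unstable".
has_flag() {
  local env="$1" flag="$2"
  [[ "$env" =~ (^|-)$flag($|-) ]]
}

ENVIRONMENT=dev-unstable
has_flag "$ENVIRONMENT" dev      && echo "dev enabled"
has_flag "$ENVIRONMENT" unstable && echo "unstable enabled"
has_flag "$ENVIRONMENT" docker   || echo "docker not enabled"
```

Note that the `(^|-)…($|-)` anchors prevent partial matches, so `ENVIRONMENT=developer` does not enable the `dev` flag.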
### Improving The Documentation
You can find the repository for Start9's documentation
[here](https://github.com/Start9Labs/documentation). If there is something you
would like to see added, let us know, or create an issue yourself.
Contributions addressing missing or incorrect information, broken links,
requested additions, or general style improvements are welcome.
Contributions in the form of setup guides for integrations with external
applications are highly encouraged. If you struggled through a process and would
like to share your steps with others, check out the docs for each
[service](https://github.com/Start9Labs/documentation/blob/master/source/user-manuals/available-services/index.rst)
we support. The wrapper repos contain sections for adding integration guides,
such as this
[one](https://github.com/Start9Labs/bitcoind-wrapper/tree/master/docs). These
not only help others in the community, but also inform how we can create a more
seamless and intuitive experience.
## Styleguides
### Formatting
Each component of StartOS contains its own style guide. Code must be formatted
with the formatter designated for each component. These are outlined within each
component folder's README.
### Atomic Commits
Commits
[should be atomic](https://en.wikipedia.org/wiki/Atomic_commit#Atomic_commit_convention)
and diffs should be easy to read. Do not mix any formatting fixes or code moves
with actual code changes.
### Commit Messages
If a commit touches only one component, prefix the message with the affected
component, e.g. `backend: update to tokio v0.3`.
### Pull Requests
The body of a pull request should contain sufficient description of what the
changes do, as well as a justification. You should include references to any
relevant [issues](https://github.com/Start9Labs/start-os/issues).
### Rebasing Changes
When a pull request conflicts with the target branch, you may be asked to rebase
it on top of the current target branch. The `git rebase` command will take care
of rebuilding your commits on top of the new base.
This project aims to have a clean git history, where code changes are only made
in non-merge commits. This simplifies auditability because merge commits can be
assumed to not contain arbitrary code changes.
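As a concrete sketch of the rebase flow, the following runs in a throwaway repository; in a real PR you would instead rebase your branch onto the updated target branch (e.g. `origin/master`) and then `git push --force-with-lease`:

```shell
#!/usr/bin/env bash
set -e
# Throwaway repo demonstrating `git rebase`: the feature commit is
# rebuilt on top of the target branch's new tip, leaving a linear history.
dir=$(mktemp -d) && cd "$dir"
git init -q repo && cd repo
git config user.email dev@example.com && git config user.name dev
target=$(git symbolic-ref --short HEAD)   # default branch name (master/main)

echo base > file && git add file && git commit -q -m "initial"
git checkout -q -b feature
echo feature > feature.txt && git add feature.txt && git commit -q -m "feature work"

git checkout -q "$target"
echo upstream > upstream.txt && git add upstream.txt && git commit -q -m "upstream change"

git checkout -q feature
git rebase -q "$target"       # rebuild the feature commit on the new base
git log --format=%s           # feature work / upstream change / initial
```

After the rebase, `feature` contains no merge commit: its history is the rebased feature commit on top of the upstream change.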
## Join The Discussion
Current or aspiring contributors? Join our community developer
[Matrix channel](https://matrix.to/#/#community-dev:matrix.start9labs.com).
Just interested in or using the project? Join our community
[Telegram](https://t.me/start9_labs) or
[Matrix](https://matrix.to/#/#community:matrix.start9labs.com).
## Join The Project Team
Interested in becoming a part of the Start9 Labs team? Send an email to
<jobs@start9labs.com>.
<!-- omit in toc -->
## Attribution
This guide is based on the **contributing-gen**.
[Make your own](https://github.com/bttger/contributing-gen)!
### Useful Make Targets
- `iso`: Create a full `.iso` image
  - Only possible from Debian
  - Not available for `PLATFORM=raspberrypi`
  - Additional Requirements:
    - [debspawn](https://github.com/lkhq/debspawn)
- `img`: Create a full `.img` image
  - Only possible from Debian
  - Only available for `PLATFORM=raspberrypi`
  - Additional Requirements:
    - [debspawn](https://github.com/lkhq/debspawn)
- `format`: Run automatic code formatting for the project
  - Additional Requirements:
    - [rust](https://rustup.rs/)
- `test`: Run automated tests for the project
  - Additional Requirements:
    - [rust](https://rustup.rs/)
- `update`: Deploy the current working project to a device over ssh as if through an over-the-air update
  - Requires an argument `REMOTE`, which is the ssh address of the device, e.g. `start9@192.168.122.2`
- `reflash`: Deploy the current working project to a device over ssh as if using a live `iso` image to reflash it
  - Requires an argument `REMOTE`, which is the ssh address of the device, e.g. `start9@192.168.122.2`
- `update-overlay`: Deploy the current working project to a device over ssh to the in-memory overlay without restarting it
  - WARNING: changes will be reverted after the device is rebooted
  - WARNING: changes to `init` will not take effect, as the device is already initialized
  - Requires an argument `REMOTE`, which is the ssh address of the device, e.g. `start9@192.168.122.2`
- `wormhole`: Deploy the `startbox` binary to a device using [magic-wormhole](https://github.com/magic-wormhole/magic-wormhole)
  - When the build is complete, it will emit a command to paste into the shell of the device to upgrade it
  - Additional Requirements:
    - [magic-wormhole](https://github.com/magic-wormhole/magic-wormhole)
- `clean`: Delete all compiled artifacts
Makefile
@@ -6,26 +6,26 @@ BASENAME := $(shell ./basename.sh)
PLATFORM := $(shell if [ -f ./PLATFORM.txt ]; then cat ./PLATFORM.txt; else echo unknown; fi)
ARCH := $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo aarch64; else echo $(PLATFORM) | sed 's/-nonfree$$//g'; fi)
IMAGE_TYPE=$(shell if [ "$(PLATFORM)" = raspberrypi ]; then echo img; else echo iso; fi)
EMBASSY_BINS := backend/target/$(ARCH)-unknown-linux-gnu/release/startbox libs/target/aarch64-unknown-linux-musl/release/embassy_container_init libs/target/x86_64-unknown-linux-musl/release/embassy_container_init
EMBASSY_UIS := frontend/dist/raw/ui frontend/dist/raw/setup-wizard frontend/dist/raw/diagnostic-ui frontend/dist/raw/install-wizard
BINS := core/target/$(ARCH)-unknown-linux-gnu/release/startbox core/target/aarch64-unknown-linux-musl/release/container-init core/target/x86_64-unknown-linux-musl/release/container-init
WEB_UIS := web/dist/raw/ui web/dist/raw/setup-wizard web/dist/raw/diagnostic-ui web/dist/raw/install-wizard
BUILD_SRC := $(shell git ls-files build) build/lib/depends build/lib/conflicts
DEBIAN_SRC := $(shell git ls-files debian/)
IMAGE_RECIPE_SRC := $(shell git ls-files image-recipe/)
EMBASSY_SRC := backend/startd.service $(BUILD_SRC)
STARTD_SRC := core/startos/startd.service $(BUILD_SRC)
COMPAT_SRC := $(shell git ls-files system-images/compat/)
UTILS_SRC := $(shell git ls-files system-images/utils/)
BINFMT_SRC := $(shell git ls-files system-images/binfmt/)
BACKEND_SRC := $(shell git ls-files backend) $(shell git ls-files --recurse-submodules patch-db) $(shell git ls-files libs) frontend/dist/static
FRONTEND_SHARED_SRC := $(shell git ls-files frontend/projects/shared) $(shell ls -p frontend/ | grep -v / | sed 's/^/frontend\//g') frontend/node_modules frontend/config.json patch-db/client/dist frontend/patchdb-ui-seed.json
FRONTEND_UI_SRC := $(shell git ls-files frontend/projects/ui)
FRONTEND_SETUP_WIZARD_SRC := $(shell git ls-files frontend/projects/setup-wizard)
FRONTEND_DIAGNOSTIC_UI_SRC := $(shell git ls-files frontend/projects/diagnostic-ui)
FRONTEND_INSTALL_WIZARD_SRC := $(shell git ls-files frontend/projects/install-wizard)
CORE_SRC := $(shell git ls-files core) $(shell git ls-files --recurse-submodules patch-db) web/dist/static web/patchdb-ui-seed.json $(GIT_HASH_FILE)
WEB_SHARED_SRC := $(shell git ls-files web/projects/shared) $(shell ls -p web/ | grep -v / | sed 's/^/web\//g') web/node_modules web/config.json patch-db/client/dist web/patchdb-ui-seed.json
WEB_UI_SRC := $(shell git ls-files web/projects/ui)
WEB_SETUP_WIZARD_SRC := $(shell git ls-files web/projects/setup-wizard)
WEB_DIAGNOSTIC_UI_SRC := $(shell git ls-files web/projects/diagnostic-ui)
WEB_INSTALL_WIZARD_SRC := $(shell git ls-files web/projects/install-wizard)
PATCH_DB_CLIENT_SRC := $(shell git ls-files --recurse-submodules patch-db/client)
GZIP_BIN := $(shell which pigz || which gzip)
TAR_BIN := $(shell which gtar || which tar)
COMPILED_TARGETS := $(EMBASSY_BINS) system-images/compat/docker-images/$(ARCH).tar system-images/utils/docker-images/$(ARCH).tar system-images/binfmt/docker-images/$(ARCH).tar
ALL_TARGETS := $(EMBASSY_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE) $(COMPILED_TARGETS) $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo cargo-deps/aarch64-unknown-linux-gnu/release/pi-beep; fi) $(shell /bin/bash -c 'if [[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]; then echo cargo-deps/$(ARCH)-unknown-linux-gnu/release/tokio-console; fi') $(PLATFORM_FILE)
COMPILED_TARGETS := $(BINS) system-images/compat/docker-images/$(ARCH).tar system-images/utils/docker-images/$(ARCH).tar system-images/binfmt/docker-images/$(ARCH).tar
ALL_TARGETS := $(STARTD_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE) $(COMPILED_TARGETS) $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo cargo-deps/aarch64-unknown-linux-gnu/release/pi-beep; fi) $(shell /bin/bash -c 'if [[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]; then echo cargo-deps/$(ARCH)-unknown-linux-gnu/release/tokio-console; fi') $(PLATFORM_FILE)
ifeq ($(REMOTE),)
mkdir = mkdir -p $1
@@ -48,7 +48,7 @@ endif
.DELETE_ON_ERROR:
.PHONY: all metadata install clean format sdk snapshots frontends ui backend reflash deb $(IMAGE_TYPE) squashfs sudo wormhole docker-buildx
.PHONY: all metadata install clean format sdk snapshots uis ui reflash deb $(IMAGE_TYPE) squashfs sudo wormhole test
all: $(ALL_TARGETS)
@@ -60,12 +60,11 @@ sudo:
clean:
rm -f system-images/**/*.tar
rm -rf system-images/compat/target
rm -rf backend/target
rm -rf frontend/.angular
rm -f frontend/config.json
rm -rf frontend/node_modules
rm -rf frontend/dist
rm -rf libs/target
rm -rf core/target
rm -rf web/.angular
rm -f web/config.json
rm -rf web/node_modules
rm -rf web/dist
rm -rf patch-db/client/node_modules
rm -rf patch-db/client/dist
rm -rf patch-db/target
@@ -79,11 +78,14 @@ clean:
rm -f VERSION.txt
format:
cd backend && cargo +nightly fmt
cd libs && cargo +nightly fmt
cd core && cargo +nightly fmt
test: $(BACKEND_SRC) $(ENVIRONMENT_FILE)
cd backend && cargo build && cargo test
cd libs && cargo test
sdk:
cd backend/ && ./install-sdk.sh
cd core && ./install-sdk.sh
deb: results/$(BASENAME).deb
@@ -103,7 +105,7 @@ results/$(BASENAME).$(IMAGE_TYPE) results/$(BASENAME).squashfs: $(IMAGE_RECIPE_S
# For creating os images. DO NOT USE
install: $(ALL_TARGETS)
$(call mkdir,$(DESTDIR)/usr/bin)
$(call cp,backend/target/$(ARCH)-unknown-linux-gnu/release/startbox,$(DESTDIR)/usr/bin/startbox)
$(call cp,core/target/$(ARCH)-unknown-linux-gnu/release/startbox,$(DESTDIR)/usr/bin/startbox)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/startd)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/start-cli)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/start-sdk)
@@ -114,7 +116,7 @@ install: $(ALL_TARGETS)
if /bin/bash -c '[[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]'; then $(call cp,cargo-deps/$(ARCH)-unknown-linux-gnu/release/tokio-console,$(DESTDIR)/usr/bin/tokio-console); fi
$(call mkdir,$(DESTDIR)/lib/systemd/system)
$(call cp,backend/startd.service,$(DESTDIR)/lib/systemd/system/startd.service)
$(call cp,core/startos/startd.service,$(DESTDIR)/lib/systemd/system/startd.service)
$(call mkdir,$(DESTDIR)/usr/lib)
$(call rm,$(DESTDIR)/usr/lib/startos)
@@ -126,8 +128,8 @@ install: $(ALL_TARGETS)
$(call cp,VERSION.txt,$(DESTDIR)/usr/lib/startos/VERSION.txt)
$(call mkdir,$(DESTDIR)/usr/lib/startos/container)
$(call cp,libs/target/aarch64-unknown-linux-musl/release/embassy_container_init,$(DESTDIR)/usr/lib/startos/container/embassy_container_init.arm64)
$(call cp,libs/target/x86_64-unknown-linux-musl/release/embassy_container_init,$(DESTDIR)/usr/lib/startos/container/embassy_container_init.amd64)
$(call cp,core/target/aarch64-unknown-linux-musl/release/container-init,$(DESTDIR)/usr/lib/startos/container/container-init.arm64)
$(call cp,core/target/x86_64-unknown-linux-musl/release/container-init,$(DESTDIR)/usr/lib/startos/container/container-init.amd64)
$(call mkdir,$(DESTDIR)/usr/lib/startos/system-images)
$(call cp,system-images/compat/docker-images/$(ARCH).tar,$(DESTDIR)/usr/lib/startos/system-images/compat.tar)
@@ -143,8 +145,8 @@ update-overlay: $(ALL_TARGETS)
$(MAKE) install REMOTE=$(REMOTE) SSHPASS=$(SSHPASS) PLATFORM=$(PLATFORM)
$(call ssh,"sudo systemctl start startd")
wormhole: backend/target/$(ARCH)-unknown-linux-gnu/release/startbox
@wormhole send backend/target/$(ARCH)-unknown-linux-gnu/release/startbox 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo /usr/lib/startos/scripts/chroot-and-upgrade \"cd /usr/bin && rm startbox && wormhole receive --accept-file %s && chmod +x startbox\"\n", $$3 }'
wormhole: core/target/$(ARCH)-unknown-linux-gnu/release/startbox
@wormhole send core/target/$(ARCH)-unknown-linux-gnu/release/startbox 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo /usr/lib/startos/scripts/chroot-and-upgrade \"cd /usr/bin && rm startbox && wormhole receive --accept-file %s && chmod +x startbox\"\n", $$3 }'
update: $(ALL_TARGETS)
@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
@@ -164,64 +166,64 @@ upload-ota: results/$(BASENAME).squashfs
build/lib/depends build/lib/conflicts: build/dpkg-deps/*
build/dpkg-deps/generate.sh
system-images/compat/docker-images/$(ARCH).tar: $(COMPAT_SRC) backend/Cargo.lock | docker-buildx
system-images/compat/docker-images/$(ARCH).tar: $(COMPAT_SRC) core/Cargo.lock
cd system-images/compat && make docker-images/$(ARCH).tar && touch docker-images/$(ARCH).tar
system-images/utils/docker-images/$(ARCH).tar: $(UTILS_SRC) | docker-buildx
system-images/utils/docker-images/$(ARCH).tar: $(UTILS_SRC)
cd system-images/utils && make docker-images/$(ARCH).tar && touch docker-images/$(ARCH).tar
system-images/binfmt/docker-images/$(ARCH).tar: $(BINFMT_SRC) | docker-buildx
system-images/binfmt/docker-images/$(ARCH).tar: $(BINFMT_SRC)
cd system-images/binfmt && make docker-images/$(ARCH).tar && touch docker-images/$(ARCH).tar
snapshots: libs/snapshot_creator/Cargo.toml
cd libs/ && ./build-v8-snapshot.sh
cd libs/ && ./build-arm-v8-snapshot.sh
snapshots: core/snapshot-creator/Cargo.toml
cd core/ && ARCH=aarch64 ./build-v8-snapshot.sh
cd core/ && ARCH=x86_64 ./build-v8-snapshot.sh
$(EMBASSY_BINS): $(BACKEND_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) frontend/patchdb-ui-seed.json
cd backend && ARCH=$(ARCH) ./build-prod.sh
touch $(EMBASSY_BINS)
$(BINS): $(CORE_SRC) $(ENVIRONMENT_FILE)
cd core && ARCH=$(ARCH) ./build-prod.sh
touch $(BINS)
frontend/node_modules: frontend/package.json
npm --prefix frontend ci
web/node_modules: web/package.json
npm --prefix web ci
frontend/dist/raw/ui: $(FRONTEND_UI_SRC) $(FRONTEND_SHARED_SRC)
npm --prefix frontend run build:ui
web/dist/raw/ui: $(WEB_UI_SRC) $(WEB_SHARED_SRC)
npm --prefix web run build:ui
frontend/dist/raw/setup-wizard: $(FRONTEND_SETUP_WIZARD_SRC) $(FRONTEND_SHARED_SRC)
npm --prefix frontend run build:setup
web/dist/raw/setup-wizard: $(WEB_SETUP_WIZARD_SRC) $(WEB_SHARED_SRC)
npm --prefix web run build:setup
frontend/dist/raw/diagnostic-ui: $(FRONTEND_DIAGNOSTIC_UI_SRC) $(FRONTEND_SHARED_SRC)
npm --prefix frontend run build:dui
web/dist/raw/diagnostic-ui: $(WEB_DIAGNOSTIC_UI_SRC) $(WEB_SHARED_SRC)
npm --prefix web run build:dui
frontend/dist/raw/install-wizard: $(FRONTEND_INSTALL_WIZARD_SRC) $(FRONTEND_SHARED_SRC)
npm --prefix frontend run build:install-wiz
web/dist/raw/install-wizard: $(WEB_INSTALL_WIZARD_SRC) $(WEB_SHARED_SRC)
npm --prefix web run build:install-wiz
frontend/dist/static: $(EMBASSY_UIS) $(ENVIRONMENT_FILE)
web/dist/static: $(WEB_UIS) $(ENVIRONMENT_FILE)
./compress-uis.sh
frontend/config.json: $(GIT_HASH_FILE) frontend/config-sample.json
jq '.useMocks = false' frontend/config-sample.json | jq '.gitHash = "$(shell cat GIT_HASH.txt)"' > frontend/config.json
web/config.json: $(GIT_HASH_FILE) web/config-sample.json
jq '.useMocks = false' web/config-sample.json | jq '.gitHash = "$(shell cat GIT_HASH.txt)"' > web/config.json
frontend/patchdb-ui-seed.json: frontend/package.json
jq '."ack-welcome" = $(shell jq '.version' frontend/package.json)' frontend/patchdb-ui-seed.json > ui-seed.tmp
mv ui-seed.tmp frontend/patchdb-ui-seed.json
web/patchdb-ui-seed.json: web/package.json
jq '."ack-welcome" = $(shell jq '.version' web/package.json)' web/patchdb-ui-seed.json > ui-seed.tmp
mv ui-seed.tmp web/patchdb-ui-seed.json
patch-db/client/node_modules: patch-db/client/package.json
npm --prefix patch-db/client ci
patch-db/client/dist: $(PATCH_DB_CLIENT_SRC) patch-db/client/node_modules
! test -d patch-db/client/dist || rm -rf patch-db/client/dist
npm --prefix frontend run build:deps
npm --prefix web run build:deps
# used by github actions
compiled-$(ARCH).tar: $(COMPILED_TARGETS) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE)
tar -cvf $@ $^
# this is a convenience step to build all frontends - it is not referenced elsewhere in this file
frontends: $(EMBASSY_UIS)
# this is a convenience step to build all web uis - it is not referenced elsewhere in this file
uis: $(WEB_UIS)
# this is a convenience step to build the UI
ui: frontend/dist/raw/ui
ui: web/dist/raw/ui
cargo-deps/aarch64-unknown-linux-gnu/release/pi-beep:
ARCH=aarch64 ./build-cargo-dep.sh pi-beep
@@ -1,5 +1,5 @@
<div align="center">
<img src="frontend/projects/shared/assets/img/icon.png" alt="StartOS Logo" width="16%" />
<img src="web/projects/shared/assets/img/icon.png" alt="StartOS Logo" width="16%" />
<h1 style="margin-top: 0;">StartOS</h1>
<a href="https://github.com/Start9Labs/start-os/releases">
<img alt="GitHub release (with filter)" src="https://img.shields.io/github/v/release/start9labs/start-os?logo=github">
backend/.gitignore
@@ -1,10 +0,0 @@
/target
**/*.rs.bk
.DS_Store
.vscode
secrets.db
*.s9pk
*.sqlite3
.env
.editorconfig
proptest-regressions/*
@@ -1,42 +0,0 @@
# StartOS Backend
- Requirements:
- [Install Rust](https://rustup.rs)
- Recommended: [rust-analyzer](https://rust-analyzer.github.io/)
- [Docker](https://docs.docker.com/get-docker/)
- [Rust ARM64 Build Container](https://github.com/Start9Labs/rust-arm-builder)
- Mac `brew install gnu-tar`
- Scripts (run within the `./backend` directory)
- `build-prod.sh` - compiles a release build of the artifacts for running on
ARM64
- A Linux computer or VM
## Structure
The StartOS backend is packed into a single binary `startbox` that is symlinked under
several different names for different behaviour:
- startd: This is the main workhorse of StartOS - any new functionality you
want will likely go here
- start-cli: This is a CLI tool that will allow you to issue commands to
startd and control it similarly to the UI
- start-sdk: This is a CLI tool that aids in building and packaging services
you wish to deploy to StartOS
Finally there is a library `startos` that supports all of these tools.
See [here](/backend/Cargo.toml) for details.
## Building
You can build the entire operating system image using `make` from the root of
the StartOS project. This will subsequently invoke the build scripts above to
actually create the requisite binaries and put them onto the final operating
system image.
## Questions
If you have questions about how various pieces of the backend system work. Open
an issue and tag the following people
- dr-bonez
@@ -1,23 +0,0 @@
#!/bin/bash
set -e
shopt -s expand_aliases
if [ "$0" != "./build-portable.sh" ]; then
>&2 echo "Must be run from backend directory"
exit 1
fi
USE_TTY=
if tty -s; then
USE_TTY="-it"
fi
alias 'rust-musl-builder'='docker run $USE_TTY --rm -v "$HOME"/.cargo/registry:/root/.cargo/registry -v "$(pwd)":/home/rust/src start9/rust-musl-cross:x86_64-musl'
cd ..
rust-musl-builder sh -c "(cd backend && cargo +beta build --release --target=x86_64-unknown-linux-musl --no-default-features --locked)"
cd backend
sudo chown -R $USER target
sudo chown -R $USER ~/.cargo
@@ -1,21 +0,0 @@
#!/bin/bash
set -e
shopt -s expand_aliases
if [ "$0" != "./install-sdk.sh" ]; then
>&2 echo "Must be run from backend directory"
exit 1
fi
frontend="../frontend/dist/static"
[ -d "$frontend" ] || mkdir -p "$frontend"
if [ -z "$PLATFORM" ]; then
export PLATFORM=$(uname -m)
fi
cargo install --path=. --no-default-features --features=js_engine,sdk,cli --locked
startbox_loc=$(which startbox)
ln -sf $startbox_loc $(dirname $startbox_loc)/start-cli
ln -sf $startbox_loc $(dirname $startbox_loc)/start-sdk
@@ -1,6 +1,6 @@
#!/bin/bash
FE_VERSION="$(cat frontend/package.json | grep '"version"' | sed 's/[ \t]*"version":[ \t]*"\([^"]*\)",/\1/')"
FE_VERSION="$(cat web/package.json | grep '"version"' | sed 's/[ \t]*"version":[ \t]*"\([^"]*\)",/\1/')"
# TODO: Validate other version sources - backend/Cargo.toml, backend/src/version/mod.rs
@@ -1,14 +1,16 @@
#!/bin/bash
cd "$(dirname "${BASH_SOURCE[0]}")"
set -e
rm -rf frontend/dist/static
rm -rf web/dist/static
if ! [[ "$ENVIRONMENT" =~ (^|-)dev($|-) ]]; then
find frontend/dist/raw -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 gzip -kf
find frontend/dist/raw -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 brotli -kf
find web/dist/raw -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 gzip -kf
find web/dist/raw -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 brotli -kf
for file in $(find frontend/dist/raw -type f -not -name '*.gz' -and -not -name '*.br'); do
for file in $(find web/dist/raw -type f -not -name '*.gz' -and -not -name '*.br'); do
raw_size=$(du $file | awk '{print $1 * 512}')
gz_size=$(du $file.gz | awk '{print $1 * 512}')
br_size=$(du $file.br | awk '{print $1 * 512}')
@@ -21,4 +23,4 @@ if ! [[ "$ENVIRONMENT" =~ (^|-)dev($|-) ]]; then
done
fi
cp -r frontend/dist/raw frontend/dist/static
cp -r web/dist/raw web/dist/static
@@ -7,4 +7,4 @@ secrets.db
*.sqlite3
.env
.editorconfig
proptest-regressions/*
proptest-regressions/**/*
File diff suppressed because it is too large
core/Cargo.toml
@@ -0,0 +1,10 @@
[workspace]
members = [
"container-init",
"helpers",
"js-engine",
"models",
"snapshot-creator",
"startos",
]
core/README.md
@@ -0,0 +1,35 @@
# StartOS Backend
- Requirements:
- [Install Rust](https://rustup.rs)
- Recommended: [rust-analyzer](https://rust-analyzer.github.io/)
- [Docker](https://docs.docker.com/get-docker/)
## Structure
- `startos`: This contains the core library for StartOS that supports building `startbox`.
- `container-init` (ignore: deprecated)
- `js-engine`: This contains the library required to build `deno` to support running `.js` maintainer scripts for v0.3
- `snapshot-creator`: This contains a binary used to build `v8` runtime snapshots, required for initializing `start-deno`
- `helpers`: This contains utility functions used across both `startos` and `js-engine`
- `models`: This contains types that are shared across `startos`, `js-engine`, and `helpers`
## Artifacts
The StartOS backend is packed into a single binary `startbox` that is symlinked under
several different names for different behaviour:
- `startd`: This is the main daemon of StartOS
- `start-cli`: This is a CLI tool that will allow you to issue commands to
`startd` and control it similarly to the UI
- `start-sdk`: This is a CLI tool that aids in building and packaging services
you wish to deploy to StartOS
- `start-deno`: This is a CLI tool invoked by startd to run `.js` maintainer scripts for v0.3
- `avahi-alias`: This is a CLI tool invoked by startd to create aliases in `avahi` for mDNS
## Questions
If you have questions about how various pieces of the backend system work. Open
an issue and tag the following people
- dr-bonez
@@ -1,5 +1,7 @@
#!/bin/bash
cd "$(dirname "${BASH_SOURCE[0]}")"
set -e
shopt -s expand_aliases
@@ -7,11 +9,6 @@ if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
if [ "$0" != "./build-prod.sh" ]; then
>&2 echo "Must be run from backend directory"
exit 1
fi
USE_TTY=
if tty -s; then
USE_TTY="-it"
@@ -28,23 +25,20 @@ set +e
fail=
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-gnu-builder sh -c "(cd backend && cargo build --release --features avahi-alias,$FEATURES --locked --target=$ARCH-unknown-linux-gnu)"
if test $? -ne 0; then
if ! rust-gnu-builder sh -c "(cd core && cargo build --release --features avahi-alias,$FEATURES --locked --bin startbox --target=$ARCH-unknown-linux-gnu)"; then
fail=true
fi
for ARCH in x86_64 aarch64
do
rust-musl-builder sh -c "(cd libs && cargo build --release --locked --bin embassy_container_init)"
if test $? -ne 0; then
if ! rust-musl-builder sh -c "(cd core && cargo build --release --locked --bin container-init)"; then
fail=true
fi
done
set -e
cd backend
cd core
sudo chown -R $USER target
sudo chown -R $USER ~/.cargo
sudo chown -R $USER ../libs/target
if [ -n "$fail" ]; then
exit 1
@@ -1,12 +1,14 @@
#!/bin/bash
# Reason for this being is that we need to create a snapshot for the deno runtime. It wants to pull 3 files from build, and during the creation it gets embedded, but for some
# reason during the actual runtime it is looking for them. So this will create a docker in arm that creates the snaphot needed for the arm
cd "$(dirname "${BASH_SOURCE[0]}")"
set -e
shopt -s expand_aliases
if [ "$0" != "./build-arm-v8-snapshot.sh" ]; then
>&2 echo "Must be run from backend/workspace directory"
exit 1
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
USE_TTY=
@@ -18,14 +20,20 @@ alias 'rust-gnu-builder'='docker run $USE_TTY --rm -v "$HOME/.cargo/registry":/u
echo "Building "
cd ..
rust-gnu-builder sh -c "(cd libs/ && cargo build -p snapshot_creator --release --target=aarch64-unknown-linux-gnu)"
rust-gnu-builder sh -c "(cd core/ && cargo build -p snapshot_creator --release --target=${ARCH}-unknown-linux-gnu)"
cd -
if [ "$ARCH" = "aarch64" ]; then
DOCKER_ARCH='arm64/v8'
elif [ "$ARCH" = "x86_64" ]; then
DOCKER_ARCH='amd64'
fi
echo "Creating Arm v8 Snapshot"
docker run $USE_TTY --platform linux/arm64/v8 --mount type=bind,src=$(pwd),dst=/mnt arm64v8/ubuntu:22.04 /bin/sh -c "cd /mnt && /mnt/target/aarch64-unknown-linux-gnu/release/snapshot_creator"
docker run $USE_TTY --platform "linux/${DOCKER_ARCH}" --mount type=bind,src=$(pwd),dst=/mnt ubuntu:22.04 /bin/sh -c "cd /mnt && /mnt/target/${ARCH}-unknown-linux-gnu/release/snapshot_creator"
sudo chown -R $USER target
sudo chown -R $USER ~/.cargo
sudo chown $USER JS_SNAPSHOT.bin
sudo chmod 0644 JS_SNAPSHOT.bin
sudo mv -f JS_SNAPSHOT.bin ./js_engine/src/artifacts/ARM_JS_SNAPSHOT.bin
sudo mv -f JS_SNAPSHOT.bin ./js-engine/src/artifacts/JS_SNAPSHOT.${ARCH}.bin
@@ -1,5 +1,5 @@
[package]
name = "embassy_container_init"
name = "container-init"
version = "0.1.0"
edition = "2021"
rust = "1.66"
@@ -4,7 +4,7 @@ use std::os::unix::process::ExitStatusExt;
use std::process::Stdio;
use std::sync::Arc;
use embassy_container_init::{
use container_init::{
LogParams, OutputParams, OutputStrategy, ProcessGroupId, ProcessId, RunCommandParams,
SendSignalParams, SignalGroupParams,
};
@@ -335,7 +335,7 @@ async fn main() {
use tracing_subscriber::prelude::*;
use tracing_subscriber::{fmt, EnvFilter};
let filter_layer = EnvFilter::new("embassy_container_init=debug");
let filter_layer = EnvFilter::new("container_init=debug");
let fmt_layer = fmt::layer().with_target(true);
tracing_subscriber::registry()

core/install-sdk.sh Executable file

@@ -0,0 +1,18 @@
#!/bin/bash
cd "$(dirname "${BASH_SOURCE[0]}")"
set -e
shopt -s expand_aliases
web="../web/dist/static"
[ -d "$web" ] || mkdir -p "$web"
if [ -z "$PLATFORM" ]; then
export PLATFORM=$(uname -m)
fi
cargo install --path=./startos --no-default-features --features=js_engine,sdk,cli --locked
startbox_loc=$(which startbox)
ln -sf $startbox_loc $(dirname $startbox_loc)/start-cli
ln -sf $startbox_loc $(dirname $startbox_loc)/start-sdk


@@ -1,5 +1,5 @@
[package]
name = "js_engine"
name = "js-engine"
version = "0.1.0"
edition = "2021"
@@ -10,7 +10,7 @@ async-trait = "0.1.74"
dashmap = "5.5.3"
deno_core = "=0.222.0"
deno_ast = { version = "=0.29.5", features = ["transpiling"] }
embassy_container_init = { path = "../embassy_container_init" }
container-init = { path = "../container-init" }
reqwest = { version = "0.11.22" }
sha2 = "0.10.8"
itertools = "0.11.0"


@@ -85,10 +85,10 @@ pub struct MetadataJs {
}
#[cfg(target_arch = "x86_64")]
const SNAPSHOT_BYTES: &[u8] = include_bytes!("./artifacts/JS_SNAPSHOT.bin");
const SNAPSHOT_BYTES: &[u8] = include_bytes!("./artifacts/JS_SNAPSHOT.x86_64.bin");
#[cfg(target_arch = "aarch64")]
const SNAPSHOT_BYTES: &[u8] = include_bytes!("./artifacts/ARM_JS_SNAPSHOT.bin");
const SNAPSHOT_BYTES: &[u8] = include_bytes!("./artifacts/JS_SNAPSHOT.aarch64.bin");
#[derive(Clone)]
struct JsContext {
@@ -371,10 +371,10 @@ mod fns {
use std::rc::Rc;
use std::time::Duration;
use container_init::ProcessId;
use deno_core::anyhow::{anyhow, bail};
use deno_core::error::AnyError;
use deno_core::*;
use embassy_container_init::ProcessId;
use helpers::{to_tmp_path, AtomicFile, Rsync, RsyncOptions};
use itertools::Itertools;
use models::VolumeId;
@@ -1201,11 +1201,11 @@ mod fns {
#[tokio::test]
async fn test_is_subset() {
assert!(
!is_subset("/home/drbonez", "/home/drbonez/code/fakedir/../../..")
.await
.unwrap()
)
let home = std::env::var("HOME").unwrap();
let home = Path::new(&home);
assert!(!is_subset(home, &home.join("code/fakedir/../../.."))
.await
.unwrap())
}
}


@@ -1,18 +1,19 @@
use std::{future::Future, pin::Pin, sync::Arc, time::Duration};
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::time::Duration;
use color_eyre::eyre::bail;
use embassy_container_init::{Input, Output, ProcessId, RpcId};
use tokio::sync::{
mpsc::{UnboundedReceiver, UnboundedSender},
Mutex,
};
use container_init::{Input, Output, ProcessId, RpcId};
use tokio::sync::mpsc::{UnboundedReceiver, UnboundedSender};
use tokio::sync::Mutex;
/// Used by the js-executor: provides the ability to create a command inside an already-running exec session
pub type ExecCommand = Arc<
dyn Fn(
String,
Vec<String>,
UnboundedSender<embassy_container_init::Output>,
UnboundedSender<container_init::Output>,
Option<Duration>,
) -> Pin<Box<dyn Future<Output = Result<RpcId, String>> + 'static>>
+ Send
@@ -33,7 +34,7 @@ pub trait CommandInserter {
&self,
command: String,
args: Vec<String>,
sender: UnboundedSender<embassy_container_init::Output>,
sender: UnboundedSender<container_init::Output>,
timeout: Option<Duration>,
) -> Pin<Box<dyn Future<Output = Option<RpcId>>>>;


@@ -0,0 +1,199 @@
use std::collections::BTreeMap;
use std::path::Path;
use futures::future::BoxFuture;
use futures::FutureExt;
use imbl_value::InternedString;
use tokio::io::AsyncRead;
use crate::prelude::*;
use crate::s9pk::merkle_archive::hash::{Hash, HashWriter};
use crate::s9pk::merkle_archive::sink::{Sink, TrackingWriter};
use crate::s9pk::merkle_archive::source::{ArchiveSource, FileSource, Section};
use crate::s9pk::merkle_archive::write_queue::WriteQueue;
use crate::s9pk::merkle_archive::{varint, Entry, EntryContents};
#[derive(Debug)]
pub struct DirectoryContents<S>(BTreeMap<InternedString, Entry<S>>);
impl<S> DirectoryContents<S> {
pub fn new() -> Self {
Self(BTreeMap::new())
}
#[instrument(skip_all)]
pub fn get_path(&self, path: impl AsRef<Path>) -> Option<&Entry<S>> {
let mut dir = Some(self);
let mut res = None;
for segment in path.as_ref().into_iter() {
let segment = segment.to_str()?;
if segment == "/" {
continue;
}
res = dir?.get(segment);
if let Some(EntryContents::Directory(d)) = res.as_ref().map(|e| e.as_contents()) {
dir = Some(d);
} else {
dir = None
}
}
res
}
pub fn insert_path(&mut self, path: impl AsRef<Path>, entry: Entry<S>) -> Result<(), Error> {
let path = path.as_ref();
let (parent, Some(file)) = (path.parent(), path.file_name().and_then(|f| f.to_str()))
else {
return Err(Error::new(
eyre!("cannot create file at root"),
ErrorKind::Pack,
));
};
let mut dir = self;
for segment in parent.into_iter().flatten() {
let segment = segment
.to_str()
.ok_or_else(|| Error::new(eyre!("non-utf8 path segment"), ErrorKind::Utf8))?;
if segment == "/" {
continue;
}
if !dir.contains_key(segment) {
dir.insert(
segment.into(),
Entry::new(EntryContents::Directory(DirectoryContents::new())),
);
}
if let Some(EntryContents::Directory(d)) =
dir.get_mut(segment).map(|e| e.as_contents_mut())
{
dir = d;
} else {
return Err(Error::new(eyre!("failed to insert entry at path {path:?}: ancestor exists and is not a directory"), ErrorKind::Pack));
}
}
dir.insert(file.into(), entry);
Ok(())
}
pub const fn header_size() -> u64 {
8 // position: u64 BE
+ 8 // size: u64 BE
}
#[instrument(skip_all)]
pub async fn serialize_header<W: Sink>(&self, position: u64, w: &mut W) -> Result<u64, Error> {
use tokio::io::AsyncWriteExt;
let size = self.toc_size();
w.write_all(&position.to_be_bytes()).await?;
w.write_all(&size.to_be_bytes()).await?;
Ok(position)
}
pub fn toc_size(&self) -> u64 {
self.0.iter().fold(
varint::serialized_varint_size(self.0.len() as u64),
|acc, (name, entry)| {
acc + varint::serialized_varstring_size(&**name) + entry.header_size()
},
)
}
}
impl<S: ArchiveSource> DirectoryContents<Section<S>> {
#[instrument(skip_all)]
pub fn deserialize<'a>(
source: &'a S,
header: &'a mut (impl AsyncRead + Unpin + Send),
sighash: Hash,
) -> BoxFuture<'a, Result<Self, Error>> {
async move {
use tokio::io::AsyncReadExt;
let mut position = [0u8; 8];
header.read_exact(&mut position).await?;
let position = u64::from_be_bytes(position);
let mut size = [0u8; 8];
header.read_exact(&mut size).await?;
let size = u64::from_be_bytes(size);
let mut toc_reader = source.fetch(position, size).await?;
let len = varint::deserialize_varint(&mut toc_reader).await?;
let mut entries = BTreeMap::new();
for _ in 0..len {
entries.insert(
varint::deserialize_varstring(&mut toc_reader).await?.into(),
Entry::deserialize(source, &mut toc_reader).await?,
);
}
let res = Self(entries);
if res.sighash().await? == sighash {
Ok(res)
} else {
Err(Error::new(
eyre!("hash sum does not match"),
ErrorKind::InvalidSignature,
))
}
}
.boxed()
}
}
impl<S: FileSource> DirectoryContents<S> {
#[instrument(skip_all)]
pub fn update_hashes<'a>(&'a mut self, only_missing: bool) -> BoxFuture<'a, Result<(), Error>> {
async move {
for (_, entry) in &mut self.0 {
entry.update_hash(only_missing).await?;
}
Ok(())
}
.boxed()
}
#[instrument(skip_all)]
pub fn sighash<'a>(&'a self) -> BoxFuture<'a, Result<Hash, Error>> {
async move {
let mut hasher = TrackingWriter::new(0, HashWriter::new());
let mut sig_contents = BTreeMap::new();
for (name, entry) in &self.0 {
sig_contents.insert(name.clone(), entry.to_missing().await?);
}
Self(sig_contents)
.serialize_toc(&mut WriteQueue::new(0), &mut hasher)
.await?;
Ok(hasher.into_inner().finalize())
}
.boxed()
}
#[instrument(skip_all)]
pub async fn serialize_toc<'a, W: Sink>(
&'a self,
queue: &mut WriteQueue<'a, S>,
w: &mut W,
) -> Result<(), Error> {
varint::serialize_varint(self.0.len() as u64, w).await?;
for (name, entry) in self.0.iter() {
varint::serialize_varstring(&**name, w).await?;
entry.serialize_header(queue.add(entry).await?, w).await?;
}
Ok(())
}
}
impl<S> std::ops::Deref for DirectoryContents<S> {
type Target = BTreeMap<InternedString, Entry<S>>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl<S> std::ops::DerefMut for DirectoryContents<S> {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}


@@ -0,0 +1,82 @@
use tokio::io::AsyncRead;
use crate::prelude::*;
use crate::s9pk::merkle_archive::hash::{Hash, HashWriter};
use crate::s9pk::merkle_archive::sink::{Sink, TrackingWriter};
use crate::s9pk::merkle_archive::source::{ArchiveSource, FileSource, Section};
#[derive(Debug)]
pub struct FileContents<S>(S);
impl<S> FileContents<S> {
pub fn new(source: S) -> Self {
Self(source)
}
pub const fn header_size() -> u64 {
8 // position: u64 BE
+ 8 // size: u64 BE
}
}
impl<S: ArchiveSource> FileContents<Section<S>> {
#[instrument(skip_all)]
pub async fn deserialize(
source: &S,
header: &mut (impl AsyncRead + Unpin + Send),
) -> Result<Self, Error> {
use tokio::io::AsyncReadExt;
let mut position = [0u8; 8];
header.read_exact(&mut position).await?;
let position = u64::from_be_bytes(position);
let mut size = [0u8; 8];
header.read_exact(&mut size).await?;
let size = u64::from_be_bytes(size);
Ok(Self(source.section(position, size)))
}
}
impl<S: FileSource> FileContents<S> {
pub async fn hash(&self) -> Result<Hash, Error> {
let mut hasher = TrackingWriter::new(0, HashWriter::new());
self.serialize_body(&mut hasher, None).await?;
Ok(hasher.into_inner().finalize())
}
#[instrument(skip_all)]
pub async fn serialize_header<W: Sink>(&self, position: u64, w: &mut W) -> Result<u64, Error> {
use tokio::io::AsyncWriteExt;
let size = self.0.size().await?;
w.write_all(&position.to_be_bytes()).await?;
w.write_all(&size.to_be_bytes()).await?;
Ok(position)
}
#[instrument(skip_all)]
pub async fn serialize_body<W: Sink>(
&self,
w: &mut W,
verify: Option<Hash>,
) -> Result<(), Error> {
let start = if verify.is_some() {
Some(w.current_position().await?)
} else {
None
};
self.0.copy_verify(w, verify).await?;
if let Some(start) = start {
ensure_code!(
w.current_position().await? - start == self.0.size().await?,
ErrorKind::Pack,
"FileSource::copy wrote a number of bytes that does not match FileSource::size"
);
}
Ok(())
}
}
impl<S> std::ops::Deref for FileContents<S> {
type Target = S;
fn deref(&self) -> &Self::Target {
&self.0
}
}


@@ -0,0 +1,97 @@
pub use blake3::Hash;
use blake3::Hasher;
use tokio::io::AsyncWrite;
use crate::prelude::*;
#[pin_project::pin_project]
pub struct HashWriter {
hasher: Hasher,
}
impl HashWriter {
pub fn new() -> Self {
Self {
hasher: Hasher::new(),
}
}
pub fn finalize(self) -> Hash {
self.hasher.finalize()
}
}
impl AsyncWrite for HashWriter {
fn poll_write(
self: std::pin::Pin<&mut Self>,
_cx: &mut std::task::Context<'_>,
buf: &[u8],
) -> std::task::Poll<Result<usize, std::io::Error>> {
self.project().hasher.update(buf);
std::task::Poll::Ready(Ok(buf.len()))
}
fn poll_flush(
self: std::pin::Pin<&mut Self>,
_cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Result<(), std::io::Error>> {
std::task::Poll::Ready(Ok(()))
}
fn poll_shutdown(
self: std::pin::Pin<&mut Self>,
_cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Result<(), std::io::Error>> {
std::task::Poll::Ready(Ok(()))
}
}
#[pin_project::pin_project]
pub struct VerifyingWriter<W> {
verify: Option<(Hasher, Hash)>,
#[pin]
writer: W,
}
impl<W: AsyncWrite> VerifyingWriter<W> {
pub fn new(w: W, verify: Option<Hash>) -> Self {
Self {
verify: verify.map(|v| (Hasher::new(), v)),
writer: w,
}
}
pub fn verify(self) -> Result<W, Error> {
if let Some((actual, expected)) = self.verify {
ensure_code!(
actual.finalize() == expected,
ErrorKind::InvalidSignature,
"hash sum does not match"
);
}
Ok(self.writer)
}
}
impl<W: AsyncWrite> AsyncWrite for VerifyingWriter<W> {
fn poll_write(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
buf: &[u8],
) -> std::task::Poll<Result<usize, std::io::Error>> {
let this = self.project();
match this.writer.poll_write(cx, buf) {
std::task::Poll::Ready(Ok(written)) => {
if let Some((h, _)) = this.verify {
h.update(&buf[..written]);
}
std::task::Poll::Ready(Ok(written))
}
a => a,
}
}
fn poll_flush(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Result<(), std::io::Error>> {
self.project().writer.poll_flush(cx)
}
fn poll_shutdown(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Result<(), std::io::Error>> {
self.project().writer.poll_shutdown(cx)
}
}


@@ -0,0 +1,268 @@
use ed25519_dalek::{Signature, SigningKey, VerifyingKey};
use tokio::io::AsyncRead;
use crate::prelude::*;
use crate::s9pk::merkle_archive::directory_contents::DirectoryContents;
use crate::s9pk::merkle_archive::file_contents::FileContents;
use crate::s9pk::merkle_archive::hash::Hash;
use crate::s9pk::merkle_archive::sink::Sink;
use crate::s9pk::merkle_archive::source::{ArchiveSource, FileSource, Section};
use crate::s9pk::merkle_archive::write_queue::WriteQueue;
pub mod directory_contents;
pub mod file_contents;
pub mod hash;
pub mod sink;
pub mod source;
#[cfg(test)]
mod test;
pub mod varint;
pub mod write_queue;
#[derive(Debug)]
enum Signer {
Signed(VerifyingKey, Signature),
Signer(SigningKey),
}
#[derive(Debug)]
pub struct MerkleArchive<S> {
signer: Signer,
contents: DirectoryContents<S>,
}
impl<S> MerkleArchive<S> {
pub fn new(contents: DirectoryContents<S>, signer: SigningKey) -> Self {
Self {
signer: Signer::Signer(signer),
contents,
}
}
pub const fn header_size() -> u64 {
32 // pubkey
+ 64 // signature
+ DirectoryContents::<Section<S>>::header_size()
}
pub fn contents(&self) -> &DirectoryContents<S> {
&self.contents
}
}
impl<S: ArchiveSource> MerkleArchive<Section<S>> {
#[instrument(skip_all)]
pub async fn deserialize(
source: &S,
header: &mut (impl AsyncRead + Unpin + Send),
) -> Result<Self, Error> {
use tokio::io::AsyncReadExt;
let mut pubkey = [0u8; 32];
header.read_exact(&mut pubkey).await?;
let pubkey = VerifyingKey::from_bytes(&pubkey)?;
let mut signature = [0u8; 64];
header.read_exact(&mut signature).await?;
let signature = Signature::from_bytes(&signature);
let mut sighash = [0u8; 32];
header.read_exact(&mut sighash).await?;
let sighash = Hash::from_bytes(sighash);
let contents = DirectoryContents::deserialize(source, header, sighash).await?;
pubkey.verify_strict(contents.sighash().await?.as_bytes(), &signature)?;
Ok(Self {
signer: Signer::Signed(pubkey, signature),
contents,
})
}
}
impl<S: FileSource> MerkleArchive<S> {
pub async fn update_hashes(&mut self, only_missing: bool) -> Result<(), Error> {
self.contents.update_hashes(only_missing).await
}
#[instrument(skip_all)]
pub async fn serialize<W: Sink>(&self, w: &mut W, verify: bool) -> Result<(), Error> {
use tokio::io::AsyncWriteExt;
let sighash = self.contents.sighash().await?;
let (pubkey, signature) = match &self.signer {
Signer::Signed(pubkey, signature) => (*pubkey, *signature),
Signer::Signer(s) => (s.into(), ed25519_dalek::Signer::sign(s, sighash.as_bytes())),
};
w.write_all(pubkey.as_bytes()).await?;
w.write_all(&signature.to_bytes()).await?;
w.write_all(sighash.as_bytes()).await?;
let mut next_pos = w.current_position().await?;
next_pos += DirectoryContents::<S>::header_size();
self.contents.serialize_header(next_pos, w).await?;
next_pos += self.contents.toc_size();
let mut queue = WriteQueue::new(next_pos);
self.contents.serialize_toc(&mut queue, w).await?;
queue.serialize(w, verify).await?;
Ok(())
}
}
#[derive(Debug)]
pub struct Entry<S> {
hash: Option<Hash>,
contents: EntryContents<S>,
}
impl<S> Entry<S> {
pub fn new(contents: EntryContents<S>) -> Self {
Self {
hash: None,
contents,
}
}
pub fn hash(&self) -> Option<Hash> {
self.hash
}
pub fn as_contents(&self) -> &EntryContents<S> {
&self.contents
}
pub fn as_contents_mut(&mut self) -> &mut EntryContents<S> {
self.hash = None;
&mut self.contents
}
pub fn into_contents(self) -> EntryContents<S> {
self.contents
}
pub fn header_size(&self) -> u64 {
32 // hash
+ self.contents.header_size()
}
}
impl<S: ArchiveSource> Entry<Section<S>> {
#[instrument(skip_all)]
pub async fn deserialize(
source: &S,
header: &mut (impl AsyncRead + Unpin + Send),
) -> Result<Self, Error> {
use tokio::io::AsyncReadExt;
let mut hash = [0u8; 32];
header.read_exact(&mut hash).await?;
let hash = Hash::from_bytes(hash);
let contents = EntryContents::deserialize(source, header, hash).await?;
Ok(Self {
hash: Some(hash),
contents,
})
}
}
impl<S: FileSource> Entry<S> {
pub async fn to_missing(&self) -> Result<Self, Error> {
let hash = if let Some(hash) = self.hash {
hash
} else {
self.contents.hash().await?
};
Ok(Self {
hash: Some(hash),
contents: EntryContents::Missing,
})
}
pub async fn update_hash(&mut self, only_missing: bool) -> Result<(), Error> {
if let EntryContents::Directory(d) = &mut self.contents {
d.update_hashes(only_missing).await?;
}
self.hash = Some(self.contents.hash().await?);
Ok(())
}
#[instrument(skip_all)]
pub async fn serialize_header<W: Sink>(
&self,
position: u64,
w: &mut W,
) -> Result<Option<u64>, Error> {
use tokio::io::AsyncWriteExt;
let hash = if let Some(hash) = self.hash {
hash
} else {
self.contents.hash().await?
};
w.write_all(hash.as_bytes()).await?;
self.contents.serialize_header(position, w).await
}
}
#[derive(Debug)]
pub enum EntryContents<S> {
Missing,
File(FileContents<S>),
Directory(DirectoryContents<S>),
}
impl<S> EntryContents<S> {
fn type_id(&self) -> u8 {
match self {
Self::Missing => 0,
Self::File(_) => 1,
Self::Directory(_) => 2,
}
}
pub fn header_size(&self) -> u64 {
1 // type
+ match self {
Self::Missing => 0,
Self::File(_) => FileContents::<S>::header_size(),
Self::Directory(_) => DirectoryContents::<S>::header_size(),
}
}
}
impl<S: ArchiveSource> EntryContents<Section<S>> {
#[instrument(skip_all)]
pub async fn deserialize(
source: &S,
header: &mut (impl AsyncRead + Unpin + Send),
hash: Hash,
) -> Result<Self, Error> {
use tokio::io::AsyncReadExt;
let mut type_id = [0u8];
header.read_exact(&mut type_id).await?;
match type_id[0] {
0 => Ok(Self::Missing),
1 => Ok(Self::File(FileContents::deserialize(source, header).await?)),
2 => Ok(Self::Directory(
DirectoryContents::deserialize(source, header, hash).await?,
)),
id => Err(Error::new(
eyre!("Unknown type id {id} found in MerkleArchive"),
ErrorKind::ParseS9pk,
)),
}
}
}
impl<S: FileSource> EntryContents<S> {
pub async fn hash(&self) -> Result<Hash, Error> {
match self {
Self::Missing => Err(Error::new(
eyre!("Cannot compute hash of missing file"),
ErrorKind::Pack,
)),
Self::File(f) => f.hash().await,
Self::Directory(d) => d.sighash().await,
}
}
#[instrument(skip_all)]
pub async fn serialize_header<W: Sink>(
&self,
position: u64,
w: &mut W,
) -> Result<Option<u64>, Error> {
use tokio::io::AsyncWriteExt;
w.write_all(&[self.type_id()]).await?;
Ok(match self {
Self::Missing => None,
Self::File(f) => Some(f.serialize_header(position, w).await?),
Self::Directory(d) => Some(d.serialize_header(position, w).await?),
})
}
}


@@ -0,0 +1,70 @@
use tokio::io::{AsyncSeek, AsyncWrite};
use crate::prelude::*;
#[async_trait::async_trait]
pub trait Sink: AsyncWrite + Unpin + Send {
async fn current_position(&mut self) -> Result<u64, Error>;
}
#[async_trait::async_trait]
impl<S: AsyncWrite + AsyncSeek + Unpin + Send> Sink for S {
async fn current_position(&mut self) -> Result<u64, Error> {
use tokio::io::AsyncSeekExt;
Ok(self.stream_position().await?)
}
}
#[async_trait::async_trait]
impl<W: AsyncWrite + Unpin + Send> Sink for TrackingWriter<W> {
async fn current_position(&mut self) -> Result<u64, Error> {
Ok(self.position)
}
}
#[pin_project::pin_project]
pub struct TrackingWriter<W> {
position: u64,
#[pin]
writer: W,
}
impl<W> TrackingWriter<W> {
pub fn new(start: u64, w: W) -> Self {
Self {
position: start,
writer: w,
}
}
pub fn into_inner(self) -> W {
self.writer
}
}
impl<W: AsyncWrite + Unpin + Send> AsyncWrite for TrackingWriter<W> {
fn poll_write(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
buf: &[u8],
) -> std::task::Poll<Result<usize, std::io::Error>> {
let this = self.project();
match this.writer.poll_write(cx, buf) {
std::task::Poll::Ready(Ok(written)) => {
*this.position += written as u64;
std::task::Poll::Ready(Ok(written))
}
a => a,
}
}
fn poll_flush(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Result<(), std::io::Error>> {
self.project().writer.poll_flush(cx)
}
fn poll_shutdown(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Result<(), std::io::Error>> {
self.project().writer.poll_shutdown(cx)
}
}


@@ -0,0 +1,91 @@
use std::sync::Arc;
use bytes::Bytes;
use futures::stream::BoxStream;
use futures::{StreamExt, TryStreamExt};
use http::header::{ACCEPT_RANGES, RANGE};
use reqwest::{Client, Url};
use tokio::io::AsyncRead;
use tokio::sync::Mutex;
use tokio_util::io::StreamReader;
use crate::prelude::*;
use crate::s9pk::merkle_archive::source::ArchiveSource;
#[derive(Clone)]
pub struct HttpSource {
url: Url,
client: Client,
range_support: Result<
(),
(), // Arc<Mutex<Option<RangelessReader>>>
>,
}
impl HttpSource {
pub async fn new(client: Client, url: Url) -> Result<Self, Error> {
let range_support = client
.head(url.clone())
.send()
.await
.with_kind(ErrorKind::Network)?
.error_for_status()
.with_kind(ErrorKind::Network)?
.headers()
.get(ACCEPT_RANGES)
.and_then(|s| s.to_str().ok())
== Some("bytes");
Ok(Self {
url,
client,
range_support: if range_support {
Ok(())
} else {
todo!() // Err(Arc::new(Mutex::new(None)))
},
})
}
}
#[async_trait::async_trait]
impl ArchiveSource for HttpSource {
type Reader = HttpReader;
async fn fetch(&self, position: u64, size: u64) -> Result<Self::Reader, Error> {
match self.range_support {
Ok(_) => Ok(HttpReader::Range(StreamReader::new(if size > 0 {
self.client
.get(self.url.clone())
.header(RANGE, format!("bytes={}-{}", position, position + size - 1))
.send()
.await
.with_kind(ErrorKind::Network)?
.error_for_status()
.with_kind(ErrorKind::Network)?
.bytes_stream()
.map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))
.boxed()
} else {
futures::stream::empty().boxed()
}))),
_ => todo!(),
}
}
}
#[pin_project::pin_project(project = HttpReaderProj)]
pub enum HttpReader {
Range(#[pin] StreamReader<BoxStream<'static, Result<Bytes, std::io::Error>>, Bytes>),
// Rangeless(#[pin] RangelessReader),
}
impl AsyncRead for HttpReader {
fn poll_read(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
buf: &mut tokio::io::ReadBuf<'_>,
) -> std::task::Poll<std::io::Result<()>> {
match self.project() {
HttpReaderProj::Range(r) => r.poll_read(cx, buf),
// HttpReaderProj::Rangeless(r) => r.poll_read(cx, buf),
}
}
}
// type RangelessReader = StreamReader<BoxStream<'static, Bytes>, Bytes>;


@@ -0,0 +1,120 @@
use std::path::PathBuf;
use std::sync::Arc;
use blake3::Hash;
use tokio::fs::File;
use tokio::io::{AsyncRead, AsyncWrite};
use crate::prelude::*;
use crate::s9pk::merkle_archive::hash::VerifyingWriter;
pub mod http;
pub mod multi_cursor_file;
#[async_trait::async_trait]
pub trait FileSource: Send + Sync + Sized + 'static {
type Reader: AsyncRead + Unpin + Send;
async fn size(&self) -> Result<u64, Error>;
async fn reader(&self) -> Result<Self::Reader, Error>;
async fn copy<W: AsyncWrite + Unpin + Send>(&self, w: &mut W) -> Result<(), Error> {
tokio::io::copy(&mut self.reader().await?, w).await?;
Ok(())
}
async fn copy_verify<W: AsyncWrite + Unpin + Send>(
&self,
w: &mut W,
verify: Option<Hash>,
) -> Result<(), Error> {
let mut w = VerifyingWriter::new(w, verify);
tokio::io::copy(&mut self.reader().await?, &mut w).await?;
w.verify()?;
Ok(())
}
async fn to_vec(&self, verify: Option<Hash>) -> Result<Vec<u8>, Error> {
let mut vec = Vec::with_capacity(self.size().await? as usize);
self.copy_verify(&mut vec, verify).await?;
Ok(vec)
}
}
#[async_trait::async_trait]
impl FileSource for PathBuf {
type Reader = File;
async fn size(&self) -> Result<u64, Error> {
Ok(tokio::fs::metadata(self).await?.len())
}
async fn reader(&self) -> Result<Self::Reader, Error> {
Ok(File::open(self).await?)
}
}
#[async_trait::async_trait]
impl FileSource for Arc<[u8]> {
type Reader = std::io::Cursor<Self>;
async fn size(&self) -> Result<u64, Error> {
Ok(self.len() as u64)
}
async fn reader(&self) -> Result<Self::Reader, Error> {
Ok(std::io::Cursor::new(self.clone()))
}
async fn copy<W: AsyncWrite + Unpin + Send>(&self, w: &mut W) -> Result<(), Error> {
use tokio::io::AsyncWriteExt;
w.write_all(&*self).await?;
Ok(())
}
}
#[async_trait::async_trait]
pub trait ArchiveSource: Clone + Send + Sync + Sized + 'static {
type Reader: AsyncRead + Unpin + Send;
async fn fetch(&self, position: u64, size: u64) -> Result<Self::Reader, Error>;
async fn copy_to<W: AsyncWrite + Unpin + Send>(
&self,
position: u64,
size: u64,
w: &mut W,
) -> Result<(), Error> {
tokio::io::copy(&mut self.fetch(position, size).await?, w).await?;
Ok(())
}
fn section(&self, position: u64, size: u64) -> Section<Self> {
Section {
source: self.clone(),
position,
size,
}
}
}
#[async_trait::async_trait]
impl ArchiveSource for Arc<[u8]> {
type Reader = tokio::io::Take<std::io::Cursor<Self>>;
async fn fetch(&self, position: u64, size: u64) -> Result<Self::Reader, Error> {
use tokio::io::AsyncReadExt;
let mut cur = std::io::Cursor::new(self.clone());
cur.set_position(position);
Ok(cur.take(size))
}
}
#[derive(Debug)]
pub struct Section<S> {
source: S,
position: u64,
size: u64,
}
#[async_trait::async_trait]
impl<S: ArchiveSource> FileSource for Section<S> {
type Reader = S::Reader;
async fn size(&self) -> Result<u64, Error> {
Ok(self.size)
}
async fn reader(&self) -> Result<Self::Reader, Error> {
self.source.fetch(self.position, self.size).await
}
async fn copy<W: AsyncWrite + Unpin + Send>(&self, w: &mut W) -> Result<(), Error> {
self.source.copy_to(self.position, self.size, w).await
}
}


@@ -0,0 +1,84 @@
use std::io::SeekFrom;
use std::os::fd::{AsRawFd, RawFd};
use std::path::{Path, PathBuf};
use std::sync::Arc;
use tokio::fs::File;
use tokio::io::AsyncRead;
use tokio::sync::{Mutex, OwnedMutexGuard};
use crate::disk::mount::filesystem::loop_dev::LoopDev;
use crate::prelude::*;
use crate::s9pk::merkle_archive::source::{ArchiveSource, Section};
#[derive(Clone)]
pub struct MultiCursorFile {
fd: RawFd,
file: Arc<Mutex<File>>,
}
impl MultiCursorFile {
fn path(&self) -> PathBuf {
Path::new("/proc/self/fd").join(self.fd.to_string())
}
}
impl From<File> for MultiCursorFile {
fn from(value: File) -> Self {
Self {
fd: value.as_raw_fd(),
file: Arc::new(Mutex::new(value)),
}
}
}
#[pin_project::pin_project]
pub struct FileSectionReader {
#[pin]
file: OwnedMutexGuard<File>,
remaining: u64,
}
impl AsyncRead for FileSectionReader {
fn poll_read(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
buf: &mut tokio::io::ReadBuf<'_>,
) -> std::task::Poll<std::io::Result<()>> {
let this = self.project();
if *this.remaining == 0 {
return std::task::Poll::Ready(Ok(()));
}
let before = buf.filled().len() as u64;
let res = std::pin::Pin::new(&mut **this.file.get_mut())
.poll_read(cx, &mut buf.take(*this.remaining as usize));
*this.remaining = this
.remaining
.saturating_sub(buf.filled().len() as u64 - before);
res
}
}
#[async_trait::async_trait]
impl ArchiveSource for MultiCursorFile {
type Reader = FileSectionReader;
async fn fetch(&self, position: u64, size: u64) -> Result<Self::Reader, Error> {
use tokio::io::AsyncSeekExt;
let mut file = if let Ok(file) = self.file.clone().try_lock_owned() {
file
} else {
Arc::new(Mutex::new(File::open(self.path()).await?))
.try_lock_owned()
.expect("freshly created")
};
file.seek(SeekFrom::Start(position)).await?;
Ok(Self::Reader {
file,
remaining: size,
})
}
}
impl From<Section<MultiCursorFile>> for LoopDev<PathBuf> {
fn from(value: Section<MultiCursorFile>) -> Self {
LoopDev::new(value.source.path(), value.position, value.size)
}
}


@@ -0,0 +1,138 @@
use std::collections::BTreeMap;
use std::io::Cursor;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use ed25519_dalek::SigningKey;
use crate::prelude::*;
use crate::s9pk::merkle_archive::directory_contents::DirectoryContents;
use crate::s9pk::merkle_archive::file_contents::FileContents;
use crate::s9pk::merkle_archive::sink::TrackingWriter;
use crate::s9pk::merkle_archive::source::FileSource;
use crate::s9pk::merkle_archive::{Entry, EntryContents, MerkleArchive};
/// Creates a MerkleArchive (a1) with the provided files at the provided paths. NOTE: later files can overwrite previous files/directories at the same path
/// Tests:
/// - a1.update_hashes(): returns Ok(_)
/// - a1.serialize(verify: true): returns Ok(s1)
/// - MerkleArchive::deserialize(s1): returns Ok(a2)
/// - a2: contains all expected files with expected content
/// - a2.serialize(verify: true): returns Ok(s2)
/// - s1 == s2
#[instrument]
fn test(files: Vec<(PathBuf, String)>) -> Result<(), Error> {
let mut root = DirectoryContents::<Arc<[u8]>>::new();
let mut check_set = BTreeMap::<PathBuf, String>::new();
for (path, content) in files {
if let Err(e) = root.insert_path(
&path,
Entry::new(EntryContents::File(FileContents::new(
content.clone().into_bytes().into(),
))),
) {
eprintln!("failed to insert file at {path:?}: {e}");
} else {
let path = path.strip_prefix("/").unwrap_or(&path);
let mut remaining = check_set.split_off(path);
while {
if let Some((p, s)) = remaining.pop_first() {
if !p.starts_with(path) {
remaining.insert(p, s);
false
} else {
true
}
} else {
false
}
} {}
check_set.append(&mut remaining);
check_set.insert(path.to_owned(), content);
}
}
let key = SigningKey::generate(&mut rand::thread_rng());
let mut a1 = MerkleArchive::new(root, key);
tokio::runtime::Builder::new_current_thread()
.enable_io()
.build()
.unwrap()
.block_on(async move {
a1.update_hashes(true).await?;
let mut s1 = Vec::new();
a1.serialize(&mut TrackingWriter::new(0, &mut s1), true)
.await?;
let s1: Arc<[u8]> = s1.into();
let a2 = MerkleArchive::deserialize(&s1, &mut Cursor::new(s1.clone())).await?;
for (path, content) in check_set {
match a2
.contents
.get_path(&path)
.map(|e| (e.as_contents(), e.hash()))
{
Some((EntryContents::File(f), hash)) => {
ensure_code!(
&f.to_vec(hash).await? == content.as_bytes(),
ErrorKind::ParseS9pk,
"File at {path:?} does not match input"
)
}
_ => {
return Err(Error::new(
eyre!("expected file at {path:?}"),
ErrorKind::ParseS9pk,
))
}
}
}
let mut s2 = Vec::new();
a2.serialize(&mut TrackingWriter::new(0, &mut s2), true)
.await?;
let s2: Arc<[u8]> = s2.into();
ensure_code!(s1 == s2, ErrorKind::Pack, "s1 does not match s2");
Ok(())
})
}
proptest::proptest! {
#[test]
fn property_test(files: Vec<(PathBuf, String)>) {
let files: Vec<(PathBuf, String)> = files.into_iter().filter(|(p, _)| p.file_name().is_some() && p.iter().all(|s| s.to_str().is_some())).collect();
if let Err(e) = test(files.clone()) {
panic!("{e}\nInput: {files:#?}\n{e:?}");
}
}
}
#[test]
fn test_example_1() {
if let Err(e) = test(vec![(Path::new("foo").into(), "bar".into())]) {
panic!("{e}\n{e:?}");
}
}
#[test]
fn test_example_2() {
if let Err(e) = test(vec![
(Path::new("a/a.txt").into(), "a.txt".into()),
(Path::new("a/b/a.txt").into(), "a.txt".into()),
(Path::new("a/b/b/a.txt").into(), "a.txt".into()),
(Path::new("a/b/c.txt").into(), "c.txt".into()),
(Path::new("a/c.txt").into(), "c.txt".into()),
]) {
panic!("{e}\n{e:?}");
}
}
#[test]
fn test_example_3() {
if let Err(e) = test(vec![
(Path::new("b/a").into(), "𑦪".into()),
(Path::new("a/c/a").into(), "·".into()),
]) {
panic!("{e}\n{e:?}");
}
}


@@ -0,0 +1,159 @@
use integer_encoding::VarInt;
use tokio::io::{AsyncRead, AsyncWrite};
use crate::prelude::*;
/// Most-significant-bit mask (0x80): set on every varint byte except the last
pub const MSB: u8 = 0b1000_0000;
const MAX_STR_LEN: u64 = 1024 * 1024; // 1 MiB
pub fn serialized_varint_size(n: u64) -> u64 {
VarInt::required_space(n) as u64
}
pub async fn serialize_varint<W: AsyncWrite + Unpin + Send>(
n: u64,
w: &mut W,
) -> Result<(), Error> {
use tokio::io::AsyncWriteExt;
let mut buf = [0u8; 10];
let b = n.encode_var(&mut buf);
w.write_all(&buf[0..b]).await?;
Ok(())
}
pub fn serialized_varstring_size(s: &str) -> u64 {
serialized_varint_size(s.len() as u64) + s.len() as u64
}
pub async fn serialize_varstring<W: AsyncWrite + Unpin + Send>(
s: &str,
w: &mut W,
) -> Result<(), Error> {
use tokio::io::AsyncWriteExt;
serialize_varint(s.len() as u64, w).await?;
w.write_all(s.as_bytes()).await?;
Ok(())
}
#[derive(Default)]
struct VarIntProcessor {
buf: [u8; 10],
maxsize: usize,
i: usize,
}
impl VarIntProcessor {
fn new() -> VarIntProcessor {
VarIntProcessor {
maxsize: (std::mem::size_of::<u64>() * 8 + 7) / 7,
..VarIntProcessor::default()
}
}
fn push(&mut self, b: u8) -> Result<(), Error> {
if self.i >= self.maxsize {
return Err(Error::new(
eyre!("Unterminated varint"),
ErrorKind::ParseS9pk,
));
}
self.buf[self.i] = b;
self.i += 1;
Ok(())
}
fn finished(&self) -> bool {
self.i > 0 && (self.buf[self.i - 1] & MSB == 0)
}
fn decode(&self) -> Option<u64> {
Some(u64::decode_var(&self.buf[0..self.i])?.0)
}
}
pub async fn deserialize_varint<R: AsyncRead + Unpin>(r: &mut R) -> Result<u64, Error> {
use tokio::io::AsyncReadExt;
let mut buf = [0u8; 1];
let mut p = VarIntProcessor::new();
while !p.finished() {
r.read_exact(&mut buf).await?;
p.push(buf[0])?;
}
p.decode()
.ok_or_else(|| Error::new(eyre!("Reached EOF"), ErrorKind::ParseS9pk))
}
pub async fn deserialize_varstring<R: AsyncRead + Unpin>(r: &mut R) -> Result<String, Error> {
use tokio::io::AsyncReadExt;
let len = std::cmp::min(deserialize_varint(r).await?, MAX_STR_LEN);
let mut res = String::with_capacity(len as usize);
r.take(len).read_to_string(&mut res).await?;
Ok(res)
}
#[cfg(test)]
mod test {
use std::io::Cursor;
use crate::prelude::*;
fn test_int(n: u64) -> Result<(), Error> {
let n1 = n;
tokio::runtime::Builder::new_current_thread()
.enable_io()
.build()
.unwrap()
.block_on(async move {
let mut v = Vec::new();
super::serialize_varint(n1, &mut v).await?;
let n2 = super::deserialize_varint(&mut Cursor::new(v)).await?;
ensure_code!(n1 == n2, ErrorKind::Deserialization, "n1 does not match n2");
Ok(())
})
}
fn test_string(s: &str) -> Result<(), Error> {
let s1 = s;
tokio::runtime::Builder::new_current_thread()
.enable_io()
.build()
.unwrap()
.block_on(async move {
let mut v: Vec<u8> = Vec::new();
super::serialize_varstring(s1, &mut v).await?;
let s2 = super::deserialize_varstring(&mut Cursor::new(v)).await?;
ensure_code!(
s1 == &s2,
ErrorKind::Deserialization,
"s1 does not match s2"
);
Ok(())
})
}
proptest::proptest! {
#[test]
fn proptest_int(n: u64) {
if let Err(e) = test_int(n) {
panic!("{e}\nInput: {n}\n{e:?}");
}
}
#[test]
fn proptest_string(s: String) {
if let Err(e) = test_string(&s) {
panic!("{e}\nInput: {s:?}\n{e:?}");
}
}
}
}


@@ -0,0 +1,47 @@
use std::collections::VecDeque;
use crate::prelude::*;
use crate::s9pk::merkle_archive::sink::Sink;
use crate::s9pk::merkle_archive::source::FileSource;
use crate::s9pk::merkle_archive::{Entry, EntryContents};
use crate::util::MaybeOwned;
pub struct WriteQueue<'a, S> {
next_available_position: u64,
queue: VecDeque<&'a Entry<S>>,
}
impl<'a, S> WriteQueue<'a, S> {
pub fn new(next_available_position: u64) -> Self {
Self {
next_available_position,
queue: VecDeque::new(),
}
}
}
impl<'a, S: FileSource> WriteQueue<'a, S> {
pub async fn add(&mut self, entry: &'a Entry<S>) -> Result<u64, Error> {
let res = self.next_available_position;
let size = match entry.as_contents() {
EntryContents::Missing => return Ok(0),
EntryContents::File(f) => f.size().await?,
EntryContents::Directory(d) => d.toc_size(),
};
self.next_available_position += size;
self.queue.push_back(entry);
Ok(res)
}
pub async fn serialize<W: Sink>(&mut self, w: &mut W, verify: bool) -> Result<(), Error> {
loop {
let Some(next) = self.queue.pop_front() else {
break;
};
match next.as_contents() {
EntryContents::Missing => (),
EntryContents::File(f) => f.serialize_body(w, next.hash.filter(|_| verify)).await?,
EntryContents::Directory(d) => d.serialize_toc(self, w).await?,
}
}
Ok(())
}
}


@@ -220,7 +220,7 @@ impl<R: AsyncRead + AsyncSeek + Unpin + Send + Sync> S9pkReader<R> {
&validated_image_ids,
)?;
#[cfg(feature = "js_engine")]
#[cfg(feature = "js-engine")]
if man.containers.is_some()
|| matches!(man.main, crate::procedure::PackageProcedure::Script(_))
{

core/src/s9pk/v2/mod.rs

@@ -0,0 +1,41 @@
use crate::prelude::*;
use crate::s9pk::merkle_archive::sink::Sink;
use crate::s9pk::merkle_archive::source::{ArchiveSource, FileSource, Section};
use crate::s9pk::merkle_archive::MerkleArchive;
const MAGIC_AND_VERSION: &[u8] = &[0x3b, 0x3b, 0x02];
pub struct S9pk<S>(MerkleArchive<S>);
impl<S: FileSource> S9pk<S> {
pub async fn serialize<W: Sink>(&mut self, w: &mut W, verify: bool) -> Result<(), Error> {
use tokio::io::AsyncWriteExt;
w.write_all(MAGIC_AND_VERSION).await?;
self.0.serialize(w, verify).await?;
Ok(())
}
}
impl<S: ArchiveSource> S9pk<Section<S>> {
pub async fn deserialize(source: &S) -> Result<Self, Error> {
use tokio::io::AsyncReadExt;
let mut header = source
.fetch(
0,
MAGIC_AND_VERSION.len() as u64 + MerkleArchive::<Section<S>>::header_size(),
)
.await?;
let mut magic_version = [0u8; 3];
header.read_exact(&mut magic_version).await?;
ensure_code!(
&magic_version == MAGIC_AND_VERSION,
ErrorKind::ParseS9pk,
"Invalid Magic or Unexpected Version"
);
Ok(Self(MerkleArchive::deserialize(source, &mut header).await?))
}
}


@@ -0,0 +1,89 @@
## Magic
`0x3b3b`
## Version
`0x02` (varint)
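The version above is a varint in the LEB128 style the implementation uses (7 payload bits per byte, with the most-significant bit set while more bytes follow; see the `MSB` constant). A minimal sketch of that encoding, purely illustrative since the real code delegates to the `integer_encoding` crate:

```rust
// LEB128-style varint codec matching the MSB-continuation scheme
// described above. Illustrative only; the implementation uses the
// `integer_encoding` crate's VarInt trait instead.
fn encode_varint(mut n: u64) -> Vec<u8> {
    let mut out = Vec::new();
    loop {
        let byte = (n & 0x7f) as u8;
        n >>= 7;
        if n == 0 {
            out.push(byte); // last byte: MSB clear
            break;
        }
        out.push(byte | 0x80); // more bytes follow: MSB set
    }
    out
}

fn decode_varint(bytes: &[u8]) -> Option<u64> {
    let mut n: u64 = 0;
    for (i, &b) in bytes.iter().enumerate() {
        if i >= 10 {
            return None; // a u64 varint never needs more than 10 bytes
        }
        n |= ((b & 0x7f) as u64) << (7 * i);
        if b & 0x80 == 0 {
            return Some(n); // terminating byte (MSB clear)
        }
    }
    None // ran out of input before a terminating byte
}
```

Version `0x02` fits in a single byte; larger values spill into continuation bytes.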
## Merkle Archive
### Header
- ed25519 pubkey (32B)
- ed25519 signature of TOC sighash (64B)
- TOC sighash: (32B)
- TOC position: (8B: u64 BE)
- TOC size: (8B: u64 BE)
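Because every header field above is fixed-width, the header has a constant size that a reader can fetch in a single read together with the 3 magic/version bytes preceding it (as `S9pk::deserialize` does). A sketch, assuming exactly the fields listed:

```rust
// Fixed-width header fields from the spec above (sizes in bytes).
const PUBKEY_LEN: u64 = 32; // ed25519 pubkey
const SIG_LEN: u64 = 64; // ed25519 signature of TOC sighash
const SIGHASH_LEN: u64 = 32; // TOC sighash
const TOC_POS_LEN: u64 = 8; // TOC position, u64 BE
const TOC_SIZE_LEN: u64 = 8; // TOC size, u64 BE

// Constant header size: the reader can fetch magic + version + header
// in one ranged read before parsing anything.
const HEADER_SIZE: u64 = PUBKEY_LEN + SIG_LEN + SIGHASH_LEN + TOC_POS_LEN + TOC_SIZE_LEN;
```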
### TOC
- number of entries (varint)
- FOREACH section
- name (varstring)
- hash (32B: BLAKE-3 of file contents / TOC sighash)
- TYPE (1B)
- TYPE=MISSING (`0x00`)
- TYPE=FILE (`0x01`)
- position (8B: u64 BE)
- size (8B: u64 BE)
- TYPE=TOC (`0x02`)
- position (8B: u64 BE)
- size (8B: u64 BE)
#### SigHash
Hash of TOC with all contents MISSING
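This is what makes the signature independent of on-disk layout: before hashing, every entry is re-serialized as TYPE=MISSING, which drops its position and size fields. A hedged sketch of the idea, using std's `DefaultHasher` as a stand-in for BLAKE-3 and assuming a one-byte varint for short names (function names are illustrative, not the implementation's):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for BLAKE-3, purely illustrative: the point is *what*
// gets hashed, not which hash function is used.
fn hash_bytes(b: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    b.hash(&mut h);
    h.finish()
}

// Serialize one TOC entry per the layout above. `pos_size = None`
// forces TYPE=MISSING, dropping position/size so the resulting bytes
// (and thus the sighash) do not depend on where contents land on disk.
fn entry_bytes(name: &str, hash: [u8; 32], pos_size: Option<(u64, u64)>) -> Vec<u8> {
    assert!(name.len() < 128, "sketch assumes a one-byte varint length");
    let mut out = vec![name.len() as u8]; // varstring: varint length...
    out.extend_from_slice(name.as_bytes()); // ...then UTF-8 bytes
    out.extend_from_slice(&hash); // 32B content hash
    match pos_size {
        None => out.push(0x00), // TYPE=MISSING
        Some((pos, size)) => {
            out.push(0x01); // TYPE=FILE
            out.extend_from_slice(&pos.to_be_bytes()); // 8B u64 BE
            out.extend_from_slice(&size.to_be_bytes()); // 8B u64 BE
        }
    }
    out
}
```

Two archives whose files sit at different offsets hash differently, but their sighash forms (all entries MISSING) are byte-identical, so one signature covers both layouts.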
### FILE
`<File contents>`
# Example
`foo/bar/baz.txt`

ROOT TOC:
- 1 section
  - name: foo
    hash: sighash('a)
    type: TOC
    position: 'a
    size: _

'a:
- 1 section
  - name: bar
    hash: sighash('b)
    type: TOC
    position: 'b
    size: _

'b:
- 2 sections
  - name: baz.txt
    hash: hash('c)
    type: FILE
    position: 'c
    size: _
  - name: qux
    hash: `<unverifiable>`
    type: MISSING

'c: `<CONTENTS OF baz.txt>`
"foo/"
hash: _
size: 15b
"bar.txt"
hash: _
size: 5b
`<CONTENTS OF foo/>` (
"baz.txt"
hash: _
size: 2b
)
`<CONTENTS OF bar.txt>` ("hello")
`<CONTENTS OF baz.txt>` ("hi")

Some files were not shown because too many files have changed in this diff.