mirror of
https://github.com/Start9Labs/start-os.git
synced 2026-03-27 02:41:53 +00:00
Compare commits: chore/ts-b... to sdk-commen... (49 commits)
```diff
@@ -26,7 +26,7 @@ make test-core # Run Rust tests
 
 ## Operating Rules
 
 - Always verify cross-layer changes using the order described in [ARCHITECTURE.md](ARCHITECTURE.md#cross-layer-verification)
-- Check component-level CLAUDE.md files for component-specific conventions
+- Check component-level CLAUDE.md files for component-specific conventions. ALWAYS read it before operating on that component.
 - Follow existing patterns before inventing new ones
 
 ## Supplementary Documentation
```
**Makefile** (7 changes)
```diff
@@ -139,6 +139,11 @@ install-tunnel: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox
 	$(call mkdir,$(DESTDIR)/usr/lib/startos/scripts)
 	$(call cp,build/lib/scripts/forward-port,$(DESTDIR)/usr/lib/startos/scripts/forward-port)
 
+	$(call mkdir,$(DESTDIR)/etc/apt/sources.list.d)
+	$(call cp,apt/start9.list,$(DESTDIR)/etc/apt/sources.list.d/start9.list)
+	$(call mkdir,$(DESTDIR)/usr/share/keyrings)
+	$(call cp,apt/start9.gpg,$(DESTDIR)/usr/share/keyrings/start9.gpg)
+
 core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox: $(CORE_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) web/dist/static/start-tunnel/index.html
 	ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build/build-tunnelbox.sh

@@ -278,7 +283,7 @@ core/bindings/index.ts: $(call ls-files, core) $(ENVIRONMENT_FILE)
 	rm -rf core/bindings
 	./core/build/build-ts.sh
 	ls core/bindings/*.ts | sed 's/core\/bindings\/\([^.]*\)\.ts/export { \1 } from ".\/\1";/g' | grep -v '"./index"' | tee core/bindings/index.ts
-	npm --prefix sdk exec -- prettier --config ./sdk/base/package.json -w ./core/bindings/*.ts
+	npm --prefix sdk/base exec -- prettier --config=./sdk/base/package.json -w './core/bindings/**/*.ts'
 	touch core/bindings/index.ts
 
 sdk/dist/package.json sdk/baseDist/package.json: $(call ls-files, sdk) sdk/base/lib/osBindings/index.ts
```
**TODO.md** (deleted; 215 lines)
@@ -1,215 +0,0 @@

# AI Agent TODOs

Pending tasks for AI agents. Remove items when completed.

## Unreviewed CLAUDE.md Sections

- [ ] Architecture - Web (`/web`) - @MattDHill

## Features

- [ ] Support preferred external ports besides 443 - @dr-bonez

**Problem**: Currently, port 443 is the only preferred external port that is actually honored. When a service requests `preferred_external_port: 8443` (or any non-443 value) for SSL, the system ignores the preference and assigns a dynamic-range port (49152-65535). The `preferred_external_port` is only used as a label for Tor mappings and as a trigger for the port-443 special case in `update()`.

**Goal**: Honor `preferred_external_port` for both SSL and non-SSL binds when the requested port is available, with proper conflict resolution and fallback to dynamic-range allocation.

### Design

**Key distinction**: There are two separate concepts for SSL port usage:

1. **Port ownership** (`assigned_ssl_port`) — A port exclusively owned by a binding, allocated from `AvailablePorts`. Used for server hostnames (`.local`, mDNS, etc.) and iptables forwards.
2. **Domain SSL port** — The port used for domain-based vhost entries. A binding does NOT need to own a port to have a domain vhost on it. The VHostController already supports multiple hostnames on the same port via SNI. Any binding can create a domain vhost entry on any SSL port that the VHostController has a listener for, regardless of who "owns" that port.

For example: the OS owns port 443 as its `assigned_ssl_port`. A service with `preferred_external_port: 443` won't get 443 as its `assigned_ssl_port` (it's taken), but it CAN still have domain vhost entries on port 443 — SNI routes by hostname.

#### 1. Preferred Port Allocation for Ownership ✅ DONE

`AvailablePorts::try_alloc(port) -> Option<u16>` added to `forward.rs`. `BindInfo::new()` and `BindInfo::update()` attempt the preferred port first, falling back to dynamic-range allocation.
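The preferred-then-fallback pattern can be sketched as follows. This is a simplified model: the struct internals and the dynamic-range handling are illustrative assumptions, not the actual `forward.rs` implementation.

```rust
use std::collections::BTreeSet;

// Simplified, hypothetical stand-in for `AvailablePorts`.
struct AvailablePorts {
    taken: BTreeSet<u16>,
    next_dynamic: u16, // dynamic range starts at 49152
}

impl AvailablePorts {
    fn new() -> Self {
        AvailablePorts { taken: BTreeSet::new(), next_dynamic: 49152 }
    }

    /// Try to claim a specific port; `None` if it is already taken.
    fn try_alloc(&mut self, port: u16) -> Option<u16> {
        if self.taken.insert(port) { Some(port) } else { None }
    }

    /// Fallback: allocate the next free port from the dynamic range.
    fn alloc(&mut self) -> Option<u16> {
        while self.next_dynamic < u16::MAX {
            let p = self.next_dynamic;
            self.next_dynamic += 1;
            if self.taken.insert(p) {
                return Some(p);
            }
        }
        None
    }

    /// The pattern used by `BindInfo::new()`/`update()`:
    /// preferred port first, dynamic range as fallback.
    fn alloc_preferred(&mut self, preferred: u16) -> Option<u16> {
        match self.try_alloc(preferred) {
            Some(p) => Some(p),
            None => self.alloc(),
        }
    }
}
```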
#### 2. Per-Address Enable/Disable ✅ DONE

Gateway-level `private_disabled`/`public_enabled` on `NetInfo` replaced with per-address `DerivedAddressInfo` on `BindInfo`. `hostname_info` removed from `Host` — computed addresses now live in `BindInfo.addresses.possible`.

**`DerivedAddressInfo` struct** (on `BindInfo`):

```rust
pub struct DerivedAddressInfo {
    pub private_disabled: BTreeSet<HostnameInfo>,
    pub public_enabled: BTreeSet<HostnameInfo>,
    pub possible: BTreeSet<HostnameInfo>, // COMPUTED by update()
}
```

`DerivedAddressInfo::enabled()` returns `possible` filtered by the two sets. `HostnameInfo` derives `Ord` for `BTreeSet` usage. `AddressFilter` (implementing `InterfaceFilter`) derives the enabled gateway set from `DerivedAddressInfo` for vhost/forward filtering.
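The filtering can be modeled like this. The `public` parameter and the exact filter semantics are assumptions for illustration (the notes only say `possible` is filtered by the two sets), and `String` stands in for the real `HostnameInfo`:

```rust
use std::collections::BTreeSet;

// `String` stands in for `HostnameInfo` (the real type derives `Ord`
// so it can live in a `BTreeSet`).
struct DerivedAddressInfo {
    private_disabled: BTreeSet<String>,
    public_enabled: BTreeSet<String>,
    possible: BTreeSet<String>, // computed by update()
}

impl DerivedAddressInfo {
    // Hypothetical reading of `enabled()`: private addresses are opt-out
    // (everything possible minus explicitly disabled), public addresses
    // are opt-in (only those explicitly enabled).
    fn enabled(&self, public: bool) -> BTreeSet<String> {
        self.possible
            .iter()
            .filter(|h| {
                if public {
                    self.public_enabled.contains(*h)
                } else {
                    !self.private_disabled.contains(*h)
                }
            })
            .cloned()
            .collect()
    }
}
```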
**RPC endpoint**: `set-gateway-enabled` replaced with `set-address-enabled` (on both `server.host.binding` and `package.host.binding`).

**How disabling works per address type** (enforcement deferred to Section 3):

- **WAN/LAN IP:port**: Will be enforced via **source-IP gating** in the vhost layer (Section 3).
- **Hostname-based addresses** (`.local`, domains): Disabled by **not creating the vhost/SNI entry** for that hostname.
#### 3. Eliminate the Port 5443 Hack: Source-IP-Based WAN Blocking (`vhost.rs`, `net_controller.rs`)

**Current problem**: The `if ssl.preferred_external_port == 443` branch (line 341 of `net_controller.rs`) creates a bespoke dual-vhost setup: port 5443 for private-only access and port 443 for public (or public+private). This exists because both public and private traffic arrive on the same port 443 listener, and the current `InterfaceFilter`/`PublicFilter` model distinguishes public/private by which _network interface_ the connection arrived on — which doesn't work when both traffic types share a listener.

**Solution**: Determine public vs private based on **source IP** at the vhost level. Traffic arriving from the gateway IP should be treated as public (the gateway may MASQUERADE/NAT internet traffic, so anything from the gateway is potentially public). Traffic from LAN IPs is private.

This applies to **all** vhost targets, not just port 443:

- **Add a `public` field to `ProxyTarget`** (or an enum: `Public`, `Private`, `Both`) indicating what traffic this target accepts, derived from the binding's user-controlled `public` field.
- **Modify `VHostTarget::filter()`** (`vhost.rs:342`): Instead of (or in addition to) checking the network interface via `GatewayInfo`, check the source IP of the TCP connection against known gateway IPs. If the source IP matches a gateway or an IP outside the subnet, the connection is public; otherwise it's private. Use this to gate against the target's `public` field.
- **Eliminate the 5443 port entirely**: A single vhost entry on port 443 (or any shared SSL port) can serve both public and private traffic, with per-target source-IP gating determining which backend handles which connections.
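The classification rule above (gateway source = potentially public, outside-subnet source = public, other LAN source = private) reduces to a small predicate. This is a sketch under stated assumptions (IPv4 only, a single LAN subnet), not the actual `VHostTarget::filter()` code:

```rust
use std::net::Ipv4Addr;

/// Classify a connection as public or private by its source IP.
/// `network`/`prefix` describe the LAN subnet; `gateway` is the LAN
/// gateway IP. All parameter names are illustrative.
fn is_public(src: Ipv4Addr, gateway: Ipv4Addr, network: Ipv4Addr, prefix: u32) -> bool {
    let mask = if prefix == 0 { 0 } else { u32::MAX << (32 - prefix) };
    let in_subnet = (u32::from(src) & mask) == (u32::from(network) & mask);
    // Traffic from the gateway may be MASQUERADEd internet traffic, so it
    // is treated as potentially public; anything outside the LAN subnet
    // is public by definition.
    src == gateway || !in_subnet
}
```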
#### 4. Port Forward Mapping in Patch-DB

When a binding is marked `public = true`, StartOS must record the required port forwards in patch-db so the frontend can display them to the user. The user then configures these on their router manually.

For each public binding, store:

- The external port the router should forward (the actual vhost port used for domains, or the `assigned_port` / `assigned_ssl_port` for non-domain access)
- The protocol (TCP/UDP)
- The StartOS LAN IP as the forward target
- Which service/binding this forward is for (for display purposes)

This mapping should be in the public database model so the frontend can read and display it.
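The record stored per public binding might take roughly this shape; the type and field names below are hypothetical, not the actual `public.rs` schema:

```rust
use std::net::Ipv4Addr;

// Hypothetical patch-db record for one required router port forward.
#[derive(Debug, PartialEq)]
enum Protocol {
    Tcp,
    Udp,
}

#[derive(Debug)]
struct PortForward {
    external_port: u16,       // the port the router should forward
    protocol: Protocol,       // TCP or UDP
    target_lan_ip: Ipv4Addr,  // the StartOS LAN IP (forward target)
    package: Option<String>,  // which service/binding, for display
}
```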
#### 5. Simplify `update()` Domain Vhost Logic (`net_controller.rs`)

With source-IP gating in the vhost controller:

- **Remove the `== 443` special case** and the 5443 secondary vhost.
- For **server hostnames** (`.local`, mDNS, embassy, startos, localhost): use `assigned_ssl_port` (the port the binding owns).
- For **domain-based vhost entries**: attempt to use `preferred_external_port` as the vhost port. This succeeds if the port is either unused or already has an SSL listener (SNI handles sharing). It fails only if the port is already in use by a non-SSL binding, or is a restricted port. On failure, fall back to `assigned_ssl_port`.
- The binding's `public` field determines the `ProxyTarget`'s public/private gating.
- Hostname info must exactly match the actual vhost port used: for server hostnames, report `ssl_port: assigned_ssl_port`. For domains, report `ssl_port: preferred_external_port` if it was successfully used for the domain vhost, otherwise report `ssl_port: assigned_ssl_port`.
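The port-selection rule in the bullets above can be captured in a small decision function; `PortUse` and the function itself are illustrative sketches, not actual code:

```rust
// How the preferred port is currently used, from the binding's view.
enum PortUse {
    Free,        // unused: the preferred port can be taken
    SslListener, // already an SSL listener: SNI shares it
    NonSsl,      // occupied by a non-SSL binding: cannot share
    Restricted,  // restricted port: not allowed
}

/// Pick the port for a domain vhost entry: try `preferred`, which works
/// if the port is free or already an SSL listener; otherwise fall back
/// to the port the binding owns (`assigned_ssl_port`). The reported
/// `ssl_port` in hostname info must match this function's result.
fn domain_vhost_port(preferred: u16, preferred_use: PortUse, assigned_ssl_port: u16) -> u16 {
    match preferred_use {
        PortUse::Free | PortUse::SslListener => preferred,
        PortUse::NonSsl | PortUse::Restricted => assigned_ssl_port,
    }
}
```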
#### 6. Reachability Test Endpoint

New RPC endpoint that tests whether an address is actually reachable, with diagnostic info on failure.

**RPC endpoint** (`binding.rs` or new file):

- **`test-address`** — Test reachability of a specific address.

```ts
interface BindingTestAddressParams {
  internalPort: number;
  address: HostnameInfo;
}
```

The backend simply performs the raw checks and returns the results. The **frontend** owns all interpretation — it already knows the address type, expected IP, expected port, etc. from the `HostnameInfo` data, so it can compare against the backend results and construct fix messaging.

```ts
interface TestAddressResult {
  dns: string[] | null; // resolved IPs, null if not a domain address or lookup failed
  portOpen: boolean | null; // TCP connect result, null if not applicable
}
```

This yields two RPC methods:

- `server.host.binding.test-address`
- `package.host.binding.test-address`

The frontend already has the full `HostnameInfo` context (expected IP, domain, port, gateway, public/private). It compares the backend's raw results against the expected state and constructs localized fix instructions. For example:

- `dns` returned but doesn't contain the expected WAN IP → "Update DNS A record for {domain} to {wanIp}"
- `dns` is `null` for a domain address → "DNS lookup failed for {domain}"
- `portOpen` is `false` → "Configure port forward on your router: external {port} TCP → {lanIp}:{port}"
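The interpretation logic from the bullets above might look like this (written in Rust for illustration; the function shape and message strings are assumptions, and it assumes the address under test is a domain address):

```rust
// Mirror of the `TestAddressResult` wire type from the notes.
struct TestAddressResult {
    dns: Option<Vec<String>>,   // resolved IPs; None if lookup failed
    port_open: Option<bool>,    // TCP connect result; None if not applicable
}

/// Hypothetical frontend-side interpretation: compare the raw backend
/// results against the expected state and produce a fix instruction,
/// or `None` if everything checks out.
fn fix_message(
    r: &TestAddressResult,
    domain: &str,
    wan_ip: &str,
    lan_ip: &str,
    port: u16,
) -> Option<String> {
    match &r.dns {
        // DNS resolved, but not to the expected WAN IP.
        Some(ips) if !ips.iter().any(|ip| ip.as_str() == wan_ip) => {
            return Some(format!("Update DNS A record for {domain} to {wan_ip}"));
        }
        // For a domain address, a null dns result means the lookup failed.
        None => return Some(format!("DNS lookup failed for {domain}")),
        _ => {}
    }
    if r.port_open == Some(false) {
        return Some(format!(
            "Configure port forward on your router: external {port} TCP -> {lan_ip}:{port}"
        ));
    }
    None
}
```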
### Key Files

| File | Role |
| --- | --- |
| `core/src/net/forward.rs` | `AvailablePorts` — port pool allocation, `try_alloc()` for preferred ports |
| `core/src/net/host/binding.rs` | `Bindings` (Map wrapper for patchdb), `BindInfo`/`NetInfo`/`DerivedAddressInfo`/`AddressFilter` — per-address enable/disable, `set-address-enabled` RPC |
| `core/src/net/net_controller.rs:259` | `NetServiceData::update()` — computes `DerivedAddressInfo.possible`, vhost/forward/DNS reconciliation, 5443 hack removal |
| `core/src/net/vhost.rs` | `VHostController` / `ProxyTarget` — source-IP gating for public/private |
| `core/src/net/gateway.rs` | `InterfaceFilter` trait and filter types (`AddressFilter`, `PublicFilter`, etc.) |
| `core/src/net/service_interface.rs` | `HostnameInfo` — derives `Ord` for `BTreeSet` usage |
| `core/src/net/host/address.rs` | `HostAddress` (flattened struct), domain CRUD endpoints |
| `sdk/base/lib/interfaces/Host.ts` | SDK `MultiHost.bindPort()` — no changes needed |
| `core/src/db/model/public.rs` | Public DB model — port forward mapping |
- [ ] Extract TS-exported types into a lightweight sub-crate for fast binding generation

**Problem**: `make ts-bindings` compiles the entire `start-os` crate (with all dependencies: tokio, axum, openssl, etc.) just to run test functions that serialize type definitions to `.ts` files. Even in debug mode, this takes minutes. The generated output is pure type info — no runtime code is needed.

**Goal**: Generate TS bindings in seconds by isolating exported types in a small crate with minimal dependencies.

**Approach**: Create a `core/bindings-types/` sub-crate containing (or re-exporting) all 168 `#[ts(export)]` types. This crate depends only on `serde`, `ts-rs`, `exver`, and other type-only crates — not on tokio, axum, openssl, etc. Then `build-ts.sh` runs `cargo test -p bindings-types` instead of `cargo test -p start-os`.

**Challenge**: The exported types are scattered across `core/src/` and reference each other and other crate types. Extracting them requires either moving the type definitions into the sub-crate (and importing them back into `start-os`) or restructuring to share a common types crate.
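A hypothetical manifest for such a sub-crate could look like the following; the crate name, versions, and dependency list are illustrative assumptions, not a worked-out design:

```toml
# core/bindings-types/Cargo.toml (hypothetical)
[package]
name = "bindings-types"
version = "0.1.0"
edition = "2021"

[dependencies]
# Type-only dependencies; deliberately no tokio, axum, or openssl.
serde = { version = "1", features = ["derive"] }
ts-rs = "*"    # pinned in practice to match the main crate
exver = "*"    # plus any other type-only crates the exported types use
```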
- [ ] Use auto-generated RPC types in the frontend instead of manual duplicates

**Problem**: The web frontend manually defines ~755 lines of API request/response types in `web/projects/ui/src/app/services/api/api.types.ts` that can drift from the actual Rust types.

**Current state**: The Rust backend already has `#[ts(export)]` on RPC param types (e.g. `AddTunnelParams`, `SetWifiEnabledParams`, `LoginParams`), and they are generated into `core/bindings/`. However, commit `71b83245b` ("Chore/unexport api ts #2585", April 2024) deliberately stopped building them into the SDK and had the frontend maintain its own types.

**Goal**: Reverse that decision — pipe the generated RPC types through the SDK into the frontend so `api.types.ts` can import them instead of duplicating them. This eliminates drift between backend and frontend API contracts.
- [ ] Auto-configure port forwards via UPnP/NAT-PMP/PCP - @dr-bonez

**Blocked by**: "Support preferred external ports besides 443" (must be implemented and tested end-to-end first).

**Goal**: When a binding is marked public, automatically configure port forwards on the user's router using UPnP, NAT-PMP, or PCP, instead of requiring manual router configuration. Fall back to displaying manual instructions (the port forward mapping from patch-db) when auto-configuration is unavailable or fails.
**apt/start9.gpg** (new binary file; not shown)
**apt/start9.list** (new file, 1 line)

```
deb [arch=amd64,arm64,riscv64 signed-by=/usr/share/keyrings/start9.gpg] https://start9-debs.nyc3.cdn.digitaloceanspaces.com stable main
```
**build/apt/publish-deb.sh** (new executable file, 138 lines)

```bash
#!/bin/bash
#
# Publish .deb files to an S3-hosted apt repository.
#
# Usage: publish-deb.sh <deb-file-or-directory> [<deb-file-or-directory> ...]
#
# Environment variables:
#   GPG_PRIVATE_KEY - Armored GPG private key (imported if set)
#   GPG_KEY_ID      - GPG key ID for signing
#   S3_ACCESS_KEY   - S3 access key
#   S3_SECRET_KEY   - S3 secret key
#   S3_ENDPOINT     - S3 endpoint (default: https://nyc3.digitaloceanspaces.com)
#   S3_BUCKET       - S3 bucket name (default: start9-debs)
#   SUITE           - Apt suite name (default: stable)
#   COMPONENT       - Apt component name (default: main)

set -e

if [ $# -eq 0 ]; then
	echo "Usage: $0 <deb-file-or-directory> [...]" >&2
	exit 1
fi

BUCKET="${S3_BUCKET:-start9-debs}"
ENDPOINT="${S3_ENDPOINT:-https://nyc3.digitaloceanspaces.com}"
SUITE="${SUITE:-stable}"
COMPONENT="${COMPONENT:-main}"
REPO_DIR="$(mktemp -d)"

cleanup() {
	rm -rf "$REPO_DIR"
}
trap cleanup EXIT

# Import GPG key if provided
if [ -n "$GPG_PRIVATE_KEY" ]; then
	echo "$GPG_PRIVATE_KEY" | gpg --batch --import 2>/dev/null
fi

# Configure s3cmd
if [ -n "$S3_ACCESS_KEY" ] && [ -n "$S3_SECRET_KEY" ]; then
	S3CMD_CONFIG="$(mktemp)"
	cat > "$S3CMD_CONFIG" <<EOF
[default]
access_key = ${S3_ACCESS_KEY}
secret_key = ${S3_SECRET_KEY}
host_base = $(echo "$ENDPOINT" | sed 's|https://||')
host_bucket = %(bucket)s.$(echo "$ENDPOINT" | sed 's|https://||')
use_https = True
EOF
	s3() {
		s3cmd -c "$S3CMD_CONFIG" "$@"
	}
else
	# Fall back to default ~/.s3cfg
	S3CMD_CONFIG=""
	s3() {
		s3cmd "$@"
	}
fi

# Sync existing repo from S3
echo "Syncing existing repo from s3://${BUCKET}/ ..."
s3 sync --no-mime-magic "s3://${BUCKET}/" "$REPO_DIR/" 2>/dev/null || true

# Collect all .deb files from arguments
DEB_FILES=()
for arg in "$@"; do
	if [ -d "$arg" ]; then
		while IFS= read -r -d '' f; do
			DEB_FILES+=("$f")
		done < <(find "$arg" -name '*.deb' -print0)
	elif [ -f "$arg" ]; then
		DEB_FILES+=("$arg")
	else
		echo "Warning: $arg is not a file or directory, skipping" >&2
	fi
done

if [ ${#DEB_FILES[@]} -eq 0 ]; then
	echo "No .deb files found" >&2
	exit 1
fi

# Copy each deb to the pool, renaming to standard format
for deb in "${DEB_FILES[@]}"; do
	PKG_NAME="$(dpkg-deb --field "$deb" Package)"
	POOL_DIR="$REPO_DIR/pool/${COMPONENT}/${PKG_NAME:0:1}/${PKG_NAME}"
	mkdir -p "$POOL_DIR"
	cp "$deb" "$POOL_DIR/"
	dpkg-name -o "$POOL_DIR/$(basename "$deb")" 2>/dev/null || true
	echo "Added: $(basename "$deb") -> pool/${COMPONENT}/${PKG_NAME:0:1}/${PKG_NAME}/"
done

# Generate Packages indices for each architecture
for arch in amd64 arm64 riscv64; do
	BINARY_DIR="$REPO_DIR/dists/${SUITE}/${COMPONENT}/binary-${arch}"
	mkdir -p "$BINARY_DIR"
	(
		cd "$REPO_DIR"
		dpkg-scanpackages --arch "$arch" pool/ > "$BINARY_DIR/Packages"
		gzip -k -f "$BINARY_DIR/Packages"
	)
	echo "Generated Packages index for ${arch}"
done

# Generate Release file
(
	cd "$REPO_DIR/dists/${SUITE}"
	apt-ftparchive release \
		-o "APT::FTPArchive::Release::Origin=Start9" \
		-o "APT::FTPArchive::Release::Label=Start9" \
		-o "APT::FTPArchive::Release::Suite=${SUITE}" \
		-o "APT::FTPArchive::Release::Codename=${SUITE}" \
		-o "APT::FTPArchive::Release::Architectures=amd64 arm64 riscv64" \
		-o "APT::FTPArchive::Release::Components=${COMPONENT}" \
		. > Release
)
echo "Generated Release file"

# Sign if GPG key is available
if [ -n "$GPG_KEY_ID" ]; then
	(
		cd "$REPO_DIR/dists/${SUITE}"
		gpg --default-key "$GPG_KEY_ID" --batch --yes --detach-sign -o Release.gpg Release
		gpg --default-key "$GPG_KEY_ID" --batch --yes --clearsign -o InRelease Release
	)
	echo "Signed Release file with key ${GPG_KEY_ID}"
else
	echo "Warning: GPG_KEY_ID not set, Release file is unsigned" >&2
fi

# Upload to S3
echo "Uploading to s3://${BUCKET}/ ..."
s3 sync --acl-public --no-mime-magic "$REPO_DIR/" "s3://${BUCKET}/"

[ -n "$S3CMD_CONFIG" ] && rm -f "$S3CMD_CONFIG"
echo "Done."
```
```diff
@@ -55,6 +55,7 @@ socat
 sqlite3
 squashfs-tools
 squashfs-tools-ng
+ssl-cert
 sudo
 systemd
 systemd-resolved
```
**build/dpkg-deps/dev.depends** (new file, 1 line)

```
nmap
```
```diff
@@ -41,7 +41,7 @@ if [ "$IB_TARGET_PLATFORM" = "x86_64" ] || [ "$IB_TARGET_PLATFORM" = "x86_64-non
 elif [ "$IB_TARGET_PLATFORM" = "aarch64" ] || [ "$IB_TARGET_PLATFORM" = "aarch64-nonfree" ] || [ "$IB_TARGET_PLATFORM" = "raspberrypi" ] || [ "$IB_TARGET_PLATFORM" = "rockchip64" ]; then
 	IB_TARGET_ARCH=arm64
 	QEMU_ARCH=aarch64
-elif [ "$IB_TARGET_PLATFORM" = "riscv64" ]; then
+elif [ "$IB_TARGET_PLATFORM" = "riscv64" ] || [ "$IB_TARGET_PLATFORM" = "riscv64-nonfree" ]; then
 	IB_TARGET_ARCH=riscv64
 	QEMU_ARCH=riscv64
 else

@@ -205,7 +205,7 @@ cat > config/hooks/normal/9000-install-startos.hook.chroot << EOF
 
 set -e
 
-if [ "${NON_FREE}" = "1" ] && [ "${IB_TARGET_PLATFORM}" != "raspberrypi" ]; then
+if [ "${NON_FREE}" = "1" ] && [ "${IB_TARGET_PLATFORM}" != "raspberrypi" ] && [ "${IB_TARGET_PLATFORM}" != "riscv64-nonfree" ]; then
 	# install a specific NVIDIA driver version
```
```diff
 # ---------------- configuration ----------------

@@ -13,7 +13,7 @@ for kind in INPUT FORWARD ACCEPT; do
 		iptables -A $kind -j "${NAME}_${kind}"
 	fi
 done
-for kind in PREROUTING OUTPUT; do
+for kind in PREROUTING OUTPUT POSTROUTING; do
 	if ! iptables -t nat -C $kind -j "${NAME}_${kind}" 2> /dev/null; then
 		iptables -t nat -N "${NAME}_${kind}" 2> /dev/null
 		iptables -t nat -A $kind -j "${NAME}_${kind}"

@@ -26,7 +26,7 @@ trap 'err=1' ERR
 for kind in INPUT FORWARD ACCEPT; do
 	iptables -F "${NAME}_${kind}" 2> /dev/null
 done
-for kind in PREROUTING OUTPUT; do
+for kind in PREROUTING OUTPUT POSTROUTING; do
 	iptables -t nat -F "${NAME}_${kind}" 2> /dev/null
 done
 if [ "$UNDO" = 1 ]; then

@@ -40,6 +40,11 @@ fi
 if [ -n "$src_subnet" ]; then
 	iptables -t nat -A ${NAME}_PREROUTING -s "$src_subnet" -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
 	iptables -t nat -A ${NAME}_PREROUTING -s "$src_subnet" -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
+	# Also allow containers on the bridge subnet to reach this forward
+	if [ -n "$bridge_subnet" ]; then
+		iptables -t nat -A ${NAME}_PREROUTING -s "$bridge_subnet" -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
+		iptables -t nat -A ${NAME}_PREROUTING -s "$bridge_subnet" -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
+	fi
 else
 	iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p tcp --dport "$sport" -j DNAT --to-destination "$dip:$dport"
 	iptables -t nat -A ${NAME}_PREROUTING -d "$sip" -p udp --dport "$sport" -j DNAT --to-destination "$dip:$dport"

@@ -53,4 +58,15 @@ iptables -t nat -A ${NAME}_OUTPUT -d "$sip" -p udp --dport "$sport" -j DNAT --to
 iptables -A ${NAME}_FORWARD -d $dip -p tcp --dport $dport -m state --state NEW -j ACCEPT
 iptables -A ${NAME}_FORWARD -d $dip -p udp --dport $dport -m state --state NEW -j ACCEPT
 
+# NAT hairpin: masquerade traffic from the bridge subnet or host to the DNAT
+# target, so replies route back through the host for proper NAT reversal.
+# Container-to-container hairpin (source is on the bridge subnet)
+if [ -n "$bridge_subnet" ]; then
+	iptables -t nat -A ${NAME}_POSTROUTING -s "$bridge_subnet" -d "$dip" -p tcp --dport "$dport" -j MASQUERADE
+	iptables -t nat -A ${NAME}_POSTROUTING -s "$bridge_subnet" -d "$dip" -p udp --dport "$dport" -j MASQUERADE
+fi
+# Host-to-container hairpin (host connects to its own gateway IP, source is sip)
+iptables -t nat -A ${NAME}_POSTROUTING -s "$sip" -d "$dip" -p tcp --dport "$dport" -j MASQUERADE
+iptables -t nat -A ${NAME}_POSTROUTING -s "$sip" -d "$dip" -p udp --dport "$dport" -j MASQUERADE
 
 exit $err
```
**container-runtime/package-lock.json** (generated; 13 changes)
```diff
@@ -19,7 +19,6 @@
         "lodash.merge": "^4.6.2",
         "mime": "^4.0.7",
         "node-fetch": "^3.1.0",
-        "ts-matches": "^6.3.2",
         "tslib": "^2.5.3",
         "typescript": "^5.1.3",
         "yaml": "^2.3.1"

@@ -38,7 +37,7 @@
     },
     "../sdk/dist": {
       "name": "@start9labs/start-sdk",
-      "version": "0.4.0-beta.48",
+      "version": "0.4.0-beta.51",
       "license": "MIT",
       "dependencies": {
         "@iarna/toml": "^3.0.0",

@@ -49,8 +48,8 @@
         "ini": "^5.0.0",
         "isomorphic-fetch": "^3.0.0",
         "mime": "^4.0.7",
-        "ts-matches": "^6.3.2",
-        "yaml": "^2.7.1"
+        "yaml": "^2.7.1",
+        "zod": "^4.3.6"
       },
       "devDependencies": {
         "@types/jest": "^29.4.0",

@@ -6494,12 +6493,6 @@
       }
     }
   },
-    "node_modules/ts-matches": {
-      "version": "6.3.2",
-      "resolved": "https://registry.npmjs.org/ts-matches/-/ts-matches-6.3.2.tgz",
-      "integrity": "sha512-UhSgJymF8cLd4y0vV29qlKVCkQpUtekAaujXbQVc729FezS8HwqzepqvtjzQ3HboatIqN/Idor85O2RMwT7lIQ==",
-      "license": "MIT"
-    },
     "node_modules/tslib": {
       "version": "2.8.1",
       "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz",
```
```diff
@@ -28,7 +28,6 @@
         "lodash.merge": "^4.6.2",
         "mime": "^4.0.7",
         "node-fetch": "^3.1.0",
-        "ts-matches": "^6.3.2",
         "tslib": "^2.5.3",
         "typescript": "^5.1.3",
         "yaml": "^2.3.1"
```
```diff
@@ -3,33 +3,39 @@ import {
   types as T,
   utils,
   VersionRange,
+  z,
 } from "@start9labs/start-sdk"
 import * as net from "net"
-import { object, string, number, literals, some, unknown } from "ts-matches"
 import { Effects } from "../Models/Effects"
 
 import { CallbackHolder } from "../Models/CallbackHolder"
 import { asError } from "@start9labs/start-sdk/base/lib/util"
-const matchRpcError = object({
-  error: object({
-    code: number,
-    message: string,
-    data: some(
-      string,
-      object({
-        details: string,
-        debug: string.nullable().optional(),
-      }),
-    )
+const matchRpcError = z.object({
+  error: z.object({
+    code: z.number(),
+    message: z.string(),
+    data: z
+      .union([
+        z.string(),
+        z.object({
+          details: z.string(),
+          debug: z.string().nullable().optional(),
+        }),
+      ])
+      .nullable()
+      .optional(),
   }),
 })
-const testRpcError = matchRpcError.test
-const testRpcResult = object({
-  result: unknown,
-}).test
-type RpcError = typeof matchRpcError._TYPE
+function testRpcError(v: unknown): v is RpcError {
+  return matchRpcError.safeParse(v).success
+}
+const matchRpcResult = z.object({
+  result: z.unknown(),
+})
+function testRpcResult(v: unknown): v is z.infer<typeof matchRpcResult> {
+  return matchRpcResult.safeParse(v).success
+}
+type RpcError = z.infer<typeof matchRpcError>
 
 const SOCKET_PATH = "/media/startos/rpc/host.sock"
 let hostSystemId = 0

@@ -71,7 +77,7 @@ const rpcRoundFor =
       "Error in host RPC:",
       utils.asError({ method, params, error: res.error }),
     )
-    if (string.test(res.error.data)) {
+    if (typeof res.error.data === "string") {
       message += ": " + res.error.data
       console.error(`Details: ${res.error.data}`)
     } else {

@@ -253,6 +259,14 @@ export function makeEffects(context: EffectContext): Effects {
         callback: context.callbacks?.addCallback(options.callback) || null,
       }) as ReturnType<T.Effects["getSystemSmtp"]>
     },
+    getOutboundGateway(
+      ...[options]: Parameters<T.Effects["getOutboundGateway"]>
+    ) {
+      return rpcRound("get-outbound-gateway", {
+        ...options,
+        callback: context.callbacks?.addCallback(options.callback) || null,
+      }) as ReturnType<T.Effects["getOutboundGateway"]>
+    },
     listServiceInterfaces(
       ...[options]: Parameters<T.Effects["listServiceInterfaces"]>
     ) {

@@ -316,6 +330,31 @@ export function makeEffects(context: EffectContext): Effects {
         T.Effects["setDataVersion"]
       >
     },
+    plugin: {
+      url: {
+        register(
+          ...[options]: Parameters<T.Effects["plugin"]["url"]["register"]>
+        ) {
+          return rpcRound("plugin.url.register", options) as ReturnType<
+            T.Effects["plugin"]["url"]["register"]
+          >
+        },
+        exportUrl(
+          ...[options]: Parameters<T.Effects["plugin"]["url"]["exportUrl"]>
+        ) {
+          return rpcRound("plugin.url.export-url", options) as ReturnType<
+            T.Effects["plugin"]["url"]["exportUrl"]
+          >
+        },
+        clearUrls(
+          ...[options]: Parameters<T.Effects["plugin"]["url"]["clearUrls"]>
+        ) {
+          return rpcRound("plugin.url.clear-urls", options) as ReturnType<
+            T.Effects["plugin"]["url"]["clearUrls"]
+          >
+        },
+      },
+    },
   }
   if (context.callbacks?.onLeaveContext)
     self.onLeaveContext(() => {
```
@@ -1,25 +1,13 @@
// @ts-check

import * as net from "net"
import {
object,
some,
string,
literal,
array,
number,
matches,
any,
shape,
anyOf,
literals,
} from "ts-matches"

import {
ExtendedVersion,
types as T,
utils,
VersionRange,
z,
} from "@start9labs/start-sdk"
import * as fs from "fs"

@@ -29,89 +17,92 @@ import { jsonPath, unNestPath } from "../Models/JsonPath"
import { System } from "../Interfaces/System"
import { makeEffects } from "./EffectCreator"
type MaybePromise<T> = T | Promise<T>
export const matchRpcResult = anyOf(
object({ result: any }),
object({
error: object({
code: number,
message: string,
data: object({
details: string.optional(),
debug: any.optional(),
})
export const matchRpcResult = z.union([
z.object({ result: z.any() }),
z.object({
error: z.object({
code: z.number(),
message: z.string(),
data: z
.object({
details: z.string().optional(),
debug: z.any().optional(),
})
.nullable()
.optional(),
}),
}),
)
])

export type RpcResult = typeof matchRpcResult._TYPE
export type RpcResult = z.infer<typeof matchRpcResult>
type SocketResponse = ({ jsonrpc: "2.0"; id: IdType } & RpcResult) | null

const SOCKET_PARENT = "/media/startos/rpc"
const SOCKET_PATH = "/media/startos/rpc/service.sock"
const jsonrpc = "2.0" as const

const isResult = object({ result: any }).test
const isResultSchema = z.object({ result: z.any() })
const isResult = (v: unknown): v is z.infer<typeof isResultSchema> =>
isResultSchema.safeParse(v).success

const idType = some(string, number, literal(null))
const idType = z.union([z.string(), z.number(), z.literal(null)])
type IdType = null | string | number | undefined
const runType = object({
const runType = z.object({
id: idType.optional(),
method: literal("execute"),
params: object({
id: string,
procedure: string,
input: any,
timeout: number.nullable().optional(),
method: z.literal("execute"),
params: z.object({
id: z.string(),
procedure: z.string(),
input: z.any(),
timeout: z.number().nullable().optional(),
}),
})
const sandboxRunType = object({
const sandboxRunType = z.object({
id: idType.optional(),
method: literal("sandbox"),
params: object({
id: string,
procedure: string,
input: any,
timeout: number.nullable().optional(),
method: z.literal("sandbox"),
params: z.object({
id: z.string(),
procedure: z.string(),
input: z.any(),
timeout: z.number().nullable().optional(),
}),
})
const callbackType = object({
method: literal("callback"),
params: object({
id: number,
args: array,
const callbackType = z.object({
method: z.literal("callback"),
params: z.object({
id: z.number(),
args: z.array(z.unknown()),
}),
})
const initType = object({
const initType = z.object({
id: idType.optional(),
method: literal("init"),
params: object({
id: string,
kind: literals("install", "update", "restore").nullable(),
method: z.literal("init"),
params: z.object({
id: z.string(),
kind: z.enum(["install", "update", "restore"]).nullable(),
}),
})
const startType = object({
const startType = z.object({
id: idType.optional(),
method: literal("start"),
method: z.literal("start"),
})
const stopType = object({
const stopType = z.object({
id: idType.optional(),
method: literal("stop"),
method: z.literal("stop"),
})
const exitType = object({
const exitType = z.object({
id: idType.optional(),
method: literal("exit"),
params: object({
id: string,
target: string.nullable(),
method: z.literal("exit"),
params: z.object({
id: z.string(),
target: z.string().nullable(),
}),
})
const evalType = object({
const evalType = z.object({
id: idType.optional(),
method: literal("eval"),
params: object({
script: string,
method: z.literal("eval"),
params: z.object({
script: z.string(),
}),
})

@@ -144,7 +135,9 @@ const handleRpc = (id: IdType, result: Promise<RpcResult>) =>
},
}))

const hasId = object({ id: idType }).test
const hasIdSchema = z.object({ id: idType })
const hasId = (v: unknown): v is z.infer<typeof hasIdSchema> =>
hasIdSchema.safeParse(v).success
export class RpcListener {
shouldExit = false
unixSocketServer = net.createServer(async (server) => {})
@@ -246,40 +239,52 @@ export class RpcListener {
}

private dealWithInput(input: unknown): MaybePromise<SocketResponse> {
return matches(input)
.when(runType, async ({ id, params }) => {
const parsed = z.object({ method: z.string() }).safeParse(input)
if (!parsed.success) {
console.warn(
`Couldn't parse the following input ${JSON.stringify(input)}`,
)
return {
jsonrpc,
id: (input as any)?.id,
error: {
code: -32602,
message: "invalid params",
data: {
details: JSON.stringify(input),
},
},
}
}

switch (parsed.data.method) {
case "execute": {
const { id, params } = runType.parse(input)
const system = this.system
const procedure = jsonPath.unsafeCast(params.procedure)
const { input, timeout, id: eventId } = params
const result = this.getResult(
procedure,
system,
eventId,
timeout,
input,
)
const procedure = jsonPath.parse(params.procedure)
const { input: inp, timeout, id: eventId } = params
const result = this.getResult(procedure, system, eventId, timeout, inp)

return handleRpc(id, result)
})
.when(sandboxRunType, async ({ id, params }) => {
}
case "sandbox": {
const { id, params } = sandboxRunType.parse(input)
const system = this.system
const procedure = jsonPath.unsafeCast(params.procedure)
const { input, timeout, id: eventId } = params
const result = this.getResult(
procedure,
system,
eventId,
timeout,
input,
)
const procedure = jsonPath.parse(params.procedure)
const { input: inp, timeout, id: eventId } = params
const result = this.getResult(procedure, system, eventId, timeout, inp)

return handleRpc(id, result)
})
.when(callbackType, async ({ params: { id, args } }) => {
}
case "callback": {
const {
params: { id, args },
} = callbackType.parse(input)
this.callCallback(id, args)
return null
})
.when(startType, async ({ id }) => {
}
case "start": {
const { id } = startType.parse(input)
const callbacks =
this.callbacks?.getChild("main") || this.callbacks?.child("main")
const effects = makeEffects({
@@ -290,8 +295,9 @@ export class RpcListener {
id,
this.system.start(effects).then((result) => ({ result })),
)
})
.when(stopType, async ({ id }) => {
}
case "stop": {
const { id } = stopType.parse(input)
return handleRpc(
id,
this.system.stop().then((result) => {
@@ -300,8 +306,9 @@ export class RpcListener {
return { result }
}),
)
})
.when(exitType, async ({ id, params }) => {
}
case "exit": {
const { id, params } = exitType.parse(input)
return handleRpc(
id,
(async () => {
@@ -323,8 +330,9 @@ export class RpcListener {
}
})().then((result) => ({ result })),
)
})
.when(initType, async ({ id, params }) => {
}
case "init": {
const { id, params } = initType.parse(input)
return handleRpc(
id,
(async () => {
@@ -349,8 +357,9 @@ export class RpcListener {
}
})().then((result) => ({ result })),
)
})
.when(evalType, async ({ id, params }) => {
}
case "eval": {
const { id, params } = evalType.parse(input)
return handleRpc(
id,
(async () => {
@@ -375,41 +384,28 @@ export class RpcListener {
}
})(),
)
})
.when(
shape({ id: idType.optional(), method: string }),
({ id, method }) => ({
}
default: {
const { id, method } = z
.object({ id: idType.optional(), method: z.string() })
.passthrough()
.parse(input)
return {
jsonrpc,
id,
error: {
code: -32601,
message: `Method not found`,
message: "Method not found",
data: {
details: method,
},
},
}),
)

.defaultToLazy(() => {
console.warn(
`Couldn't parse the following input ${JSON.stringify(input)}`,
)
return {
jsonrpc,
id: (input as any)?.id,
error: {
code: -32602,
message: "invalid params",
data: {
details: JSON.stringify(input),
},
},
}
})
}
}
}
private getResult(
procedure: typeof jsonPath._TYPE,
procedure: z.infer<typeof jsonPath>,
system: System,
eventId: string,
timeout: number | null | undefined,
@@ -437,6 +433,7 @@ export class RpcListener {
return system.getActionInput(
effects,
procedures[2],
input?.prefill ?? null,
timeout || null,
)
case procedures[1] === "actions" && procedures[3] === "run":
@@ -448,26 +445,18 @@ export class RpcListener {
)
}
}
})().then(ensureResultTypeShape, (error) =>
matches(error)
.when(
object({
error: string,
code: number.defaultTo(0),
}),
(error) => ({
error: {
code: error.code,
message: error.error,
},
}),
)
.defaultToLazy(() => ({
error: {
code: 0,
message: String(error),
},
})),
)
})().then(ensureResultTypeShape, (error) => {
const errorSchema = z.object({
error: z.string(),
code: z.number().default(0),
})
const parsed = errorSchema.safeParse(error)
if (parsed.success) {
return {
error: { code: parsed.data.code, message: parsed.data.error },
}
}
return { error: { code: 0, message: String(error) } }
})
}
}

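The `dealWithInput` rewrite above replaces a ts-matches `.when(...)` chain with a two-step strategy: first peek at the `method` discriminator, then fully parse the request for that method, answering JSON-RPC `-32602` for unparseable input and `-32601` for unknown methods. A dependency-free sketch of that dispatch shape (the schemas are replaced by plain runtime checks, and only a hypothetical `start` method is handled):

```typescript
// Simplified JSON-RPC response shape, mirroring the listener above.
type RpcResponse =
  | { jsonrpc: "2.0"; id: unknown; result: unknown }
  | { jsonrpc: "2.0"; id: unknown; error: { code: number; message: string } }

function dealWithInput(input: unknown): RpcResponse {
  const req = input as { id?: unknown; method?: unknown } | null | undefined
  // Step 1: peek at the discriminator before committing to a full parse.
  if (typeof req?.method !== "string") {
    return {
      jsonrpc: "2.0",
      id: req?.id,
      error: { code: -32602, message: "invalid params" },
    }
  }
  // Step 2: dispatch on the method; each branch would parse its own schema.
  switch (req.method) {
    case "start":
      return { jsonrpc: "2.0", id: req.id, result: null }
    default:
      return {
        jsonrpc: "2.0",
        id: req.id,
        error: { code: -32601, message: "Method not found" },
      }
  }
}
```

Parsing the discriminator first keeps the error distinction of the original chain: malformed envelopes and merely unknown methods get different JSON-RPC error codes.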
@@ -2,7 +2,7 @@ import * as fs from "fs/promises"
import * as cp from "child_process"
import { SubContainer, types as T } from "@start9labs/start-sdk"
import { promisify } from "util"
import { DockerProcedure, VolumeId } from "../../../Models/DockerProcedure"
import { DockerProcedure } from "../../../Models/DockerProcedure"
import { Volume } from "./matchVolume"
import {
CommandOptions,
@@ -28,7 +28,7 @@ export class DockerProcedureContainer extends Drop {
effects: T.Effects,
packageId: string,
data: DockerProcedure,
volumes: { [id: VolumeId]: Volume },
volumes: { [id: string]: Volume },
name: string,
options: { subcontainer?: SubContainer<SDKManifest> } = {},
) {
@@ -47,7 +47,7 @@ export class DockerProcedureContainer extends Drop {
effects: T.Effects,
packageId: string,
data: DockerProcedure,
volumes: { [id: VolumeId]: Volume },
volumes: { [id: string]: Volume },
name: string,
) {
const subcontainer = await SubContainerOwned.of(
@@ -64,7 +64,7 @@ export class DockerProcedureContainer extends Drop {
? `${subcontainer.rootfs}${mounts[mount]}`
: `${subcontainer.rootfs}/${mounts[mount]}`
await fs.mkdir(path, { recursive: true })
const volumeMount = volumes[mount]
const volumeMount: Volume = volumes[mount]
if (volumeMount.type === "data") {
await subcontainer.mount(
Mounts.of().mountVolume({
@@ -90,7 +90,7 @@ export class DockerProcedureContainer extends Drop {
...new Set(
Object.values(hostInfo?.bindings || {})
.flatMap((b) => b.addresses.available)
.map((h) => h.host),
.map((h) => h.hostname),
).values(),
]
const certChain = await effects.getSslCertificate({

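The last hunk above renames `h.host` to `h.hostname` while keeping the same de-duplication idiom: flatten every binding's available addresses and funnel the hostnames through a `Set`. A runnable sketch with a simplified stand-in for the SDK's host-info shape (here `Array.from` stands in for the spread over `.values()` in the diff):

```typescript
// Simplified stand-in for the SDK's binding/address types.
type Binding = { addresses: { available: { hostname: string }[] } }

// Collect every available hostname across all bindings, de-duplicated,
// preserving first-seen order (Set iterates in insertion order).
function collectHostnames(bindings: Record<string, Binding>): string[] {
  return Array.from(
    new Set(
      Object.values(bindings)
        .flatMap((b) => b.addresses.available)
        .map((h) => h.hostname),
    ),
  )
}
```

The `flatMap` step matters because each binding can expose several addresses; mapping alone would yield nested arrays.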
@@ -15,26 +15,11 @@ import { System } from "../../../Interfaces/System"
import { matchManifest, Manifest } from "./matchManifest"
import * as childProcess from "node:child_process"
import { DockerProcedureContainer } from "./DockerProcedureContainer"
import { DockerProcedure } from "../../../Models/DockerProcedure"
import { promisify } from "node:util"
import * as U from "./oldEmbassyTypes"
import { MainLoop } from "./MainLoop"
import {
matches,
boolean,
dictionary,
literal,
literals,
object,
string,
unknown,
any,
tuple,
number,
anyOf,
deferred,
Parser,
array,
} from "ts-matches"
import { z } from "@start9labs/start-sdk"
import { AddSslOptions } from "@start9labs/start-sdk/base/lib/osBindings"
import {
BindOptionsByProtocol,
@@ -57,6 +42,15 @@ function todo(): never {
throw new Error("Not implemented")
}

/**
* Local type for procedure values from the manifest.
* The manifest's zod schemas use ZodTypeAny casts that produce `unknown` in zod v4.
* This type restores the expected shape for type-safe property access.
*/
type Procedure =
| (DockerProcedure & { type: "docker" })
| { type: "script"; args: unknown[] | null }

const MANIFEST_LOCATION = "/usr/lib/startos/package/embassyManifest.json"
export const EMBASSY_JS_LOCATION = "/usr/lib/startos/package/embassy.js"

@@ -65,26 +59,24 @@ const configFile = FileHelper.json(
base: new Volume("embassy"),
subpath: "config.json",
},
matches.any,
z.any(),
)
const dependsOnFile = FileHelper.json(
{
base: new Volume("embassy"),
subpath: "dependsOn.json",
},
dictionary([string, array(string)]),
z.record(z.string(), z.array(z.string())),
)

const matchResult = object({
result: any,
const matchResult = z.object({
result: z.any(),
})
const matchError = object({
error: string,
const matchError = z.object({
error: z.string(),
})
const matchErrorCode = object<{
"error-code": [number, string] | readonly [number, string]
}>({
"error-code": tuple(number, string),
const matchErrorCode = z.object({
"error-code": z.tuple([z.number(), z.string()]),
})

const assertNever = (
@@ -96,29 +88,34 @@ const assertNever = (
/**
Should be changing the type for specific properties, and this is mostly a transformation for the old return types to the newer one.
*/
function isMatchResult(a: unknown): a is z.infer<typeof matchResult> {
return matchResult.safeParse(a).success
}
function isMatchError(a: unknown): a is z.infer<typeof matchError> {
return matchError.safeParse(a).success
}
function isMatchErrorCode(a: unknown): a is z.infer<typeof matchErrorCode> {
return matchErrorCode.safeParse(a).success
}
const fromReturnType = <A>(a: U.ResultType<A>): A => {
if (matchResult.test(a)) {
if (isMatchResult(a)) {
return a.result
}
if (matchError.test(a)) {
if (isMatchError(a)) {
console.info({ passedErrorStack: new Error().stack, error: a.error })
throw { error: a.error }
}
if (matchErrorCode.test(a)) {
if (isMatchErrorCode(a)) {
const [code, message] = a["error-code"]
throw { error: message, code }
}
return assertNever(a)
return assertNever(a as never)
}

const matchSetResult = object({
"depends-on": dictionary([string, array(string)])
.nullable()
.optional(),
dependsOn: dictionary([string, array(string)])
.nullable()
.optional(),
signal: literals(
const matchSetResult = z.object({
"depends-on": z.record(z.string(), z.array(z.string())).nullable().optional(),
dependsOn: z.record(z.string(), z.array(z.string())).nullable().optional(),
signal: z.enum([
"SIGTERM",
"SIGHUP",
"SIGINT",
@@ -151,7 +148,7 @@ const matchSetResult = object({
"SIGPWR",
"SIGSYS",
"SIGINFO",
),
]),
})

type OldGetConfigRes = {
@@ -233,33 +230,29 @@ const asProperty = (x: PackagePropertiesV2): PropertiesReturn =>
Object.fromEntries(
Object.entries(x).map(([key, value]) => [key, asProperty_(value)]),
)
const [matchPackageProperties, setMatchPackageProperties] =
deferred<PackagePropertiesV2>()
const matchPackagePropertyObject: Parser<unknown, PackagePropertyObject> =
object({
value: matchPackageProperties,
type: literal("object"),
description: string,
})
const matchPackagePropertyObject: z.ZodType<PackagePropertyObject> = z.object({
value: z.lazy(() => matchPackageProperties),
type: z.literal("object"),
description: z.string(),
})

const matchPackagePropertyString: Parser<unknown, PackagePropertyString> =
object({
type: literal("string"),
description: string.nullable().optional(),
value: string,
copyable: boolean.nullable().optional(),
qr: boolean.nullable().optional(),
masked: boolean.nullable().optional(),
})
setMatchPackageProperties(
dictionary([
string,
anyOf(matchPackagePropertyObject, matchPackagePropertyString),
]),
const matchPackagePropertyString: z.ZodType<PackagePropertyString> = z.object({
type: z.literal("string"),
description: z.string().nullable().optional(),
value: z.string(),
copyable: z.boolean().nullable().optional(),
qr: z.boolean().nullable().optional(),
masked: z.boolean().nullable().optional(),
})
const matchPackageProperties: z.ZodType<PackagePropertiesV2> = z.lazy(() =>
z.record(
z.string(),
z.union([matchPackagePropertyObject, matchPackagePropertyString]),
),
)

const matchProperties = object({
version: literal(2),
const matchProperties = z.object({
version: z.literal(2),
data: matchPackageProperties,
})

@@ -303,7 +296,7 @@ export class SystemForEmbassy implements System {
})
const manifestData = await fs.readFile(manifestLocation, "utf-8")
return new SystemForEmbassy(
matchManifest.unsafeCast(JSON.parse(manifestData)),
matchManifest.parse(JSON.parse(manifestData)),
moduleCode,
)
}
@@ -389,7 +382,9 @@ export class SystemForEmbassy implements System {
delete this.currentRunning
if (currentRunning) {
await currentRunning.clean({
timeout: fromDuration(this.manifest.main["sigterm-timeout"] || "30s"),
timeout: fromDuration(
(this.manifest.main["sigterm-timeout"] as any) || "30s",
),
})
}
}
@@ -510,6 +505,7 @@ export class SystemForEmbassy implements System {
async getActionInput(
effects: Effects,
actionId: string,
_prefill: Record<string, unknown> | null,
timeoutMs: number | null,
): Promise<T.ActionInput | null> {
if (actionId === "config") {
@@ -622,7 +618,7 @@ export class SystemForEmbassy implements System {
effects: Effects,
timeoutMs: number | null,
): Promise<void> {
const backup = this.manifest.backup.create
const backup = this.manifest.backup.create as Procedure
if (backup.type === "docker") {
const commands = [backup.entrypoint, ...backup.args]
const container = await DockerProcedureContainer.of(
@@ -655,7 +651,7 @@ export class SystemForEmbassy implements System {
encoding: "utf-8",
})
.catch((_) => null)
const restoreBackup = this.manifest.backup.restore
const restoreBackup = this.manifest.backup.restore as Procedure
if (restoreBackup.type === "docker") {
const commands = [restoreBackup.entrypoint, ...restoreBackup.args]
const container = await DockerProcedureContainer.of(
@@ -688,7 +684,7 @@ export class SystemForEmbassy implements System {
effects: Effects,
timeoutMs: number | null,
): Promise<OldGetConfigRes> {
const config = this.manifest.config?.get
const config = this.manifest.config?.get as Procedure | undefined
if (!config) return { spec: {} }
if (config.type === "docker") {
const commands = [config.entrypoint, ...config.args]
@@ -730,7 +726,7 @@ export class SystemForEmbassy implements System {
)
await updateConfig(effects, this.manifest, spec, newConfig)
await configFile.write(effects, newConfig)
const setConfigValue = this.manifest.config?.set
const setConfigValue = this.manifest.config?.set as Procedure | undefined
if (!setConfigValue) return
if (setConfigValue.type === "docker") {
const commands = [
@@ -745,7 +741,7 @@ export class SystemForEmbassy implements System {
this.manifest.volumes,
`Set Config - ${commands.join(" ")}`,
)
const answer = matchSetResult.unsafeCast(
const answer = matchSetResult.parse(
JSON.parse(
(await container.execFail(commands, timeoutMs)).stdout.toString(),
),
@@ -758,7 +754,7 @@ export class SystemForEmbassy implements System {
const method = moduleCode.setConfig
if (!method) throw new Error("Expecting that the method setConfig exists")

const answer = matchSetResult.unsafeCast(
const answer = matchSetResult.parse(
await method(
polyfillEffects(effects, this.manifest),
newConfig as U.Config,
@@ -787,7 +783,11 @@ export class SystemForEmbassy implements System {
const requiredDeps = {
...Object.fromEntries(
Object.entries(this.manifest.dependencies ?? {})
.filter(([k, v]) => v?.requirement.type === "required")
.filter(
([k, v]) =>
(v?.requirement as { type: string } | undefined)?.type ===
"required",
)
.map((x) => [x[0], []]) || [],
),
}
@@ -855,7 +855,7 @@ export class SystemForEmbassy implements System {
}

if (migration) {
const [_, procedure] = migration
const [_, procedure] = migration as readonly [unknown, Procedure]
if (procedure.type === "docker") {
const commands = [procedure.entrypoint, ...procedure.args]
const container = await DockerProcedureContainer.of(
@@ -893,7 +893,10 @@ export class SystemForEmbassy implements System {
effects: Effects,
timeoutMs: number | null,
): Promise<PropertiesReturn> {
const setConfigValue = this.manifest.properties
const setConfigValue = this.manifest.properties as
| Procedure
| null
| undefined
if (!setConfigValue) throw new Error("There is no properties")
if (setConfigValue.type === "docker") {
const commands = [setConfigValue.entrypoint, ...setConfigValue.args]
@@ -904,7 +907,7 @@ export class SystemForEmbassy implements System {
this.manifest.volumes,
`Properties - ${commands.join(" ")}`,
)
const properties = matchProperties.unsafeCast(
const properties = matchProperties.parse(
JSON.parse(
(await container.execFail(commands, timeoutMs)).stdout.toString(),
),
@@ -915,7 +918,7 @@ export class SystemForEmbassy implements System {
const method = moduleCode.properties
if (!method)
throw new Error("Expecting that the method properties exists")
const properties = matchProperties.unsafeCast(
const properties = matchProperties.parse(
await method(polyfillEffects(effects, this.manifest)).then(
fromReturnType,
),
@@ -930,7 +933,8 @@ export class SystemForEmbassy implements System {
formData: unknown,
timeoutMs: number | null,
): Promise<T.ActionResult> {
const actionProcedure = this.manifest.actions?.[actionId]?.implementation
const actionProcedure = this.manifest.actions?.[actionId]
?.implementation as Procedure | undefined
const toActionResult = ({
message,
value,
@@ -997,7 +1001,9 @@ export class SystemForEmbassy implements System {
oldConfig: unknown,
timeoutMs: number | null,
): Promise<object> {
const actionProcedure = this.manifest.dependencies?.[id]?.config?.check
const actionProcedure = this.manifest.dependencies?.[id]?.config?.check as
| Procedure
| undefined
if (!actionProcedure) return { message: "Action not found", value: null }
if (actionProcedure.type === "docker") {
const commands = [
@@ -1089,40 +1095,50 @@ export class SystemForEmbassy implements System {
}
}

const matchPointer = object({
type: literal("pointer"),
const matchPointer = z.object({
type: z.literal("pointer"),
})

const matchPointerPackage = object({
subtype: literal("package"),
target: literals("tor-key", "tor-address", "lan-address"),
"package-id": string,
interface: string,
const matchPointerPackage = z.object({
subtype: z.literal("package"),
target: z.enum(["tor-key", "tor-address", "lan-address"]),
"package-id": z.string(),
interface: z.string(),
})
const matchPointerConfig = object({
subtype: literal("package"),
target: literals("config"),
"package-id": string,
selector: string,
multi: boolean,
const matchPointerConfig = z.object({
subtype: z.literal("package"),
target: z.enum(["config"]),
"package-id": z.string(),
selector: z.string(),
multi: z.boolean(),
})
const matchSpec = object({
spec: object,
const matchSpec = z.object({
spec: z.record(z.string(), z.unknown()),
})
const matchVariants = object({ variants: dictionary([string, unknown]) })
const matchVariants = z.object({ variants: z.record(z.string(), z.unknown()) })
function isMatchPointer(v: unknown): v is z.infer<typeof matchPointer> {
return matchPointer.safeParse(v).success
}
function isMatchSpec(v: unknown): v is z.infer<typeof matchSpec> {
return matchSpec.safeParse(v).success
}
function isMatchVariants(v: unknown): v is z.infer<typeof matchVariants> {
return matchVariants.safeParse(v).success
}
function cleanSpecOfPointers<T>(mutSpec: T): T {
if (!object.test(mutSpec)) return mutSpec
if (typeof mutSpec !== "object" || mutSpec === null) return mutSpec
for (const key in mutSpec) {
const value = mutSpec[key]
if (matchSpec.test(value)) value.spec = cleanSpecOfPointers(value.spec)
if (matchVariants.test(value))
if (isMatchSpec(value))
value.spec = cleanSpecOfPointers(value.spec) as Record<string, unknown>
if (isMatchVariants(value))
value.variants = Object.fromEntries(
Object.entries(value.variants).map(([key, value]) => [
key,
cleanSpecOfPointers(value),
]),
)
if (!matchPointer.test(value)) continue
if (!isMatchPointer(value)) continue
delete mutSpec[key]
// // if (value.target === )
}
@@ -1245,7 +1261,7 @@ async function updateConfig(
: catchFn(
() =>
filled.addressInfo!.filter({ kind: "mdns" })!.hostnames[0]
.host,
.hostname,
) || ""
mutConfigValue[key] = url
}
@@ -1268,7 +1284,7 @@ function extractServiceInterfaceId(manifest: Manifest, specInterface: string) {
}
async function convertToNewConfig(value: OldGetConfigRes) {
try {
const valueSpec: OldConfigSpec = matchOldConfigSpec.unsafeCast(value.spec)
const valueSpec: OldConfigSpec = matchOldConfigSpec.parse(value.spec)
const spec = transformConfigSpec(valueSpec)
if (!value.config) return { spec, config: null }
const config = transformOldConfigToNew(valueSpec, value.config) ?? null

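Throughout the file above, ts-matches' `schema.test(x)` type guards become hand-written `isMatch*` functions built on zod's `safeParse(...).success`, as in `fromReturnType`. A dependency-free sketch of that guard-plus-unwrap pattern; the plain `typeof` checks here are stand-ins for the zod `safeParse` calls, and the result/error shapes mirror `U.ResultType`:

```typescript
// Stand-in for U.ResultType<A>: old-style procedure return values.
type ResultType<A> =
  | { result: A }
  | { error: string }
  | { "error-code": [number, string] }

// Guard functions emulating `schema.safeParse(a).success` without zod.
const isMatchResult = (a: unknown): a is { result: unknown } =>
  typeof a === "object" && a !== null && "result" in a
const isMatchError = (a: unknown): a is { error: string } =>
  typeof a === "object" && a !== null &&
  typeof (a as { error?: unknown }).error === "string"

// Unwrap a result, or rethrow errors in the new `{ error, code }` shape.
function fromReturnType<A>(a: ResultType<A>): A {
  if (isMatchResult(a)) return a.result as A
  if (isMatchError(a)) throw { error: a.error }
  const [code, message] = (a as { "error-code": [number, string] })[
    "error-code"
  ]
  throw { error: message, code }
}
```

The `v is z.infer<typeof schema>` wrapper is needed because, unlike ts-matches' `.test`, zod's `safeParse(...).success` is a plain boolean and does not narrow the input type by itself.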
@@ -4,9 +4,9 @@ import synapseManifest from "./__fixtures__/synapseManifest"

describe("matchManifest", () => {
test("gittea", () => {
matchManifest.unsafeCast(giteaManifest)
matchManifest.parse(giteaManifest)
})
test("synapse", () => {
matchManifest.unsafeCast(synapseManifest)
matchManifest.parse(synapseManifest)
})
})

@@ -1,126 +1,121 @@
import {
object,
literal,
string,
array,
boolean,
dictionary,
literals,
number,
unknown,
some,
every,
} from "ts-matches"
import { z } from "@start9labs/start-sdk"
import { matchVolume } from "./matchVolume"
import { matchDockerProcedure } from "../../../Models/DockerProcedure"

const matchJsProcedure = object({
type: literal("script"),
args: array(unknown).nullable().optional().defaultTo([]),
const matchJsProcedure = z.object({
type: z.literal("script"),
args: z.array(z.unknown()).nullable().optional().default([]),
})

const matchProcedure = some(matchDockerProcedure, matchJsProcedure)
export type Procedure = typeof matchProcedure._TYPE
const matchProcedure = z.union([matchDockerProcedure, matchJsProcedure])
export type Procedure = z.infer<typeof matchProcedure>

const matchAction = object({
name: string,
description: string,
warning: string.nullable().optional(),
const matchAction = z.object({
name: z.string(),
description: z.string(),
warning: z.string().nullable().optional(),
implementation: matchProcedure,
"allowed-statuses": array(literals("running", "stopped")),
"input-spec": unknown.nullable().optional(),
"allowed-statuses": z.array(z.enum(["running", "stopped"])),
"input-spec": z.unknown().nullable().optional(),
})
export const matchManifest = object({
id: string,
title: string,
version: string,
export const matchManifest = z.object({
id: z.string(),
title: z.string(),
version: z.string(),
main: matchDockerProcedure,
assets: object({
assets: string.nullable().optional(),
scripts: string.nullable().optional(),
})
assets: z
.object({
assets: z.string().nullable().optional(),
scripts: z.string().nullable().optional(),
})
.nullable()
.optional(),
"health-checks": dictionary([
string,
every(
"health-checks": z.record(
z.string(),
z.intersection(
matchProcedure,
object({
name: string,
["success-message"]: string.nullable().optional(),
z.object({
name: z.string(),
"success-message": z.string().nullable().optional(),
}),
),
]),
config: object({
get: matchProcedure,
set: matchProcedure,
})
),
config: z
.object({
get: matchProcedure,
set: matchProcedure,
})
.nullable()
.optional(),
properties: matchProcedure.nullable().optional(),
volumes: dictionary([string, matchVolume]),
interfaces: dictionary([
string,
object({
name: string,
description: string,
"tor-config": object({
"port-mapping": dictionary([string, string]),
})
volumes: z.record(z.string(), matchVolume),
interfaces: z.record(
z.string(),
z.object({
name: z.string(),
description: z.string(),
"tor-config": z
.object({
"port-mapping": z.record(z.string(), z.string()),
})
.nullable()
.optional(),
"lan-config": dictionary([
string,
object({
ssl: boolean,
internal: number,
}),
])
"lan-config": z
.record(
z.string(),
z.object({
ssl: z.boolean(),
internal: z.number(),
}),
)
.nullable()
.optional(),
ui: boolean,
protocols: array(string),
ui: z.boolean(),
protocols: z.array(z.string()),
}),
]),
backup: object({
),
backup: z.object({
create: matchProcedure,
restore: matchProcedure,
}),
migrations: object({
to: dictionary([string, matchProcedure]),
from: dictionary([string, matchProcedure]),
})
migrations: z
.object({
to: z.record(z.string(), matchProcedure),
from: z.record(z.string(), matchProcedure),
})
.nullable()
.optional(),
dependencies: dictionary([
string,
object({
version: string,
requirement: some(
object({
type: literal("opt-in"),
how: string,
||||
}),
|
||||
object({
|
||||
type: literal("opt-out"),
|
||||
how: string,
|
||||
}),
|
||||
object({
|
||||
type: literal("required"),
|
||||
}),
|
||||
),
|
||||
description: string.nullable().optional(),
|
||||
config: object({
|
||||
check: matchProcedure,
|
||||
"auto-configure": matchProcedure,
|
||||
dependencies: z.record(
|
||||
z.string(),
|
||||
z
|
||||
.object({
|
||||
version: z.string(),
|
||||
requirement: z.union([
|
||||
z.object({
|
||||
type: z.literal("opt-in"),
|
||||
how: z.string(),
|
||||
}),
|
||||
z.object({
|
||||
type: z.literal("opt-out"),
|
||||
how: z.string(),
|
||||
}),
|
||||
z.object({
|
||||
type: z.literal("required"),
|
||||
}),
|
||||
]),
|
||||
description: z.string().nullable().optional(),
|
||||
config: z
|
||||
.object({
|
||||
check: matchProcedure,
|
||||
"auto-configure": matchProcedure,
|
||||
})
|
||||
.nullable()
|
||||
.optional(),
|
||||
})
|
||||
.nullable()
|
||||
.optional(),
|
||||
})
|
||||
.nullable()
|
||||
.optional(),
|
||||
]),
|
||||
),
|
||||
|
||||
actions: dictionary([string, matchAction]),
|
||||
actions: z.record(z.string(), matchAction),
|
||||
})
|
||||
export type Manifest = typeof matchManifest._TYPE
|
||||
export type Manifest = z.infer<typeof matchManifest>
|
||||
|
||||
@@ -1,32 +1,32 @@
import { object, literal, string, boolean, some } from "ts-matches"
import { z } from "@start9labs/start-sdk"

const matchDataVolume = object({
type: literal("data"),
readonly: boolean.optional(),
const matchDataVolume = z.object({
type: z.literal("data"),
readonly: z.boolean().optional(),
})
const matchAssetVolume = object({
type: literal("assets"),
const matchAssetVolume = z.object({
type: z.literal("assets"),
})
const matchPointerVolume = object({
type: literal("pointer"),
"package-id": string,
"volume-id": string,
path: string,
readonly: boolean,
const matchPointerVolume = z.object({
type: z.literal("pointer"),
"package-id": z.string(),
"volume-id": z.string(),
path: z.string(),
readonly: z.boolean(),
})
const matchCertificateVolume = object({
type: literal("certificate"),
"interface-id": string,
const matchCertificateVolume = z.object({
type: z.literal("certificate"),
"interface-id": z.string(),
})
const matchBackupVolume = object({
type: literal("backup"),
readonly: boolean,
const matchBackupVolume = z.object({
type: z.literal("backup"),
readonly: z.boolean(),
})
export const matchVolume = some(
export const matchVolume = z.union([
matchDataVolume,
matchAssetVolume,
matchPointerVolume,
matchCertificateVolume,
matchBackupVolume,
)
export type Volume = typeof matchVolume._TYPE
])
export type Volume = z.infer<typeof matchVolume>

@@ -12,43 +12,43 @@ import nostrConfig2 from "./__fixtures__/nostrConfig2"

describe("transformConfigSpec", () => {
test("matchOldConfigSpec(embassyPages.homepage.variants[web-page])", () => {
matchOldConfigSpec.unsafeCast(
matchOldConfigSpec.parse(
fixtureEmbassyPagesConfig.homepage.variants["web-page"],
)
})
test("matchOldConfigSpec(embassyPages)", () => {
matchOldConfigSpec.unsafeCast(fixtureEmbassyPagesConfig)
matchOldConfigSpec.parse(fixtureEmbassyPagesConfig)
})
test("transformConfigSpec(embassyPages)", () => {
const spec = matchOldConfigSpec.unsafeCast(fixtureEmbassyPagesConfig)
const spec = matchOldConfigSpec.parse(fixtureEmbassyPagesConfig)
expect(transformConfigSpec(spec)).toMatchSnapshot()
})

test("matchOldConfigSpec(RTL.nodes)", () => {
matchOldValueSpecList.unsafeCast(fixtureRTLConfig.nodes)
matchOldValueSpecList.parse(fixtureRTLConfig.nodes)
})
test("matchOldConfigSpec(RTL)", () => {
matchOldConfigSpec.unsafeCast(fixtureRTLConfig)
matchOldConfigSpec.parse(fixtureRTLConfig)
})
test("transformConfigSpec(RTL)", () => {
const spec = matchOldConfigSpec.unsafeCast(fixtureRTLConfig)
const spec = matchOldConfigSpec.parse(fixtureRTLConfig)
expect(transformConfigSpec(spec)).toMatchSnapshot()
})

test("transformConfigSpec(searNXG)", () => {
const spec = matchOldConfigSpec.unsafeCast(searNXG)
const spec = matchOldConfigSpec.parse(searNXG)
expect(transformConfigSpec(spec)).toMatchSnapshot()
})
test("transformConfigSpec(bitcoind)", () => {
const spec = matchOldConfigSpec.unsafeCast(bitcoind)
const spec = matchOldConfigSpec.parse(bitcoind)
expect(transformConfigSpec(spec)).toMatchSnapshot()
})
test("transformConfigSpec(nostr)", () => {
const spec = matchOldConfigSpec.unsafeCast(nostr)
const spec = matchOldConfigSpec.parse(nostr)
expect(transformConfigSpec(spec)).toMatchSnapshot()
})
test("transformConfigSpec(nostr2)", () => {
const spec = matchOldConfigSpec.unsafeCast(nostrConfig2)
const spec = matchOldConfigSpec.parse(nostrConfig2)
expect(transformConfigSpec(spec)).toMatchSnapshot()
})
})

@@ -1,19 +1,4 @@
import { IST } from "@start9labs/start-sdk"
import {
dictionary,
object,
anyOf,
string,
literals,
array,
number,
boolean,
Parser,
deferred,
every,
nill,
literal,
} from "ts-matches"
import { IST, z } from "@start9labs/start-sdk"

export function transformConfigSpec(oldSpec: OldConfigSpec): IST.InputSpec {
return Object.entries(oldSpec).reduce((inputSpec, [key, oldVal]) => {
@@ -82,7 +67,7 @@ export function transformConfigSpec(oldSpec: OldConfigSpec): IST.InputSpec {
name: oldVal.name,
description: oldVal.description || null,
warning: oldVal.warning || null,
spec: transformConfigSpec(matchOldConfigSpec.unsafeCast(oldVal.spec)),
spec: transformConfigSpec(matchOldConfigSpec.parse(oldVal.spec)),
}
} else if (oldVal.type === "string") {
newVal = {
@@ -121,7 +106,7 @@ export function transformConfigSpec(oldSpec: OldConfigSpec): IST.InputSpec {
...obj,
[id]: {
name: oldVal.tag["variant-names"][id] || id,
spec: transformConfigSpec(matchOldConfigSpec.unsafeCast(spec)),
spec: transformConfigSpec(matchOldConfigSpec.parse(spec)),
},
}),
{} as Record<string, { name: string; spec: IST.InputSpec }>,
@@ -153,7 +138,7 @@ export function transformOldConfigToNew(

if (isObject(val)) {
newVal = transformOldConfigToNew(
matchOldConfigSpec.unsafeCast(val.spec),
matchOldConfigSpec.parse(val.spec),
config[key],
)
}
@@ -172,7 +157,7 @@ export function transformOldConfigToNew(
newVal = {
selection,
value: transformOldConfigToNew(
matchOldConfigSpec.unsafeCast(val.variants[selection]),
matchOldConfigSpec.parse(val.variants[selection]),
config[key],
),
}
@@ -183,10 +168,7 @@ export function transformOldConfigToNew(

if (isObjectList(val)) {
newVal = (config[key] as object[]).map((obj) =>
transformOldConfigToNew(
matchOldConfigSpec.unsafeCast(val.spec.spec),
obj,
),
transformOldConfigToNew(matchOldConfigSpec.parse(val.spec.spec), obj),
)
} else if (isUnionList(val)) return obj
}
@@ -212,7 +194,7 @@ export function transformNewConfigToOld(

if (isObject(val)) {
newVal = transformNewConfigToOld(
matchOldConfigSpec.unsafeCast(val.spec),
matchOldConfigSpec.parse(val.spec),
config[key],
)
}
@@ -221,7 +203,7 @@ export function transformNewConfigToOld(
newVal = {
[val.tag.id]: config[key].selection,
...transformNewConfigToOld(
matchOldConfigSpec.unsafeCast(val.variants[config[key].selection]),
matchOldConfigSpec.parse(val.variants[config[key].selection]),
config[key].value,
),
}
@@ -230,10 +212,7 @@
if (isList(val)) {
if (isObjectList(val)) {
newVal = (config[key] as object[]).map((obj) =>
transformNewConfigToOld(
matchOldConfigSpec.unsafeCast(val.spec.spec),
obj,
),
transformNewConfigToOld(matchOldConfigSpec.parse(val.spec.spec), obj),
)
} else if (isUnionList(val)) return obj
}
@@ -337,9 +316,7 @@ function getListSpec(
default: oldVal.default as Record<string, unknown>[],
spec: {
type: "object",
spec: transformConfigSpec(
matchOldConfigSpec.unsafeCast(oldVal.spec.spec),
),
spec: transformConfigSpec(matchOldConfigSpec.parse(oldVal.spec.spec)),
uniqueBy: oldVal.spec["unique-by"] || null,
displayAs: oldVal.spec["display-as"] || null,
},
@@ -393,211 +370,281 @@ function isUnionList(
}

export type OldConfigSpec = Record<string, OldValueSpec>
const [_matchOldConfigSpec, setMatchOldConfigSpec] = deferred<unknown>()
export const matchOldConfigSpec = _matchOldConfigSpec as Parser<
unknown,
OldConfigSpec
>
export const matchOldDefaultString = anyOf(
string,
object({ charset: string, len: number }),
export const matchOldConfigSpec: z.ZodType<OldConfigSpec> = z.lazy(() =>
z.record(z.string(), matchOldValueSpec),
)
type OldDefaultString = typeof matchOldDefaultString._TYPE
export const matchOldDefaultString = z.union([
z.string(),
z.object({ charset: z.string(), len: z.number() }),
])
type OldDefaultString = z.infer<typeof matchOldDefaultString>

export const matchOldValueSpecString = object({
type: literals("string"),
name: string,
masked: boolean.nullable().optional(),
copyable: boolean.nullable().optional(),
nullable: boolean.nullable().optional(),
placeholder: string.nullable().optional(),
pattern: string.nullable().optional(),
"pattern-description": string.nullable().optional(),
export const matchOldValueSpecString = z.object({
type: z.enum(["string"]),
name: z.string(),
masked: z.boolean().nullable().optional(),
copyable: z.boolean().nullable().optional(),
nullable: z.boolean().nullable().optional(),
placeholder: z.string().nullable().optional(),
pattern: z.string().nullable().optional(),
"pattern-description": z.string().nullable().optional(),
default: matchOldDefaultString.nullable().optional(),
textarea: boolean.nullable().optional(),
description: string.nullable().optional(),
warning: string.nullable().optional(),
textarea: z.boolean().nullable().optional(),
description: z.string().nullable().optional(),
warning: z.string().nullable().optional(),
})

export const matchOldValueSpecNumber = object({
type: literals("number"),
nullable: boolean,
name: string,
range: string,
integral: boolean,
default: number.nullable().optional(),
description: string.nullable().optional(),
warning: string.nullable().optional(),
units: string.nullable().optional(),
placeholder: anyOf(number, string).nullable().optional(),
export const matchOldValueSpecNumber = z.object({
type: z.enum(["number"]),
nullable: z.boolean(),
name: z.string(),
range: z.string(),
integral: z.boolean(),
default: z.number().nullable().optional(),
description: z.string().nullable().optional(),
warning: z.string().nullable().optional(),
units: z.string().nullable().optional(),
placeholder: z.union([z.number(), z.string()]).nullable().optional(),
})
type OldValueSpecNumber = typeof matchOldValueSpecNumber._TYPE
type OldValueSpecNumber = z.infer<typeof matchOldValueSpecNumber>

export const matchOldValueSpecBoolean = object({
type: literals("boolean"),
default: boolean,
name: string,
description: string.nullable().optional(),
warning: string.nullable().optional(),
export const matchOldValueSpecBoolean = z.object({
type: z.enum(["boolean"]),
default: z.boolean(),
name: z.string(),
description: z.string().nullable().optional(),
warning: z.string().nullable().optional(),
})
type OldValueSpecBoolean = typeof matchOldValueSpecBoolean._TYPE
type OldValueSpecBoolean = z.infer<typeof matchOldValueSpecBoolean>

const matchOldValueSpecObject = object({
type: literals("object"),
spec: _matchOldConfigSpec,
name: string,
description: string.nullable().optional(),
warning: string.nullable().optional(),
type OldValueSpecObject = {
type: "object"
spec: OldConfigSpec
name: string
description?: string | null
warning?: string | null
}
const matchOldValueSpecObject: z.ZodType<OldValueSpecObject> = z.object({
type: z.enum(["object"]),
spec: z.lazy(() => matchOldConfigSpec),
name: z.string(),
description: z.string().nullable().optional(),
warning: z.string().nullable().optional(),
})
type OldValueSpecObject = typeof matchOldValueSpecObject._TYPE

const matchOldValueSpecEnum = object({
values: array(string),
"value-names": dictionary([string, string]),
type: literals("enum"),
default: string,
name: string,
description: string.nullable().optional(),
warning: string.nullable().optional(),
const matchOldValueSpecEnum = z.object({
values: z.array(z.string()),
"value-names": z.record(z.string(), z.string()),
type: z.enum(["enum"]),
default: z.string(),
name: z.string(),
description: z.string().nullable().optional(),
warning: z.string().nullable().optional(),
})
type OldValueSpecEnum = typeof matchOldValueSpecEnum._TYPE
type OldValueSpecEnum = z.infer<typeof matchOldValueSpecEnum>

const matchOldUnionTagSpec = object({
id: string, // The name of the field containing one of the union variants
"variant-names": dictionary([string, string]), // The name of each variant
name: string,
description: string.nullable().optional(),
warning: string.nullable().optional(),
const matchOldUnionTagSpec = z.object({
id: z.string(), // The name of the field containing one of the union variants
"variant-names": z.record(z.string(), z.string()), // The name of each variant
name: z.string(),
description: z.string().nullable().optional(),
warning: z.string().nullable().optional(),
})
const matchOldValueSpecUnion = object({
type: literals("union"),
type OldValueSpecUnion = {
type: "union"
tag: z.infer<typeof matchOldUnionTagSpec>
variants: Record<string, OldConfigSpec>
default: string
}
const matchOldValueSpecUnion: z.ZodType<OldValueSpecUnion> = z.object({
type: z.enum(["union"]),
tag: matchOldUnionTagSpec,
variants: dictionary([string, _matchOldConfigSpec]),
default: string,
variants: z.record(
z.string(),
z.lazy(() => matchOldConfigSpec),
),
default: z.string(),
})
type OldValueSpecUnion = typeof matchOldValueSpecUnion._TYPE

const [matchOldUniqueBy, setOldUniqueBy] = deferred<OldUniqueBy>()
type OldUniqueBy =
| null
| string
| { any: OldUniqueBy[] }
| { all: OldUniqueBy[] }

setOldUniqueBy(
anyOf(
nill,
string,
object({ any: array(matchOldUniqueBy) }),
object({ all: array(matchOldUniqueBy) }),
),
const matchOldUniqueBy: z.ZodType<OldUniqueBy> = z.lazy(() =>
z.union([
z.null(),
z.string(),
z.object({ any: z.array(matchOldUniqueBy) }),
z.object({ all: z.array(matchOldUniqueBy) }),
]),
)

const matchOldListValueSpecObject = object({
spec: _matchOldConfigSpec, // this is a mapped type of the config object at this level, replacing the object's values with specs on those values
"unique-by": matchOldUniqueBy.nullable().optional(), // indicates whether duplicates can be permitted in the list
"display-as": string.nullable().optional(), // this should be a handlebars template which can make use of the entire config which corresponds to 'spec'
})
const matchOldListValueSpecUnion = object({
type OldListValueSpecObject = {
spec: OldConfigSpec
"unique-by"?: OldUniqueBy | null
"display-as"?: string | null
}
const matchOldListValueSpecObject: z.ZodType<OldListValueSpecObject> = z.object(
{
spec: z.lazy(() => matchOldConfigSpec), // this is a mapped type of the config object at this level, replacing the object's values with specs on those values
"unique-by": matchOldUniqueBy.nullable().optional(), // indicates whether duplicates can be permitted in the list
"display-as": z.string().nullable().optional(), // this should be a handlebars template which can make use of the entire config which corresponds to 'spec'
},
)
type OldListValueSpecUnion = {
"unique-by"?: OldUniqueBy | null
"display-as"?: string | null
tag: z.infer<typeof matchOldUnionTagSpec>
variants: Record<string, OldConfigSpec>
}
const matchOldListValueSpecUnion: z.ZodType<OldListValueSpecUnion> = z.object({
"unique-by": matchOldUniqueBy.nullable().optional(),
"display-as": string.nullable().optional(),
"display-as": z.string().nullable().optional(),
tag: matchOldUnionTagSpec,
variants: dictionary([string, _matchOldConfigSpec]),
variants: z.record(
z.string(),
z.lazy(() => matchOldConfigSpec),
),
})
const matchOldListValueSpecString = object({
masked: boolean.nullable().optional(),
copyable: boolean.nullable().optional(),
pattern: string.nullable().optional(),
"pattern-description": string.nullable().optional(),
placeholder: string.nullable().optional(),
const matchOldListValueSpecString = z.object({
masked: z.boolean().nullable().optional(),
copyable: z.boolean().nullable().optional(),
pattern: z.string().nullable().optional(),
"pattern-description": z.string().nullable().optional(),
placeholder: z.string().nullable().optional(),
})

const matchOldListValueSpecEnum = object({
values: array(string),
"value-names": dictionary([string, string]),
const matchOldListValueSpecEnum = z.object({
values: z.array(z.string()),
"value-names": z.record(z.string(), z.string()),
})
const matchOldListValueSpecNumber = object({
range: string,
integral: boolean,
units: string.nullable().optional(),
placeholder: anyOf(number, string).nullable().optional(),
const matchOldListValueSpecNumber = z.object({
range: z.string(),
integral: z.boolean(),
units: z.string().nullable().optional(),
placeholder: z.union([z.number(), z.string()]).nullable().optional(),
})

type OldValueSpecListBase = {
type: "list"
range: string
default: string[] | number[] | OldDefaultString[] | Record<string, unknown>[]
name: string
description?: string | null
warning?: string | null
}

type OldValueSpecList = OldValueSpecListBase &
(
| { subtype: "string"; spec: z.infer<typeof matchOldListValueSpecString> }
| { subtype: "enum"; spec: z.infer<typeof matchOldListValueSpecEnum> }
| { subtype: "object"; spec: OldListValueSpecObject }
| { subtype: "number"; spec: z.infer<typeof matchOldListValueSpecNumber> }
| { subtype: "union"; spec: OldListValueSpecUnion }
)

// represents a spec for a list
export const matchOldValueSpecList = every(
object({
type: literals("list"),
range: string, // '[0,1]' (inclusive) OR '[0,*)' (right unbounded), normal math rules
default: anyOf(
array(string),
array(number),
array(matchOldDefaultString),
array(object),
),
name: string,
description: string.nullable().optional(),
warning: string.nullable().optional(),
}),
anyOf(
object({
subtype: literals("string"),
spec: matchOldListValueSpecString,
export const matchOldValueSpecList: z.ZodType<OldValueSpecList> =
z.intersection(
z.object({
type: z.enum(["list"]),
range: z.string(), // '[0,1]' (inclusive) OR '[0,*)' (right unbounded), normal math rules
default: z.union([
z.array(z.string()),
z.array(z.number()),
z.array(matchOldDefaultString),
z.array(z.object({}).passthrough()),
]),
name: z.string(),
description: z.string().nullable().optional(),
warning: z.string().nullable().optional(),
}),
object({
subtype: literals("enum"),
spec: matchOldListValueSpecEnum,
}),
object({
subtype: literals("object"),
spec: matchOldListValueSpecObject,
}),
object({
subtype: literals("number"),
spec: matchOldListValueSpecNumber,
}),
object({
subtype: literals("union"),
spec: matchOldListValueSpecUnion,
}),
),
)
type OldValueSpecList = typeof matchOldValueSpecList._TYPE
z.union([
z.object({
subtype: z.enum(["string"]),
spec: matchOldListValueSpecString,
}),
z.object({
subtype: z.enum(["enum"]),
spec: matchOldListValueSpecEnum,
}),
z.object({
subtype: z.enum(["object"]),
spec: matchOldListValueSpecObject,
}),
z.object({
subtype: z.enum(["number"]),
spec: matchOldListValueSpecNumber,
}),
z.object({
subtype: z.enum(["union"]),
spec: matchOldListValueSpecUnion,
}),
]),
) as unknown as z.ZodType<OldValueSpecList>

const matchOldValueSpecPointer = every(
object({
type: literal("pointer"),
}),
anyOf(
object({
subtype: literal("package"),
target: literals("tor-key", "tor-address", "lan-address"),
"package-id": string,
interface: string,
}),
object({
subtype: literal("package"),
target: literals("config"),
"package-id": string,
selector: string,
multi: boolean,
}),
),
type OldValueSpecPointer = {
type: "pointer"
} & (
| {
subtype: "package"
target: "tor-key" | "tor-address" | "lan-address"
"package-id": string
interface: string
}
| {
subtype: "package"
target: "config"
"package-id": string
selector: string
multi: boolean
}
)
type OldValueSpecPointer = typeof matchOldValueSpecPointer._TYPE
const matchOldValueSpecPointer: z.ZodType<OldValueSpecPointer> = z.intersection(
z.object({
type: z.literal("pointer"),
}),
z.union([
z.object({
subtype: z.literal("package"),
target: z.enum(["tor-key", "tor-address", "lan-address"]),
"package-id": z.string(),
interface: z.string(),
}),
z.object({
subtype: z.literal("package"),
target: z.enum(["config"]),
"package-id": z.string(),
selector: z.string(),
multi: z.boolean(),
}),
]),
) as unknown as z.ZodType<OldValueSpecPointer>

export const matchOldValueSpec = anyOf(
type OldValueSpecString = z.infer<typeof matchOldValueSpecString>

type OldValueSpec =
| OldValueSpecString
| OldValueSpecNumber
| OldValueSpecBoolean
| OldValueSpecObject
| OldValueSpecEnum
| OldValueSpecList
| OldValueSpecUnion
| OldValueSpecPointer

export const matchOldValueSpec: z.ZodType<OldValueSpec> = z.union([
matchOldValueSpecString,
matchOldValueSpecNumber,
matchOldValueSpecBoolean,
matchOldValueSpecObject,
matchOldValueSpecObject as z.ZodType<OldValueSpecObject>,
matchOldValueSpecEnum,
matchOldValueSpecList,
matchOldValueSpecUnion,
matchOldValueSpecPointer,
)
type OldValueSpec = typeof matchOldValueSpec._TYPE

setMatchOldConfigSpec(dictionary([string, matchOldValueSpec]))
matchOldValueSpecList as z.ZodType<OldValueSpecList>,
matchOldValueSpecUnion as z.ZodType<OldValueSpecUnion>,
matchOldValueSpecPointer as z.ZodType<OldValueSpecPointer>,
])

export class Range {
min?: number

@@ -47,11 +47,12 @@ export class SystemForStartOs implements System {
getActionInput(
effects: Effects,
id: string,
prefill: Record<string, unknown> | null,
timeoutMs: number | null,
): Promise<T.ActionInput | null> {
const action = this.abi.actions.get(id)
if (!action) throw new Error(`Action ${id} not found`)
return action.getInput({ effects })
return action.getInput({ effects, prefill })
}
runAction(
effects: Effects,

@@ -33,6 +33,7 @@ export type System = {
getActionInput(
effects: Effects,
actionId: string,
prefill: Record<string, unknown> | null,
timeoutMs: number | null,
): Promise<T.ActionInput | null>


@@ -1,41 +1,19 @@
import {
object,
literal,
string,
boolean,
array,
dictionary,
literals,
number,
Parser,
some,
} from "ts-matches"
import { z } from "@start9labs/start-sdk"
import { matchDuration } from "./Duration"

const VolumeId = string
const Path = string

export type VolumeId = string
export type Path = string
export const matchDockerProcedure = object({
type: literal("docker"),
image: string,
system: boolean.optional(),
entrypoint: string,
args: array(string).defaultTo([]),
mounts: dictionary([VolumeId, Path]).optional(),
"io-format": literals(
"json",
"json-pretty",
"yaml",
"cbor",
"toml",
"toml-pretty",
)
export const matchDockerProcedure = z.object({
type: z.literal("docker"),
image: z.string(),
system: z.boolean().optional(),
entrypoint: z.string(),
args: z.array(z.string()).default([]),
mounts: z.record(z.string(), z.string()).optional(),
"io-format": z
.enum(["json", "json-pretty", "yaml", "cbor", "toml", "toml-pretty"])
.nullable()
.optional(),
"sigterm-timeout": some(number, matchDuration).onMismatch(30),
inject: boolean.defaultTo(false),
"sigterm-timeout": z.union([z.number(), matchDuration]).catch(30),
inject: z.boolean().default(false),
})

export type DockerProcedure = typeof matchDockerProcedure._TYPE
export type DockerProcedure = z.infer<typeof matchDockerProcedure>

@@ -1,11 +1,11 @@
import { string } from "ts-matches"
import { z } from "@start9labs/start-sdk"

export type TimeUnit = "d" | "h" | "s" | "ms" | "m" | "µs" | "ns"
export type Duration = `${number}${TimeUnit}`

const durationRegex = /^([0-9]*(\.[0-9]+)?)(ns|µs|ms|s|m|d)$/

export const matchDuration = string.refine(isDuration)
export const matchDuration = z.string().refine(isDuration)
export function isDuration(value: string): value is Duration {
return durationRegex.test(value)
}

@@ -1,10 +1,10 @@
import { literals, some, string } from "ts-matches"
import { z } from "@start9labs/start-sdk"

type NestedPath<A extends string, B extends string> = `/${A}/${string}/${B}`
type NestedPaths = NestedPath<"actions", "run" | "getInput">
// prettier-ignore
type UnNestPaths<A> =
A extends `${infer A}/${infer B}` ? [...UnNestPaths<A>, ... UnNestPaths<B>] :
type UnNestPaths<A> =
A extends `${infer A}/${infer B}` ? [...UnNestPaths<A>, ... UnNestPaths<B>] :
[A]

export function unNestPath<A extends string>(a: A): UnNestPaths<A> {
@@ -17,14 +17,14 @@ function isNestedPath(path: string): path is NestedPaths {
return true
return false
}
export const jsonPath = some(
literals(
export const jsonPath = z.union([
z.enum([
"/packageInit",
"/packageUninit",
"/backup/create",
"/backup/restore",
),
string.refine(isNestedPath, "isNestedPath"),
)
]),
z.string().refine(isNestedPath),
])

export type JsonPath = typeof jsonPath._TYPE
export type JsonPath = z.infer<typeof jsonPath>

@@ -16,6 +16,6 @@ case $ARCH in
 esac

 docker run --rm $USE_TTY --platform=$DOCKER_PLATFORM -eARCH --privileged -v "$(pwd):/root/start-os" start9/build-env /root/start-os/container-runtime/update-image.sh
-if [ "$(ls -nd "rootfs.${ARCH}.squashfs" | awk '{ print $3 }')" != "$UID" ]; then
+if [ "$(ls -nd "container-runtime/rootfs.${ARCH}.squashfs" | awk '{ print $3 }')" != "$UID" ]; then
 	docker run --rm $USE_TTY -v "$(pwd):/root/start-os" start9/build-env chown -R $UID:$UID /root/start-os/container-runtime
 fi
@@ -53,6 +53,8 @@ Patch-DB provides diff-based state synchronization. Changes to `db/model/public.
 - `.mutate(|v| ...)` — Deserialize, mutate, reserialize
 - For maps: `.keys()`, `.as_idx(&key)`, `.as_idx_mut(&key)`, `.insert()`, `.remove()`, `.contains_key()`

+See [patchdb.md](patchdb.md) for `TypedDbWatch<T>` construction, API, and usage patterns.
+
 ## i18n

 See [i18n-patterns.md](i18n-patterns.md) for internationalization key conventions and the `t!()` macro.
@@ -64,6 +66,7 @@ See [core-rust-patterns.md](core-rust-patterns.md) for common utilities (Invoke
 ## Related Documentation

 - [rpc-toolkit.md](rpc-toolkit.md) — JSON-RPC handler patterns
+- [patchdb.md](patchdb.md) — Patch-DB watch patterns and TypedDbWatch
 - [i18n-patterns.md](i18n-patterns.md) — Internationalization conventions
 - [core-rust-patterns.md](core-rust-patterns.md) — Common Rust utilities
 - [s9pk-structure.md](s9pk-structure.md) — S9PK package format
@@ -23,3 +23,5 @@ cd sdk && make baseDist dist # Rebuild SDK after ts-bindings
 - When adding RPC endpoints, follow the patterns in [rpc-toolkit.md](rpc-toolkit.md)
 - When modifying `#[ts(export)]` types, regenerate bindings and rebuild the SDK (see [ARCHITECTURE.md](../ARCHITECTURE.md#build-pipeline))
 - When adding i18n keys, add all 5 locales in `core/locales/i18n.yaml` (see [i18n-patterns.md](i18n-patterns.md))
+- When using DB watches, follow the `TypedDbWatch<T>` patterns in [patchdb.md](patchdb.md)
+- **Always use `.invoke(ErrorKind::...)` instead of `.status()` when running CLI commands** via `tokio::process::Command`. The `Invoke` trait (from `crate::util::Invoke`) captures stdout/stderr and checks exit codes properly. Using `.status()` leaks stderr directly to system logs, creating noise. For check-then-act patterns (e.g. `iptables -C`), use `.invoke(...).await.is_ok()` / `.is_err()` instead of `.status().await.map_or(false, |s| s.success())`.
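The invoke-over-status rule added above can be sketched in plain std Rust. The real `Invoke` trait in `crate::util` is async (tokio) and tags errors with an `ErrorKind`; this hypothetical synchronous helper only shows the essential behavior — capture both output streams rather than inheriting them, and turn a non-zero exit code into an `Err`:

```rust
use std::process::Command;

// Std-only sketch of the invoke-over-status idea (not the repo's Invoke trait):
// capture stdout/stderr instead of letting stderr leak to the caller's logs,
// and fold the exit-code check into the Result.
fn invoke(cmd: &mut Command) -> Result<Vec<u8>, String> {
    let out = cmd.output().map_err(|e| e.to_string())?; // captures both streams
    if out.status.success() {
        Ok(out.stdout)
    } else {
        // stderr becomes part of the error value instead of log noise
        Err(String::from_utf8_lossy(&out.stderr).into_owned())
    }
}

fn main() {
    // check-then-act: branch on is_ok()/is_err() instead of inspecting ExitStatus
    assert!(invoke(&mut Command::new("true")).is_ok());
    assert!(invoke(&mut Command::new("false")).is_err());
    println!("ok");
}
```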
@@ -994,6 +994,27 @@ disk.mount.binding:
   fr_FR: "Liaison de %{src} à %{dst}"
   pl_PL: "Wiązanie %{src} do %{dst}"

+hostname.empty:
+  en_US: "Hostname cannot be empty"
+  de_DE: "Der Hostname darf nicht leer sein"
+  es_ES: "El nombre de host no puede estar vacío"
+  fr_FR: "Le nom d'hôte ne peut pas être vide"
+  pl_PL: "Nazwa hosta nie może być pusta"
+
+hostname.invalid-character:
+  en_US: "Invalid character in hostname: %{char}"
+  de_DE: "Ungültiges Zeichen im Hostnamen: %{char}"
+  es_ES: "Carácter no válido en el nombre de host: %{char}"
+  fr_FR: "Caractère invalide dans le nom d'hôte : %{char}"
+  pl_PL: "Nieprawidłowy znak w nazwie hosta: %{char}"
+
+hostname.must-provide-name-or-hostname:
+  en_US: "Must provide at least one of: name, hostname"
+  de_DE: "Es muss mindestens eines angegeben werden: name, hostname"
+  es_ES: "Se debe proporcionar al menos uno de: name, hostname"
+  fr_FR: "Vous devez fournir au moins l'un des éléments suivants : name, hostname"
+  pl_PL: "Należy podać co najmniej jedno z: name, hostname"
+
 # init.rs
 init.running-preinit:
   en_US: "Running preinit.sh"
@@ -1243,6 +1264,21 @@ backup.target.cifs.target-not-found-id:
   fr_FR: "ID de cible de sauvegarde %{id} non trouvé"
   pl_PL: "Nie znaleziono ID celu kopii zapasowej %{id}"

+# service/effects/net/plugin.rs
+net.plugin.manifest-missing-plugin:
+  en_US: "manifest does not declare the \"%{plugin}\" plugin"
+  de_DE: "Manifest deklariert das Plugin \"%{plugin}\" nicht"
+  es_ES: "el manifiesto no declara el plugin \"%{plugin}\""
+  fr_FR: "le manifeste ne déclare pas le plugin \"%{plugin}\""
+  pl_PL: "manifest nie deklaruje wtyczki \"%{plugin}\""
+
+net.plugin.binding-not-found:
+  en_US: "binding not found: %{binding}"
+  de_DE: "Bindung nicht gefunden: %{binding}"
+  es_ES: "enlace no encontrado: %{binding}"
+  fr_FR: "liaison introuvable : %{binding}"
+  pl_PL: "powiązanie nie znalezione: %{binding}"
+
 # net/ssl.rs
 net.ssl.unreachable:
   en_US: "unreachable"
@@ -1790,6 +1826,28 @@ registry.package.remove-mirror.unauthorized:
   fr_FR: "Non autorisé"
   pl_PL: "Brak autoryzacji"

+# registry/package/index.rs
+registry.package.index.metadata-mismatch:
+  en_US: "package metadata mismatch: remove the existing version first, then re-add"
+  de_DE: "Paketmetadaten stimmen nicht überein: vorhandene Version zuerst entfernen, dann erneut hinzufügen"
+  es_ES: "discrepancia de metadatos del paquete: elimine la versión existente primero, luego vuelva a agregarla"
+  fr_FR: "discordance des métadonnées du paquet : supprimez d'abord la version existante, puis ajoutez-la à nouveau"
+  pl_PL: "niezgodność metadanych pakietu: najpierw usuń istniejącą wersję, a następnie dodaj ponownie"
+
+registry.package.index.icon-mismatch:
+  en_US: "package icon mismatch: remove the existing version first, then re-add"
+  de_DE: "Paketsymbol stimmt nicht überein: vorhandene Version zuerst entfernen, dann erneut hinzufügen"
+  es_ES: "discrepancia del icono del paquete: elimine la versión existente primero, luego vuelva a agregarla"
+  fr_FR: "discordance de l'icône du paquet : supprimez d'abord la version existante, puis ajoutez-la à nouveau"
+  pl_PL: "niezgodność ikony pakietu: najpierw usuń istniejącą wersję, a następnie dodaj ponownie"
+
+registry.package.index.dependency-metadata-mismatch:
+  en_US: "dependency metadata mismatch: remove the existing version first, then re-add"
+  de_DE: "Abhängigkeitsmetadaten stimmen nicht überein: vorhandene Version zuerst entfernen, dann erneut hinzufügen"
+  es_ES: "discrepancia de metadatos de dependencia: elimine la versión existente primero, luego vuelva a agregarla"
+  fr_FR: "discordance des métadonnées de dépendance : supprimez d'abord la version existante, puis ajoutez-la à nouveau"
+  pl_PL: "niezgodność metadanych zależności: najpierw usuń istniejącą wersję, a następnie dodaj ponownie"
+
 # registry/package/get.rs
 registry.package.get.version-not-found:
   en_US: "Could not find a version of %{id} that satisfies %{version}"
@@ -3087,7 +3145,7 @@ help.arg.smtp-from:
   fr_FR: "Adresse de l'expéditeur"
   pl_PL: "Adres nadawcy e-mail"

-help.arg.smtp-login:
+help.arg.smtp-username:
   en_US: "SMTP authentication username"
   de_DE: "SMTP-Authentifizierungsbenutzername"
   es_ES: "Nombre de usuario de autenticación SMTP"
@@ -3108,13 +3166,20 @@ help.arg.smtp-port:
   fr_FR: "Port du serveur SMTP"
   pl_PL: "Port serwera SMTP"

-help.arg.smtp-server:
+help.arg.smtp-host:
   en_US: "SMTP server hostname"
   de_DE: "SMTP-Server-Hostname"
   es_ES: "Nombre de host del servidor SMTP"
   fr_FR: "Nom d'hôte du serveur SMTP"
   pl_PL: "Nazwa hosta serwera SMTP"

+help.arg.smtp-security:
+  en_US: "Connection security mode (starttls or tls)"
+  de_DE: "Verbindungssicherheitsmodus (starttls oder tls)"
+  es_ES: "Modo de seguridad de conexión (starttls o tls)"
+  fr_FR: "Mode de sécurité de connexion (starttls ou tls)"
+  pl_PL: "Tryb zabezpieczeń połączenia (starttls lub tls)"
+
 help.arg.smtp-to:
   en_US: "Email recipient address"
   de_DE: "E-Mail-Empfängeradresse"
@@ -3935,6 +4000,13 @@ about.allow-gateway-infer-inbound-access-from-wan:
   fr_FR: "Permettre à cette passerelle de déduire si elle a un accès entrant depuis le WAN en fonction de son adresse IPv4"
   pl_PL: "Pozwól tej bramce wywnioskować, czy ma dostęp przychodzący z WAN na podstawie adresu IPv4"

+about.apply-available-update:
+  en_US: "Apply available update"
+  de_DE: "Verfügbares Update anwenden"
+  es_ES: "Aplicar actualización disponible"
+  fr_FR: "Appliquer la mise à jour disponible"
+  pl_PL: "Zastosuj dostępną aktualizację"
+
 about.calculate-blake3-hash-for-file:
   en_US: "Calculate blake3 hash for a file"
   de_DE: "Blake3-Hash für eine Datei berechnen"
@@ -3949,6 +4021,20 @@ about.cancel-install-package:
   fr_FR: "Annuler l'installation d'un paquet"
   pl_PL: "Anuluj instalację pakietu"

+about.check-dns-configuration:
+  en_US: "Check DNS configuration for a gateway"
+  de_DE: "DNS-Konfiguration für ein Gateway prüfen"
+  es_ES: "Verificar la configuración DNS de un gateway"
+  fr_FR: "Vérifier la configuration DNS d'une passerelle"
+  pl_PL: "Sprawdź konfigurację DNS bramy"
+
+about.check-for-updates:
+  en_US: "Check for available updates"
+  de_DE: "Nach verfügbaren Updates suchen"
+  es_ES: "Buscar actualizaciones disponibles"
+  fr_FR: "Vérifier les mises à jour disponibles"
+  pl_PL: "Sprawdź dostępne aktualizacje"
+
 about.check-update-startos:
   en_US: "Check a given registry for StartOS updates and update if available"
   de_DE: "Ein bestimmtes Registry auf StartOS-Updates prüfen und bei Verfügbarkeit aktualisieren"
@@ -5139,6 +5225,13 @@ about.set-country:
   fr_FR: "Définir le pays"
   pl_PL: "Ustaw kraj"

+about.set-hostname:
+  en_US: "Set the server hostname"
+  de_DE: "Den Server-Hostnamen festlegen"
+  es_ES: "Establecer el nombre de host del servidor"
+  fr_FR: "Définir le nom d'hôte du serveur"
+  pl_PL: "Ustaw nazwę hosta serwera"
+
 about.set-gateway-enabled-for-binding:
   en_US: "Set gateway enabled for binding"
   de_DE: "Gateway für Bindung aktivieren"
core/patchdb.md (new file, 105 lines)
@@ -0,0 +1,105 @@
# Patch-DB Patterns

## Model<T> and HasModel

Types stored in the database derive `HasModel`, which generates typed accessor methods on `Model<T>`:

```rust
#[derive(Debug, Deserialize, Serialize, HasModel)]
#[serde(rename_all = "camelCase")]
#[model = "Model<Self>"]
pub struct ServerInfo {
    pub version: Version,
    pub network: NetworkInfo,
    // ...
}
```

**Generated accessors** (one per field):
- `as_version()` — `&Model<Version>`
- `as_version_mut()` — `&mut Model<Version>`
- `into_version()` — `Model<Version>`

**`Model<T>` APIs:**
- `.de()` — Deserialize to `T`
- `.ser(&value)` — Serialize from `T`
- `.mutate(|v| ...)` — Deserialize, mutate, reserialize
- For maps: `.keys()`, `.as_idx(&key)`, `.insert()`, `.remove()`, `.contains_key()`

## Database Access

```rust
// Read-only snapshot
let snap = db.peek().await;
let version = snap.as_public().as_server_info().as_version().de()?;

// Atomic mutation
db.mutate(|db| {
    db.as_public_mut().as_server_info_mut().as_version_mut().ser(&new_version)?;
    Ok(())
}).await;
```

## TypedDbWatch<T>

Watch a JSON pointer path for changes and deserialize as a typed value. Requires `T: HasModel`.

### Construction

```rust
use patch_db::json_ptr::JsonPointer;

let ptr: JsonPointer = "/public/serverInfo".parse().unwrap();
let mut watch = db.watch(ptr).await.typed::<ServerInfo>();
```

### API

- `watch.peek()?.de()?` — Get current value as `T`
- `watch.changed().await?` — Wait until the watched path changes
- `watch.peek()?.as_field().de()?` — Access nested fields via `HasModel` accessors

### Usage Patterns

**Wait for a condition, then proceed:**

```rust
// Wait for DB version to match current OS version
let current = Current::default().semver();
let mut watch = db
    .watch("/public/serverInfo".parse().unwrap())
    .await
    .typed::<ServerInfo>();
loop {
    let server_info = watch.peek()?.de()?;
    if server_info.version == current {
        break;
    }
    watch.changed().await?;
}
```

**React to changes in a loop:**

```rust
// From net_controller.rs — react to host changes
let mut watch = db
    .watch("/public/serverInfo/network/host".parse().unwrap())
    .await
    .typed::<Host>();
loop {
    if let Err(e) = watch.changed().await {
        tracing::error!("DB watch disconnected: {e}");
        break;
    }
    let host = watch.peek()?.de()?;
    // ... process host ...
}
```

### Real Examples

- `net_controller.rs:469` — Watch `Hosts` for package network changes
- `net_controller.rs:493` — Watch `Host` for main UI network changes
- `service_actor.rs:37` — Watch `StatusInfo` for service state transitions
- `gateway.rs:1212` — Wait for DB migrations to complete before syncing
@@ -6,7 +6,7 @@ use openssl::pkey::{PKey, Private};
 use openssl::x509::X509;

 use crate::db::model::DatabaseModel;
-use crate::hostname::{Hostname, generate_hostname, generate_id};
+use crate::hostname::{ServerHostnameInfo, generate_hostname, generate_id};
 use crate::net::ssl::{gen_nistp256, make_root_cert};
 use crate::prelude::*;
 use crate::util::serde::Pem;
@@ -23,7 +23,7 @@ fn hash_password(password: &str) -> Result<String, Error> {
 #[derive(Clone)]
 pub struct AccountInfo {
     pub server_id: String,
-    pub hostname: Hostname,
+    pub hostname: ServerHostnameInfo,
     pub password: String,
     pub root_ca_key: PKey<Private>,
     pub root_ca_cert: X509,
@@ -31,11 +31,19 @@ pub struct AccountInfo {
     pub developer_key: ed25519_dalek::SigningKey,
 }
 impl AccountInfo {
-    pub fn new(password: &str, start_time: SystemTime) -> Result<Self, Error> {
+    pub fn new(
+        password: &str,
+        start_time: SystemTime,
+        hostname: Option<ServerHostnameInfo>,
+    ) -> Result<Self, Error> {
         let server_id = generate_id();
-        let hostname = generate_hostname();
+        let hostname = if let Some(h) = hostname {
+            h
+        } else {
+            ServerHostnameInfo::from_hostname(generate_hostname())
+        };
         let root_ca_key = gen_nistp256()?;
-        let root_ca_cert = make_root_cert(&root_ca_key, &hostname, start_time)?;
+        let root_ca_cert = make_root_cert(&root_ca_key, &hostname.hostname, start_time)?;
         let ssh_key = ssh_key::PrivateKey::from(ssh_key::private::Ed25519Keypair::random(
             &mut ssh_key::rand_core::OsRng::default(),
         ));
@@ -54,7 +62,7 @@ impl AccountInfo {

     pub fn load(db: &DatabaseModel) -> Result<Self, Error> {
         let server_id = db.as_public().as_server_info().as_id().de()?;
-        let hostname = Hostname(db.as_public().as_server_info().as_hostname().de()?);
+        let hostname = ServerHostnameInfo::load(db.as_public().as_server_info())?;
         let password = db.as_private().as_password().de()?;
         let key_store = db.as_private().as_key_store();
         let cert_store = key_store.as_local_certs();
@@ -77,7 +85,7 @@ impl AccountInfo {
     pub fn save(&self, db: &mut DatabaseModel) -> Result<(), Error> {
         let server_info = db.as_public_mut().as_server_info_mut();
         server_info.as_id_mut().ser(&self.server_id)?;
-        server_info.as_hostname_mut().ser(&self.hostname.0)?;
+        self.hostname.save(server_info)?;
         server_info
             .as_pubkey_mut()
             .ser(&self.ssh_key.public_key().to_openssh()?)?;
@@ -115,8 +123,8 @@ impl AccountInfo {

     pub fn hostnames(&self) -> impl IntoIterator<Item = InternedString> + Send + '_ {
         [
-            self.hostname.no_dot_host_name(),
-            self.hostname.local_domain_name(),
+            (*self.hostname.hostname).clone(),
+            self.hostname.hostname.local_domain_name(),
         ]
     }
 }
@@ -67,6 +67,10 @@ pub struct GetActionInputParams {
     pub package_id: PackageId,
     #[arg(help = "help.arg.action-id")]
     pub action_id: ActionId,
+    #[ts(type = "Record<string, unknown> | null")]
+    #[serde(default)]
+    #[arg(skip)]
+    pub prefill: Option<Value>,
 }

 #[instrument(skip_all)]
@@ -75,6 +79,7 @@ pub async fn get_action_input(
     GetActionInputParams {
         package_id,
         action_id,
+        prefill,
     }: GetActionInputParams,
 ) -> Result<Option<ActionInput>, Error> {
     ctx.services
@@ -82,7 +87,7 @@ pub async fn get_action_input(
         .await
         .as_ref()
         .or_not_found(lazy_format!("Manager for {}", package_id))?
-        .get_action_input(Guid::new(), action_id)
+        .get_action_input(Guid::new(), action_id, prefill.unwrap_or(Value::Null))
         .await
 }
@@ -271,9 +271,9 @@ async fn perform_backup(
         package_backups.insert(
             id.clone(),
             PackageBackupInfo {
-                os_version: manifest.as_os_version().de()?,
+                os_version: manifest.as_metadata().as_os_version().de()?,
                 version: manifest.as_version().de()?,
-                title: manifest.as_title().de()?,
+                title: manifest.as_metadata().as_title().de()?,
                 timestamp: Utc::now(),
             },
         );
@@ -338,7 +338,7 @@ async fn perform_backup(
     let timestamp = Utc::now();

     backup_guard.unencrypted_metadata.version = crate::version::Current::default().semver().into();
-    backup_guard.unencrypted_metadata.hostname = ctx.account.peek(|a| a.hostname.clone());
+    backup_guard.unencrypted_metadata.hostname = ctx.account.peek(|a| a.hostname.hostname.clone());
     backup_guard.unencrypted_metadata.timestamp = timestamp.clone();
     backup_guard.metadata.version = crate::version::Current::default().semver().into();
     backup_guard.metadata.timestamp = Some(timestamp);
@@ -6,7 +6,7 @@ use serde::{Deserialize, Serialize};
 use ssh_key::private::Ed25519Keypair;

 use crate::account::AccountInfo;
-use crate::hostname::{Hostname, generate_hostname, generate_id};
+use crate::hostname::{ServerHostname, ServerHostnameInfo, generate_hostname, generate_id};
 use crate::prelude::*;
 use crate::util::serde::{Base32, Base64, Pem};

@@ -27,10 +27,12 @@ impl<'de> Deserialize<'de> for OsBackup {
                 .map_err(serde::de::Error::custom)?,
             1 => patch_db::value::from_value::<OsBackupV1>(tagged.rest)
                 .map_err(serde::de::Error::custom)?
-                .project(),
+                .project()
+                .map_err(serde::de::Error::custom)?,
             2 => patch_db::value::from_value::<OsBackupV2>(tagged.rest)
                 .map_err(serde::de::Error::custom)?
-                .project(),
+                .project()
+                .map_err(serde::de::Error::custom)?,
             v => {
                 return Err(serde::de::Error::custom(&format!(
                     "Unknown backup version {v}"
@@ -75,7 +77,7 @@ impl OsBackupV0 {
         Ok(OsBackup {
             account: AccountInfo {
                 server_id: generate_id(),
-                hostname: generate_hostname(),
+                hostname: ServerHostnameInfo::from_hostname(generate_hostname()),
                 password: Default::default(),
                 root_ca_key: self.root_ca_key.0,
                 root_ca_cert: self.root_ca_cert.0,
@@ -104,11 +106,11 @@ struct OsBackupV1 {
     ui: Value, // JSON Value
 }
 impl OsBackupV1 {
-    fn project(self) -> OsBackup {
-        OsBackup {
+    fn project(self) -> Result<OsBackup, Error> {
+        Ok(OsBackup {
             account: AccountInfo {
                 server_id: self.server_id,
-                hostname: Hostname(self.hostname),
+                hostname: ServerHostnameInfo::from_hostname(ServerHostname::new(self.hostname)?),
                 password: Default::default(),
                 root_ca_key: self.root_ca_key.0,
                 root_ca_cert: self.root_ca_cert.0,
@@ -116,7 +118,7 @@ impl OsBackupV1 {
                 developer_key: ed25519_dalek::SigningKey::from_bytes(&self.net_key),
             },
             ui: self.ui,
-        }
+        })
     }
 }

@@ -134,11 +136,11 @@ struct OsBackupV2 {
     ui: Value, // JSON Value
 }
 impl OsBackupV2 {
-    fn project(self) -> OsBackup {
-        OsBackup {
+    fn project(self) -> Result<OsBackup, Error> {
+        Ok(OsBackup {
             account: AccountInfo {
                 server_id: self.server_id,
-                hostname: Hostname(self.hostname),
+                hostname: ServerHostnameInfo::from_hostname(ServerHostname::new(self.hostname)?),
                 password: Default::default(),
                 root_ca_key: self.root_ca_key.0,
                 root_ca_cert: self.root_ca_cert.0,
@@ -146,12 +148,12 @@ impl OsBackupV2 {
                 developer_key: self.compat_s9pk_key.0,
             },
             ui: self.ui,
-        }
+        })
     }
     fn unproject(backup: &OsBackup) -> Self {
         Self {
             server_id: backup.account.server_id.clone(),
-            hostname: backup.account.hostname.0.clone(),
+            hostname: (*backup.account.hostname.hostname).clone(),
             root_ca_key: Pem(backup.account.root_ca_key.clone()),
             root_ca_cert: Pem(backup.account.root_ca_cert.clone()),
             ssh_key: Pem(backup.account.ssh_key.clone()),
@@ -17,6 +17,7 @@ use crate::db::model::Database;
 use crate::disk::mount::backup::BackupMountGuard;
 use crate::disk::mount::filesystem::ReadWrite;
 use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
+use crate::hostname::ServerHostnameInfo;
 use crate::init::init;
 use crate::prelude::*;
 use crate::progress::ProgressUnits;
@@ -90,6 +91,7 @@ pub async fn recover_full_server(
     server_id: &str,
     recovery_password: &str,
     kiosk: Option<bool>,
+    hostname: Option<ServerHostnameInfo>,
     SetupExecuteProgress {
         init_phases,
         restore_phase,
@@ -115,6 +117,10 @@ pub async fn recover_full_server(
     )
     .with_kind(ErrorKind::PasswordHashGeneration)?;

+    if let Some(h) = hostname {
+        os_backup.account.hostname = h;
+    }
+
     let kiosk = Some(kiosk.unwrap_or(true)).filter(|_| &*PLATFORM != "raspberrypi");
     sync_kiosk(kiosk).await?;

@@ -183,7 +189,7 @@ pub async fn recover_full_server(

     Ok((
         SetupResult {
-            hostname: os_backup.account.hostname,
+            hostname: os_backup.account.hostname.hostname,
             root_ca: Pem(os_backup.account.root_ca_cert),
             needs_restart: ctx.install_rootfs.peek(|a| a.is_some()),
         },
@@ -218,7 +218,10 @@ pub struct CifsRemoveParams {
     pub id: BackupTargetId,
 }

-pub async fn remove(ctx: RpcContext, CifsRemoveParams { id }: CifsRemoveParams) -> Result<(), Error> {
+pub async fn remove(
+    ctx: RpcContext,
+    CifsRemoveParams { id }: CifsRemoveParams,
+) -> Result<(), Error> {
     let id = if let BackupTargetId::Cifs { id } = id {
         id
     } else {
@@ -70,7 +70,8 @@ async fn inner_main(
     };

     let (rpc_ctx, shutdown) = async {
-        crate::hostname::sync_hostname(&rpc_ctx.account.peek(|a| a.hostname.clone())).await?;
+        crate::hostname::sync_hostname(&rpc_ctx.account.peek(|a| a.hostname.hostname.clone()))
+            .await?;

         let mut shutdown_recv = rpc_ctx.shutdown.subscribe();

@@ -147,10 +148,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
         .build()
         .expect(&t!("bins.startd.failed-to-initialize-runtime"));
     let res = rt.block_on(async {
-        let mut server = WebServer::new(
-            Acceptor::new(WildcardListener::new(80)?),
-            refresher(),
-        );
+        let mut server = WebServer::new(Acceptor::new(WildcardListener::new(80)?), refresher());
         match inner_main(&mut server, &config).await {
             Ok(a) => {
                 server.shutdown().await;
@@ -7,13 +7,13 @@ use clap::Parser;
 use futures::FutureExt;
 use rpc_toolkit::CliApp;
 use rust_i18n::t;
+use tokio::net::TcpListener;
 use tokio::signal::unix::signal;
 use tracing::instrument;
 use visit_rs::Visit;

 use crate::context::CliContext;
 use crate::context::config::ClientConfig;
-use tokio::net::TcpListener;
 use crate::net::tls::TlsListener;
 use crate::net::web_server::{Accept, Acceptor, MetadataVisitor, WebServer};
 use crate::prelude::*;
@@ -165,8 +165,7 @@ impl RpcContext {
         {
             (net_ctrl, os_net_service)
         } else {
-            let net_ctrl =
-                Arc::new(NetController::init(db.clone(), &account.hostname, socks_proxy).await?);
+            let net_ctrl = Arc::new(NetController::init(db.clone(), socks_proxy).await?);
             webserver.send_modify(|wl| wl.set_ip_info(net_ctrl.net_iface.watcher.subscribe()));
             let os_net_service = net_ctrl.os_bindings().await?;
             (net_ctrl, os_net_service)
@@ -533,7 +532,7 @@ impl RpcContext {
         for (package_id, action_id) in tasks {
             if let Some(service) = self.services.get(&package_id).await.as_ref() {
                 if let Some(input) = service
-                    .get_action_input(procedure_id.clone(), action_id.clone())
+                    .get_action_input(procedure_id.clone(), action_id.clone(), Value::Null)
                     .await
                     .log_err()
                     .flatten()
@@ -19,7 +19,7 @@ use crate::MAIN_DATA;
 use crate::context::RpcContext;
 use crate::context::config::ServerConfig;
 use crate::disk::mount::guard::{MountGuard, TmpMountGuard};
-use crate::hostname::Hostname;
+use crate::hostname::ServerHostname;
 use crate::net::gateway::WildcardListener;
 use crate::net::web_server::{WebServer, WebServerAcceptorSetter};
 use crate::prelude::*;
@@ -45,7 +45,7 @@ lazy_static::lazy_static! {
 #[ts(export)]
 pub struct SetupResult {
     #[ts(type = "string")]
-    pub hostname: Hostname,
+    pub hostname: ServerHostname,
     pub root_ca: Pem<X509>,
     pub needs_restart: bool,
 }
@@ -45,7 +45,12 @@ impl Database {
                 .collect(),
             ssh_privkey: Pem(account.ssh_key.clone()),
             ssh_pubkeys: SshKeys::new(),
-            available_ports: AvailablePorts::new(),
+            available_ports: {
+                let mut ports = AvailablePorts::new();
+                ports.set_ssl(80, false);
+                ports.set_ssl(443, true);
+                ports
+            },
             sessions: Sessions::new(),
             notifications: Notifications::new(),
             cifs: CifsTargets::new(),
@@ -381,9 +381,10 @@ pub struct PackageDataEntry {
     pub hosts: Hosts,
     #[ts(type = "string[]")]
     pub store_exposed_dependents: Vec<JsonPointer>,
     #[serde(default)]
     #[ts(type = "string | null")]
     pub outbound_gateway: Option<GatewayId>,
+    #[serde(default)]
+    pub plugin: PackagePlugin,
 }
 impl AsRef<PackageDataEntry> for PackageDataEntry {
     fn as_ref(&self) -> &PackageDataEntry {
@@ -391,6 +392,21 @@ impl AsRef<PackageDataEntry> for PackageDataEntry {
     }
 }

+#[derive(Debug, Default, Deserialize, Serialize, HasModel, TS)]
+#[serde(rename_all = "camelCase")]
+#[model = "Model<Self>"]
+#[ts(export)]
+pub struct PackagePlugin {
+    pub url: Option<UrlPluginRegistration>,
+}
+
+#[derive(Debug, Clone, Deserialize, Serialize, TS)]
+#[serde(rename_all = "camelCase")]
+#[ts(export)]
+pub struct UrlPluginRegistration {
+    pub table_action: ActionId,
+}
+
 #[derive(Debug, Clone, Default, Deserialize, Serialize, TS)]
 #[ts(export)]
 pub struct CurrentDependencies(pub BTreeMap<PackageId, CurrentDependencyInfo>);
@@ -13,6 +13,7 @@ use openssl::hash::MessageDigest;
 use patch_db::{HasModel, Value};
 use serde::{Deserialize, Serialize};
 use ts_rs::TS;
+use url::Url;

 use crate::account::AccountInfo;
 use crate::db::DbAccessByKey;
@@ -58,7 +59,8 @@ impl Public {
                 platform: get_platform(),
                 id: account.server_id.clone(),
                 version: Current::default().semver(),
-                hostname: account.hostname.no_dot_host_name(),
+                name: account.hostname.name.clone(),
+                hostname: (*account.hostname.hostname).clone(),
                 last_backup: None,
                 package_version_compat: Current::default().compat().clone(),
                 post_init_migration_todos: BTreeMap::new(),
@@ -143,6 +145,7 @@ impl Public {
                 zram: true,
                 governor: None,
                 smtp: None,
+                ifconfig_url: default_ifconfig_url(),
                 ram: 0,
                 devices: Vec::new(),
                 kiosk,
@@ -164,19 +167,21 @@ fn get_platform() -> InternedString {
     (&*PLATFORM).into()
 }

+pub fn default_ifconfig_url() -> Url {
+    "https://ifconfig.co".parse().unwrap()
+}
+
 #[derive(Debug, Deserialize, Serialize, HasModel, TS)]
 #[serde(rename_all = "camelCase")]
 #[model = "Model<Self>"]
 #[ts(export)]
 pub struct ServerInfo {
     #[serde(default = "get_arch")]
     #[ts(type = "string")]
     pub arch: InternedString,
     #[serde(default = "get_platform")]
     #[ts(type = "string")]
     pub platform: InternedString,
     pub id: String,
+    #[ts(type = "string")]
+    pub name: InternedString,
     pub hostname: InternedString,
     #[ts(type = "string")]
     pub version: Version,
@@ -200,6 +205,9 @@ pub struct ServerInfo {
     pub zram: bool,
     pub governor: Option<Governor>,
     pub smtp: Option<SmtpValue>,
+    #[serde(default = "default_ifconfig_url")]
+    #[ts(type = "string")]
+    pub ifconfig_url: Url,
     #[ts(type = "number")]
     pub ram: u64,
     pub devices: Vec<LshwDevice>,
@@ -45,7 +45,7 @@ impl TS for DepInfo {
         "DepInfo".into()
     }
     fn inline() -> String {
-        "{ description: string | null, optional: boolean } & MetadataSrc".into()
+        "{ description: LocaleString | null, optional: boolean } & MetadataSrc".into()
     }
     fn inline_flattened() -> String {
         Self::inline()
@@ -54,7 +54,8 @@ impl TS for DepInfo {
     where
         Self: 'static,
     {
-        v.visit::<MetadataSrc>()
+        v.visit::<MetadataSrc>();
+        v.visit::<LocaleString>();
     }
     fn output_path() -> Option<&'static std::path::Path> {
         Some(Path::new("DepInfo.ts"))
@@ -19,7 +19,7 @@ use super::mount::filesystem::block_dev::BlockDev;
 use super::mount::guard::TmpMountGuard;
 use crate::disk::OsPartitionInfo;
 use crate::disk::mount::guard::GenericMountGuard;
-use crate::hostname::Hostname;
+use crate::hostname::ServerHostname;
 use crate::prelude::*;
 use crate::util::Invoke;
 use crate::util::serde::IoFormat;
@@ -61,7 +61,7 @@ pub struct PartitionInfo {
 #[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct StartOsRecoveryInfo {
-    pub hostname: Hostname,
+    pub hostname: ServerHostname,
     #[ts(type = "string")]
     pub version: exver::Version,
     #[ts(type = "string")]
@@ -3,6 +3,7 @@ use std::fmt::{Debug, Display};
use axum::http::StatusCode;
use axum::http::uri::InvalidUri;
use color_eyre::eyre::eyre;
use imbl_value::InternedString;
use num_enum::TryFromPrimitive;
use patch_db::Value;
use rpc_toolkit::reqwest;
@@ -204,17 +205,12 @@ pub struct Error {

impl Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}: {:#}", &self.kind.as_str(), self.source)
        write!(f, "{}: {}", &self.kind.as_str(), self.display_src())
    }
}
impl Debug for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(
            f,
            "{}: {:?}",
            &self.kind.as_str(),
            self.debug.as_ref().unwrap_or(&self.source)
        )
        write!(f, "{}: {}", &self.kind.as_str(), self.display_dbg())
    }
}
impl Error {
@@ -235,8 +231,13 @@ impl Error {
    }
    pub fn clone_output(&self) -> Self {
        Error {
            source: eyre!("{}", self.source),
            debug: self.debug.as_ref().map(|e| eyre!("{e}")),
            source: eyre!("{:#}", self.source),
            debug: Some(
                self.debug
                    .as_ref()
                    .map(|e| eyre!("{e}"))
                    .unwrap_or_else(|| eyre!("{:?}", self.source)),
            ),
            kind: self.kind,
            info: self.info.clone(),
            task: None,
@@ -257,6 +258,30 @@ impl Error {
        self.task.take();
        self
    }

    pub fn display_src(&self) -> impl Display {
        struct D<'a>(&'a Error);
        impl<'a> Display for D<'a> {
            fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
                write!(f, "{:#}", self.0.source)
            }
        }
        D(self)
    }

    pub fn display_dbg(&self) -> impl Display {
        struct D<'a>(&'a Error);
        impl<'a> Display for D<'a> {
            fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
                if let Some(debug) = &self.0.debug {
                    write!(f, "{}", debug)
                } else {
                    write!(f, "{:?}", self.0.source)
                }
            }
        }
        D(self)
    }
}
impl axum::response::IntoResponse for Error {
    fn into_response(self) -> axum::response::Response {
@@ -433,9 +458,11 @@ impl Debug for ErrorData {
impl std::error::Error for ErrorData {}
impl From<Error> for ErrorData {
    fn from(value: Error) -> Self {
        let details = value.display_src().to_string();
        let debug = value.display_dbg().to_string();
        Self {
            details: value.to_string(),
            debug: format!("{:?}", value),
            details,
            debug,
            info: value.info,
        }
    }
@@ -623,13 +650,10 @@ impl<T> ResultExt<T, Error> for Result<T, Error> {
    fn with_ctx<F: FnOnce(&Error) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> {
        self.map_err(|e| {
            let (kind, ctx) = f(&e);
            let ctx = InternedString::from_display(&ctx);
            let source = e.source;
            let with_ctx = format!("{ctx}: {source}");
            let source = source.wrap_err(with_ctx);
            let debug = e.debug.map(|e| {
                let with_ctx = format!("{ctx}: {e}");
                e.wrap_err(with_ctx)
            });
            let source = source.wrap_err(ctx.clone());
            let debug = e.debug.map(|e| e.wrap_err(ctx));
            Error {
                kind,
                source,

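The `display_src`/`display_dbg` helpers in the hunk above use a borrowed-wrapper pattern to return `impl Display` without allocating an intermediate `String`. A minimal standalone sketch of the same pattern (the `MyError` type here is hypothetical, standing in for the real `Error`):

```rust
use std::fmt::{self, Display};

// Hypothetical error-like type standing in for `Error` above.
struct MyError {
    source: String,
    debug: Option<String>,
}

impl MyError {
    // Borrowing wrapper: formatting happens lazily inside `fmt`,
    // so no temporary String is built unless the caller formats it.
    fn display_dbg(&self) -> impl Display + '_ {
        struct D<'a>(&'a MyError);
        impl<'a> Display for D<'a> {
            fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
                if let Some(debug) = &self.0.debug {
                    write!(f, "{debug}")
                } else {
                    write!(f, "{}", self.0.source)
                }
            }
        }
        D(self)
    }
}
```

Because the wrapper only borrows, callers can embed it in a larger `format!` or `write!` call and pay for formatting exactly once.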
@@ -1,26 +1,58 @@
use clap::Parser;
use imbl_value::InternedString;
use lazy_format::lazy_format;
use rand::{Rng, rng};
use serde::{Deserialize, Serialize};
use tokio::process::Command;
use tracing::instrument;
use ts_rs::TS;

use crate::context::RpcContext;
use crate::db::model::public::ServerInfo;
use crate::prelude::*;
use crate::util::Invoke;
use crate::{Error, ErrorKind};

#[derive(Clone, Debug, Default, serde::Deserialize, serde::Serialize, ts_rs::TS)]
#[ts(type = "string")]
pub struct Hostname(pub InternedString);

lazy_static::lazy_static! {
    static ref ADJECTIVES: Vec<String> = include_str!("./assets/adjectives.txt").lines().map(|x| x.to_string()).collect();
    static ref NOUNS: Vec<String> = include_str!("./assets/nouns.txt").lines().map(|x| x.to_string()).collect();
}
impl AsRef<str> for Hostname {
    fn as_ref(&self) -> &str {
pub struct ServerHostname(InternedString);
impl std::ops::Deref for ServerHostname {
    type Target = InternedString;
    fn deref(&self) -> &Self::Target {
        &self.0
    }
}
impl AsRef<str> for ServerHostname {
    fn as_ref(&self) -> &str {
        &***self
    }
}

impl ServerHostname {
    fn validate(&self) -> Result<(), Error> {
        if self.0.is_empty() {
            return Err(Error::new(
                eyre!("{}", t!("hostname.empty")),
                ErrorKind::InvalidRequest,
            ));
        }
        if let Some(c) = self
            .0
            .chars()
            .find(|c| !(c.is_ascii_alphanumeric() || c == &'-') || c.is_ascii_uppercase())
        {
            return Err(Error::new(
                eyre!("{}", t!("hostname.invalid-character", char = c)),
                ErrorKind::InvalidRequest,
            ));
        }
        Ok(())
    }

    pub fn new(hostname: InternedString) -> Result<Self, Error> {
        let res = Self(hostname);
        res.validate()?;
        Ok(res)
    }

impl Hostname {
    pub fn lan_address(&self) -> InternedString {
        InternedString::from_display(&lazy_format!("https://{}.local", self.0))
    }
@@ -29,17 +61,135 @@ impl Hostname {
        InternedString::from_display(&lazy_format!("{}.local", self.0))
    }

    pub fn no_dot_host_name(&self) -> InternedString {
        self.0.clone()
    pub fn load(server_info: &Model<ServerInfo>) -> Result<Self, Error> {
        Ok(Self(server_info.as_hostname().de()?))
    }

    pub fn save(&self, server_info: &mut Model<ServerInfo>) -> Result<(), Error> {
        server_info.as_hostname_mut().ser(&**self)
    }
}

pub fn generate_hostname() -> Hostname {
    let mut rng = rng();
    let adjective = &ADJECTIVES[rng.random_range(0..ADJECTIVES.len())];
    let noun = &NOUNS[rng.random_range(0..NOUNS.len())];
    Hostname(InternedString::from_display(&lazy_format!(
        "{adjective}-{noun}"
#[derive(Clone, Debug, Default, serde::Deserialize, serde::Serialize, ts_rs::TS)]
#[ts(type = "string")]
pub struct ServerHostnameInfo {
    pub name: InternedString,
    pub hostname: ServerHostname,
}

lazy_static::lazy_static! {
    static ref ADJECTIVES: Vec<String> = include_str!("./assets/adjectives.txt").lines().map(|x| x.to_string()).collect();
    static ref NOUNS: Vec<String> = include_str!("./assets/nouns.txt").lines().map(|x| x.to_string()).collect();
}
impl AsRef<str> for ServerHostnameInfo {
    fn as_ref(&self) -> &str {
        &self.hostname
    }
}

fn normalize(s: &str) -> InternedString {
    let mut prev_was_dash = true;
    let mut normalized = s
        .chars()
        .filter_map(|c| {
            if c.is_alphanumeric() {
                prev_was_dash = false;
                Some(c.to_ascii_lowercase())
            } else if (c == '-' || c.is_whitespace()) && !prev_was_dash {
                prev_was_dash = true;
                Some('-')
            } else {
                None
            }
        })
        .collect::<String>();
    while normalized.ends_with('-') {
        normalized.pop();
    }
    if normalized.len() < 4 {
        generate_hostname().0
    } else {
        normalized.into()
    }
}

fn denormalize(s: &str) -> InternedString {
    let mut cap = true;
    s.chars()
        .map(|c| {
            if c == '-' {
                cap = true;
                ' '
            } else if cap {
                cap = false;
                c.to_ascii_uppercase()
            } else {
                c
            }
        })
        .collect::<String>()
        .into()
}

impl ServerHostnameInfo {
    pub fn new(
        name: Option<InternedString>,
        hostname: Option<InternedString>,
    ) -> Result<Self, Error> {
        Self::new_opt(name, hostname)
            .map(|h| h.unwrap_or_else(|| ServerHostnameInfo::from_hostname(generate_hostname())))
    }

    pub fn new_opt(
        name: Option<InternedString>,
        hostname: Option<InternedString>,
    ) -> Result<Option<Self>, Error> {
        let name = name.filter(|n| !n.is_empty());
        let hostname = hostname.filter(|h| !h.is_empty());
        Ok(match (name, hostname) {
            (Some(name), Some(hostname)) => Some(ServerHostnameInfo {
                name,
                hostname: ServerHostname::new(hostname)?,
            }),
            (Some(name), None) => Some(ServerHostnameInfo::from_name(name)),
            (None, Some(hostname)) => Some(ServerHostnameInfo::from_hostname(ServerHostname::new(
                hostname,
            )?)),
            (None, None) => None,
        })
    }

    pub fn from_hostname(hostname: ServerHostname) -> Self {
        Self {
            name: denormalize(&**hostname),
            hostname,
        }
    }

    pub fn from_name(name: InternedString) -> Self {
        Self {
            hostname: ServerHostname(normalize(&*name)),
            name,
        }
    }

    pub fn load(server_info: &Model<ServerInfo>) -> Result<Self, Error> {
        Ok(Self {
            name: server_info.as_name().de()?,
            hostname: ServerHostname::load(server_info)?,
        })
    }

    pub fn save(&self, server_info: &mut Model<ServerInfo>) -> Result<(), Error> {
        server_info.as_name_mut().ser(&self.name)?;
        self.hostname.save(server_info)
    }
}

pub fn generate_hostname() -> ServerHostname {
    let num = rand::random::<u16>();
    ServerHostname(InternedString::from_display(&lazy_format!(
        "startos-{num:04x}"
    )))
}

@@ -49,17 +199,17 @@ pub fn generate_id() -> String {
}

#[instrument(skip_all)]
pub async fn get_current_hostname() -> Result<Hostname, Error> {
pub async fn get_current_hostname() -> Result<InternedString, Error> {
    let out = Command::new("hostname")
        .invoke(ErrorKind::ParseSysInfo)
        .await?;
    let out_string = String::from_utf8(out)?;
    Ok(Hostname(out_string.trim().into()))
    Ok(out_string.trim().into())
}

#[instrument(skip_all)]
pub async fn set_hostname(hostname: &Hostname) -> Result<(), Error> {
    let hostname = &*hostname.0;
pub async fn set_hostname(hostname: &ServerHostname) -> Result<(), Error> {
    let hostname = &***hostname;
    Command::new("hostnamectl")
        .arg("--static")
        .arg("set-hostname")
@@ -78,7 +228,7 @@ pub async fn set_hostname(hostname: &Hostname) -> Result<(), Error> {
}

#[instrument(skip_all)]
pub async fn sync_hostname(hostname: &Hostname) -> Result<(), Error> {
pub async fn sync_hostname(hostname: &ServerHostname) -> Result<(), Error> {
    set_hostname(hostname).await?;
    Command::new("systemctl")
        .arg("restart")
@@ -87,3 +237,37 @@ pub async fn sync_hostname(hostname: &Hostname) -> Result<(), Error> {
        .await?;
    Ok(())
}

#[derive(Deserialize, Serialize, Parser, TS)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
#[ts(export)]
pub struct SetServerHostnameParams {
    name: Option<InternedString>,
    hostname: Option<InternedString>,
}

pub async fn set_hostname_rpc(
    ctx: RpcContext,
    SetServerHostnameParams { name, hostname }: SetServerHostnameParams,
) -> Result<(), Error> {
    let Some(hostname) = ServerHostnameInfo::new_opt(name, hostname)? else {
        return Err(Error::new(
            eyre!("{}", t!("hostname.must-provide-name-or-hostname")),
            ErrorKind::InvalidRequest,
        ));
    };
    ctx.db
        .mutate(|db| hostname.save(db.as_public_mut().as_server_info_mut()))
        .await
        .result?;
    ctx.account.mutate(|a| a.hostname = hostname.clone());
    sync_hostname(&hostname.hostname).await?;

    Ok(())
}

#[test]
fn test_generate_hostname() {
    assert_eq!(dbg!(generate_hostname().0).len(), 12);
}

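The `normalize`/`denormalize` pair introduced above round-trips between a human-readable server name and a hostname-safe slug. A simplified standalone sketch over plain `String` (omitting the diff's `InternedString` and its short-name fallback to `generate_hostname`):

```rust
// Lowercase alphanumerics; collapse runs of '-' or whitespace into a
// single '-'; drop leading and trailing separators.
fn normalize(s: &str) -> String {
    let mut prev_was_dash = true; // true so leading separators are dropped
    let mut out: String = s
        .chars()
        .filter_map(|c| {
            if c.is_alphanumeric() {
                prev_was_dash = false;
                Some(c.to_ascii_lowercase())
            } else if (c == '-' || c.is_whitespace()) && !prev_was_dash {
                prev_was_dash = true;
                Some('-')
            } else {
                None
            }
        })
        .collect();
    while out.ends_with('-') {
        out.pop();
    }
    out
}

// Approximate inverse: '-' becomes ' ' and the next character is
// capitalized, turning "my-cool-server" back into "My Cool Server".
fn denormalize(s: &str) -> String {
    let mut cap = true;
    s.chars()
        .map(|c| {
            if c == '-' {
                cap = true;
                ' '
            } else if cap {
                cap = false;
                c.to_ascii_uppercase()
            } else {
                c
            }
        })
        .collect()
}
```

The round trip is lossy only in casing and separator runs, which is why the diff stores both `name` and `hostname` in `ServerHostnameInfo` rather than deriving one from the other on every read.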
@@ -18,7 +18,7 @@ use crate::context::{CliContext, InitContext, RpcContext};
use crate::db::model::Database;
use crate::db::model::public::ServerStatus;
use crate::developer::OS_DEVELOPER_KEY_PATH;
use crate::hostname::Hostname;
use crate::hostname::ServerHostname;
use crate::middleware::auth::local::LocalAuthContext;
use crate::net::gateway::WildcardListener;
use crate::net::net_controller::{NetController, NetService};
@@ -191,15 +191,16 @@ pub async fn init(
        .arg(OS_DEVELOPER_KEY_PATH)
        .invoke(ErrorKind::Filesystem)
        .await?;
    let hostname = ServerHostname::load(peek.as_public().as_server_info())?;
    crate::ssh::sync_keys(
        &Hostname(peek.as_public().as_server_info().as_hostname().de()?),
        &hostname,
        &peek.as_private().as_ssh_privkey().de()?,
        &peek.as_private().as_ssh_pubkeys().de()?,
        SSH_DIR,
    )
    .await?;
    crate::ssh::sync_keys(
        &Hostname(peek.as_public().as_server_info().as_hostname().de()?),
        &hostname,
        &peek.as_private().as_ssh_privkey().de()?,
        &Default::default(),
        "/root/.ssh",
@@ -211,12 +212,7 @@ pub async fn init(

    start_net.start();
    let net_ctrl = Arc::new(
        NetController::init(
            db.clone(),
            &account.hostname,
            cfg.socks_listen.unwrap_or(DEFAULT_SOCKS_LISTEN),
        )
        .await?,
        NetController::init(db.clone(), cfg.socks_listen.unwrap_or(DEFAULT_SOCKS_LISTEN)).await?,
    );
    webserver.send_modify(|wl| wl.set_ip_info(net_ctrl.net_iface.watcher.subscribe()));
    let os_net_service = net_ctrl.os_bindings().await?;

|
||||
"host",
|
||||
net::host::server_host_api::<C>().with_about("about.commands-host-system-ui"),
|
||||
)
|
||||
.subcommand(
|
||||
"set-hostname",
|
||||
from_fn_async(hostname::set_hostname_rpc)
|
||||
.no_display()
|
||||
.with_about("about.set-hostname")
|
||||
.with_call_remote::<CliContext>(),
|
||||
)
|
||||
.subcommand(
|
||||
"set-ifconfig-url",
|
||||
from_fn_async(system::set_ifconfig_url)
|
||||
.no_display()
|
||||
.with_about("about.set-ifconfig-url")
|
||||
.with_call_remote::<CliContext>(),
|
||||
)
|
||||
.subcommand(
|
||||
"set-keyboard",
|
||||
from_fn_async(system::set_keyboard)
|
||||
@@ -548,4 +562,12 @@ pub fn package<C: Context>() -> ParentHandler<C> {
|
||||
"host",
|
||||
net::host::host_api::<C>().with_about("about.manage-network-hosts-package"),
|
||||
)
|
||||
.subcommand(
|
||||
"set-outbound-gateway",
|
||||
from_fn_async(net::gateway::set_outbound_gateway)
|
||||
.with_metadata("sync_db", Value::Bool(true))
|
||||
.no_display()
|
||||
.with_about("about.set-outbound-gateway-package")
|
||||
.with_call_remote::<CliContext>(),
|
||||
)
|
||||
}
|
||||
|
||||
@@ -17,3 +17,6 @@ lxc.net.0.link = lxcbr0
lxc.net.0.flags = up

lxc.rootfs.options = rshared

# Environment
lxc.environment = LANG={lang}

@@ -174,10 +174,15 @@ impl LxcContainer {
        config: LxcConfig,
    ) -> Result<Self, Error> {
        let guid = new_guid();
        let lang = std::env::var("LANG").unwrap_or_else(|_| "C.UTF-8".into());
        let machine_id = hex::encode(rand::random::<[u8; 16]>());
        let container_dir = Path::new(LXC_CONTAINER_DIR).join(&*guid);
        tokio::fs::create_dir_all(&container_dir).await?;
        let config_str = format!(include_str!("./config.template"), guid = &*guid);
        let config_str = format!(
            include_str!("./config.template"),
            guid = &*guid,
            lang = &lang,
        );
        tokio::fs::write(container_dir.join("config"), config_str).await?;
        let rootfs_dir = container_dir.join("rootfs");
        let rootfs = OverlayGuard::mount(
@@ -215,6 +220,13 @@ impl LxcContainer {
            100000,
        )
        .await?;
        write_file_owned_atomic(
            rootfs_dir.join("etc/default/locale"),
            format!("LANG={lang}\n"),
            100000,
            100000,
        )
        .await?;
        Command::new("sed")
            .arg("-i")
            .arg(format!("s/LXC_NAME/{guid}/g"))

|
||||
use hickory_server::authority::{AuthorityObject, Catalog, MessageResponseBuilder};
|
||||
use hickory_server::proto::op::{Header, ResponseCode};
|
||||
use hickory_server::proto::rr::{Name, Record, RecordType};
|
||||
use hickory_server::resolver::config::{ResolverConfig, ResolverOpts};
|
||||
use hickory_server::proto::xfer::Protocol;
|
||||
use hickory_server::resolver::config::{NameServerConfig, ResolverConfig, ResolverOpts};
|
||||
use hickory_server::server::{Request, RequestHandler, ResponseHandler, ResponseInfo};
|
||||
use hickory_server::store::forwarder::{ForwardAuthority, ForwardConfig};
|
||||
use hickory_server::{ServerFuture, resolver as hickory_resolver};
|
||||
@@ -206,6 +207,7 @@ pub async fn dump_table(
|
||||
struct ResolveMap {
|
||||
private_domains: BTreeMap<InternedString, Weak<()>>,
|
||||
services: BTreeMap<Option<PackageId>, BTreeMap<Ipv4Addr, Weak<()>>>,
|
||||
challenges: BTreeMap<InternedString, (InternedString, Weak<()>)>,
|
||||
}
|
||||
|
||||
pub struct DnsController {
|
||||
@@ -240,22 +242,60 @@ impl Resolver {
|
||||
let mut prev = crate::util::serde::hash_serializable::<sha2::Sha256, _>(&(
|
||||
ResolverConfig::new(),
|
||||
ResolverOpts::default(),
|
||||
Option::<std::collections::VecDeque<SocketAddr>>::None,
|
||||
))
|
||||
.unwrap_or_default();
|
||||
loop {
|
||||
if let Err(e) = async {
|
||||
let mut stream = file_string_stream("/run/systemd/resolve/resolv.conf")
|
||||
.filter_map(|a| futures::future::ready(a.transpose()))
|
||||
.boxed();
|
||||
while let Some(conf) = stream.try_next().await? {
|
||||
let (config, mut opts) =
|
||||
hickory_resolver::system_conf::parse_resolv_conf(conf)
|
||||
.with_kind(ErrorKind::ParseSysInfo)?;
|
||||
opts.timeout = Duration::from_secs(30);
|
||||
let res: Result<(), Error> = async {
|
||||
let mut file_stream =
|
||||
file_string_stream("/run/systemd/resolve/resolv.conf")
|
||||
.filter_map(|a| futures::future::ready(a.transpose()))
|
||||
.boxed();
|
||||
let mut static_sub = db
|
||||
.subscribe(
|
||||
"/public/serverInfo/network/dns/staticServers"
|
||||
.parse()
|
||||
.unwrap(),
|
||||
)
|
||||
.await;
|
||||
let mut last_config: Option<(ResolverConfig, ResolverOpts)> = None;
|
||||
loop {
|
||||
let got_file = tokio::select! {
|
||||
res = file_stream.try_next() => {
|
||||
let conf = res?
|
||||
.ok_or_else(|| Error::new(
|
||||
eyre!("resolv.conf stream ended"),
|
||||
ErrorKind::Network,
|
||||
))?;
|
||||
let (config, mut opts) =
|
||||
hickory_resolver::system_conf::parse_resolv_conf(conf)
|
||||
.with_kind(ErrorKind::ParseSysInfo)?;
|
||||
opts.timeout = Duration::from_secs(30);
|
||||
last_config = Some((config, opts));
|
||||
true
|
||||
}
|
||||
_ = static_sub.recv() => false,
|
||||
};
|
||||
let Some((ref config, ref opts)) = last_config else {
|
||||
continue;
|
||||
};
|
||||
let static_servers: Option<std::collections::VecDeque<SocketAddr>> = db
|
||||
.peek()
|
||||
.await
|
||||
.as_public()
|
||||
.as_server_info()
|
||||
.as_network()
|
||||
.as_dns()
|
||||
.as_static_servers()
|
||||
.de()?;
|
||||
let hash = crate::util::serde::hash_serializable::<sha2::Sha256, _>(
|
||||
&(&config, &opts),
|
||||
&(config, opts, &static_servers),
|
||||
)?;
|
||||
if hash != prev {
|
||||
if hash == prev {
|
||||
prev = hash;
|
||||
continue;
|
||||
}
|
||||
if got_file {
|
||||
db.mutate(|db| {
|
||||
db.as_public_mut()
|
||||
.as_server_info_mut()
|
||||
@@ -274,44 +314,52 @@ impl Resolver {
|
||||
})
|
||||
.await
|
||||
.result?;
|
||||
let auth: Vec<Arc<dyn AuthorityObject>> = vec![Arc::new(
|
||||
ForwardAuthority::builder_tokio(ForwardConfig {
|
||||
name_servers: from_value(Value::Array(
|
||||
config
|
||||
.name_servers()
|
||||
.into_iter()
|
||||
.skip(4)
|
||||
.map(to_value)
|
||||
.collect::<Result<_, Error>>()?,
|
||||
))?,
|
||||
options: Some(opts),
|
||||
}
|
||||
let forward_servers = if let Some(servers) = &static_servers {
|
||||
servers
|
||||
.iter()
|
||||
.flat_map(|addr| {
|
||||
[
|
||||
NameServerConfig::new(*addr, Protocol::Udp),
|
||||
NameServerConfig::new(*addr, Protocol::Tcp),
|
||||
]
|
||||
})
|
||||
.build()
|
||||
.map_err(|e| Error::new(eyre!("{e}"), ErrorKind::Network))?,
|
||||
)];
|
||||
{
|
||||
let mut guard = tokio::time::timeout(
|
||||
Duration::from_secs(10),
|
||||
catalog.write(),
|
||||
)
|
||||
.await
|
||||
.map_err(|_| {
|
||||
Error::new(
|
||||
eyre!("{}", t!("net.dns.timeout-updating-catalog")),
|
||||
ErrorKind::Timeout,
|
||||
)
|
||||
})?;
|
||||
guard.upsert(Name::root().into(), auth);
|
||||
drop(guard);
|
||||
}
|
||||
.map(|n| to_value(&n))
|
||||
.collect::<Result<_, Error>>()?
|
||||
} else {
|
||||
config
|
||||
.name_servers()
|
||||
.into_iter()
|
||||
.skip(4)
|
||||
.map(to_value)
|
||||
.collect::<Result<_, Error>>()?
|
||||
};
|
||||
let auth: Vec<Arc<dyn AuthorityObject>> = vec![Arc::new(
|
||||
ForwardAuthority::builder_tokio(ForwardConfig {
|
||||
name_servers: from_value(Value::Array(forward_servers))?,
|
||||
options: Some(opts.clone()),
|
||||
})
|
||||
.build()
|
||||
.map_err(|e| Error::new(eyre!("{e}"), ErrorKind::Network))?,
|
||||
)];
|
||||
{
|
||||
let mut guard =
|
||||
tokio::time::timeout(Duration::from_secs(10), catalog.write())
|
||||
.await
|
||||
.map_err(|_| {
|
||||
Error::new(
|
||||
eyre!("{}", t!("net.dns.timeout-updating-catalog")),
|
||||
ErrorKind::Timeout,
|
||||
)
|
||||
})?;
|
||||
guard.upsert(Name::root().into(), auth);
|
||||
drop(guard);
|
||||
}
|
||||
prev = hash;
|
||||
}
|
||||
|
||||
Ok::<_, Error>(())
|
||||
}
|
||||
.await
|
||||
{
|
||||
.await;
|
||||
if let Err(e) = res {
|
||||
tracing::error!("{e}");
|
||||
tracing::debug!("{e:?}");
|
||||
tokio::time::sleep(Duration::from_secs(1)).await;
|
||||
@@ -402,7 +450,41 @@ impl RequestHandler for Resolver {
|
||||
match async {
|
||||
let req = request.request_info()?;
|
||||
let query = req.query;
|
||||
if let Some(ip) = self.resolve(query.name().borrow(), req.src.ip()) {
|
||||
let name = query.name();
|
||||
|
||||
if STARTOS.zone_of(name) && query.query_type() == RecordType::TXT {
|
||||
let name_str =
|
||||
InternedString::intern(name.to_lowercase().to_utf8().trim_end_matches('.'));
|
||||
if let Some(txt_value) = self.resolve.mutate(|r| {
|
||||
r.challenges.retain(|_, (_, weak)| weak.strong_count() > 0);
|
||||
r.challenges.remove(&name_str).map(|(val, _)| val)
|
||||
}) {
|
||||
let mut header = Header::response_from_request(request.header());
|
||||
header.set_recursion_available(true);
|
||||
return response_handle
|
||||
.send_response(
|
||||
MessageResponseBuilder::from_message_request(&*request).build(
|
||||
header,
|
||||
&[Record::from_rdata(
|
||||
query.name().to_owned().into(),
|
||||
0,
|
||||
hickory_server::proto::rr::RData::TXT(
|
||||
hickory_server::proto::rr::rdata::TXT::new(vec![
|
||||
txt_value.to_string(),
|
||||
]),
|
||||
),
|
||||
)],
|
||||
[],
|
||||
[],
|
||||
[],
|
||||
),
|
||||
)
|
||||
.await
|
||||
.map(Some);
|
||||
}
|
||||
}
|
||||
|
||||
if let Some(ip) = self.resolve(name, req.src.ip()) {
|
||||
match query.query_type() {
|
||||
RecordType::A => {
|
||||
let mut header = Header::response_from_request(request.header());
|
||||
@@ -618,6 +700,34 @@ impl DnsController {
|
||||
}
|
||||
}
|
||||
|
||||
pub fn add_challenge(
|
||||
&self,
|
||||
domain: InternedString,
|
||||
value: InternedString,
|
||||
) -> Result<Arc<()>, Error> {
|
||||
if let Some(resolve) = Weak::upgrade(&self.resolve) {
|
||||
resolve.mutate(|writable| {
|
||||
let entry = writable
|
||||
.challenges
|
||||
.entry(domain)
|
||||
.or_insert_with(|| (value.clone(), Weak::new()));
|
||||
let rc = if let Some(rc) = Weak::upgrade(&entry.1) {
|
||||
rc
|
||||
} else {
|
||||
let new = Arc::new(());
|
||||
*entry = (value, Arc::downgrade(&new));
|
||||
new
|
||||
};
|
||||
Ok(rc)
|
||||
})
|
||||
} else {
|
||||
Err(Error::new(
|
||||
eyre!("{}", t!("net.dns.server-thread-exited")),
|
||||
crate::ErrorKind::Network,
|
||||
))
|
||||
}
|
||||
}
|
||||
|
||||
pub fn gc_private_domains<'a, BK: Ord + 'a>(
|
||||
&self,
|
||||
domains: impl IntoIterator<Item = &'a BK> + 'a,
|
||||
|
||||
@@ -3,18 +3,16 @@ use std::net::{IpAddr, SocketAddrV4};
|
||||
use std::sync::{Arc, Weak};
|
||||
use std::time::Duration;
|
||||
|
||||
use ipnet::IpNet;
|
||||
|
||||
use futures::channel::oneshot;
|
||||
use iddqd::{IdOrdItem, IdOrdMap};
|
||||
use rand::Rng;
|
||||
use imbl::OrdMap;
|
||||
use ipnet::{IpNet, Ipv4Net};
|
||||
use rand::Rng;
|
||||
use rpc_toolkit::{Context, HandlerArgs, HandlerExt, ParentHandler, from_fn_async};
|
||||
use serde::{Deserialize, Serialize};
|
||||
use tokio::process::Command;
|
||||
use tokio::sync::mpsc;
|
||||
|
||||
use crate::GatewayId;
|
||||
use crate::context::{CliContext, RpcContext};
|
||||
use crate::db::model::public::NetworkInterfaceInfo;
|
||||
use crate::prelude::*;
|
||||
@@ -22,6 +20,7 @@ use crate::util::Invoke;
|
||||
use crate::util::future::NonDetachingJoinHandle;
|
||||
use crate::util::serde::{HandlerExtSerde, display_serializable};
|
||||
use crate::util::sync::Watch;
|
||||
use crate::{GatewayId, HOST_IP};
|
||||
|
||||
pub const START9_BRIDGE_IFACE: &str = "lxcbr0";
|
||||
const EPHEMERAL_PORT_START: u16 = 49152;
|
||||
@@ -77,6 +76,11 @@ impl AvailablePorts {
|
||||
self.0.insert(port, ssl);
|
||||
Some(port)
|
||||
}
|
||||
|
||||
pub fn set_ssl(&mut self, port: u16, ssl: bool) {
|
||||
self.0.insert(port, ssl);
|
||||
}
|
||||
|
||||
/// Returns whether a given allocated port is SSL.
|
||||
pub fn is_ssl(&self, port: u16) -> bool {
|
||||
self.0.get(&port).copied().unwrap_or(false)
|
||||
@@ -254,7 +258,13 @@ pub async fn add_iptables_rule(nat: bool, undo: bool, args: &[&str]) -> Result<(
|
||||
if nat {
|
||||
cmd.arg("-t").arg("nat");
|
||||
}
|
||||
if undo != !cmd.arg("-C").args(args).status().await?.success() {
|
||||
let exists = cmd
|
||||
.arg("-C")
|
||||
.args(args)
|
||||
.invoke(ErrorKind::Network)
|
||||
.await
|
||||
.is_ok();
|
||||
if undo != !exists {
|
||||
let mut cmd = Command::new("iptables");
|
||||
if nat {
|
||||
cmd.arg("-t").arg("nat");
|
||||
@@ -443,14 +453,13 @@ impl InterfaceForwardEntry {
|
||||
continue;
|
||||
}
|
||||
|
||||
let src_filter =
|
||||
if reqs.public_gateways.contains(gw_id) {
|
||||
None
|
||||
} else if reqs.private_ips.contains(&IpAddr::V4(ip)) {
|
||||
Some(subnet.trunc())
|
||||
} else {
|
||||
continue;
|
||||
};
|
||||
let src_filter = if reqs.public_gateways.contains(gw_id) {
|
||||
None
|
||||
} else if reqs.private_ips.contains(&IpAddr::V4(ip)) {
|
||||
Some(subnet.trunc())
|
||||
} else {
|
||||
continue;
|
||||
};
|
||||
|
||||
keep.insert(addr);
|
||||
let fwd_rc = port_forward
|
||||
@@ -712,7 +721,14 @@ async fn forward(
|
||||
.env("dip", target.ip().to_string())
|
||||
.env("dprefix", target_prefix.to_string())
|
||||
.env("sport", source.port().to_string())
|
||||
.env("dport", target.port().to_string());
|
||||
.env("dport", target.port().to_string())
|
||||
.env(
|
||||
"bridge_subnet",
|
||||
Ipv4Net::new(HOST_IP.into(), 24)
|
||||
.with_kind(ErrorKind::ParseNetAddress)?
|
||||
.trunc()
|
||||
.to_string(),
|
||||
);
|
||||
if let Some(subnet) = src_filter {
|
||||
cmd.env("src_subnet", subnet.to_string());
|
||||
}
|
||||
|
||||
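The `add_iptables_rule` change above probes with `iptables -C` and then decides whether to run the mutating `-A`/`-D` command. The condition `undo != !exists` reduces to a small boolean check worth spelling out (sketch; `undo` means "delete the rule"):

```rust
// Returns true when the mutating iptables command must actually run:
// - adding (undo == false) and the rule does not exist yet, or
// - deleting (undo == true) and the rule still exists.
// `undo != !exists` simplifies to `undo == exists`.
fn needs_change(undo: bool, exists: bool) -> bool {
    undo == exists
}
```

This is what makes the helper idempotent: re-running it with the same arguments performs the check but never duplicates or re-deletes a rule.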
File diff suppressed because it is too large
@@ -10,7 +10,7 @@ use ts_rs::TS;
use crate::GatewayId;
use crate::context::{CliContext, RpcContext};
use crate::db::model::DatabaseModel;
use crate::hostname::Hostname;
use crate::hostname::ServerHostname;
use crate::net::acme::AcmeProvider;
use crate::net::host::{HostApiKind, all_hosts};
use crate::prelude::*;
@@ -197,7 +197,7 @@ pub async fn add_public_domain<Kind: HostApiKind>(
            .as_public_domains_mut()
            .insert(&fqdn, &PublicDomainConfig { acme, gateway })?;
        handle_duplicates(db)?;
        let hostname = Hostname(db.as_public().as_server_info().as_hostname().de()?);
        let hostname = ServerHostname::load(db.as_public().as_server_info())?;
        let gateways = db.as_public().as_server_info().as_network().as_gateways().de()?;
        let ports = db.as_private().as_available_ports().de()?;
        Kind::host_for(&inheritance, db)?.update_addresses(&hostname, &gateways, &ports)
@@ -230,8 +230,13 @@ pub async fn remove_public_domain<Kind: HostApiKind>(
        Kind::host_for(&inheritance, db)?
            .as_public_domains_mut()
            .remove(&fqdn)?;
        let hostname = Hostname(db.as_public().as_server_info().as_hostname().de()?);
        let gateways = db.as_public().as_server_info().as_network().as_gateways().de()?;
        let hostname = ServerHostname::load(db.as_public().as_server_info())?;
        let gateways = db
            .as_public()
            .as_server_info()
            .as_network()
            .as_gateways()
            .de()?;
        let ports = db.as_private().as_available_ports().de()?;
        Kind::host_for(&inheritance, db)?.update_addresses(&hostname, &gateways, &ports)
    })
@@ -262,8 +267,13 @@ pub async fn add_private_domain<Kind: HostApiKind>(
            .upsert(&fqdn, || Ok(BTreeSet::new()))?
            .mutate(|d| Ok(d.insert(gateway)))?;
        handle_duplicates(db)?;
        let hostname = Hostname(db.as_public().as_server_info().as_hostname().de()?);
        let gateways = db.as_public().as_server_info().as_network().as_gateways().de()?;
        let hostname = ServerHostname::load(db.as_public().as_server_info())?;
        let gateways = db
            .as_public()
            .as_server_info()
            .as_network()
            .as_gateways()
            .de()?;
        let ports = db.as_private().as_available_ports().de()?;
        Kind::host_for(&inheritance, db)?.update_addresses(&hostname, &gateways, &ports)
    })
@@ -284,8 +294,13 @@ pub async fn remove_private_domain<Kind: HostApiKind>(
        Kind::host_for(&inheritance, db)?
            .as_private_domains_mut()
            .mutate(|d| Ok(d.remove(&domain)))?;
        let hostname = Hostname(db.as_public().as_server_info().as_hostname().de()?);
        let gateways = db.as_public().as_server_info().as_network().as_gateways().de()?;
        let hostname = ServerHostname::load(db.as_public().as_server_info())?;
        let gateways = db
            .as_public()
            .as_server_info()
            .as_network()
            .as_gateways()
            .de()?;
        let ports = db.as_private().as_available_ports().de()?;
        Kind::host_for(&inheritance, db)?.update_addresses(&hostname, &gateways, &ports)
    })

@@ -75,7 +75,7 @@ impl DerivedAddressInfo {
} else {
!self
.disabled
.contains(&(h.host.clone(), h.port.unwrap_or_default())) // disablable addresses will always have a port
.contains(&(h.hostname.clone(), h.port.unwrap_or_default())) // disablable addresses will always have a port
}
})
.collect()
@@ -204,11 +204,7 @@ impl BindInfo {
enabled: true,
options,
net: lan,
addresses: DerivedAddressInfo {
enabled: addresses.enabled,
disabled: addresses.disabled,
available: BTreeSet::new(),
},
addresses,
})
}
pub fn disable(&mut self) {
@@ -350,7 +346,7 @@ pub async fn set_address_enabled<Kind: HostApiKind>(
} else {
// Domains and private IPs: toggle via (host, port) in `disabled` set
let port = address.port.unwrap_or(if address.ssl { 443 } else { 80 });
let key = (address.host.clone(), port);
let key = (address.hostname.clone(), port);
if enabled {
bind.addresses.disabled.remove(&key);
} else {

@@ -15,7 +15,7 @@ use ts_rs::TS;
use crate::context::RpcContext;
use crate::db::model::DatabaseModel;
use crate::db::model::public::{NetworkInterfaceInfo, NetworkInterfaceType};
use crate::hostname::Hostname;
use crate::hostname::ServerHostname;
use crate::net::forward::AvailablePorts;
use crate::net::host::address::{HostAddress, PublicDomainConfig, address_api};
use crate::net::host::binding::{BindInfo, BindOptions, Bindings, binding};
@@ -82,7 +82,7 @@ impl Host {
impl Model<Host> {
pub fn update_addresses(
&mut self,
mdns: &Hostname,
mdns: &ServerHostname,
gateways: &OrdMap<GatewayId, NetworkInterfaceInfo>,
available_ports: &AvailablePorts,
) -> Result<(), Error> {
@@ -92,7 +92,10 @@ impl Model<Host> {
for (_, bind) in this.bindings.as_entries_mut()? {
let net = bind.as_net().de()?;
let opt = bind.as_options().de()?;
let mut available = BTreeSet::new();

// Preserve existing plugin-provided addresses across recomputation
let mut available = bind.as_addresses().as_available().de()?;
available.retain(|h| matches!(h.metadata, HostnameMetadata::Plugin { .. }));
for (gid, g) in gateways {
let Some(ip_info) = &g.ip_info else {
continue;
@@ -117,7 +120,7 @@ impl Model<Host> {
available.insert(HostnameInfo {
ssl: opt.secure.map_or(false, |s| s.ssl),
public: false,
host: host.clone(),
hostname: host.clone(),
port: Some(port),
metadata: metadata.clone(),
});
@@ -126,7 +129,7 @@ impl Model<Host> {
available.insert(HostnameInfo {
ssl: true,
public: false,
host: host.clone(),
hostname: host.clone(),
port: Some(port),
metadata,
});
@@ -146,7 +149,7 @@ impl Model<Host> {
available.insert(HostnameInfo {
ssl: opt.secure.map_or(false, |s| s.ssl),
public: true,
host: host.clone(),
hostname: host.clone(),
port: Some(port),
metadata: metadata.clone(),
});
@@ -155,7 +158,7 @@ impl Model<Host> {
available.insert(HostnameInfo {
ssl: true,
public: true,
host: host.clone(),
hostname: host.clone(),
port: Some(port),
metadata,
});
@@ -182,7 +185,7 @@ impl Model<Host> {
available.insert(HostnameInfo {
ssl: opt.secure.map_or(false, |s| s.ssl),
public: false,
host: mdns_host.clone(),
hostname: mdns_host.clone(),
port: Some(port),
metadata: HostnameMetadata::Mdns {
gateways: mdns_gateways.clone(),
@@ -193,7 +196,7 @@ impl Model<Host> {
available.insert(HostnameInfo {
ssl: true,
public: false,
host: mdns_host,
hostname: mdns_host,
port: Some(port),
metadata: HostnameMetadata::Mdns {
gateways: mdns_gateways,
@@ -215,7 +218,7 @@ impl Model<Host> {
available.insert(HostnameInfo {
ssl: opt.secure.map_or(false, |s| s.ssl),
public: true,
host: domain.clone(),
hostname: domain.clone(),
port: Some(port),
metadata: metadata.clone(),
});
@@ -232,7 +235,7 @@ impl Model<Host> {
available.insert(HostnameInfo {
ssl: true,
public: true,
host: domain,
hostname: domain,
port: Some(port),
metadata,
});
@@ -257,7 +260,7 @@ impl Model<Host> {
available.insert(HostnameInfo {
ssl: opt.secure.map_or(false, |s| s.ssl),
public: true,
host: domain.clone(),
hostname: domain.clone(),
port: Some(port),
metadata: HostnameMetadata::PrivateDomain { gateways },
});
@@ -274,7 +277,7 @@ impl Model<Host> {
available.insert(HostnameInfo {
ssl: true,
public: true,
host: domain,
hostname: domain,
port: Some(port),
metadata: HostnameMetadata::PrivateDomain {
gateways: domain_gateways,
@@ -289,7 +292,7 @@ impl Model<Host> {
let bindings: Bindings = this.bindings.de()?;
let mut port_forwards = BTreeSet::new();
for bind in bindings.values() {
for addr in &bind.addresses.available {
for addr in bind.addresses.enabled() {
if !addr.public {
continue;
}

@@ -4,15 +4,16 @@ use std::sync::{Arc, Weak};

use color_eyre::eyre::eyre;
use imbl_value::InternedString;
use nix::net::if_::if_nametoindex;
use patch_db::json_ptr::JsonPointer;
use tokio::process::Command;
use tokio::sync::Mutex;
use tokio::task::JoinHandle;
use tokio_rustls::rustls::ClientConfig as TlsClientConfig;
use tracing::instrument;

use patch_db::json_ptr::JsonPointer;

use crate::db::model::Database;
use crate::hostname::Hostname;
use crate::hostname::ServerHostname;
use crate::net::dns::DnsController;
use crate::net::forward::{
ForwardRequirements, InterfacePortForwardController, START9_BRIDGE_IFACE, add_iptables_rule,
@@ -26,6 +27,7 @@ use crate::net::socks::SocksController;
use crate::net::vhost::{AlpnInfo, DynVHostTarget, ProxyTarget, VHostController};
use crate::prelude::*;
use crate::service::effects::callbacks::ServiceCallbacks;
use crate::util::Invoke;
use crate::util::serde::MaybeUtf8String;
use crate::util::sync::Watch;
use crate::{GatewayId, HOST_IP, HostId, OptionExt, PackageId};
@@ -38,16 +40,11 @@ pub struct NetController {
pub(super) dns: DnsController,
pub(super) forward: InterfacePortForwardController,
pub(super) socks: SocksController,
pub(super) server_hostnames: Vec<Option<InternedString>>,
pub(crate) callbacks: Arc<ServiceCallbacks>,
}

impl NetController {
pub async fn init(
db: TypedPatchDb<Database>,
hostname: &Hostname,
socks_listen: SocketAddr,
) -> Result<Self, Error> {
pub async fn init(db: TypedPatchDb<Database>, socks_listen: SocketAddr) -> Result<Self, Error> {
let net_iface = Arc::new(NetworkInterfaceController::new(db.clone()));
let socks = SocksController::new(socks_listen)?;
let crypto_provider = Arc::new(tokio_rustls::rustls::crypto::ring::default_provider());
@@ -87,18 +84,6 @@ impl NetController {
forward: InterfacePortForwardController::new(net_iface.watcher.subscribe()),
net_iface,
socks,
server_hostnames: vec![
// LAN IP
None,
// Internal DNS
Some("embassy".into()),
Some("startos".into()),
// localhost
Some("localhost".into()),
Some(hostname.no_dot_host_name()),
// LAN mDNS
Some(hostname.local_domain_name()),
],
callbacks: Arc::new(ServiceCallbacks::default()),
})
}
@@ -180,12 +165,7 @@ impl NetServiceData {
})
}

async fn update(
&mut self,
ctrl: &NetController,
id: HostId,
host: Host,
) -> Result<(), Error> {
async fn update(&mut self, ctrl: &NetController, id: HostId, host: Host) -> Result<(), Error> {
let mut forwards: BTreeMap<u16, (SocketAddrV4, ForwardRequirements)> = BTreeMap::new();
let mut vhosts: BTreeMap<(Option<InternedString>, u16), ProxyTarget> = BTreeMap::new();
let mut private_dns: BTreeSet<InternedString> = BTreeSet::new();
@@ -236,23 +216,29 @@ impl NetServiceData {
.flat_map(|ip_info| ip_info.subnets.iter().map(|s| s.addr()))
.collect();

// Server hostname vhosts (on assigned_ssl_port) — private only
if !server_private_ips.is_empty() {
for hostname in ctrl.server_hostnames.iter().cloned() {
vhosts.insert(
(hostname, assigned_ssl_port),
ProxyTarget {
public: BTreeSet::new(),
private: server_private_ips.clone(),
acme: None,
addr,
add_x_forwarded_headers: ssl.add_x_forwarded_headers,
connect_ssl: connect_ssl
.clone()
.map(|_| ctrl.tls_client_config.clone()),
},
);
}
// Collect public gateways from enabled public IP addresses
let server_public_gateways: BTreeSet<GatewayId> = enabled_addresses
.iter()
.filter(|a| a.public && a.metadata.is_ip())
.flat_map(|a| a.metadata.gateways())
.cloned()
.collect();

// * vhost (on assigned_ssl_port)
if !server_private_ips.is_empty() || !server_public_gateways.is_empty() {
vhosts.insert(
(None, assigned_ssl_port),
ProxyTarget {
public: server_public_gateways.clone(),
private: server_private_ips.clone(),
acme: None,
addr,
add_x_forwarded_headers: ssl.add_x_forwarded_headers,
connect_ssl: connect_ssl
.clone()
.map(|_| ctrl.tls_client_config.clone()),
},
);
}
}

@@ -266,7 +252,7 @@ impl NetServiceData {
| HostnameMetadata::PrivateDomain { .. } => {}
_ => continue,
}
let domain = &addr_info.host;
let domain = &addr_info.hostname;
let domain_ssl_port = addr_info.port.unwrap_or(443);
let key = (Some(domain.clone()), domain_ssl_port);
let target = vhosts.entry(key).or_insert_with(|| ProxyTarget {
@@ -424,7 +410,6 @@ impl NetServiceData {

Ok(())
}

}

pub struct NetService {
@@ -458,36 +443,163 @@ impl NetService {
let synced = Watch::new(0u64);
let synced_writer = synced.clone();

let ip = data.ip;
let data = Arc::new(Mutex::new(data));
let thread_data = data.clone();

let sync_task = tokio::spawn(async move {
if let Some(ref id) = pkg_id {
let ptr: JsonPointer = format!("/public/packageData/{}/hosts", id)
.parse()
.unwrap();
let ptr: JsonPointer = format!("/public/packageData/{}/hosts", id).parse().unwrap();
let mut watch = db.watch(ptr).await.typed::<Hosts>();

// Outbound gateway enforcement
let service_ip = ip.to_string();
// Purge any stale rules from a previous instance
loop {
if let Err(e) = watch.changed().await {
tracing::error!("DB watch disconnected for {id}: {e}");
if Command::new("ip")
.arg("rule")
.arg("del")
.arg("from")
.arg(&service_ip)
.arg("priority")
.arg("100")
.invoke(ErrorKind::Network)
.await
.is_err()
{
break;
}
if let Err(e) = async {
let hosts = watch.peek()?.de()?;
let mut data = thread_data.lock().await;
let ctrl = data.net_controller()?;
for (host_id, host) in hosts.0 {
data.update(&*ctrl, host_id, host).await?;
}
let mut outbound_sub = db
.subscribe(
format!("/public/packageData/{}/outboundGateway", id)
.parse::<JsonPointer<_, _>>()
.unwrap(),
)
.await;
let ctrl_for_ip = thread_data.lock().await.net_controller().ok();
let mut ip_info_watch = ctrl_for_ip
.as_ref()
.map(|c| c.net_iface.watcher.subscribe());
if let Some(ref mut w) = ip_info_watch {
w.mark_seen();
}
drop(ctrl_for_ip);
let mut current_outbound_table: Option<u32> = None;

loop {
let (hosts_changed, outbound_changed) = tokio::select! {
res = watch.changed() => {
if let Err(e) = res {
tracing::error!("DB watch disconnected for {id}: {e}");
break;
}
(true, false)
}
_ = outbound_sub.recv() => (false, true),
_ = async {
if let Some(ref mut w) = ip_info_watch {
w.changed().await;
} else {
std::future::pending::<()>().await;
}
} => (false, true),
};

// Handle host updates
if hosts_changed {
if let Err(e) = async {
let hosts = watch.peek()?.de()?;
let mut data = thread_data.lock().await;
let ctrl = data.net_controller()?;
for (host_id, host) in hosts.0 {
data.update(&*ctrl, host_id, host).await?;
}
Ok::<_, Error>(())
}
.await
{
tracing::error!("Failed to update network info for {id}: {e}");
tracing::debug!("{e:?}");
}
Ok::<_, Error>(())
}
.await
{
tracing::error!("Failed to update network info for {id}: {e}");
tracing::debug!("{e:?}");

// Handle outbound gateway changes
if outbound_changed {
if let Err(e) = async {
// Remove old rule if any
if let Some(old_table) = current_outbound_table.take() {
let old_table_str = old_table.to_string();
let _ = Command::new("ip")
.arg("rule")
.arg("del")
.arg("from")
.arg(&service_ip)
.arg("lookup")
.arg(&old_table_str)
.arg("priority")
.arg("100")
.invoke(ErrorKind::Network)
.await;
}
// Read current outbound gateway from DB
let outbound_gw: Option<GatewayId> = db
.peek()
.await
.as_public()
.as_package_data()
.as_idx(id)
.map(|p| p.as_outbound_gateway().de().ok())
.flatten()
.flatten();
if let Some(gw_id) = outbound_gw {
// Look up table ID for this gateway
if let Some(table_id) = if_nametoindex(gw_id.as_str())
.map(|idx| 1000 + idx)
.log_err()
{
let table_str = table_id.to_string();
Command::new("ip")
.arg("rule")
.arg("add")
.arg("from")
.arg(&service_ip)
.arg("lookup")
.arg(&table_str)
.arg("priority")
.arg("100")
.invoke(ErrorKind::Network)
.await
.log_err();
current_outbound_table = Some(table_id);
}
}
Ok::<_, Error>(())
}
.await
{
tracing::error!("Failed to update outbound gateway for {id}: {e}");
tracing::debug!("{e:?}");
}
}

synced_writer.send_modify(|v| *v += 1);
}

// Cleanup outbound rule on task exit
if let Some(table_id) = current_outbound_table {
let table_str = table_id.to_string();
let _ = Command::new("ip")
.arg("rule")
.arg("del")
.arg("from")
.arg(&service_ip)
.arg("lookup")
.arg(&table_str)
.arg("priority")
.arg("100")
.invoke(ErrorKind::Network)
.await;
}
} else {
let ptr: JsonPointer = "/public/serverInfo/network/host".parse().unwrap();
let mut watch = db.watch(ptr).await.typed::<Host>();
@@ -539,7 +651,7 @@ impl NetService {
.as_network()
.as_gateways()
.de()?;
let hostname = Hostname(db.as_public().as_server_info().as_hostname().de()?);
let hostname = ServerHostname::load(db.as_public().as_server_info())?;
let mut ports = db.as_private().as_available_ports().de()?;
let host = host_for(db, pkg_id.as_ref(), &id)?;
host.add_binding(&mut ports, internal_port, options)?;
@@ -564,7 +676,7 @@ impl NetService {
.as_network()
.as_gateways()
.de()?;
let hostname = Hostname(db.as_public().as_server_info().as_hostname().de()?);
let hostname = ServerHostname::load(db.as_public().as_server_info())?;
let ports = db.as_private().as_available_ports().de()?;
if let Some(ref pkg_id) = pkg_id {
for (host_id, host) in db
@@ -634,6 +746,23 @@ impl NetService {
let mut w = self.synced.clone();
w.wait_for(|v| *v > current).await;
self.sync_task.abort();
// Clean up any outbound gateway ip rules for this service
let service_ip = self.data.lock().await.ip.to_string();
loop {
if Command::new("ip")
.arg("rule")
.arg("del")
.arg("from")
.arg(&service_ip)
.arg("priority")
.arg("100")
.invoke(ErrorKind::Network)
.await
.is_err()
{
break;
}
}
self.shutdown = true;
Ok(())
}

@@ -1,12 +1,12 @@
use std::collections::BTreeSet;
use std::net::SocketAddr;

use imbl_value::{InOMap, InternedString};
use imbl_value::InternedString;
use serde::{Deserialize, Serialize};
use ts_rs::TS;

use crate::prelude::*;
use crate::{GatewayId, HostId, PackageId, ServiceInterfaceId};
use crate::{ActionId, GatewayId, HostId, PackageId, ServiceInterfaceId};

#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Deserialize, Serialize, TS)]
#[ts(export)]
@@ -14,7 +14,7 @@ use crate::{GatewayId, HostId, PackageId, ServiceInterfaceId};
pub struct HostnameInfo {
pub ssl: bool,
pub public: bool,
pub host: InternedString,
pub hostname: InternedString,
pub port: Option<u16>,
pub metadata: HostnameMetadata,
}
@@ -42,21 +42,23 @@ pub enum HostnameMetadata {
gateway: GatewayId,
},
Plugin {
package: PackageId,
#[serde(flatten)]
#[ts(skip)]
extra: InOMap<InternedString, Value>,
package_id: PackageId,
remove_action: Option<ActionId>,
overflow_actions: Vec<ActionId>,
#[ts(type = "unknown")]
#[serde(default)]
info: Value,
},
}

impl HostnameInfo {
pub fn to_socket_addr(&self) -> Option<SocketAddr> {
let ip = self.host.parse().ok()?;
let ip = self.hostname.parse().ok()?;
Some(SocketAddr::new(ip, self.port?))
}

pub fn to_san_hostname(&self) -> InternedString {
self.host.clone()
self.hostname.clone()
}
}

@@ -70,14 +72,70 @@ impl HostnameMetadata {
Self::Ipv4 { gateway }
| Self::Ipv6 { gateway, .. }
| Self::PublicDomain { gateway } => Box::new(std::iter::once(gateway)),
Self::PrivateDomain { gateways } | Self::Mdns { gateways } => {
Box::new(gateways.iter())
}
Self::PrivateDomain { gateways } | Self::Mdns { gateways } => Box::new(gateways.iter()),
Self::Plugin { .. } => Box::new(std::iter::empty()),
}
}
}

#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct PluginHostnameInfo {
pub package_id: Option<PackageId>,
pub host_id: HostId,
pub internal_port: u16,
pub ssl: bool,
pub public: bool,
#[ts(type = "string")]
pub hostname: InternedString,
pub port: Option<u16>,
#[ts(type = "unknown")]
#[serde(default)]
pub info: Value,
}

impl PluginHostnameInfo {
/// Convert to a `HostnameInfo` with `Plugin` metadata, using the given plugin package ID.
pub fn to_hostname_info(
&self,
plugin_package: &PackageId,
remove_action: Option<ActionId>,
overflow_actions: Vec<ActionId>,
) -> HostnameInfo {
HostnameInfo {
ssl: self.ssl,
public: self.public,
hostname: self.hostname.clone(),
port: self.port,
metadata: HostnameMetadata::Plugin {
package_id: plugin_package.clone(),
info: self.info.clone(),
remove_action,
overflow_actions,
},
}
}

/// Check if a `HostnameInfo` with Plugin metadata matches this `PluginHostnameInfo`
/// (comparing address fields only, not row_actions).
pub fn matches_hostname_info(&self, h: &HostnameInfo, plugin_package: &PackageId) -> bool {
match &h.metadata {
HostnameMetadata::Plugin {
package_id, info, ..
} => {
package_id == plugin_package
&& h.ssl == self.ssl
&& h.public == self.public
&& h.hostname == self.hostname
&& h.port == self.port
&& *info == self.info
}
_ => false,
}
}
}

#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]

@@ -33,7 +33,7 @@ use crate::SOURCE_DATE;
use crate::account::AccountInfo;
use crate::db::model::Database;
use crate::db::{DbAccess, DbAccessMut};
use crate::hostname::Hostname;
use crate::hostname::ServerHostname;
use crate::init::check_time_is_synchronized;
use crate::net::gateway::GatewayInfo;
use crate::net::tls::TlsHandler;
@@ -283,7 +283,7 @@ pub fn gen_nistp256() -> Result<PKey<Private>, Error> {
#[instrument(skip_all)]
pub fn make_root_cert(
root_key: &PKey<Private>,
hostname: &Hostname,
hostname: &ServerHostname,
start_time: SystemTime,
) -> Result<X509, Error> {
let mut builder = X509Builder::new()?;
@@ -300,7 +300,8 @@ pub fn make_root_cert(
builder.set_serial_number(&*rand_serial()?)?;

let mut subject_name_builder = X509NameBuilder::new()?;
subject_name_builder.append_entry_by_text("CN", &format!("{} Local Root CA", &*hostname.0))?;
subject_name_builder
.append_entry_by_text("CN", &format!("{} Local Root CA", hostname.as_ref()))?;
subject_name_builder.append_entry_by_text("O", "Start9")?;
subject_name_builder.append_entry_by_text("OU", "StartOS")?;
let subject_name = subject_name_builder.build();

@@ -31,7 +31,7 @@ use tokio_util::io::ReaderStream;
use url::Url;

use crate::context::{DiagnosticContext, InitContext, RpcContext, SetupContext};
use crate::hostname::Hostname;
use crate::hostname::ServerHostname;
use crate::middleware::auth::Auth;
use crate::middleware::auth::session::ValidSessionToken;
use crate::middleware::cors::Cors;
@@ -105,8 +105,9 @@ impl UiContext for RpcContext {
get(move || {
let ctx = self.clone();
async move {
ctx.account
.peek(|account| cert_send(&account.root_ca_cert, &account.hostname))
ctx.account.peek(|account| {
cert_send(&account.root_ca_cert, &account.hostname.hostname)
})
}
}),
)
@@ -419,7 +420,7 @@ pub fn bad_request() -> Response {
.unwrap()
}

fn cert_send(cert: &X509, hostname: &Hostname) -> Result<Response, Error> {
fn cert_send(cert: &X509, hostname: &ServerHostname) -> Result<Response, Error> {
let pem = cert.to_pem()?;
Response::builder()
.status(StatusCode::OK)
@@ -435,7 +436,7 @@ fn cert_send(cert: &X509, hostname: &Hostname) -> Result<Response, Error> {
.header(http::header::CONTENT_LENGTH, pem.len())
.header(
http::header::CONTENT_DISPOSITION,
format!("attachment; filename={}.crt", &hostname.0),
format!("attachment; filename={}.crt", hostname.as_ref()),
)
.body(Body::from(pem))
.with_kind(ErrorKind::Network)

@@ -1,5 +1,6 @@
use std::sync::Arc;
use std::task::{Poll, ready};
use std::time::Duration;

use futures::future::BoxFuture;
use futures::stream::FuturesUnordered;
@@ -170,7 +171,7 @@ where
let (metadata, stream) = ready!(self.accept.poll_accept(cx)?);
let mut tls_handler = self.tls_handler.clone();
let mut fut = async move {
let res = async {
let res = match tokio::time::timeout(Duration::from_secs(15), async {
let mut acceptor =
LazyConfigAcceptor::new(Acceptor::default(), BackTrackingIO::new(stream));
let mut mid: tokio_rustls::StartHandshake<BackTrackingIO<AcceptStream>> =
@@ -233,14 +234,22 @@ where
}

Ok(None)
}
.await;
})
.await
{
Ok(res) => res,
Err(_) => {
tracing::trace!("TLS handshake timed out");
Ok(None)
}
};
(tls_handler, res)
}
.boxed();
match fut.poll_unpin(cx) {
Poll::Pending => {
in_progress.push(fut);
cx.waker().wake_by_ref();
Poll::Pending
}
Poll::Ready((handler, res)) => {

@@ -175,8 +175,13 @@ pub async fn remove_tunnel(

ctx.db
.mutate(|db| {
let hostname = crate::hostname::Hostname(db.as_public().as_server_info().as_hostname().de()?);
let gateways = db.as_public().as_server_info().as_network().as_gateways().de()?;
let hostname = crate::hostname::ServerHostname::load(db.as_public().as_server_info())?;
let gateways = db
.as_public()
.as_server_info()
.as_network()
.as_gateways()
.de()?;
let ports = db.as_private().as_available_ports().de()?;
for host in all_hosts(db) {
let host = host?;
@@ -194,8 +199,13 @@ pub async fn remove_tunnel(

ctx.db
.mutate(|db| {
let hostname = crate::hostname::Hostname(db.as_public().as_server_info().as_hostname().de()?);
let gateways = db.as_public().as_server_info().as_network().as_gateways().de()?;
let hostname = crate::hostname::ServerHostname::load(db.as_public().as_server_info())?;
let gateways = db
.as_public()
.as_server_info()
.as_network()
.as_gateways()
.de()?;
let ports = db.as_private().as_available_ports().de()?;
for host in all_hosts(db) {
let host = host?;

@@ -161,7 +161,10 @@ pub struct WifiAddParams {
password: String,
}
#[instrument(skip_all)]
pub async fn add(ctx: RpcContext, WifiAddParams { ssid, password }: WifiAddParams) -> Result<(), Error> {
pub async fn add(
ctx: RpcContext,
WifiAddParams { ssid, password }: WifiAddParams,
) -> Result<(), Error> {
let wifi_manager = ctx.wifi_manager.clone();
if !ssid.is_ascii() {
return Err(Error::new(
@@ -240,7 +243,10 @@ pub struct WifiSsidParams {
}

#[instrument(skip_all)]
pub async fn connect(ctx: RpcContext, WifiSsidParams { ssid }: WifiSsidParams) -> Result<(), Error> {
pub async fn connect(
ctx: RpcContext,
WifiSsidParams { ssid }: WifiSsidParams,
) -> Result<(), Error> {
let wifi_manager = ctx.wifi_manager.clone();
if !ssid.is_ascii() {
return Err(Error::new(

@@ -579,14 +579,12 @@ fn check_matching_info_short() {
use crate::s9pk::manifest::{Alerts, Description};
use crate::util::DataUrl;

let lang_map = |s: &str| {
LocaleString::LanguageMap([("en".into(), s.into())].into_iter().collect())
};
let lang_map =
|s: &str| LocaleString::LanguageMap([("en".into(), s.into())].into_iter().collect());

let info = PackageVersionInfo {
metadata: PackageMetadata {
title: "Test Package".into(),
icon: DataUrl::from_vec("image/png", vec![]),
description: Description {
short: lang_map("A short description"),
long: lang_map("A longer description of the test package"),
@@ -594,18 +592,19 @@ fn check_matching_info_short() {
release_notes: lang_map("Initial release"),
git_hash: None,
license: "MIT".into(),
wrapper_repo: "https://github.com/example/wrapper".parse().unwrap(),
package_repo: "https://github.com/example/wrapper".parse().unwrap(),
upstream_repo: "https://github.com/example/upstream".parse().unwrap(),
support_site: "https://example.com/support".parse().unwrap(),
marketing_site: "https://example.com".parse().unwrap(),
marketing_url: Some("https://example.com".parse().unwrap()),
donation_url: None,
docs_url: None,
docs_urls: Vec::new(),
alerts: Alerts::default(),
dependency_metadata: BTreeMap::new(),
os_version: exver::Version::new([0, 3, 6], []),
sdk_version: None,
hardware_acceleration: false,
plugins: BTreeSet::new(),
},
icon: DataUrl::from_vec("image/png", vec![]),
dependency_metadata: BTreeMap::new(),
source_version: None,
s9pks: Vec::new(),
};

@@ -17,8 +17,11 @@ use crate::registry::device_info::DeviceInfo;
use crate::rpc_continuations::Guid;
use crate::s9pk::S9pk;
use crate::s9pk::git_hash::GitHash;
use crate::s9pk::manifest::{Alerts, Description, HardwareRequirements, LocaleString};
use crate::s9pk::manifest::{
Alerts, Description, HardwareRequirements, LocaleString, current_version,
};
use crate::s9pk::merkle_archive::source::FileSource;
use crate::service::effects::plugin::PluginId;
use crate::sign::commitment::merkle_archive::MerkleArchiveCommitment;
use crate::sign::{AnySignature, AnyVerifyingKey};
use crate::util::{DataUrl, VersionString};
@@ -69,75 +72,44 @@ impl DependencyMetadata {
}
}

#[derive(Debug, Deserialize, Serialize, HasModel, TS, PartialEq)]
fn placeholder_url() -> Url {
"https://example.com".parse().unwrap()
}

#[derive(Clone, Debug, Deserialize, Serialize, HasModel, TS, PartialEq)]
#[serde(rename_all = "camelCase")]
#[model = "Model<Self>"]
pub struct PackageMetadata {
#[ts(type = "string")]
pub title: InternedString,
pub icon: DataUrl<'static>,
pub description: Description,
pub release_notes: LocaleString,
pub git_hash: Option<GitHash>,
#[ts(type = "string")]
pub license: InternedString,
#[ts(type = "string")]
pub wrapper_repo: Url,
#[serde(default = "placeholder_url")] // TODO: remove
pub package_repo: Url,
#[ts(type = "string")]
pub upstream_repo: Url,
#[ts(type = "string")]
pub support_site: Url,
#[ts(type = "string")]
pub marketing_site: Url,
pub marketing_url: Option<Url>,
#[ts(type = "string | null")]
pub donation_url: Option<Url>,
#[ts(type = "string | null")]
pub docs_url: Option<Url>,
#[serde(default)]
#[ts(type = "string[]")]
pub docs_urls: Vec<Url>,
#[serde(default)]
pub alerts: Alerts,
pub dependency_metadata: BTreeMap<PackageId, DependencyMetadata>,
#[serde(default = "current_version")]
#[ts(type = "string")]
pub os_version: Version,
#[ts(type = "string | null")]
pub sdk_version: Option<Version>,
#[serde(default)]
pub hardware_acceleration: bool,
}
impl PackageMetadata {
pub async fn load<S: FileSource + Clone>(s9pk: &S9pk<S>) -> Result<Self, Error> {
let manifest = s9pk.as_manifest();
let mut dependency_metadata = BTreeMap::new();
for (id, info) in &manifest.dependencies.0 {
let metadata = s9pk.dependency_metadata(id).await?;
dependency_metadata.insert(
id.clone(),
DependencyMetadata {
title: metadata.map(|m| m.title),
icon: s9pk.dependency_icon_data_url(id).await?,
description: info.description.clone(),
optional: info.optional,
},
);
}
Ok(Self {
title: manifest.title.clone(),
icon: s9pk.icon_data_url().await?,
description: manifest.description.clone(),
release_notes: manifest.release_notes.clone(),
git_hash: manifest.git_hash.clone(),
license: manifest.license.clone(),
wrapper_repo: manifest.wrapper_repo.clone(),
upstream_repo: manifest.upstream_repo.clone(),
support_site: manifest.support_site.clone(),
marketing_site: manifest.marketing_site.clone(),
donation_url: manifest.donation_url.clone(),
docs_url: manifest.docs_url.clone(),
alerts: manifest.alerts.clone(),
dependency_metadata,
os_version: manifest.os_version.clone(),
sdk_version: manifest.sdk_version.clone(),
hardware_acceleration: manifest.hardware_acceleration.clone(),
})
}
#[serde(default)]
pub plugins: BTreeSet<PluginId>,
}

#[derive(Debug, Deserialize, Serialize, HasModel, TS)]
@@ -147,6 +119,8 @@ impl PackageMetadata {
pub struct PackageVersionInfo {
#[serde(flatten)]
pub metadata: PackageMetadata,
pub icon: DataUrl<'static>,
pub dependency_metadata: BTreeMap<PackageId, DependencyMetadata>,
#[ts(type = "string | null")]
pub source_version: Option<VersionRange>,
pub s9pks: Vec<(HardwareRequirements, RegistryAsset<MerkleArchiveCommitment>)>,
@@ -156,11 +130,28 @@ impl PackageVersionInfo {
s9pk: &S9pk<S>,
urls: Vec<Url>,
) -> Result<Self, Error> {
let manifest = s9pk.as_manifest();
|
||||
let icon = s9pk.icon_data_url().await?;
|
||||
let mut dependency_metadata = BTreeMap::new();
|
||||
for (id, info) in &manifest.dependencies.0 {
|
||||
let dep_meta = s9pk.dependency_metadata(id).await?;
|
||||
dependency_metadata.insert(
|
||||
id.clone(),
|
||||
DependencyMetadata {
|
||||
title: dep_meta.map(|m| m.title),
|
||||
icon: s9pk.dependency_icon_data_url(id).await?,
|
||||
description: info.description.clone(),
|
||||
optional: info.optional,
|
||||
},
|
||||
);
|
||||
}
|
||||
Ok(Self {
|
||||
metadata: PackageMetadata::load(s9pk).await?,
|
||||
metadata: manifest.metadata.clone(),
|
||||
icon,
|
||||
dependency_metadata,
|
||||
source_version: None, // TODO
|
||||
s9pks: vec![(
|
||||
s9pk.as_manifest().hardware_requirements.clone(),
|
||||
manifest.hardware_requirements.clone(),
|
||||
RegistryAsset {
|
||||
published_at: Utc::now(),
|
||||
urls,
|
||||
@@ -176,6 +167,27 @@ impl PackageVersionInfo {
|
||||
})
|
||||
}
|
||||
pub fn merge_with(&mut self, other: Self, replace_urls: bool) -> Result<(), Error> {
if self.metadata != other.metadata {
return Err(Error::new(
color_eyre::eyre::eyre!("{}", t!("registry.package.index.metadata-mismatch")),
ErrorKind::InvalidRequest,
));
}
if self.icon != other.icon {
return Err(Error::new(
color_eyre::eyre::eyre!("{}", t!("registry.package.index.icon-mismatch")),
ErrorKind::InvalidRequest,
));
}
if self.dependency_metadata != other.dependency_metadata {
return Err(Error::new(
color_eyre::eyre::eyre!(
"{}",
t!("registry.package.index.dependency-metadata-mismatch")
),
ErrorKind::InvalidRequest,
));
}
for (hw_req, asset) in other.s9pks {
if let Some((_, matching)) = self
.s9pks
@@ -221,10 +233,9 @@ impl PackageVersionInfo {
]);
table.add_row(row![br -> "GIT HASH", self.metadata.git_hash.as_deref().unwrap_or("N/A")]);
table.add_row(row![br -> "LICENSE", &self.metadata.license]);
table.add_row(row![br -> "PACKAGE REPO", &self.metadata.wrapper_repo.to_string()]);
table.add_row(row![br -> "PACKAGE REPO", &self.metadata.package_repo.to_string()]);
table.add_row(row![br -> "SERVICE REPO", &self.metadata.upstream_repo.to_string()]);
table.add_row(row![br -> "WEBSITE", &self.metadata.marketing_site.to_string()]);
table.add_row(row![br -> "SUPPORT", &self.metadata.support_site.to_string()]);
table.add_row(row![br -> "WEBSITE", self.metadata.marketing_url.as_ref().map_or("N/A".to_owned(), |u| u.to_string())]);

table
}
@@ -287,19 +298,17 @@ impl Model<PackageVersionInfo> {
}

if let Some(locale) = device_info.os.language.as_deref() {
let metadata = self.as_metadata_mut();
metadata
self.as_metadata_mut()
.as_alerts_mut()
.mutate(|a| Ok(a.localize_for(locale)))?;
metadata
.as_dependency_metadata_mut()
self.as_dependency_metadata_mut()
.as_entries_mut()?
.into_iter()
.try_for_each(|(_, d)| d.mutate(|d| Ok(d.localize_for(locale))))?;
metadata
self.as_metadata_mut()
.as_description_mut()
.mutate(|d| Ok(d.localize_for(locale)))?;
metadata
self.as_metadata_mut()
.as_release_notes_mut()
.mutate(|r| Ok(r.localize_for(locale)))?;
}

@@ -9,6 +9,7 @@ use tokio::process::Command;

use crate::dependencies::{DepInfo, Dependencies};
use crate::prelude::*;
use crate::registry::package::index::PackageMetadata;
use crate::s9pk::manifest::{DeviceFilter, LocaleString, Manifest};
use crate::s9pk::merkle_archive::directory_contents::DirectoryContents;
use crate::s9pk::merkle_archive::source::TmpSource;
@@ -195,20 +196,30 @@ impl TryFrom<ManifestV1> for Manifest {
}
Ok(Self {
id: value.id,
title: format!("{} (Legacy)", value.title).into(),
version: version.into(),
satisfies: BTreeSet::new(),
release_notes: LocaleString::Translated(value.release_notes),
can_migrate_from: VersionRange::any(),
can_migrate_to: VersionRange::none(),
license: value.license.into(),
wrapper_repo: value.wrapper_repo,
upstream_repo: value.upstream_repo,
support_site: value.support_site.unwrap_or_else(|| default_url.clone()),
marketing_site: value.marketing_site.unwrap_or_else(|| default_url.clone()),
donation_url: value.donation_url,
docs_url: None,
description: value.description,
metadata: PackageMetadata {
title: format!("{} (Legacy)", value.title).into(),
release_notes: LocaleString::Translated(value.release_notes),
license: value.license.into(),
package_repo: value.wrapper_repo,
upstream_repo: value.upstream_repo,
marketing_url: Some(value.marketing_site.unwrap_or_else(|| default_url.clone())),
donation_url: value.donation_url,
docs_urls: Vec::new(),
description: value.description,
alerts: value.alerts,
git_hash: value.git_hash,
os_version: value.eos_version,
sdk_version: None,
hardware_acceleration: match value.main {
PackageProcedure::Docker(d) => d.gpu_acceleration,
PackageProcedure::Script(_) => false,
},
plugins: BTreeSet::new(),
},
images: BTreeMap::new(),
volumes: value
.volumes
@@ -217,7 +228,6 @@ impl TryFrom<ManifestV1> for Manifest {
.map(|(id, _)| id.clone())
.chain([VolumeId::from_str("embassy").unwrap()])
.collect(),
alerts: value.alerts,
dependencies: Dependencies(
value
.dependencies
@@ -252,13 +262,6 @@ impl TryFrom<ManifestV1> for Manifest {
})
.collect(),
},
git_hash: value.git_hash,
os_version: value.eos_version,
sdk_version: None,
hardware_acceleration: match value.main {
PackageProcedure::Docker(d) => d.gpu_acceleration,
PackageProcedure::Script(_) => false,
},
})
}
}

@@ -7,12 +7,11 @@ use exver::{Version, VersionRange};
use imbl_value::{InOMap, InternedString};
use serde::{Deserialize, Serialize};
use ts_rs::TS;
use url::Url;

pub use crate::PackageId;
use crate::dependencies::Dependencies;
use crate::prelude::*;
use crate::s9pk::git_hash::GitHash;
use crate::registry::package::index::PackageMetadata;
use crate::s9pk::merkle_archive::directory_contents::DirectoryContents;
use crate::s9pk::merkle_archive::expected::{Expected, Filter};
use crate::s9pk::v2::pack::ImageConfig;
@@ -22,7 +21,7 @@ use crate::util::{FromStrParser, VersionString, mime};
use crate::version::{Current, VersionT};
use crate::{ImageId, VolumeId};

fn current_version() -> Version {
pub(crate) fn current_version() -> Version {
Current::default().semver()
}

@@ -32,46 +31,20 @@ fn current_version() {
#[ts(export)]
pub struct Manifest {
pub id: PackageId,
#[ts(type = "string")]
pub title: InternedString,
pub version: VersionString,
pub satisfies: BTreeSet<VersionString>,
pub release_notes: LocaleString,
#[ts(type = "string")]
pub can_migrate_to: VersionRange,
#[ts(type = "string")]
pub can_migrate_from: VersionRange,
#[ts(type = "string")]
pub license: InternedString, // type of license
#[ts(type = "string")]
pub wrapper_repo: Url,
#[ts(type = "string")]
pub upstream_repo: Url,
#[ts(type = "string")]
pub support_site: Url,
#[ts(type = "string")]
pub marketing_site: Url,
#[ts(type = "string | null")]
pub donation_url: Option<Url>,
#[ts(type = "string | null")]
pub docs_url: Option<Url>,
pub description: Description,
#[serde(flatten)]
pub metadata: PackageMetadata,
pub images: BTreeMap<ImageId, ImageConfig>,
pub volumes: BTreeSet<VolumeId>,
#[serde(default)]
pub alerts: Alerts,
#[serde(default)]
pub dependencies: Dependencies,
#[serde(default)]
pub hardware_requirements: HardwareRequirements,
#[serde(default)]
pub hardware_acceleration: bool,
pub git_hash: Option<GitHash>,
#[serde(default = "current_version")]
#[ts(type = "string")]
pub os_version: Version,
#[ts(type = "string | null")]
pub sdk_version: Option<Version>,
}
impl Manifest {
pub fn validate_for<'a, T: Clone>(

@@ -685,7 +685,7 @@ pub async fn pack(ctx: CliContext, params: PackParams) -> Result<(), Error> {
.await?;

let manifest = s9pk.as_manifest_mut();
manifest.git_hash = Some(GitHash::from_path(params.path()).await?);
manifest.metadata.git_hash = Some(GitHash::from_path(params.path()).await?);
if !params.arch.is_empty() {
let arches: BTreeSet<InternedString> = match manifest.hardware_requirements.arch.take() {
Some(a) => params
@@ -792,7 +792,7 @@ pub async fn pack(ctx: CliContext, params: PackParams) -> Result<(), Error> {
}
};
Some((
LocaleString::Translated(s9pk.as_manifest().title.to_string()),
LocaleString::Translated(s9pk.as_manifest().metadata.title.to_string()),
s9pk.icon_data_url().await?,
))
}

@@ -17,6 +17,7 @@ use crate::{ActionId, PackageId, ReplayId};

pub(super) struct GetActionInput {
id: ActionId,
prefill: Value,
}
impl Handler<GetActionInput> for ServiceActor {
type Response = Result<Option<ActionInput>, Error>;
@@ -26,7 +27,10 @@ impl Handler<GetActionInput> for ServiceActor {
async fn handle(
&mut self,
id: Guid,
GetActionInput { id: action_id }: GetActionInput,
GetActionInput {
id: action_id,
prefill,
}: GetActionInput,
_: &BackgroundJobQueue,
) -> Self::Response {
let container = &self.0.persistent_container;
@@ -34,7 +38,7 @@ impl Handler<GetActionInput> for ServiceActor {
.execute::<Option<ActionInput>>(
id,
ProcedureName::GetActionInput(action_id),
Value::Null,
json!({ "prefill": prefill }),
Some(Duration::from_secs(30)),
)
.await
@@ -47,6 +51,7 @@ impl Service {
&self,
id: Guid,
action_id: ActionId,
prefill: Value,
) -> Result<Option<ActionInput>, Error> {
if !self
.seed
@@ -67,7 +72,13 @@ impl Service {
return Ok(None);
}
self.actor
.send(id, GetActionInput { id: action_id })
.send(
id,
GetActionInput {
id: action_id,
prefill,
},
)
.await?
}
}

@@ -122,6 +122,10 @@ pub struct GetActionInputParams {
package_id: Option<PackageId>,
#[arg(help = "help.arg.action-id")]
action_id: ActionId,
#[ts(type = "Record<string, unknown> | null")]
#[serde(default)]
#[arg(skip)]
prefill: Option<Value>,
}
async fn get_action_input(
context: EffectContext,
@@ -129,9 +133,11 @@ async fn get_action_input(
procedure_id,
package_id,
action_id,
prefill,
}: GetActionInputParams,
) -> Result<Option<ActionInput>, Error> {
let context = context.deref()?;
let prefill = prefill.unwrap_or(Value::Null);

if let Some(package_id) = package_id {
context
@@ -142,10 +148,12 @@ async fn get_action_input(
.await
.as_ref()
.or_not_found(&package_id)?
.get_action_input(procedure_id, action_id)
.get_action_input(procedure_id, action_id, prefill)
.await
} else {
context.get_action_input(procedure_id, action_id).await
context
.get_action_input(procedure_id, action_id, prefill)
.await
}
}

@@ -245,7 +253,7 @@ async fn create_task(
.as_ref()
{
let Some(prev) = service
.get_action_input(procedure_id.clone(), task.action_id.clone())
.get_action_input(procedure_id.clone(), task.action_id.clone(), Value::Null)
.await?
else {
return Err(Error::new(

@@ -5,15 +5,16 @@ use std::time::{Duration, SystemTime};

use clap::Parser;
use futures::future::join_all;
use imbl::{Vector, vector};
use imbl::{OrdMap, Vector, vector};
use imbl_value::InternedString;
use patch_db::TypedDbWatch;
use patch_db::json_ptr::JsonPointer;
use serde::{Deserialize, Serialize};
use tracing::warn;
use ts_rs::TS;

use patch_db::json_ptr::JsonPointer;

use crate::db::model::Database;
use crate::db::model::public::NetworkInterfaceInfo;
use crate::net::ssl::FullchainCertData;
use crate::prelude::*;
use crate::service::effects::context::EffectContext;
@@ -22,7 +23,7 @@ use crate::service::rpc::{CallbackHandle, CallbackId};
use crate::service::{Service, ServiceActorSeed};
use crate::util::collections::EqMap;
use crate::util::future::NonDetachingJoinHandle;
use crate::{HostId, PackageId, ServiceInterfaceId};
use crate::{GatewayId, HostId, PackageId, ServiceInterfaceId};

#[derive(Default)]
pub struct ServiceCallbacks(Mutex<ServiceCallbackMap>);
@@ -32,7 +33,8 @@ struct ServiceCallbackMap {
get_service_interface: BTreeMap<(PackageId, ServiceInterfaceId), Vec<CallbackHandler>>,
list_service_interfaces: BTreeMap<PackageId, Vec<CallbackHandler>>,
get_system_smtp: Vec<CallbackHandler>,
get_host_info: BTreeMap<(PackageId, HostId), (NonDetachingJoinHandle<()>, Vec<CallbackHandler>)>,
get_host_info:
BTreeMap<(PackageId, HostId), (NonDetachingJoinHandle<()>, Vec<CallbackHandler>)>,
get_ssl_certificate: EqMap<
(BTreeSet<InternedString>, FullchainCertData, Algorithm),
(NonDetachingJoinHandle<()>, Vec<CallbackHandler>),
@@ -40,6 +42,7 @@ struct ServiceCallbackMap {
get_status: BTreeMap<PackageId, Vec<CallbackHandler>>,
get_container_ip: BTreeMap<PackageId, Vec<CallbackHandler>>,
get_service_manifest: BTreeMap<PackageId, Vec<CallbackHandler>>,
get_outbound_gateway: BTreeMap<PackageId, (NonDetachingJoinHandle<()>, Vec<CallbackHandler>)>,
}

impl ServiceCallbacks {
@@ -76,6 +79,10 @@ impl ServiceCallbacks {
v.retain(|h| h.handle.is_active() && h.seed.strong_count() > 0);
!v.is_empty()
});
this.get_outbound_gateway.retain(|_, (_, v)| {
v.retain(|h| h.handle.is_active() && h.seed.strong_count() > 0);
!v.is_empty()
});
})
}

@@ -154,12 +161,10 @@ impl ServiceCallbacks {
this.get_host_info
.entry((package_id.clone(), host_id.clone()))
.or_insert_with(|| {
let ptr: JsonPointer = format!(
"/public/packageData/{}/hosts/{}",
package_id, host_id
)
.parse()
.expect("valid json pointer");
let ptr: JsonPointer =
format!("/public/packageData/{}/hosts/{}", package_id, host_id)
.parse()
.expect("valid json pointer");
let db = db.clone();
let callbacks = Arc::clone(self);
let key = (package_id, host_id);
@@ -174,9 +179,7 @@ impl ServiceCallbacks {
.filter(|cb| !cb.0.is_empty())
}) {
if let Err(e) = cbs.call(vector![]).await {
tracing::error!(
"Error in host info callback: {e}"
);
tracing::error!("Error in host info callback: {e}");
tracing::debug!("{e:?}");
}
}
@@ -287,6 +290,61 @@ impl ServiceCallbacks {
})
}

/// Register a callback for outbound gateway changes.
pub(super) fn add_get_outbound_gateway(
self: &Arc<Self>,
package_id: PackageId,
mut outbound_gateway: TypedDbWatch<Option<GatewayId>>,
mut default_outbound: Option<TypedDbWatch<Option<GatewayId>>>,
mut fallback: Option<TypedDbWatch<OrdMap<GatewayId, NetworkInterfaceInfo>>>,
handler: CallbackHandler,
) {
self.mutate(|this| {
this.get_outbound_gateway
.entry(package_id.clone())
.or_insert_with(|| {
let callbacks = Arc::clone(self);
let key = package_id;
(
tokio::spawn(async move {
tokio::select! {
_ = outbound_gateway.changed() => {}
_ = async {
if let Some(ref mut w) = default_outbound {
let _ = w.changed().await;
} else {
std::future::pending::<()>().await;
}
} => {}
_ = async {
if let Some(ref mut w) = fallback {
let _ = w.changed().await;
} else {
std::future::pending::<()>().await;
}
} => {}
}
if let Some(cbs) = callbacks.mutate(|this| {
this.get_outbound_gateway
.remove(&key)
.map(|(_, handlers)| CallbackHandlers(handlers))
.filter(|cb| !cb.0.is_empty())
}) {
if let Err(e) = cbs.call(vector![]).await {
tracing::error!("Error in outbound gateway callback: {e}");
tracing::debug!("{e:?}");
}
}
})
.into(),
Vec::new(),
)
})
.1
.push(handler);
})
}

pub(super) fn add_get_service_manifest(&self, package_id: PackageId, handler: CallbackHandler) {
self.mutate(|this| {
this.get_service_manifest

@@ -14,6 +14,7 @@ mod control;
mod dependency;
mod health;
mod net;
pub mod plugin;
mod prelude;
pub mod subcontainer;
mod system;
@@ -142,6 +143,10 @@ pub fn handler<C: Context>() -> ParentHandler<C> {
"get-container-ip",
from_fn_async(net::info::get_container_ip).no_cli(),
)
.subcommand(
"get-outbound-gateway",
from_fn_async(net::info::get_outbound_gateway).no_cli(),
)
.subcommand(
"get-os-ip",
from_fn(|_: C| Ok::<_, Error>(Ipv4Addr::from(HOST_IP))),
@@ -167,6 +172,23 @@ pub fn handler<C: Context>() -> ParentHandler<C> {
from_fn_async(net::ssl::get_ssl_certificate).no_cli(),
)
.subcommand("get-ssl-key", from_fn_async(net::ssl::get_ssl_key).no_cli())
// plugin
.subcommand(
"plugin",
ParentHandler::<C>::new().subcommand(
"url",
ParentHandler::<C>::new()
.subcommand("register", from_fn_async(net::plugin::register).no_cli())
.subcommand(
"export-url",
from_fn_async(net::plugin::export_url).no_cli(),
)
.subcommand(
"clear-urls",
from_fn_async(net::plugin::clear_urls).no_cli(),
),
),
)
.subcommand(
"set-data-version",
from_fn_async(version::set_data_version)

@@ -1,9 +1,16 @@
use std::net::Ipv4Addr;

use crate::PackageId;
use imbl::OrdMap;
use patch_db::TypedDbWatch;
use patch_db::json_ptr::JsonPointer;
use tokio::process::Command;

use crate::db::model::public::NetworkInterfaceInfo;
use crate::service::effects::callbacks::CallbackHandler;
use crate::service::effects::prelude::*;
use crate::service::rpc::CallbackId;
use crate::util::Invoke;
use crate::{GatewayId, PackageId};

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize, TS)]
#[serde(rename_all = "camelCase")]
@@ -51,3 +58,116 @@ pub async fn get_container_ip(
lxc.ip().await.map(Some)
}
}

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
pub struct GetOutboundGatewayParams {
#[ts(optional)]
callback: Option<CallbackId>,
}

pub async fn get_outbound_gateway(
context: EffectContext,
GetOutboundGatewayParams { callback }: GetOutboundGatewayParams,
) -> Result<GatewayId, Error> {
let context = context.deref()?;
let ctx = &context.seed.ctx;

// Resolve the effective gateway; DB watches are created atomically
// with each read to avoid race conditions.
let (gw, pkg_watch, os_watch, gateways_watch) =
resolve_outbound_gateway(ctx, &context.seed.id).await?;

if let Some(callback) = callback {
let callback = callback.register(&context.seed.persistent_container);
context.seed.ctx.callbacks.add_get_outbound_gateway(
context.seed.id.clone(),
pkg_watch,
os_watch,
gateways_watch,
CallbackHandler::new(&context, callback),
);
}

Ok(gw)
}

async fn resolve_outbound_gateway(
ctx: &crate::context::RpcContext,
package_id: &PackageId,
) -> Result<
(
GatewayId,
TypedDbWatch<Option<GatewayId>>,
Option<TypedDbWatch<Option<GatewayId>>>,
Option<TypedDbWatch<OrdMap<GatewayId, NetworkInterfaceInfo>>>,
),
Error,
> {
// 1. Package-specific outbound gateway — subscribe before reading
let pkg_ptr: JsonPointer = format!("/public/packageData/{}/outboundGateway", package_id)
.parse()
.expect("valid json pointer");
let mut pkg_watch = ctx.db.watch(pkg_ptr).await;
let pkg_gw: Option<GatewayId> = imbl_value::from_value(pkg_watch.peek_and_mark_seen()?)?;

if let Some(gw) = pkg_gw {
return Ok((gw, pkg_watch.typed(), None, None));
}

// 2. OS-level default outbound — subscribe before reading
let os_ptr: JsonPointer = "/public/serverInfo/network/defaultOutbound"
.parse()
.expect("valid json pointer");
let mut os_watch = ctx.db.watch(os_ptr).await;
let default_outbound: Option<GatewayId> =
imbl_value::from_value(os_watch.peek_and_mark_seen()?)?;

if let Some(gw) = default_outbound {
return Ok((gw, pkg_watch.typed(), Some(os_watch.typed()), None));
}

// 3. Fall through to main routing table — watch gateways for changes
let gw_ptr: JsonPointer = "/public/serverInfo/network/gateways"
.parse()
.expect("valid json pointer");
let mut gateways_watch = ctx.db.watch(gw_ptr).await;
gateways_watch.peek_and_mark_seen()?;

let gw = default_route_interface().await?;
Ok((
gw,
pkg_watch.typed(),
Some(os_watch.typed()),
Some(gateways_watch.typed()),
))
}

/// Parses `ip route show table main` for the default route's `dev` field.
async fn default_route_interface() -> Result<GatewayId, Error> {
let output = Command::new("ip")
.arg("route")
.arg("show")
.arg("table")
.arg("main")
.invoke(ErrorKind::Network)
.await?;
let text = String::from_utf8_lossy(&output);
for line in text.lines() {
if line.starts_with("default ") {
let mut parts = line.split_whitespace();
while let Some(tok) = parts.next() {
if tok == "dev" {
if let Some(dev) = parts.next() {
return Ok(dev.parse().unwrap());
}
}
}
}
}
Err(Error::new(
eyre!("no default route found in main routing table"),
ErrorKind::Network,
))
}

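The extraction loop in `default_route_interface` above can be exercised in isolation. This is a minimal sketch under the assumption that the parsing step is factored into a hypothetical helper `parse_default_dev`, returning the `dev` field as a plain `&str` rather than a `GatewayId`:

```rust
// Hypothetical stand-in for the parsing step of `default_route_interface`:
// scan `ip route show table main` output for the first `default` route
// and return the token following its `dev` field, if any.
fn parse_default_dev(route_output: &str) -> Option<&str> {
    for line in route_output.lines() {
        if line.starts_with("default ") {
            let mut parts = line.split_whitespace();
            while let Some(tok) = parts.next() {
                if tok == "dev" {
                    return parts.next();
                }
            }
        }
    }
    None
}

fn main() {
    let sample = "default via 192.168.1.1 dev eth0 proto dhcp metric 100\n\
                  192.168.1.0/24 dev eth0 proto kernel scope link";
    assert_eq!(parse_default_dev(sample), Some("eth0"));
    // A table with no default route yields no gateway.
    assert_eq!(parse_default_dev("192.168.1.0/24 dev eth0 scope link"), None);
}
```

Keeping the pure parsing separate from the `Command` invocation makes the fallback path unit-testable without shelling out to `ip`.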
@@ -2,4 +2,5 @@ pub mod bind;
pub mod host;
pub mod info;
pub mod interface;
pub mod plugin;
pub mod ssl;

core/src/service/effects/net/plugin.rs (Normal file, 176 lines)
@@ -0,0 +1,176 @@
use std::collections::BTreeSet;
use std::sync::Arc;

use crate::ActionId;
use crate::net::host::{all_hosts, host_for};
use crate::net::service_interface::{HostnameMetadata, PluginHostnameInfo};
use crate::service::Service;
use crate::service::effects::plugin::PluginId;
use crate::service::effects::prelude::*;

fn require_url_plugin(context: &Arc<Service>) -> Result<(), Error> {
if !context
.seed
.persistent_container
.s9pk
.as_manifest()
.metadata
.plugins
.contains(&PluginId::UrlV0)
{
return Err(Error::new(
eyre!(
"{}",
t!("net.plugin.manifest-missing-plugin", plugin = "url-v0")
),
ErrorKind::InvalidRequest,
));
}
Ok(())
}

#[derive(Debug, Clone, Serialize, Deserialize, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
pub struct UrlPluginRegisterParams {
pub table_action: ActionId,
}

pub async fn register(
context: EffectContext,
UrlPluginRegisterParams { table_action }: UrlPluginRegisterParams,
) -> Result<(), Error> {
use crate::db::model::package::UrlPluginRegistration;

let context = context.deref()?;
require_url_plugin(&context)?;
let plugin_id = context.seed.id.clone();

context
.seed
.ctx
.db
.mutate(|db| {
db.as_public_mut()
.as_package_data_mut()
.as_idx_mut(&plugin_id)
.or_not_found(&plugin_id)?
.as_plugin_mut()
.as_url_mut()
.ser(&Some(UrlPluginRegistration { table_action }))?;
Ok(())
})
.await
.result?;

Ok(())
}

#[derive(Debug, Clone, Serialize, Deserialize, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
pub struct UrlPluginExportUrlParams {
pub hostname_info: PluginHostnameInfo,
pub remove_action: Option<ActionId>,
pub overflow_actions: Vec<ActionId>,
}

pub async fn export_url(
context: EffectContext,
UrlPluginExportUrlParams {
hostname_info,
remove_action,
overflow_actions,
}: UrlPluginExportUrlParams,
) -> Result<(), Error> {
let context = context.deref()?;
require_url_plugin(&context)?;
let plugin_id = context.seed.id.clone();

let entry = hostname_info.to_hostname_info(&plugin_id, remove_action, overflow_actions);

context
.seed
.ctx
.db
.mutate(|db| {
let host = host_for(
db,
hostname_info.package_id.as_ref(),
&hostname_info.host_id,
)?;
host.as_bindings_mut()
.as_idx_mut(&hostname_info.internal_port)
.or_not_found(t!(
"net.plugin.binding-not-found",
binding = format!(
"{}:{}:{}",
hostname_info.package_id.as_deref().unwrap_or("STARTOS"),
hostname_info.host_id,
hostname_info.internal_port
)
))?
.as_addresses_mut()
.as_available_mut()
.mutate(|available: &mut BTreeSet<_>| {
available.insert(entry);
Ok(())
})?;
Ok(())
})
.await
.result?;

Ok(())
}

#[derive(Debug, Clone, Serialize, Deserialize, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
pub struct UrlPluginClearUrlsParams {
pub except: BTreeSet<PluginHostnameInfo>,
}

pub async fn clear_urls(
context: EffectContext,
UrlPluginClearUrlsParams { except }: UrlPluginClearUrlsParams,
) -> Result<(), Error> {
let context = context.deref()?;
require_url_plugin(&context)?;
let plugin_id = context.seed.id.clone();

context
.seed
.ctx
.db
.mutate(|db| {
for host in all_hosts(db) {
let host = host?;
for (_, bind) in host.as_bindings_mut().as_entries_mut()? {
bind.as_addresses_mut().as_available_mut().mutate(
|available: &mut BTreeSet<_>| {
available.retain(|h| {
match &h.metadata {
HostnameMetadata::Plugin { package_id, .. }
if package_id == &plugin_id =>
{
// Keep if it matches any entry in the except list
except
.iter()
.any(|e| e.matches_hostname_info(h, &plugin_id))
}
_ => true,
}
});
Ok(())
},
)?;
}
}
Ok(())
})
.await
.result?;

Ok(())
}
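The `retain` predicate in `clear_urls` above drops every address owned by the calling plugin unless it matches an entry in the `except` set, while addresses owned by anyone else are always kept. A minimal sketch of that filtering rule, using plain `(owner, hostname)` string pairs as a stand-in for the real hostname-info type (the helper `clear_plugin_entries` is hypothetical):

```rust
use std::collections::BTreeSet;

// Hypothetical illustration of the `clear_urls` retain logic:
// remove entries owned by `plugin_id` except those whose hostname is
// listed in `except`; entries owned by other packages are untouched.
fn clear_plugin_entries(
    available: &mut BTreeSet<(String, String)>, // (owner, hostname)
    plugin_id: &str,
    except: &BTreeSet<String>,
) {
    available.retain(|(owner, hostname)| owner != plugin_id || except.contains(hostname));
}

fn main() {
    let mut available: BTreeSet<(String, String)> = [
        ("url-plugin".to_string(), "a.example".to_string()),
        ("url-plugin".to_string(), "b.example".to_string()),
        ("other-pkg".to_string(), "c.example".to_string()),
    ]
    .into_iter()
    .collect();
    let except: BTreeSet<String> = ["b.example".to_string()].into_iter().collect();

    clear_plugin_entries(&mut available, "url-plugin", &except);

    // "a.example" is cleared; "b.example" survives via `except`;
    // "other-pkg" entries are never touched.
    assert_eq!(available.len(), 2);
    assert!(available.contains(&("url-plugin".to_string(), "b.example".to_string())));
}
```

The same shape (ownership check short-circuiting before the exception lookup) is what keeps one plugin's `clear-urls` call from disturbing another package's exported addresses.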
core/src/service/effects/plugin.rs (Normal file, 9 lines)
@@ -0,0 +1,9 @@
use serde::{Deserialize, Serialize};
use ts_rs::TS;

#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize, TS)]
#[serde(rename_all = "kebab-case")]
#[ts(export)]
pub enum PluginId {
UrlV0,
}
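With `#[serde(rename_all = "kebab-case")]`, the variant `UrlV0` serializes as `"url-v0"`, which is the same string `require_url_plugin` reports in its error message. A hypothetical sketch of that renaming for CamelCase identifiers (serde_derive implements the real conversion; this approximation just lowercases and inserts a dash before each interior uppercase letter):

```rust
// Hypothetical approximation of serde's `rename_all = "kebab-case"` rule
// for CamelCase identifiers like the `PluginId` variants above.
fn to_kebab_case(ident: &str) -> String {
    let mut out = String::new();
    for (i, c) in ident.chars().enumerate() {
        if c.is_uppercase() {
            if i > 0 {
                out.push('-'); // word boundary before each interior capital
            }
            out.extend(c.to_lowercase());
        } else {
            out.push(c);
        }
    }
    out
}

fn main() {
    assert_eq!(to_kebab_case("UrlV0"), "url-v0");
    assert_eq!(to_kebab_case("ClearUrls"), "clear-urls");
}
```

This is why the manifest's `plugins` set and the `net.plugin.*` error strings can agree on `"url-v0"` as the wire name for `PluginId::UrlV0`.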
@@ -16,7 +16,7 @@ use futures::{FutureExt, SinkExt, StreamExt, TryStreamExt};
use imbl_value::{InternedString, json};
use itertools::Itertools;
use nix::sys::signal::Signal;
use persistent_container::{PersistentContainer, Subcontainer};
use persistent_container::PersistentContainer;
use rpc_toolkit::HandlerArgs;
use rpc_toolkit::yajrc::RpcError;
use serde::{Deserialize, Serialize};

@@ -534,7 +534,7 @@ impl Service {
                .contains_key(&action_id)?
            {
                if let Some(input) = service
                    .get_action_input(procedure_id.clone(), action_id.clone())
                    .get_action_input(procedure_id.clone(), action_id.clone(), Value::Null)
                    .await
                    .log_err()
                    .flatten()

@@ -587,6 +587,7 @@ impl Service {
                entry.as_developer_key_mut().ser(&Pem::new(developer_key))?;
                entry.as_icon_mut().ser(&icon)?;
                entry.as_registry_mut().ser(registry)?;
                entry.as_status_info_mut().as_error_mut().ser(&None)?;

                Ok(())
            })

@@ -1195,6 +1196,9 @@ pub async fn cli_attach(
    {
        Ok(a) => a,
        Err(e) => {
            if e.kind != ErrorKind::InvalidRequest {
                return Err(e);
            }
            let prompt = e.to_string();
            let options: Vec<SubcontainerInfo> = from_value(e.info)?;
            let choice = choose(&prompt, &options).await?;

@@ -1207,6 +1211,7 @@ pub async fn cli_attach(
    )?;
    let mut ws = context.ws_continuation(guid).await?;

    print!("\r");
    let (kill, thread_kill) = tokio::sync::oneshot::channel();
    let (thread_send, recv) = tokio::sync::mpsc::channel(4 * CAP_1_KiB);
    let stdin_thread: NonDetachingJoinHandle<()> = tokio::task::spawn_blocking(move || {

@@ -1235,18 +1240,6 @@ pub async fn cli_attach(
    let mut stderr = Some(stderr);
    loop {
        futures::select_biased! {
            // signal = tokio:: => {
            //     let exit = exit?;
            //     if current_out != "exit" {
            //         ws.send(Message::Text("exit".into()))
            //             .await
            //             .with_kind(ErrorKind::Network)?;
            //         current_out = "exit";
            //     }
            //     ws.send(Message::Binary(
            //         i32::to_be_bytes(exit.into_raw()).to_vec()
            //     )).await.with_kind(ErrorKind::Network)?;
            // }
            input = stdin.as_mut().map_or(
                futures::future::Either::Left(futures::future::pending()),
                |s| futures::future::Either::Right(s.recv())
@@ -97,7 +97,7 @@ impl PersistentContainer {
                .join(&s9pk.as_manifest().id),
            ),
            LxcConfig {
                hardware_acceleration: s9pk.manifest.hardware_acceleration,
                hardware_acceleration: s9pk.manifest.metadata.hardware_acceleration,
            },
        )
        .await?;
@@ -260,6 +260,7 @@ impl ServiceMap {
                    hosts: Default::default(),
                    store_exposed_dependents: Default::default(),
                    outbound_gateway: None,
                    plugin: Default::default(),
                },
            )?;
        };
@@ -1,9 +1,12 @@
use std::collections::BTreeSet;
use std::path::Path;

use imbl::vector;

use crate::context::RpcContext;
use crate::db::model::package::{InstalledState, InstallingInfo, InstallingState, PackageState};
use crate::net::host::all_hosts;
use crate::net::service_interface::{HostnameInfo, HostnameMetadata};
use crate::prelude::*;
use crate::volume::PKG_VOLUME_DIR;
use crate::{DATA_DIR, PACKAGE_DATA, PackageId};

@@ -36,6 +39,24 @@ pub async fn cleanup(ctx: &RpcContext, id: &PackageId, soft: bool) -> Result<(),
            Ok(())
        })?;
        d.as_private_mut().as_package_stores_mut().remove(&id)?;
        // Remove plugin URLs exported by this package from all hosts
        for host in all_hosts(d) {
            let host = host?;
            for (_, bind) in host.as_bindings_mut().as_entries_mut()? {
                bind.as_addresses_mut().as_available_mut().mutate(
                    |available: &mut BTreeSet<HostnameInfo>| {
                        available.retain(|h| {
                            !matches!(
                                &h.metadata,
                                HostnameMetadata::Plugin { package_id, .. }
                                    if package_id == id
                            )
                        });
                        Ok(())
                    },
                )?;
            }
        }
        Ok(Some(pde))
    } else {
        Ok(None)
@@ -31,6 +31,7 @@ use crate::disk::mount::filesystem::ReadWrite;
use crate::disk::mount::filesystem::cifs::Cifs;
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::disk::util::{DiskInfo, StartOsRecoveryInfo, pvscan, recovery_info};
use crate::hostname::ServerHostnameInfo;
use crate::init::{InitPhases, InitResult, init};
use crate::net::ssl::root_ca_start_time;
use crate::prelude::*;

@@ -115,6 +116,7 @@ async fn setup_init(
    ctx: &SetupContext,
    password: Option<String>,
    kiosk: Option<bool>,
    hostname: Option<ServerHostnameInfo>,
    init_phases: InitPhases,
) -> Result<(AccountInfo, InitResult), Error> {
    let init_result = init(&ctx.webserver, &ctx.config.peek(|c| c.clone()), init_phases).await?;

@@ -129,6 +131,9 @@ async fn setup_init(
        if let Some(password) = &password {
            account.set_password(password)?;
        }
        if let Some(hostname) = hostname {
            account.hostname = hostname;
        }
        account.save(m)?;
        let info = m.as_public_mut().as_server_info_mut();
        info.as_password_hash_mut().ser(&account.password)?;

@@ -171,6 +176,8 @@ pub struct AttachParams {
    pub guid: InternedString,
    #[ts(optional)]
    pub kiosk: Option<bool>,
    pub name: Option<InternedString>,
    pub hostname: Option<InternedString>,
}

#[instrument(skip_all)]
@@ -180,6 +187,8 @@ pub async fn attach(
        password,
        guid: disk_guid,
        kiosk,
        name,
        hostname,
    }: AttachParams,
) -> Result<SetupProgress, Error> {
    let setup_ctx = ctx.clone();

@@ -233,7 +242,10 @@ pub async fn attach(
    }
    disk_phase.complete();

    let (account, net_ctrl) = setup_init(&setup_ctx, password, kiosk, init_phases).await?;
    let hostname = ServerHostnameInfo::new_opt(name, hostname)?;

    let (account, net_ctrl) =
        setup_init(&setup_ctx, password, kiosk, hostname, init_phases).await?;

    let rpc_ctx = RpcContext::init(
        &setup_ctx.webserver,

@@ -246,7 +258,7 @@ pub async fn attach(

    Ok((
        SetupResult {
            hostname: account.hostname,
            hostname: account.hostname.hostname,
            root_ca: Pem(account.root_ca_cert),
            needs_restart: setup_ctx.install_rootfs.peek(|a| a.is_some()),
        },

@@ -406,6 +418,8 @@ pub struct SetupExecuteParams {
    recovery_source: Option<RecoverySource<EncryptedWire>>,
    #[ts(optional)]
    kiosk: Option<bool>,
    name: Option<InternedString>,
    hostname: Option<InternedString>,
}

// #[command(rpc_only)]
@@ -416,6 +430,8 @@ pub async fn execute(
        password,
        recovery_source,
        kiosk,
        name,
        hostname,
    }: SetupExecuteParams,
) -> Result<SetupProgress, Error> {
    let password = match password.decrypt(&ctx) {

@@ -446,8 +462,10 @@ pub async fn execute(
        None => None,
    };

    let hostname = ServerHostnameInfo::new_opt(name, hostname)?;

    let setup_ctx = ctx.clone();
    ctx.run_setup(move || execute_inner(setup_ctx, guid, password, recovery, kiosk))?;
    ctx.run_setup(move || execute_inner(setup_ctx, guid, password, recovery, kiosk, hostname))?;

    Ok(ctx.progress().await)
}

@@ -462,7 +480,7 @@ pub async fn complete(ctx: SetupContext) -> Result<SetupResult, Error> {
    guid_file.sync_all().await?;
    Command::new("systemd-firstboot")
        .arg("--root=/media/startos/config/overlay/")
        .arg(format!("--hostname={}", res.hostname.0))
        .arg(format!("--hostname={}", res.hostname.as_ref()))
        .invoke(ErrorKind::ParseSysInfo)
        .await?;
    Command::new("sync").invoke(ErrorKind::Filesystem).await?;

@@ -536,6 +554,7 @@ pub async fn execute_inner(
    password: String,
    recovery_source: Option<RecoverySource<String>>,
    kiosk: Option<bool>,
    hostname: Option<ServerHostnameInfo>,
) -> Result<(SetupResult, RpcContext), Error> {
    let progress = &ctx.progress;
    let restore_phase = match recovery_source.as_ref() {

@@ -570,14 +589,15 @@ pub async fn execute_inner(
                server_id,
                recovery_password,
                kiosk,
                hostname,
                progress,
            )
            .await
        }
        Some(RecoverySource::Migrate { guid: old_guid }) => {
            migrate(&ctx, guid, &old_guid, password, kiosk, progress).await
            migrate(&ctx, guid, &old_guid, password, kiosk, hostname, progress).await
        }
        None => fresh_setup(&ctx, guid, &password, kiosk, progress).await,
        None => fresh_setup(&ctx, guid, &password, kiosk, hostname, progress).await,
    }
}

@@ -592,13 +612,14 @@ async fn fresh_setup(
    guid: InternedString,
    password: &str,
    kiosk: Option<bool>,
    hostname: Option<ServerHostnameInfo>,
    SetupExecuteProgress {
        init_phases,
        rpc_ctx_phases,
        ..
    }: SetupExecuteProgress,
) -> Result<(SetupResult, RpcContext), Error> {
    let account = AccountInfo::new(password, root_ca_start_time().await)?;
    let account = AccountInfo::new(password, root_ca_start_time().await, hostname)?;
    let db = ctx.db().await?;
    let kiosk = Some(kiosk.unwrap_or(true)).filter(|_| &*PLATFORM != "raspberrypi");
    sync_kiosk(kiosk).await?;

@@ -635,7 +656,7 @@ async fn fresh_setup(

    Ok((
        SetupResult {
            hostname: account.hostname,
            hostname: account.hostname.hostname,
            root_ca: Pem(account.root_ca_cert),
            needs_restart: ctx.install_rootfs.peek(|a| a.is_some()),
        },

@@ -652,6 +673,7 @@ async fn recover(
    server_id: String,
    recovery_password: String,
    kiosk: Option<bool>,
    hostname: Option<ServerHostnameInfo>,
    progress: SetupExecuteProgress,
) -> Result<(SetupResult, RpcContext), Error> {
    let recovery_source = TmpMountGuard::mount(&recovery_source, ReadWrite).await?;

@@ -663,6 +685,7 @@ async fn recover(
        &server_id,
        &recovery_password,
        kiosk,
        hostname,
        progress,
    )
    .await

@@ -675,6 +698,7 @@ async fn migrate(
    old_guid: &str,
    password: String,
    kiosk: Option<bool>,
    hostname: Option<ServerHostnameInfo>,
    SetupExecuteProgress {
        init_phases,
        restore_phase,

@@ -753,7 +777,8 @@ async fn migrate(
    crate::disk::main::export(&old_guid, "/media/startos/migrate").await?;
    restore_phase.complete();

    let (account, net_ctrl) = setup_init(&ctx, Some(password), kiosk, init_phases).await?;
    let (account, net_ctrl) =
        setup_init(&ctx, Some(password), kiosk, hostname, init_phases).await?;

    let rpc_ctx = RpcContext::init(
        &ctx.webserver,

@@ -766,7 +791,7 @@ async fn migrate(

    Ok((
        SetupResult {
            hostname: account.hostname,
            hostname: account.hostname.hostname,
            root_ca: Pem(account.root_ca_cert),
            needs_restart: ctx.install_rootfs.peek(|a| a.is_some()),
        },
@@ -12,7 +12,7 @@ use tracing::instrument;
use ts_rs::TS;

use crate::context::{CliContext, RpcContext};
use crate::hostname::Hostname;
use crate::hostname::ServerHostname;
use crate::prelude::*;
use crate::util::io::create_file;
use crate::util::serde::{HandlerExtSerde, Pem, WithIoFormat, display_serializable};

@@ -125,7 +125,10 @@ pub struct SshAddParams {
}

#[instrument(skip_all)]
pub async fn add(ctx: RpcContext, SshAddParams { key }: SshAddParams) -> Result<SshKeyResponse, Error> {
pub async fn add(
    ctx: RpcContext,
    SshAddParams { key }: SshAddParams,
) -> Result<SshKeyResponse, Error> {
    let mut key = WithTimeData::new(key);
    let fingerprint = InternedString::intern(key.0.fingerprint_md5());
    let (keys, res) = ctx

@@ -238,7 +241,7 @@ pub async fn list(ctx: RpcContext) -> Result<Vec<SshKeyResponse>, Error> {

#[instrument(skip_all)]
pub async fn sync_keys<P: AsRef<Path>>(
    hostname: &Hostname,
    hostname: &ServerHostname,
    privkey: &Pem<ssh_key::PrivateKey>,
    pubkeys: &SshKeys,
    ssh_dir: P,

@@ -284,8 +287,8 @@ pub async fn sync_keys<P: AsRef<Path>>(
            .to_openssh()
            .with_kind(ErrorKind::OpenSsh)?
            + " start9@"
            + &*hostname.0)
            .as_bytes(),
            + hostname.as_ref())
            .as_bytes(),
    )
    .await?;
    f.write_all(b"\n").await?;
@@ -1049,20 +1049,36 @@ async fn get_disk_info() -> Result<MetricsDisk, Error> {
    })
}

#[derive(
    Debug, Clone, Copy, Default, serde::Serialize, serde::Deserialize, TS, clap::ValueEnum,
)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub enum SmtpSecurity {
    #[default]
    Starttls,
    Tls,
}

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct SmtpValue {
    #[arg(long, help = "help.arg.smtp-server")]
    pub server: String,
    #[arg(long, help = "help.arg.smtp-host")]
    #[serde(alias = "server")]
    pub host: String,
    #[arg(long, help = "help.arg.smtp-port")]
    pub port: u16,
    #[arg(long, help = "help.arg.smtp-from")]
    pub from: String,
    #[arg(long, help = "help.arg.smtp-login")]
    pub login: String,
    #[arg(long, help = "help.arg.smtp-username")]
    #[serde(alias = "login")]
    pub username: String,
    #[arg(long, help = "help.arg.smtp-password")]
    pub password: Option<String>,
    #[arg(long, help = "help.arg.smtp-security")]
    #[serde(default)]
    pub security: SmtpSecurity,
}
pub async fn set_system_smtp(ctx: RpcContext, smtp: SmtpValue) -> Result<(), Error> {
    let smtp = Some(smtp);

@@ -1095,51 +1111,89 @@ pub async fn clear_system_smtp(ctx: RpcContext) -> Result<(), Error> {
    }
    Ok(())
}

#[derive(Debug, Clone, Deserialize, Serialize, Parser)]
pub struct SetIfconfigUrlParams {
    #[arg(help = "help.arg.ifconfig-url")]
    pub url: url::Url,
}

pub async fn set_ifconfig_url(
    ctx: RpcContext,
    SetIfconfigUrlParams { url }: SetIfconfigUrlParams,
) -> Result<(), Error> {
    ctx.db
        .mutate(|db| {
            db.as_public_mut()
                .as_server_info_mut()
                .as_ifconfig_url_mut()
                .ser(&url)
        })
        .await
        .result
}

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize, Parser, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct TestSmtpParams {
    #[arg(long, help = "help.arg.smtp-server")]
    pub server: String,
    #[arg(long, help = "help.arg.smtp-host")]
    pub host: String,
    #[arg(long, help = "help.arg.smtp-port")]
    pub port: u16,
    #[arg(long, help = "help.arg.smtp-from")]
    pub from: String,
    #[arg(long, help = "help.arg.smtp-to")]
    pub to: String,
    #[arg(long, help = "help.arg.smtp-login")]
    pub login: String,
    #[arg(long, help = "help.arg.smtp-username")]
    pub username: String,
    #[arg(long, help = "help.arg.smtp-password")]
    pub password: String,
    #[arg(long, help = "help.arg.smtp-security")]
    #[serde(default)]
    pub security: SmtpSecurity,
}
pub async fn test_smtp(
    _: RpcContext,
    TestSmtpParams {
        server,
        host,
        port,
        from,
        to,
        login,
        username,
        password,
        security,
    }: TestSmtpParams,
) -> Result<(), Error> {
    use lettre::message::header::ContentType;
    use lettre::transport::smtp::authentication::Credentials;
    use lettre::transport::smtp::client::{Tls, TlsParameters};
    use lettre::{AsyncSmtpTransport, AsyncTransport, Message, Tokio1Executor};

    AsyncSmtpTransport::<Tokio1Executor>::relay(&server)?
        .port(port)
        .credentials(Credentials::new(login, password))
        .build()
        .send(
            Message::builder()
                .from(from.parse()?)
                .to(to.parse()?)
                .subject("StartOS Test Email")
                .header(ContentType::TEXT_PLAIN)
                .body("This is a test email sent from your StartOS Server".to_owned())?,
        )
        .await?;
    let creds = Credentials::new(username, password);
    let message = Message::builder()
        .from(from.parse()?)
        .to(to.parse()?)
        .subject("StartOS Test Email")
        .header(ContentType::TEXT_PLAIN)
        .body("This is a test email sent from your StartOS Server".to_owned())?;

    let transport = match security {
        SmtpSecurity::Starttls => AsyncSmtpTransport::<Tokio1Executor>::relay(&host)?
            .port(port)
            .credentials(creds)
            .build(),
        SmtpSecurity::Tls => {
            let tls = TlsParameters::new(host.clone())?;
            AsyncSmtpTransport::<Tokio1Executor>::relay(&host)?
                .port(port)
                .tls(Tls::Wrapper(tls))
                .credentials(creds)
                .build()
        }
    };

    transport.send(message).await?;
    Ok(())
}

@@ -1239,9 +1293,15 @@ pub async fn save_language(language: &str) -> Result<(), Error> {
        "/media/startos/config/overlay/usr/lib/locale/locale-archive",
    )
    .await?;
    let locale_content = format!("LANG={language}.UTF-8\n");
    write_file_atomic(
        "/media/startos/config/overlay/etc/default/locale",
        format!("LANG={language}.UTF-8\n").as_bytes(),
        locale_content.as_bytes(),
    )
    .await?;
    write_file_atomic(
        "/media/startos/config/overlay/etc/locale.conf",
        locale_content.as_bytes(),
    )
    .await?;
    Ok(())
@@ -53,6 +53,24 @@ pub fn tunnel_api<C: Context>() -> ParentHandler<C> {
                .with_call_remote::<CliContext>(),
            ),
        )
        .subcommand(
            "update",
            ParentHandler::<C>::new()
                .subcommand(
                    "check",
                    from_fn_async(super::update::check_update)
                        .with_display_serializable()
                        .with_about("about.check-for-updates")
                        .with_call_remote::<CliContext>(),
                )
                .subcommand(
                    "apply",
                    from_fn_async(super::update::apply_update)
                        .with_display_serializable()
                        .with_about("about.apply-available-update")
                        .with_call_remote::<CliContext>(),
                ),
        )
}

#[derive(Deserialize, Serialize, Parser)]
@@ -456,7 +474,10 @@ pub async fn add_forward(
    })
    .map(|s| s.prefix_len())
    .unwrap_or(32);
    let rc = ctx.forward.add_forward(source, target, prefix, None).await?;
    let rc = ctx
        .forward
        .add_forward(source, target, prefix, None)
        .await?;
    ctx.active_forwards.mutate(|m| {
        m.insert(source, rc);
    });
@@ -9,6 +9,7 @@ pub mod api;
pub mod auth;
pub mod context;
pub mod db;
pub mod update;
pub mod web;
pub mod wg;

core/src/tunnel/update.rs (new file, 102 lines)
@@ -0,0 +1,102 @@
use std::process::Stdio;

use rpc_toolkit::Empty;
use serde::{Deserialize, Serialize};
use tokio::process::Command;
use tracing::instrument;
use ts_rs::TS;

use crate::prelude::*;
use crate::tunnel::context::TunnelContext;
use crate::util::Invoke;

#[derive(Deserialize, Serialize, TS)]
#[serde(rename_all = "camelCase")]
pub struct TunnelUpdateResult {
    /// "up-to-date", "update-available", or "updating"
    pub status: String,
    /// Currently installed version
    pub installed: String,
    /// Available candidate version
    pub candidate: String,
}

#[instrument(skip_all)]
pub async fn check_update(_ctx: TunnelContext, _: Empty) -> Result<TunnelUpdateResult, Error> {
    Command::new("apt-get")
        .arg("update")
        .invoke(ErrorKind::UpdateFailed)
        .await?;

    let policy_output = Command::new("apt-cache")
        .arg("policy")
        .arg("start-tunnel")
        .invoke(ErrorKind::UpdateFailed)
        .await?;

    let policy_str = String::from_utf8_lossy(&policy_output).to_string();
    let installed = parse_version_field(&policy_str, "Installed:");
    let candidate = parse_version_field(&policy_str, "Candidate:");

    let status = if installed == candidate {
        "up-to-date"
    } else {
        "update-available"
    };

    Ok(TunnelUpdateResult {
        status: status.to_string(),
        installed: installed.unwrap_or_default(),
        candidate: candidate.unwrap_or_default(),
    })
}

#[instrument(skip_all)]
pub async fn apply_update(_ctx: TunnelContext, _: Empty) -> Result<TunnelUpdateResult, Error> {
    let policy_output = Command::new("apt-cache")
        .arg("policy")
        .arg("start-tunnel")
        .invoke(ErrorKind::UpdateFailed)
        .await?;

    let policy_str = String::from_utf8_lossy(&policy_output).to_string();
    let installed = parse_version_field(&policy_str, "Installed:");
    let candidate = parse_version_field(&policy_str, "Candidate:");

    // Spawn in a separate cgroup via systemd-run so the process survives
    // when the postinst script restarts start-tunneld.service.
    // After the install completes, reboot the system.
    // Uses --reinstall so the update applies even when versions match.
    Command::new("systemd-run")
        .arg("--scope")
        .arg("--")
        .arg("sh")
        .arg("-c")
        .arg("apt-get install --reinstall -y start-tunnel && reboot")
        .env("DEBIAN_FRONTEND", "noninteractive")
        .stdin(Stdio::null())
        .stdout(Stdio::null())
        .stderr(Stdio::null())
        .spawn()
        .with_kind(ErrorKind::UpdateFailed)?;

    Ok(TunnelUpdateResult {
        status: "updating".to_string(),
        installed: installed.unwrap_or_default(),
        candidate: candidate.unwrap_or_default(),
    })
}

fn parse_version_field(policy: &str, field: &str) -> Option<String> {
    policy
        .lines()
        .find(|l| l.trim().starts_with(field))
        .and_then(|l| l.split_whitespace().nth(1))
        .filter(|v| *v != "(none)")
        .map(|s| s.to_string())
}

#[test]
fn export_bindings_tunnel_update() {
    TunnelUpdateResult::export_all_to("bindings/tunnel").unwrap();
}
@@ -18,7 +18,7 @@ use tokio_rustls::rustls::server::ClientHello;
use ts_rs::TS;

use crate::context::CliContext;
use crate::hostname::Hostname;
use crate::hostname::ServerHostname;
use crate::net::ssl::{SANInfo, root_ca_start_time};
use crate::net::tls::TlsHandler;
use crate::net::web_server::Accept;

@@ -292,7 +292,7 @@ pub async fn generate_certificate(
    let root_key = crate::net::ssl::gen_nistp256()?;
    let root_cert = crate::net::ssl::make_root_cert(
        &root_key,
        &Hostname("start-tunnel".into()),
        &ServerHostname::new("start-tunnel".into())?,
        root_ca_start_time().await,
    )?;
    let int_key = crate::net::ssl::gen_nistp256()?;

@@ -523,27 +523,27 @@ pub async fn init_web(ctx: CliContext) -> Result<(), Error> {
    println!(concat!(
        "To access your Web URL securely, trust your Root CA (displayed above) on your client device(s):\n",
        " - MacOS\n",
        "   1. Open the Terminal app\n",
        "   2. Paste the following command (**DO NOT** click Return): pbcopy < ~/Desktop/ca.crt\n",
        "   3. Copy your Root CA (including -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----)\n",
        "   4. Back in Terminal, click Return. ca.crt is saved to your Desktop\n",
        "   5. Complete by trusting your Root CA: https://docs.start9.com/device-guides/mac/ca.html\n",
        " - Linux\n",
        "   1. Open gedit, nano, or any editor\n",
        "   2. Copy/paste your Root CA (including -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----)\n",
        "   3. Name the file ca.crt and save as plaintext\n",
        "   4. Complete by trusting your Root CA: https://docs.start9.com/device-guides/linux/ca.html\n",
        " - Windows\n",
        "   1. Open the Notepad app\n",
        "   2. Copy/paste your Root CA (including -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----)\n",
        "   3. Name the file ca.crt and save as plaintext\n",
        "   4. Complete by trusting your Root CA: https://docs.start9.com/device-guides/windows/ca.html\n",
        " - Android/Graphene\n",
        "   1. Send the ca.crt file (created above) to yourself\n",
        "   2. Complete by trusting your Root CA: https://docs.start9.com/device-guides/android/ca.html\n",
        " - iOS\n",
        "   1. Send the ca.crt file (created above) to yourself\n",
        "   2. Complete by trusting your Root CA: https://docs.start9.com/device-guides/ios/ca.html\n",
    ));

    return Ok(());
@@ -1,5 +1,6 @@
use futures::future::BoxFuture;
use futures::{Future, FutureExt};
use futures::stream::FuturesUnordered;
use futures::{Future, FutureExt, StreamExt};
use tokio::sync::mpsc;

#[derive(Clone)]
@@ -11,7 +12,7 @@ impl BackgroundJobQueue {
            Self(send),
            BackgroundJobRunner {
                recv,
                jobs: Vec::new(),
                jobs: FuturesUnordered::new(),
            },
        )
    }
@@ -27,7 +28,7 @@ impl BackgroundJobQueue {

pub struct BackgroundJobRunner {
    recv: mpsc::UnboundedReceiver<BoxFuture<'static, ()>>,
    jobs: Vec<BoxFuture<'static, ()>>,
    jobs: FuturesUnordered<BoxFuture<'static, ()>>,
}
impl BackgroundJobRunner {
    pub fn is_empty(&self) -> bool {
@@ -43,19 +44,7 @@ impl Future for BackgroundJobRunner {
        while let std::task::Poll::Ready(Some(job)) = self.recv.poll_recv(cx) {
            self.jobs.push(job);
        }
        let complete = self
            .jobs
            .iter_mut()
            .enumerate()
            .filter_map(|(i, f)| match f.poll_unpin(cx) {
                std::task::Poll::Pending => None,
                std::task::Poll::Ready(_) => Some(i),
            })
            .collect::<Vec<_>>();
        for idx in complete.into_iter().rev() {
            #[allow(clippy::let_underscore_future)]
            let _ = self.jobs.swap_remove(idx);
        }
        while let std::task::Poll::Ready(Some(())) = self.jobs.poll_next_unpin(cx) {}
        if self.jobs.is_empty() && self.recv.is_closed() {
            std::task::Poll::Ready(())
        } else {
|
||||
.await
|
||||
.result?;
|
||||
}
|
||||
Ordering::Equal => (),
|
||||
Ordering::Equal => {
|
||||
db.apply_function(|db| {
|
||||
Ok::<_, Error>((to_value(&from_value::<Database>(db.clone())?)?, ()))
|
||||
})
|
||||
.await
|
||||
.result?;
|
||||
}
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
@@ -21,7 +21,7 @@ use crate::backup::target::cifs::CifsTargets;
use crate::context::RpcContext;
use crate::disk::mount::filesystem::cifs::Cifs;
use crate::disk::mount::util::unmount;
use crate::hostname::Hostname;
use crate::hostname::{ServerHostname, ServerHostnameInfo};
use crate::net::forward::AvailablePorts;
use crate::net::keys::KeyStore;
use crate::notifications::Notifications;

@@ -166,11 +166,7 @@ impl VersionT for Version {

        Ok((account, ssh_keys, cifs))
    }
    fn up(
        self,
        db: &mut Value,
        (account, ssh_keys, cifs): Self::PreUpRes,
    ) -> Result<Value, Error> {
    fn up(self, db: &mut Value, (account, ssh_keys, cifs): Self::PreUpRes) -> Result<Value, Error> {
        let prev_package_data = db["package-data"].clone();

        let wifi = json!({

@@ -435,12 +431,12 @@ async fn previous_account_info(pg: &sqlx::Pool<sqlx::Postgres>) -> Result<Accoun
        server_id: account_query
            .try_get("server_id")
            .with_ctx(|_| (ErrorKind::Database, "server_id"))?,
        hostname: Hostname(
        hostname: ServerHostnameInfo::from_hostname(ServerHostname::new(
            account_query
                .try_get::<String, _>("hostname")
                .with_ctx(|_| (ErrorKind::Database, "hostname"))?
                .into(),
        ),
        )?),
        root_ca_key: PKey::private_key_from_pem(
            &account_query
                .try_get::<String, _>("root_ca_key_pem")

@@ -502,4 +498,3 @@ async fn previous_ssh_keys(pg: &sqlx::Pool<sqlx::Postgres>) -> Result<SshKeys, E
    };
    Ok(ssh_keys)
}
@@ -50,7 +50,10 @@ impl VersionT for Version {
|
||||
async fn post_up(self, ctx: &RpcContext, _input: Value) -> Result<(), Error> {
|
||||
Command::new("systemd-firstboot")
|
||||
.arg("--root=/media/startos/config/overlay/")
|
||||
.arg(ctx.account.peek(|a| format!("--hostname={}", a.hostname.0)))
|
||||
.arg(
|
||||
ctx.account
|
||||
.peek(|a| format!("--hostname={}", a.hostname.hostname.as_ref())),
|
||||
)
|
||||
.invoke(ErrorKind::ParseSysInfo)
|
||||
.await?;
|
||||
Ok(())
|
||||
|
||||
@@ -1,7 +1,11 @@
use std::path::Path;

use exver::{PreReleaseSegment, VersionRange};
use imbl_value::json;

use super::v0_3_5::V0_3_0_COMPAT;
use super::{VersionT, v0_4_0_alpha_19};
use crate::context::RpcContext;
use crate::prelude::*;

lazy_static::lazy_static! {

@@ -29,6 +33,75 @@ impl VersionT for Version {
    }
    #[instrument(skip_all)]
    fn up(self, db: &mut Value, _: Self::PreUpRes) -> Result<Value, Error> {
        // Extract onion migration data before removing it
        let onion_store = db
            .get("private")
            .and_then(|p| p.get("keyStore"))
            .and_then(|k| k.get("onion"))
            .cloned()
            .unwrap_or(Value::Object(Default::default()));

        let mut addresses = imbl::Vector::<Value>::new();

        // Extract OS host onion addresses
        if let Some(onions) = db
            .get("public")
            .and_then(|p| p.get("serverInfo"))
            .and_then(|s| s.get("network"))
            .and_then(|n| n.get("host"))
            .and_then(|h| h.get("onions"))
            .and_then(|o| o.as_array())
        {
            for onion in onions {
                if let Some(hostname) = onion.as_str() {
                    let key = onion_store
                        .get(hostname)
                        .and_then(|v| v.as_str())
                        .unwrap_or_default();
                    addresses.push_back(json!({
                        "hostname": hostname,
                        "packageId": "STARTOS",
                        "hostId": "STARTOS",
                        "key": key,
                    }));
                }
            }
        }

        // Extract package host onion addresses
        if let Some(packages) = db
            .get("public")
            .and_then(|p| p.get("packageData"))
            .and_then(|p| p.as_object())
        {
            for (package_id, package) in packages.iter() {
                if let Some(hosts) = package.get("hosts").and_then(|h| h.as_object()) {
                    for (host_id, host) in hosts.iter() {
                        if let Some(onions) = host.get("onions").and_then(|o| o.as_array()) {
                            for onion in onions {
                                if let Some(hostname) = onion.as_str() {
                                    let key = onion_store
                                        .get(hostname)
                                        .and_then(|v| v.as_str())
                                        .unwrap_or_default();
                                    addresses.push_back(json!({
                                        "hostname": hostname,
                                        "packageId": &**package_id,
                                        "hostId": &**host_id,
                                        "key": key,
                                    }));
                                }
                            }
                        }
                    }
                }
            }
        }

        let migration_data = json!({
            "addresses": addresses,
        });

        // Remove onions and tor-related fields from server host
        if let Some(host) = db
            .get_mut("public")

@@ -93,7 +166,50 @@ impl VersionT for Version {
        // Rebuild from actual assigned ports in all bindings
        migrate_available_ports(db);

        Ok(Value::Null)
        // Migrate SMTP: rename server->host, login->username, add security field
        migrate_smtp(db);

        // Delete ui.name (moved to serverInfo.name)
        if let Some(ui) = db
            .get_mut("public")
            .and_then(|p| p.get_mut("ui"))
            .and_then(|u| u.as_object_mut())
        {
            ui.remove("name");
        }

        // Generate serverInfo.name from serverInfo.hostname
        if let Some(hostname) = db
            .get("public")
            .and_then(|p| p.get("serverInfo"))
|
||||
.and_then(|s| s.get("hostname"))
|
||||
.and_then(|h| h.as_str())
|
||||
.map(|s| s.to_owned())
|
||||
{
|
||||
let name = denormalize_hostname(&hostname);
|
||||
if let Some(server_info) = db
|
||||
.get_mut("public")
|
||||
.and_then(|p| p.get_mut("serverInfo"))
|
||||
.and_then(|s| s.as_object_mut())
|
||||
{
|
||||
server_info.insert("name".into(), Value::String(name.into()));
|
||||
}
|
||||
}
|
||||
|
||||
Ok(migration_data)
|
||||
}
|
||||
|
||||
#[instrument(skip_all)]
|
||||
async fn post_up(self, _ctx: &RpcContext, input: Value) -> Result<(), Error> {
|
||||
let path = Path::new(
|
||||
"/media/startos/data/package-data/volumes/tor/data/startos/onion-migration.json",
|
||||
);
|
||||
|
||||
let json = serde_json::to_string(&input).with_kind(ErrorKind::Serialization)?;
|
||||
|
||||
crate::util::io::write_file_atomic(path, json).await?;
|
||||
|
||||
Ok(())
|
||||
}
|
||||
fn down(self, _db: &mut Value) -> Result<(), Error> {
|
||||
Ok(())
|
||||
@@ -156,6 +272,42 @@ fn migrate_available_ports(db: &mut Value) {
|
||||
}
|
||||
}
|
||||
|
||||
fn migrate_smtp(db: &mut Value) {
|
||||
if let Some(smtp) = db
|
||||
.get_mut("public")
|
||||
.and_then(|p| p.get_mut("serverInfo"))
|
||||
.and_then(|s| s.get_mut("smtp"))
|
||||
.and_then(|s| s.as_object_mut())
|
||||
{
|
||||
if let Some(server) = smtp.remove("server") {
|
||||
smtp.insert("host".into(), server);
|
||||
}
|
||||
if let Some(login) = smtp.remove("login") {
|
||||
smtp.insert("username".into(), login);
|
||||
}
|
||||
if !smtp.contains_key("security") {
|
||||
smtp.insert("security".into(), json!("starttls"));
|
||||
}
|
||||
}
|
||||
}

fn denormalize_hostname(s: &str) -> String {
let mut cap = true;
s.chars()
.map(|c| {
if c == '-' {
cap = true;
' '
} else if cap {
cap = false;
c.to_ascii_uppercase()
} else {
c
}
})
.collect()
}

fn migrate_host(host: Option<&mut Value>) {
let Some(host) = host.and_then(|h| h.as_object_mut()) else {
return;
@@ -165,7 +317,11 @@ fn migrate_host(host: Option<&mut Value>) {
host.remove("hostnameInfo");

// Migrate privateDomains from array to object (BTreeSet -> BTreeMap<_, BTreeSet<GatewayId>>)
if let Some(private_domains) = host.get("privateDomains").and_then(|v| v.as_array()).cloned() {
if let Some(private_domains) = host
.get("privateDomains")
.and_then(|v| v.as_array())
.cloned()
{
let mut new_pd: Value = serde_json::json!({}).into();
for domain in private_domains {
if let Some(d) = domain.as_str() {

docs/TODO.md (new file, +48)
@@ -0,0 +1,48 @@
# AI Agent TODOs

Pending tasks for AI agents. Remove items when completed.

## Features

- [ ] Extract TS-exported types into a lightweight sub-crate for fast binding generation

**Problem**: `make ts-bindings` compiles the entire `start-os` crate (with all dependencies: tokio,
axum, openssl, etc.) just to run test functions that serialize type definitions to `.ts` files.
Even in debug mode, this takes minutes. The generated output is pure type info — no runtime code
is needed.

**Goal**: Generate TS bindings in seconds by isolating exported types in a small crate with minimal
dependencies.

**Approach**: Create a `core/bindings-types/` sub-crate containing (or re-exporting) all 168
`#[ts(export)]` types. This crate depends only on `serde`, `ts-rs`, `exver`, and other type-only
crates — not on tokio, axum, openssl, etc. Then `build-ts.sh` runs `cargo test -p bindings-types`
instead of `cargo test -p start-os`.

**Challenge**: The exported types are scattered across `core/src/` and reference each other and
other crate types. Extracting them requires either moving the type definitions into the sub-crate
(and importing them back into `start-os`) or restructuring to share a common types crate.
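A manifest for the proposed sub-crate might look like the following sketch. The crate name comes from the TODO item itself; the dependency versions and feature flags are illustrative assumptions, not taken from the repo:

```toml
# Hypothetical core/bindings-types/Cargo.toml (versions are illustrative)
[package]
name = "bindings-types"
edition = "2021"

[dependencies]
serde = { version = "1", features = ["derive"] }
ts-rs = "7"
exver = "0.2"
```

The point of the exercise is what is absent: no tokio, axum, or openssl in the dependency graph, so `cargo test -p bindings-types` compiles in seconds rather than minutes.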

- [ ] Make `SetupExecuteParams.password` optional in the backend - @dr-bonez

**Problem**: In `core/src/setup.rs`, `SetupExecuteParams` has `password: EncryptedWire` (non-nullable),
but the frontend needs to send `null` for restore/transfer flows where the user keeps their existing
password. The `AttachParams` type correctly uses `Option<EncryptedWire>` for this purpose.

**Fix**: Change `password: EncryptedWire` to `password: Option<EncryptedWire>` in `SetupExecuteParams`
and handle the `None` case in the `execute` handler (similar to how `attach` handles it).
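The described change can be sketched with stand-in types. `EncryptedWire` and the handler shape below are simplified assumptions for illustration, not the real `core/src/setup.rs` code:

```rust
// Stand-in for the real EncryptedWire type (assumption for illustration).
struct EncryptedWire(String);

struct SetupExecuteParams {
    password: Option<EncryptedWire>, // was: password: EncryptedWire
}

// None means a restore/transfer flow: keep the existing password.
fn resolve_password(params: &SetupExecuteParams, existing: &str) -> String {
    match &params.password {
        Some(p) => p.0.clone(),
        None => existing.to_string(),
    }
}

fn main() {
    let keep = SetupExecuteParams { password: None };
    assert_eq!(resolve_password(&keep, "old-secret"), "old-secret");

    let fresh = SetupExecuteParams {
        password: Some(EncryptedWire("new-secret".into())),
    };
    assert_eq!(resolve_password(&fresh, "old-secret"), "new-secret");
}
```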

- [ ] Auto-configure port forwards via UPnP/NAT-PMP/PCP - @dr-bonez

**Goal**: When a binding is marked public, automatically configure port forwards on the user's router
using UPnP, NAT-PMP, or PCP, instead of requiring manual router configuration. Fall back to
displaying manual instructions (the port forward mapping from patch-db) when auto-configuration is
unavailable or fails.

- [ ] Decouple createTask from service running state - @dr-bonez

**Problem**: `createTask` currently depends on the service being in a running state.

**Goal**: The `input-not-matches` handler in StartOS should queue the task, check it once the
service is ready, then clear it if it matches. This allows tasks to be created regardless of
whether the service is currently running.

@@ -1,12 +1,12 @@
PACKAGE_TS_FILES := $(shell git ls-files package/lib) package/lib/test/output.ts
BASE_TS_FILES := $(shell git ls-files base/lib) package/lib/test/output.ts
PACKAGE_TS_FILES := $(shell git ls-files package/lib)
BASE_TS_FILES := $(shell git ls-files base/lib)
version = $(shell git tag --sort=committerdate | tail -1)

.PHONY: test base/test package/test clean bundle fmt buildOutput check

all: bundle

package/test: $(PACKAGE_TS_FILES) package/lib/test/output.ts package/node_modules base/node_modules
package/test: $(PACKAGE_TS_FILES) package/node_modules base/node_modules
cd package && npm test

base/test: $(BASE_TS_FILES) base/node_modules
@@ -21,9 +21,6 @@ clean:
rm -f package/lib/test/output.ts
rm -rf package/node_modules

package/lib/test/output.ts: package/node_modules package/lib/test/makeOutput.ts package/scripts/oldSpecToBuilder.ts
cd package && npm run buildOutput

bundle: baseDist dist | test fmt
touch dist

@@ -16,6 +16,7 @@ import {
MountParams,
StatusInfo,
Manifest,
HostnameInfo,
} from './osBindings'
import {
PackageId,
@@ -23,6 +24,7 @@ import {
ServiceInterfaceId,
SmtpValue,
ActionResult,
PluginHostnameInfo,
} from './types'

/** Used to reach out from the pure js runtime */
@@ -133,6 +135,8 @@ export type Effects = {
}): Promise<string>
/** Returns the IP address of StartOS */
getOsIp(): Promise<string>
/** Returns the effective outbound gateway for this service */
getOutboundGateway(options: { callback?: () => void }): Promise<string>
// interface
/** Creates an interface bound to a specific host and port to show to the user */
exportServiceInterface(options: ExportServiceInterfaceParams): Promise<null>
@@ -151,6 +155,18 @@ export type Effects = {
clearServiceInterfaces(options: {
except: ServiceInterfaceId[]
}): Promise<null>

plugin: {
url: {
register(options: { tableAction: ActionId }): Promise<null>
exportUrl(options: {
hostnameInfo: PluginHostnameInfo
removeAction: ActionId | null
overflowActions: ActionId[]
}): Promise<null>
clearUrls(options: { except: PluginHostnameInfo[] }): Promise<null>
}
}
// ssl
/** Returns a PEM encoded fullchain for the hostnames specified */
getSslCertificate: (options: {

@@ -2,21 +2,32 @@ import { ValueSpec } from '../inputSpecTypes'
import { Value } from './value'
import { _ } from '../../../util'
import { Effects } from '../../../Effects'
import { Parser, object } from 'ts-matches'
import { z } from 'zod'
import { DeepPartial } from '../../../types'
import { InputSpecTools, createInputSpecTools } from './inputSpecTools'

export type LazyBuildOptions = {
/** Options passed to a lazy builder function when resolving dynamic form field values. */
export type LazyBuildOptions<Type> = {
/** The effects interface for runtime operations (e.g. reading files, querying state). */
effects: Effects
/** Previously saved form data to pre-fill the form with, or `null` for fresh creation. */
prefill: DeepPartial<Type> | null
}
export type LazyBuild<ExpectedOut> = (
options: LazyBuildOptions,
/**
* A function that lazily produces a value, potentially using effects and prefill data.
* Used by `dynamic*` variants of {@link Value} to compute form field options at runtime.
*/
export type LazyBuild<ExpectedOut, Type> = (
options: LazyBuildOptions<Type>,
) => Promise<ExpectedOut> | ExpectedOut

/** Extracts the runtime type from an {@link InputSpec}. */
// prettier-ignore
export type ExtractInputSpecType<A extends InputSpec<Record<string, any>, any>> =
export type ExtractInputSpecType<A extends InputSpec<Record<string, any>, any>> =
A extends InputSpec<infer B, any> ? B :
never

/** Extracts the static validation type from an {@link InputSpec}. */
export type ExtractInputSpecStaticValidatedAs<
A extends InputSpec<any, Record<string, any>>,
> = A extends InputSpec<any, infer B> ? B : never
@@ -25,11 +36,13 @@ export type ExtractInputSpecStaticValidatedAs<
// A extends Record<string, any> | InputSpec<Record<string, any>>,
// > = A extends InputSpec<infer B> ? DeepPartial<B> : DeepPartial<A>

/** Maps an object type to a record of {@link Value} entries for use with `InputSpec.of`. */
export type InputSpecOf<A extends Record<string, any>> = {
[K in keyof A]: Value<A[K]>
}

export type MaybeLazyValues<A> = LazyBuild<A> | A
/** A value that is either directly provided or lazily computed via a {@link LazyBuild} function. */
export type MaybeLazyValues<A, T> = LazyBuild<A, T> | A
/**
* InputSpecs are the specs that are used by the os input specification form for this service.
* Here is an example of a simple input specification
@@ -94,21 +107,26 @@ export class InputSpec<
private readonly spec: {
[K in keyof Type]: Value<Type[K]>
},
public readonly validator: Parser<unknown, StaticValidatedAs>,
public readonly validator: z.ZodType<StaticValidatedAs>,
) {}
public _TYPE: Type = null as any as Type
public _PARTIAL: DeepPartial<Type> = null as any as DeepPartial<Type>
async build(options: LazyBuildOptions): Promise<{
/**
* Builds the runtime form specification and combined Zod validator from this InputSpec's fields.
*
* @returns An object containing the resolved `spec` (field specs keyed by name) and a combined `validator`
*/
async build<OuterType>(options: LazyBuildOptions<OuterType>): Promise<{
spec: {
[K in keyof Type]: ValueSpec
}
validator: Parser<unknown, Type>
validator: z.ZodType<Type>
}> {
const answer = {} as {
[K in keyof Type]: ValueSpec
}
const validator = {} as {
[K in keyof Type]: Parser<unknown, any>
[K in keyof Type]: z.ZodType<any>
}
for (const k in this.spec) {
const built = await this.spec[k].build(options as any)
@@ -117,22 +135,99 @@ export class InputSpec<
}
return {
spec: answer,
validator: object(validator) as any,
validator: z.object(validator) as any,
}
}

/**
* Adds a single named field to this spec, returning a new `InputSpec` with the extended type.
*
* @param key - The field key name
* @param build - A {@link Value} instance, or a function receiving typed tools that returns one
*/
addKey<Key extends string, V extends Value<any, any, any>>(
key: Key,
build: V | ((tools: InputSpecTools<Type>) => V),
): InputSpec<
Type & { [K in Key]: V extends Value<infer T, any, any> ? T : never },
StaticValidatedAs & {
[K in Key]: V extends Value<any, infer S, any> ? S : never
}
> {
const value =
build instanceof Function ? build(createInputSpecTools<Type>()) : build
const newSpec = { ...this.spec, [key]: value } as any
const newValidator = z.object(
Object.fromEntries(
Object.entries(newSpec).map(([k, v]) => [
k,
(v as Value<any>).validator,
]),
),
)
return new InputSpec(newSpec, newValidator as any)
}

/**
* Adds multiple fields to this spec at once, returning a new `InputSpec` with extended types.
*
* @param build - A record of {@link Value} entries, or a function receiving typed tools that returns one
*/
add<AddSpec extends Record<string, Value<any, any, any>>>(
build: AddSpec | ((tools: InputSpecTools<Type>) => AddSpec),
): InputSpec<
Type & {
[K in keyof AddSpec]: AddSpec[K] extends Value<infer T, any, any>
? T
: never
},
StaticValidatedAs & {
[K in keyof AddSpec]: AddSpec[K] extends Value<any, infer S, any>
? S
: never
}
> {
const addedValues =
build instanceof Function ? build(createInputSpecTools<Type>()) : build
const newSpec = { ...this.spec, ...addedValues } as any
const newValidator = z.object(
Object.fromEntries(
Object.entries(newSpec).map(([k, v]) => [
k,
(v as Value<any>).validator,
]),
),
)
return new InputSpec(newSpec, newValidator as any)
}

/**
* Creates an `InputSpec` from a plain record of {@link Value} entries.
*
* @example
* ```ts
* const spec = InputSpec.of({
*   username: Value.text({ name: 'Username', required: true, default: null }),
*   verbose: Value.toggle({ name: 'Verbose Logging', default: false }),
* })
* ```
*/
static of<Spec extends Record<string, Value<any, any>>>(spec: Spec) {
const validator = object(
const validator = z.object(
Object.fromEntries(
Object.entries(spec).map(([k, v]) => [k, v.validator]),
),
)
return new InputSpec<
{
[K in keyof Spec]: Spec[K] extends Value<infer T, any> ? T : never
[K in keyof Spec]: Spec[K] extends Value<infer T, any, unknown>
? T
: never
},
{
[K in keyof Spec]: Spec[K] extends Value<any, infer T> ? T : never
[K in keyof Spec]: Spec[K] extends Value<any, infer T, unknown>
? T
: never
}
>(spec, validator as any)
}

sdk/base/lib/actions/input/builder/inputSpecTools.ts (new file, +274)
@@ -0,0 +1,274 @@
import { InputSpec, LazyBuild } from './inputSpec'
import { AsRequired, FileInfo, Value } from './value'
import { List } from './list'
import { UnionRes, UnionResStaticValidatedAs, Variants } from './variants'
import {
Pattern,
RandomString,
ValueSpecDatetime,
ValueSpecText,
} from '../inputSpecTypes'
import { DefaultString } from '../inputSpecTypes'
import { z } from 'zod'
import { ListValueSpecText } from '../inputSpecTypes'

export interface InputSpecTools<OuterType> {
Value: BoundValue<OuterType>
Variants: typeof Variants
InputSpec: typeof InputSpec
List: BoundList<OuterType>
}

export interface BoundValue<OuterType> {
// Static (non-dynamic) methods — no OuterType involved
toggle: typeof Value.toggle
text: typeof Value.text
textarea: typeof Value.textarea
number: typeof Value.number
color: typeof Value.color
datetime: typeof Value.datetime
select: typeof Value.select
multiselect: typeof Value.multiselect
object: typeof Value.object
file: typeof Value.file
list: typeof Value.list
hidden: typeof Value.hidden
union: typeof Value.union

// Dynamic methods with OuterType pre-bound (last generic param removed)
dynamicToggle(
a: LazyBuild<
{
name: string
description?: string | null
warning?: string | null
default: boolean
disabled?: false | string
},
OuterType
>,
): Value<boolean, boolean, OuterType>

dynamicText<Required extends boolean>(
getA: LazyBuild<
{
name: string
description?: string | null
warning?: string | null
default: DefaultString | null
required: Required
masked?: boolean
placeholder?: string | null
minLength?: number | null
maxLength?: number | null
patterns?: Pattern[]
inputmode?: ValueSpecText['inputmode']
disabled?: string | false
generate?: null | RandomString
},
OuterType
>,
): Value<AsRequired<string, Required>, string | null, OuterType>

dynamicTextarea<Required extends boolean>(
getA: LazyBuild<
{
name: string
description?: string | null
warning?: string | null
default: string | null
required: Required
minLength?: number | null
maxLength?: number | null
patterns?: Pattern[]
minRows?: number
maxRows?: number
placeholder?: string | null
disabled?: false | string
},
OuterType
>,
): Value<AsRequired<string, Required>, string | null, OuterType>

dynamicNumber<Required extends boolean>(
getA: LazyBuild<
{
name: string
description?: string | null
warning?: string | null
default: number | null
required: Required
min?: number | null
max?: number | null
step?: number | null
integer: boolean
units?: string | null
placeholder?: string | null
disabled?: false | string
},
OuterType
>,
): Value<AsRequired<number, Required>, number | null, OuterType>

dynamicColor<Required extends boolean>(
getA: LazyBuild<
{
name: string
description?: string | null
warning?: string | null
default: string | null
required: Required
disabled?: false | string
},
OuterType
>,
): Value<AsRequired<string, Required>, string | null, OuterType>

dynamicDatetime<Required extends boolean>(
getA: LazyBuild<
{
name: string
description?: string | null
warning?: string | null
default: string | null
required: Required
inputmode?: ValueSpecDatetime['inputmode']
min?: string | null
max?: string | null
disabled?: false | string
},
OuterType
>,
): Value<AsRequired<string, Required>, string | null, OuterType>

dynamicSelect<Values extends Record<string, string>>(
getA: LazyBuild<
{
name: string
description?: string | null
warning?: string | null
default: string
values: Values
disabled?: false | string | string[]
},
OuterType
>,
): Value<keyof Values & string, keyof Values & string, OuterType>

dynamicMultiselect<Values extends Record<string, string>>(
getA: LazyBuild<
{
name: string
description?: string | null
warning?: string | null
default: string[]
values: Values
minLength?: number | null
maxLength?: number | null
disabled?: false | string | string[]
},
OuterType
>,
): Value<(keyof Values & string)[], (keyof Values & string)[], OuterType>

dynamicFile<Required extends boolean>(
a: LazyBuild<
{
name: string
description?: string | null
warning?: string | null
extensions: string[]
required: Required
},
OuterType
>,
): Value<AsRequired<FileInfo, Required>, FileInfo | null, OuterType>

dynamicUnion<
VariantValues extends {
[K in string]: {
name: string
spec: InputSpec<any>
}
},
>(
getA: LazyBuild<
{
name: string
description?: string | null
warning?: string | null
variants: Variants<VariantValues>
default: keyof VariantValues & string
disabled: string[] | false | string
},
OuterType
>,
): Value<UnionRes<VariantValues>, UnionRes<VariantValues>, OuterType>
dynamicUnion<
StaticVariantValues extends {
[K in string]: {
name: string
spec: InputSpec<any, any>
}
},
VariantValues extends StaticVariantValues,
>(
getA: LazyBuild<
{
name: string
description?: string | null
warning?: string | null
variants: Variants<VariantValues>
default: keyof VariantValues & string
disabled: string[] | false | string
},
OuterType
>,
validator: z.ZodType<UnionResStaticValidatedAs<StaticVariantValues>>,
): Value<
UnionRes<VariantValues>,
UnionResStaticValidatedAs<StaticVariantValues>,
OuterType
>

dynamicHidden<T>(
getParser: LazyBuild<z.ZodType<T>, OuterType>,
): Value<T, T, OuterType>
}

export interface BoundList<OuterType> {
text: typeof List.text
obj: typeof List.obj
dynamicText(
getA: LazyBuild<
{
name: string
description?: string | null
warning?: string | null
default?: string[]
minLength?: number | null
maxLength?: number | null
disabled?: false | string
generate?: null | RandomString
spec: {
masked?: boolean
placeholder?: string | null
minLength?: number | null
maxLength?: number | null
patterns?: Pattern[]
inputmode?: ListValueSpecText['inputmode']
}
},
OuterType
>,
): List<string[], string[], OuterType>
}

export function createInputSpecTools<OuterType>(): InputSpecTools<OuterType> {
return {
Value: Value as any as BoundValue<OuterType>,
Variants,
InputSpec,
List: List as any as BoundList<OuterType>,
}
}

@@ -7,18 +7,39 @@ import {
ValueSpecList,
ValueSpecListOf,
} from '../inputSpecTypes'
import { Parser, arrayOf, string } from 'ts-matches'
import { z } from 'zod'

export class List<Type extends StaticValidatedAs, StaticValidatedAs = Type> {
/**
* Builder class for defining list-type form fields.
*
* A list presents an interface to add, remove, and reorder items. Items can be
* either text strings ({@link List.text}) or structured objects ({@link List.obj}).
*
* Used with {@link Value.list} to include a list field in an {@link InputSpec}.
*/
export class List<
Type extends StaticValidatedAs,
StaticValidatedAs = Type,
OuterType = unknown,
> {
private constructor(
public build: LazyBuild<{
spec: ValueSpecList
validator: Parser<unknown, Type>
}>,
public readonly validator: Parser<unknown, StaticValidatedAs>,
public build: LazyBuild<
{
spec: ValueSpecList
validator: z.ZodType<Type>
},
OuterType
>,
public readonly validator: z.ZodType<StaticValidatedAs>,
) {}
readonly _TYPE: Type = null as any

/**
* Creates a list of text input items.
*
* @param a - List-level options (name, description, min/max length, defaults)
* @param aSpec - Item-level options (patterns, input mode, masking, generation)
*/
static text(
a: {
name: string
@@ -62,7 +83,7 @@ export class List<Type extends StaticValidatedAs, StaticValidatedAs = Type> {
generate?: null | RandomString
},
) {
const validator = arrayOf(string)
const validator = z.array(z.string())
return new List<string[]>(() => {
const spec = {
type: 'text' as const,
@@ -90,28 +111,32 @@ export class List<Type extends StaticValidatedAs, StaticValidatedAs = Type> {
}, validator)
}

static dynamicText(
getA: LazyBuild<{
name: string
description?: string | null
warning?: string | null
default?: string[]
minLength?: number | null
maxLength?: number | null
disabled?: false | string
generate?: null | RandomString
spec: {
masked?: boolean
placeholder?: string | null
/** Like {@link List.text} but options are resolved lazily at runtime via a builder function. */
static dynamicText<OuterType = unknown>(
getA: LazyBuild<
{
name: string
description?: string | null
warning?: string | null
default?: string[]
minLength?: number | null
maxLength?: number | null
patterns?: Pattern[]
inputmode?: ListValueSpecText['inputmode']
}
}>,
disabled?: false | string
generate?: null | RandomString
spec: {
masked?: boolean
placeholder?: string | null
minLength?: number | null
maxLength?: number | null
patterns?: Pattern[]
inputmode?: ListValueSpecText['inputmode']
}
},
OuterType
>,
) {
const validator = arrayOf(string)
return new List<string[]>(async (options) => {
const validator = z.array(z.string())
return new List<string[], string[], OuterType>(async (options) => {
const { spec: aSpec, ...a } = await getA(options)
const spec = {
type: 'text' as const,
@@ -140,6 +165,12 @@ export class List<Type extends StaticValidatedAs, StaticValidatedAs = Type> {
}, validator)
}

/**
* Creates a list of structured object items, each defined by a nested {@link InputSpec}.
*
* @param a - List-level options (name, description, min/max length)
* @param aSpec - Item-level options (the nested spec, display expression, uniqueness constraint)
*/
static obj<
Type extends StaticValidatedAs,
StaticValidatedAs extends Record<string, any>,
@@ -183,8 +214,8 @@ export class List<Type extends StaticValidatedAs, StaticValidatedAs = Type> {
disabled: false,
...value,
},
validator: arrayOf(built.validator),
validator: z.array(built.validator),
}
}, arrayOf(aSpec.spec.validator))
}, z.array(aSpec.spec.validator))
}
}

Some files were not shown because too many files have changed in this diff