mirror of https://github.com/Start9Labs/start-os.git
synced 2026-03-31 04:23:40 +00:00
Feature/lxc container runtime (#2514)
* wip: static-server errors
* wip: fix wifi
* wip: Fix the service_effects
* wip: Fix cors in the middleware
* wip(chore): Auth clean up the lint.
* wip(fix): Vhost
* wip: continue manager refactor Co-authored-by: J H <Blu-J@users.noreply.github.com>
* wip: service manager refactor
* wip: Some fixes
* wip(fix): Fix the lib.rs
* wip
* wip(fix): Logs
* wip: bins
* wip(inspect): Add in the inspect
* wip: config
* wip(fix): Diagnostic
* wip(fix): Dependencies
* wip: context
* wip(fix): Sorta auth
* wip: warnings
* wip(fix): registry/admin
* wip(fix): marketplace
* wip(fix): Some more converted and fixed with the linter and config
* wip: Working on the static server
* wip(fix): static server
* wip: Remove some async
* wip: Something about the request and regular rpc
* wip: gut install Co-authored-by: J H <Blu-J@users.noreply.github.com>
* wip: Convert the static server into the new system
* wip: delete file
* test
* wip(fix): vhost does not need the with safe defaults
* wip: Adding in the wifi
* wip: Fix the developer and the verify
* wip: new install flow Co-authored-by: J H <Blu-J@users.noreply.github.com>
* fix middleware
* wip
* wip: Fix the auth
* wip
* continue service refactor
* feature: Service get_config
* feat: Action
* wip: Fighting the great fight against the borrow checker
* wip: Remove an error in a file that I just need to deal with later
* chore: Add in some more lifetime stuff to the services
* wip: Install fix on lifetime
* cleanup
* wip: Deal with the borrow later
* more cleanup
* resolve borrow checker errors
* wip(feat): add in the handler for the socket, for now
* wip(feat): Update the service_effect_handler::action
* chore: Add in the changes to make sure the from_service goes to context
* chore: Change the
* refactor service map
* fix references to service map
* fill out restore
* wip: Before I work on the store stuff
* fix backup module
* handle some warnings
* feat: add in the ui components on the rust side
* feature: Update the procedures
* chore: Update the js side of the main and a few of the others
* chore: Update the rpc listener to match the persistent container
* wip: Working on updating some things to have a better name
* wip(feat): Try and get the rpc to return the correct shape?
* lxc wip
* wip(feat): Try and get the rpc to return the correct shape?
* build for container runtime wip
* remove container-init
* fix build
* fix error
* chore: Update to work I suppose
* lxc wip
* remove docker module and feature
* download alpine squashfs automatically
* overlays effect Co-authored-by: Jade <Blu-J@users.noreply.github.com>
* chore: Add the overlay effect
* feat: Add the mounter in the main
* chore: Convert to use the mounts, still need to work with the sandbox
* install fixes
* fix ssl
* fixes from testing
* implement tmpfile for upload
* wip
* misc fixes
* cleanup
* cleanup
* better progress reporting
* progress for sideload
* return real guid
* add devmode script
* fix lxc rootfs path
* fix percentage bar
* fix progress bar styling
* fix build for unstable
* tweaks
* label progress
* tweaks
* update progress more often
* make symlink in rpc_client
* make socket dir
* fix parent path
* add start-cli to container
* add echo and gitInfo commands
* wip: Add the init + errors
* chore: Add in the exit effect for the system
* chore: Change the type to null for failure to parse
* move sigterm timeout to stopping status
* update order
* chore: Update the return type
* remove dbg
* change the map error
* chore: Update the thing to capture id
* chore: add some life changes
* chore: Update the logging
* chore: Update the package to run module
* use From for RpcError
* chore: Update to use import instead
* chore: update
* chore: Use require for the backup
* fix a default
* update the type that is wrong
* chore: Update the type of the manifest
* chore: Update to make null
* only symlink if not exists
* get rid of double result
* better debug info for ErrorCollection
* chore: Update effects
* chore: fix
* mount assets and volumes
* add exec instead of spawn
* fix mounting in image
* fix overlay mounts Co-authored-by: Jade <Blu-J@users.noreply.github.com>
* misc fixes
* feat: Fix two
* fix: systemForEmbassy main
* chore: Fix small part of main loop
* chore: Modify the bundle
* merge
* fix main loop
* move tsc to makefile
* chore: Update the return types of the health check
* fix client
* chore: Convert the todo to use tsmatches
* add in the fixes for the seen and create the hack to allow demo
* chore: Update to include the systemForStartOs
* chore: Update to the latest types from the expected output
* fixes
* fix typo
* Don't emit if failure on tsc
* wip Co-authored-by: Jade <Blu-J@users.noreply.github.com>
* add s9pk api
* add inspection
* add inspect manifest
* newline after display serializable
* fix squashfs in image name
* edit manifest Co-authored-by: Jade <Blu-J@users.noreply.github.com>
* wait for response on repl
* ignore sig for now
* ignore sig for now
* re-enable sig verification
* fix
* wip
* env and chroot
* add profiling logs
* set uid & gid in squashfs to 100000
* set uid of sqfs to 100000
* fix mksquashfs args
* add env to compat
* fix
* re-add docker feature flag
* fix docker output format being stupid
* here be dragons
* chore: Add in the cross compiling for something
* fix npm link
* extract logs from container on exit
* chore: Update for testing
* add log capture to drop trait
* chore: add in the modifications that I make
* chore: Update small things for no updates
* chore: Update the types of something
* chore: Make main not complain
* idmapped mounts
* idmapped volumes
* re-enable kiosk
* chore: Add in some logging for the new system
* bring in start-sdk
* remove avahi
* chore: Update the deps
* switch to musl
* chore: Update the version of prettier
* chore: Organize
* chore: Update some of the headers back to the standard of fetch
* fix musl build
* fix idmapped mounts
* fix cross build
* use cross compiler for correct arch
* feat: Add in the faked ssl stuff for the effects
* @dr_bonez Did a solution here
* chore: Something that DrBonez
* chore: up
* wip: We have a working server!!!
* wip
* uninstall
* wip
* tes
---------
Co-authored-by: J H <dragondef@gmail.com>
Co-authored-by: J H <Blu-J@users.noreply.github.com>
Co-authored-by: J H <2364004+Blu-J@users.noreply.github.com>
@@ -1,23 +1,48 @@
use std::collections::BTreeMap;
use std::path::Path;
use std::ffi::OsStr;
use std::fmt::Debug;
use std::path::{Path, PathBuf};
use std::sync::Arc;

use futures::future::BoxFuture;
use futures::FutureExt;
use imbl::OrdMap;
use imbl_value::InternedString;
use itertools::Itertools;
use tokio::io::AsyncRead;

use crate::prelude::*;
use crate::s9pk::merkle_archive::hash::{Hash, HashWriter};
use crate::s9pk::merkle_archive::sink::{Sink, TrackingWriter};
use crate::s9pk::merkle_archive::source::{ArchiveSource, FileSource, Section};
use crate::s9pk::merkle_archive::source::{ArchiveSource, DynFileSource, FileSource, Section};
use crate::s9pk::merkle_archive::write_queue::WriteQueue;
use crate::s9pk::merkle_archive::{varint, Entry, EntryContents};

#[derive(Debug)]
pub struct DirectoryContents<S>(BTreeMap<InternedString, Entry<S>>);
#[derive(Clone)]
pub struct DirectoryContents<S> {
contents: OrdMap<InternedString, Entry<S>>,
/// used to optimize files to have earliest needed information up front
sort_by: Option<Arc<dyn Fn(&str, &str) -> std::cmp::Ordering + Send + Sync>>,
}
impl<S: Debug> Debug for DirectoryContents<S> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("DirectoryContents")
.field("contents", &self.contents)
.finish_non_exhaustive()
}
}
impl<S> DirectoryContents<S> {
pub fn new() -> Self {
Self(BTreeMap::new())
Self {
contents: OrdMap::new(),
sort_by: None,
}
}

pub fn sort_by(
&mut self,
sort_by: impl Fn(&str, &str) -> std::cmp::Ordering + Send + Sync + 'static,
) {
self.sort_by = Some(Arc::new(sort_by))
}

#[instrument(skip_all)]
@@ -39,6 +64,57 @@ impl<S> DirectoryContents<S> {
res
}

pub fn file_paths(&self, prefix: impl AsRef<Path>) -> Vec<PathBuf> {
let prefix = prefix.as_ref();
let mut res = Vec::new();
for (name, entry) in &self.contents {
let path = prefix.join(name);
if let EntryContents::Directory(d) = entry.as_contents() {
res.push(path.join(""));
res.append(&mut d.file_paths(path));
} else {
res.push(path);
}
}
res
}

pub const fn header_size() -> u64 {
8 // position: u64 BE
+ 8 // size: u64 BE
}

#[instrument(skip_all)]
pub async fn serialize_header<W: Sink>(&self, position: u64, w: &mut W) -> Result<u64, Error> {
use tokio::io::AsyncWriteExt;

let size = self.toc_size();

w.write_all(&position.to_be_bytes()).await?;
w.write_all(&size.to_be_bytes()).await?;

Ok(position)
}

pub fn toc_size(&self) -> u64 {
self.iter().fold(
varint::serialized_varint_size(self.len() as u64),
|acc, (name, entry)| {
acc + varint::serialized_varstring_size(&**name) + entry.header_size()
},
)
}
}
impl<S: Clone> DirectoryContents<S> {
pub fn with_stem(&self, stem: &str) -> impl Iterator<Item = (InternedString, Entry<S>)> {
let prefix = InternedString::intern(stem);
let (_, center, right) = self.split_lookup(&*stem);
center.map(|e| (prefix.clone(), e)).into_iter().chain(
right.into_iter().take_while(move |(k, _)| {
Path::new(&**k).file_stem() == Some(OsStr::new(&*prefix))
}),
)
}
pub fn insert_path(&mut self, path: impl AsRef<Path>, entry: Entry<S>) -> Result<(), Error> {
let path = path.as_ref();
let (parent, Some(file)) = (path.parent(), path.file_name().and_then(|f| f.to_str()))
@@ -73,32 +149,6 @@ impl<S> DirectoryContents<S> {
dir.insert(file.into(), entry);
Ok(())
}

pub const fn header_size() -> u64 {
8 // position: u64 BE
+ 8 // size: u64 BE
}

#[instrument(skip_all)]
pub async fn serialize_header<W: Sink>(&self, position: u64, w: &mut W) -> Result<u64, Error> {
use tokio::io::AsyncWriteExt;

let size = self.toc_size();

w.write_all(&position.to_be_bytes()).await?;
w.write_all(&size.to_be_bytes()).await?;

Ok(position)
}

pub fn toc_size(&self) -> u64 {
self.0.iter().fold(
varint::serialized_varint_size(self.0.len() as u64),
|acc, (name, entry)| {
acc + varint::serialized_varstring_size(&**name) + entry.header_size()
},
)
}
}
impl<S: ArchiveSource> DirectoryContents<Section<S>> {
#[instrument(skip_all)]
@@ -121,7 +171,7 @@ impl<S: ArchiveSource> DirectoryContents<Section<S>> {
let mut toc_reader = source.fetch(position, size).await?;

let len = varint::deserialize_varint(&mut toc_reader).await?;
let mut entries = BTreeMap::new();
let mut entries = OrdMap::new();
for _ in 0..len {
entries.insert(
varint::deserialize_varstring(&mut toc_reader).await?.into(),
@@ -129,7 +179,10 @@ impl<S: ArchiveSource> DirectoryContents<Section<S>> {
);
}

let res = Self(entries);
let res = Self {
contents: entries,
sort_by: None,
};

if res.sighash().await? == sighash {
Ok(res)
@@ -144,11 +197,33 @@ impl<S: ArchiveSource> DirectoryContents<Section<S>> {
}
}
impl<S: FileSource> DirectoryContents<S> {
pub fn filter(&mut self, filter: impl Fn(&Path) -> bool) -> Result<(), Error> {
for k in self.keys().cloned().collect::<Vec<_>>() {
let path = Path::new(&*k);
if let Some(v) = self.get_mut(&k) {
if !filter(path) {
if v.hash.is_none() {
return Err(Error::new(
eyre!("cannot filter out unhashed file, run `update_hashes` first"),
ErrorKind::InvalidRequest,
));
}
v.contents = EntryContents::Missing;
} else {
let filter: Box<dyn Fn(&Path) -> bool> = Box::new(|p| filter(&path.join(p)));
v.filter(filter)?;
}
}
}
Ok(())
}
#[instrument(skip_all)]
pub fn update_hashes<'a>(&'a mut self, only_missing: bool) -> BoxFuture<'a, Result<(), Error>> {
async move {
for (_, entry) in &mut self.0 {
entry.update_hash(only_missing).await?;
for key in self.keys().cloned().collect::<Vec<_>>() {
if let Some(entry) = self.get_mut(&key) {
entry.update_hash(only_missing).await?;
}
}
Ok(())
}
@@ -159,13 +234,16 @@ impl<S: FileSource> DirectoryContents<S> {
pub fn sighash<'a>(&'a self) -> BoxFuture<'a, Result<Hash, Error>> {
async move {
let mut hasher = TrackingWriter::new(0, HashWriter::new());
let mut sig_contents = BTreeMap::new();
for (name, entry) in &self.0 {
let mut sig_contents = OrdMap::new();
for (name, entry) in &**self {
sig_contents.insert(name.clone(), entry.to_missing().await?);
}
Self(sig_contents)
.serialize_toc(&mut WriteQueue::new(0), &mut hasher)
.await?;
Self {
contents: sig_contents,
sort_by: None,
}
.serialize_toc(&mut WriteQueue::new(0), &mut hasher)
.await?;
Ok(hasher.into_inner().finalize())
}
.boxed()
@@ -177,23 +255,42 @@ impl<S: FileSource> DirectoryContents<S> {
queue: &mut WriteQueue<'a, S>,
w: &mut W,
) -> Result<(), Error> {
varint::serialize_varint(self.0.len() as u64, w).await?;
for (name, entry) in self.0.iter() {
varint::serialize_varint(self.len() as u64, w).await?;
for (name, entry) in self.iter().sorted_by(|a, b| match (a, b, &self.sort_by) {
((_, a), (_, b), _) if a.as_contents().is_dir() && !b.as_contents().is_dir() => {
std::cmp::Ordering::Less
}
((_, a), (_, b), _) if !a.as_contents().is_dir() && b.as_contents().is_dir() => {
std::cmp::Ordering::Greater
}
((a, _), (b, _), Some(sort_by)) => sort_by(&***a, &***b),
_ => std::cmp::Ordering::Equal,
}) {
varint::serialize_varstring(&**name, w).await?;
entry.serialize_header(queue.add(entry).await?, w).await?;
}

Ok(())
}
pub fn into_dyn(self) -> DirectoryContents<DynFileSource> {
DirectoryContents {
contents: self
.contents
.into_iter()
.map(|(k, v)| (k, v.into_dyn()))
.collect(),
sort_by: self.sort_by,
}
}
}
impl<S> std::ops::Deref for DirectoryContents<S> {
type Target = BTreeMap<InternedString, Entry<S>>;
type Target = OrdMap<InternedString, Entry<S>>;
fn deref(&self) -> &Self::Target {
&self.0
&self.contents
}
}
impl<S> std::ops::DerefMut for DirectoryContents<S> {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
&mut self.contents
}
}
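The `serialize_toc` change above orders entries directories-first, with an optional caller-supplied comparator breaking ties between names. A minimal standalone sketch of that ordering rule (the `Kind` enum and entry names here are hypothetical stand-ins for `EntryContents::is_dir()` and the real `(InternedString, Entry<S>)` pairs):

```rust
use std::cmp::Ordering;

// Stand-in for EntryContents::is_dir() in the real code.
#[derive(Debug, Clone, PartialEq)]
enum Kind {
    Dir,
    File,
}

// Order entries the way serialize_toc does: directories first, then an
// optional caller-supplied comparator between entry names.
fn toc_order(
    mut entries: Vec<(String, Kind)>,
    sort_by: Option<&dyn Fn(&str, &str) -> Ordering>,
) -> Vec<String> {
    entries.sort_by(|(a_name, a_kind), (b_name, b_kind)| {
        match (a_kind == &Kind::Dir, b_kind == &Kind::Dir, sort_by) {
            (true, false, _) => Ordering::Less,
            (false, true, _) => Ordering::Greater,
            (_, _, Some(f)) => f(a_name, b_name),
            _ => Ordering::Equal,
        }
    });
    entries.into_iter().map(|(n, _)| n).collect()
}

fn main() {
    // Pull manifest.json to the front of the files, so the earliest bytes
    // of the archive carry the information a reader needs first.
    let ordered = toc_order(
        vec![
            ("manifest.json".into(), Kind::File),
            ("assets".into(), Kind::Dir),
            ("icon.png".into(), Kind::File),
        ],
        Some(&|a, b| (a != "manifest.json").cmp(&(b != "manifest.json"))),
    );
    assert_eq!(ordered, ["assets", "manifest.json", "icon.png"]);
    println!("{:?}", ordered);
}
```

This mirrors why `sort_by` exists in the struct: callers can optimize which files land earliest in the serialized archive without disturbing the dirs-before-files invariant.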
@@ -3,9 +3,9 @@ use tokio::io::AsyncRead;
use crate::prelude::*;
use crate::s9pk::merkle_archive::hash::{Hash, HashWriter};
use crate::s9pk::merkle_archive::sink::{Sink, TrackingWriter};
use crate::s9pk::merkle_archive::source::{ArchiveSource, FileSource, Section};
use crate::s9pk::merkle_archive::source::{ArchiveSource, DynFileSource, FileSource, Section};

#[derive(Debug)]
#[derive(Debug, Clone)]
pub struct FileContents<S>(S);
impl<S> FileContents<S> {
pub fn new(source: S) -> Self {
@@ -73,6 +73,9 @@ impl<S: FileSource> FileContents<S> {
}
Ok(())
}
pub fn into_dyn(self) -> FileContents<DynFileSource> {
FileContents(DynFileSource::new(self.0))
}
}
impl<S> std::ops::Deref for FileContents<S> {
type Target = S;
@@ -1,3 +1,7 @@
use std::path::Path;
use std::sync::Arc;

use ed25519::signature::Keypair;
use ed25519_dalek::{Signature, SigningKey, VerifyingKey};
use tokio::io::AsyncRead;

@@ -6,7 +10,7 @@ use crate::s9pk::merkle_archive::directory_contents::DirectoryContents;
use crate::s9pk::merkle_archive::file_contents::FileContents;
use crate::s9pk::merkle_archive::hash::Hash;
use crate::s9pk::merkle_archive::sink::Sink;
use crate::s9pk::merkle_archive::source::{ArchiveSource, FileSource, Section};
use crate::s9pk::merkle_archive::source::{ArchiveSource, DynFileSource, FileSource, Section};
use crate::s9pk::merkle_archive::write_queue::WriteQueue;

pub mod directory_contents;
@@ -19,13 +23,13 @@ mod test;
pub mod varint;
pub mod write_queue;

#[derive(Debug)]
#[derive(Debug, Clone)]
enum Signer {
Signed(VerifyingKey, Signature),
Signer(SigningKey),
}

#[derive(Debug)]
#[derive(Debug, Clone)]
pub struct MerkleArchive<S> {
signer: Signer,
contents: DirectoryContents<S>,
@@ -37,14 +41,33 @@ impl<S> MerkleArchive<S> {
contents,
}
}
pub fn signer(&self) -> VerifyingKey {
match &self.signer {
Signer::Signed(k, _) => *k,
Signer::Signer(k) => k.verifying_key(),
}
}
pub const fn header_size() -> u64 {
32 // pubkey
+ 64 // signature
+ 32 // sighash
+ DirectoryContents::<Section<S>>::header_size()
}
pub fn contents(&self) -> &DirectoryContents<S> {
&self.contents
}
pub fn contents_mut(&mut self) -> &mut DirectoryContents<S> {
&mut self.contents
}
pub fn set_signer(&mut self, key: SigningKey) {
self.signer = Signer::Signer(key);
}
pub fn sort_by(
&mut self,
sort_by: impl Fn(&str, &str) -> std::cmp::Ordering + Send + Sync + 'static,
) {
self.contents.sort_by(sort_by)
}
}
impl<S: ArchiveSource> MerkleArchive<Section<S>> {
#[instrument(skip_all)]
@@ -80,6 +103,9 @@ impl<S: FileSource> MerkleArchive<S> {
pub async fn update_hashes(&mut self, only_missing: bool) -> Result<(), Error> {
self.contents.update_hashes(only_missing).await
}
pub fn filter(&mut self, filter: impl Fn(&Path) -> bool) -> Result<(), Error> {
self.contents.filter(filter)
}
#[instrument(skip_all)]
pub async fn serialize<W: Sink>(&self, w: &mut W, verify: bool) -> Result<(), Error> {
use tokio::io::AsyncWriteExt;
@@ -103,9 +129,15 @@ impl<S: FileSource> MerkleArchive<S> {
queue.serialize(w, verify).await?;
Ok(())
}
pub fn into_dyn(self) -> MerkleArchive<DynFileSource> {
MerkleArchive {
signer: self.signer,
contents: self.contents.into_dyn(),
}
}
}

#[derive(Debug)]
#[derive(Debug, Clone)]
pub struct Entry<S> {
hash: Option<Hash>,
contents: EntryContents<S>,
@@ -117,12 +149,27 @@ impl<S> Entry<S> {
contents,
}
}
pub fn file(source: S) -> Self {
Self::new(EntryContents::File(FileContents::new(source)))
}
pub fn hash(&self) -> Option<Hash> {
self.hash
}
pub fn as_contents(&self) -> &EntryContents<S> {
&self.contents
}
pub fn as_file(&self) -> Option<&FileContents<S>> {
match self.as_contents() {
EntryContents::File(f) => Some(f),
_ => None,
}
}
pub fn as_directory(&self) -> Option<&DirectoryContents<S>> {
match self.as_contents() {
EntryContents::Directory(d) => Some(d),
_ => None,
}
}
pub fn as_contents_mut(&mut self) -> &mut EntryContents<S> {
self.hash = None;
&mut self.contents
@@ -130,11 +177,24 @@ impl<S> Entry<S> {
pub fn into_contents(self) -> EntryContents<S> {
self.contents
}
pub fn into_file(self) -> Option<FileContents<S>> {
match self.into_contents() {
EntryContents::File(f) => Some(f),
_ => None,
}
}
pub fn into_directory(self) -> Option<DirectoryContents<S>> {
match self.into_contents() {
EntryContents::Directory(d) => Some(d),
_ => None,
}
}
pub fn header_size(&self) -> u64 {
32 // hash
+ self.contents.header_size()
}
}
impl<S: Clone> Entry<S> {}
impl<S: ArchiveSource> Entry<Section<S>> {
#[instrument(skip_all)]
pub async fn deserialize(
@@ -156,6 +216,24 @@ impl<S: ArchiveSource> Entry<Section<S>> {
}
}
impl<S: FileSource> Entry<S> {
pub fn filter(&mut self, filter: impl Fn(&Path) -> bool) -> Result<(), Error> {
if let EntryContents::Directory(d) = &mut self.contents {
d.filter(filter)?;
}
Ok(())
}
pub async fn read_file_to_vec(&self) -> Result<Vec<u8>, Error> {
match self.as_contents() {
EntryContents::File(f) => Ok(f.to_vec(self.hash).await?),
EntryContents::Directory(_) => Err(Error::new(
eyre!("expected file, found directory"),
ErrorKind::ParseS9pk,
)),
EntryContents::Missing => {
Err(Error::new(eyre!("entry is missing"), ErrorKind::ParseS9pk))
}
}
}
pub async fn to_missing(&self) -> Result<Self, Error> {
let hash = if let Some(hash) = self.hash {
hash
@@ -190,9 +268,15 @@ impl<S: FileSource> Entry<S> {
w.write_all(hash.as_bytes()).await?;
self.contents.serialize_header(position, w).await
}
pub fn into_dyn(self) -> Entry<DynFileSource> {
Entry {
hash: self.hash,
contents: self.contents.into_dyn(),
}
}
}

#[derive(Debug)]
#[derive(Debug, Clone)]
pub enum EntryContents<S> {
Missing,
File(FileContents<S>),
@@ -214,6 +298,9 @@ impl<S> EntryContents<S> {
Self::Directory(_) => DirectoryContents::<S>::header_size(),
}
}
pub fn is_dir(&self) -> bool {
matches!(self, &EntryContents::Directory(_))
}
}
impl<S: ArchiveSource> EntryContents<Section<S>> {
#[instrument(skip_all)]
@@ -265,4 +352,11 @@ impl<S: FileSource> EntryContents<S> {
Self::Directory(d) => Some(d.serialize_header(position, w).await?),
})
}
pub fn into_dyn(self) -> EntryContents<DynFileSource> {
match self {
Self::Missing => EntryContents::Missing,
Self::File(f) => EntryContents::File(f.into_dyn()),
Self::Directory(d) => EntryContents::Directory(d.into_dyn()),
}
}
}
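The fixed-size headers in the diff above compose arithmetically: a directory header is two big-endian `u64`s (position + size), and the archive header prepends pubkey, signature, and sighash to the root directory header. A quick sanity check of those constants (standalone, mirroring the `header_size` comments rather than the generic code):

```rust
// Directory header: position (u64 BE) + size (u64 BE).
const fn directory_header_size() -> u64 {
    8 // position: u64 BE
    + 8 // size: u64 BE
}

// Archive header: ed25519 pubkey + signature + sighash + root dir header.
const fn archive_header_size() -> u64 {
    32 // pubkey
    + 64 // signature
    + 32 // sighash
    + directory_header_size()
}

fn main() {
    assert_eq!(directory_header_size(), 16);
    assert_eq!(archive_header_size(), 144);
    println!("archive header: {} bytes", archive_header_size());
}
```

So a reader that fetches the first 144 bytes of an s9pk-style archive has everything needed to verify the signature and locate the root table of contents.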
@@ -1,12 +1,9 @@
use std::sync::Arc;

use bytes::Bytes;
use futures::stream::BoxStream;
use futures::{StreamExt, TryStreamExt};
use http::header::{ACCEPT_RANGES, RANGE};
use reqwest::header::{ACCEPT_RANGES, CONTENT_LENGTH, RANGE};
use reqwest::{Client, Url};
use tokio::io::AsyncRead;
use tokio::sync::Mutex;
use tokio_util::io::StreamReader;

use crate::prelude::*;
@@ -16,6 +13,7 @@ use crate::s9pk::merkle_archive::source::ArchiveSource;
pub struct HttpSource {
url: Url,
client: Client,
size: Option<u64>,
range_support: Result<
(),
(), // Arc<Mutex<Option<RangelessReader>>>
@@ -23,24 +21,31 @@ pub struct HttpSource {
}
impl HttpSource {
pub async fn new(client: Client, url: Url) -> Result<Self, Error> {
let range_support = client
let head = client
.head(url.clone())
.send()
.await
.with_kind(ErrorKind::Network)?
.error_for_status()
.with_kind(ErrorKind::Network)?
.with_kind(ErrorKind::Network)?;
let range_support = head
.headers()
.get(ACCEPT_RANGES)
.and_then(|s| s.to_str().ok())
== Some("bytes");
let size = head
.headers()
.get(CONTENT_LENGTH)
.and_then(|s| s.to_str().ok())
.and_then(|s| s.parse().ok());
Ok(Self {
url,
client,
size,
range_support: if range_support {
Ok(())
} else {
todo!() // Err(Arc::new(Mutex::new(None)))
Err(()) // Err(Arc::new(Mutex::new(None)))
},
})
}
@@ -48,6 +53,9 @@ impl HttpSource {
#[async_trait::async_trait]
impl ArchiveSource for HttpSource {
type Reader = HttpReader;
async fn size(&self) -> Option<u64> {
self.size
}
async fn fetch(&self, position: u64, size: u64) -> Result<Self::Reader, Error> {
match self.range_support {
Ok(_) => Ok(HttpReader::Range(StreamReader::new(if size > 0 {
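The `HttpSource::new` change above derives two facts from a single HEAD request: whether the server honors byte ranges (`Accept-Ranges: bytes`) and the total size (`Content-Length`). The header-value parsing can be exercised without any network; a minimal sketch with hypothetical helper names:

```rust
// Mirrors the Accept-Ranges check in HttpSource::new: range requests are
// only used when the server explicitly advertises "bytes".
fn supports_ranges(accept_ranges: Option<&str>) -> bool {
    accept_ranges == Some("bytes")
}

// Mirrors the Content-Length parse: any absent or malformed value
// degrades gracefully to an unknown size.
fn content_length(value: Option<&str>) -> Option<u64> {
    value.and_then(|s| s.parse().ok())
}

fn main() {
    assert!(supports_ranges(Some("bytes")));
    assert!(!supports_ranges(Some("none")));
    assert!(!supports_ranges(None));
    assert_eq!(content_length(Some("1048576")), Some(1048576));
    assert_eq!(content_length(Some("garbage")), None);
    println!("range support + size detection ok");
}
```

Treating a missing or non-`bytes` `Accept-Ranges` header as "no range support" is the conservative choice: the fallback path (the commented-out `RangelessReader`) then has to stream the whole body instead of seeking.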
@@ -12,15 +12,15 @@ pub mod http;
pub mod multi_cursor_file;

#[async_trait::async_trait]
pub trait FileSource: Send + Sync + Sized + 'static {
pub trait FileSource: Clone + Send + Sync + Sized + 'static {
type Reader: AsyncRead + Unpin + Send;
async fn size(&self) -> Result<u64, Error>;
async fn reader(&self) -> Result<Self::Reader, Error>;
async fn copy<W: AsyncWrite + Unpin + Send>(&self, w: &mut W) -> Result<(), Error> {
async fn copy<W: AsyncWrite + Unpin + Send + ?Sized>(&self, w: &mut W) -> Result<(), Error> {
tokio::io::copy(&mut self.reader().await?, w).await?;
Ok(())
}
async fn copy_verify<W: AsyncWrite + Unpin + Send>(
async fn copy_verify<W: AsyncWrite + Unpin + Send + ?Sized>(
&self,
w: &mut W,
verify: Option<Hash>,
@@ -37,6 +37,75 @@ pub trait FileSource: Send + Sync + Sized + 'static {
}
}

#[derive(Clone)]
pub struct DynFileSource(Arc<dyn DynableFileSource>);
impl DynFileSource {
pub fn new<T: FileSource>(source: T) -> Self {
Self(Arc::new(source))
}
}
#[async_trait::async_trait]
impl FileSource for DynFileSource {
type Reader = Box<dyn AsyncRead + Unpin + Send>;
async fn size(&self) -> Result<u64, Error> {
self.0.size().await
}
async fn reader(&self) -> Result<Self::Reader, Error> {
self.0.reader().await
}
async fn copy<W: AsyncWrite + Unpin + Send + ?Sized>(
&self,
mut w: &mut W,
) -> Result<(), Error> {
self.0.copy(&mut w).await
}
async fn copy_verify<W: AsyncWrite + Unpin + Send + ?Sized>(
&self,
mut w: &mut W,
verify: Option<Hash>,
) -> Result<(), Error> {
self.0.copy_verify(&mut w, verify).await
}
async fn to_vec(&self, verify: Option<Hash>) -> Result<Vec<u8>, Error> {
self.0.to_vec(verify).await
}
}

#[async_trait::async_trait]
trait DynableFileSource: Send + Sync + 'static {
async fn size(&self) -> Result<u64, Error>;
async fn reader(&self) -> Result<Box<dyn AsyncRead + Unpin + Send>, Error>;
async fn copy(&self, w: &mut (dyn AsyncWrite + Unpin + Send)) -> Result<(), Error>;
async fn copy_verify(
&self,
w: &mut (dyn AsyncWrite + Unpin + Send),
verify: Option<Hash>,
) -> Result<(), Error>;
async fn to_vec(&self, verify: Option<Hash>) -> Result<Vec<u8>, Error>;
}
#[async_trait::async_trait]
impl<T: FileSource> DynableFileSource for T {
async fn size(&self) -> Result<u64, Error> {
FileSource::size(self).await
}
async fn reader(&self) -> Result<Box<dyn AsyncRead + Unpin + Send>, Error> {
Ok(Box::new(FileSource::reader(self).await?))
}
async fn copy(&self, w: &mut (dyn AsyncWrite + Unpin + Send)) -> Result<(), Error> {
FileSource::copy(self, w).await
}
async fn copy_verify(
&self,
w: &mut (dyn AsyncWrite + Unpin + Send),
verify: Option<Hash>,
) -> Result<(), Error> {
FileSource::copy_verify(self, w, verify).await
}
async fn to_vec(&self, verify: Option<Hash>) -> Result<Vec<u8>, Error> {
FileSource::to_vec(self, verify).await
}
}

#[async_trait::async_trait]
impl FileSource for PathBuf {
type Reader = File;
@@ -57,7 +126,7 @@ impl FileSource for Arc<[u8]> {
async fn reader(&self) -> Result<Self::Reader, Error> {
Ok(std::io::Cursor::new(self.clone()))
}
async fn copy<W: AsyncWrite + Unpin + Send>(&self, w: &mut W) -> Result<(), Error> {
async fn copy<W: AsyncWrite + Unpin + Send + ?Sized>(&self, w: &mut W) -> Result<(), Error> {
use tokio::io::AsyncWriteExt;

w.write_all(&*self).await?;
@@ -68,8 +137,11 @@ impl FileSource for Arc<[u8]> {
#[async_trait::async_trait]
pub trait ArchiveSource: Clone + Send + Sync + Sized + 'static {
type Reader: AsyncRead + Unpin + Send;
async fn size(&self) -> Option<u64> {
None
}
async fn fetch(&self, position: u64, size: u64) -> Result<Self::Reader, Error>;
async fn copy_to<W: AsyncWrite + Unpin + Send>(
async fn copy_to<W: AsyncWrite + Unpin + Send + ?Sized>(
&self,
position: u64,
size: u64,
@@ -99,7 +171,7 @@ impl ArchiveSource for Arc<[u8]> {
}
}

#[derive(Debug)]
#[derive(Debug, Clone)]
pub struct Section<S> {
source: S,
position: u64,
@@ -114,7 +186,7 @@ impl<S: ArchiveSource> FileSource for Section<S> {
async fn reader(&self) -> Result<Self::Reader, Error> {
self.source.fetch(self.position, self.size).await
}
async fn copy<W: AsyncWrite + Unpin + Send>(&self, w: &mut W) -> Result<(), Error> {
async fn copy<W: AsyncWrite + Unpin + Send + ?Sized>(&self, w: &mut W) -> Result<(), Error> {
self.source.copy_to(self.position, self.size, w).await
}
}
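The `DynFileSource` added above is the classic trait-object erasure pattern: the public generic trait (`FileSource`, not object-safe because of its generic methods and `Sized` bound) is mirrored by a private object-safe trait (`DynableFileSource`) with a blanket impl, so any concrete source can be stored behind one concrete type. The same shape in miniature, sync and with shortened hypothetical names for brevity:

```rust
use std::sync::Arc;

// Public trait; imagine generic methods that keep it non-object-safe.
trait Source: Send + Sync + 'static {
    fn size(&self) -> u64;
}

// Private object-safe mirror of Source.
trait DynableSource: Send + Sync + 'static {
    fn size(&self) -> u64;
}

// Blanket impl: every Source is automatically a DynableSource.
impl<T: Source> DynableSource for T {
    fn size(&self) -> u64 {
        Source::size(self)
    }
}

// The erased handle: one concrete, cheaply clonable type for any source.
#[derive(Clone)]
struct DynSource(Arc<dyn DynableSource>);

impl DynSource {
    fn new<T: Source>(source: T) -> Self {
        Self(Arc::new(source))
    }
}

// DynSource itself implements the public trait, forwarding to the object.
impl Source for DynSource {
    fn size(&self) -> u64 {
        self.0.size()
    }
}

// A concrete source for demonstration.
struct InMemory(Vec<u8>);
impl Source for InMemory {
    fn size(&self) -> u64 {
        self.0.len() as u64
    }
}

fn main() {
    let erased = DynSource::new(InMemory(vec![0u8; 42]));
    assert_eq!(Source::size(&erased), 42);
    println!("erased size: {}", Source::size(&erased));
}
```

The payoff in the diff is `into_dyn()`: archives built from different source types (`PathBuf`, `Arc<[u8]>`, `Section<S>`) can all be converted to `MerkleArchive<DynFileSource>` and handled uniformly.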
@@ -1,16 +1,20 @@
|
||||
use std::io::SeekFrom;
|
||||
use std::os::fd::{AsRawFd, RawFd};
|
||||
use std::os::fd::{AsRawFd, FromRawFd, RawFd};
|
||||
use std::path::{Path, PathBuf};
|
||||
use std::sync::Arc;
|
||||
use std::{borrow::Borrow, io::SeekFrom};
|
||||
|
||||
use tokio::fs::File;
|
||||
use tokio::io::AsyncRead;
|
||||
use tokio::io::{AsyncRead, AsyncReadExt};
|
||||
use tokio::sync::{Mutex, OwnedMutexGuard};
|
||||
|
||||
use crate::disk::mount::filesystem::loop_dev::LoopDev;
|
||||
use crate::prelude::*;
|
||||
use crate::s9pk::merkle_archive::source::{ArchiveSource, Section};
|
||||
|
||||
fn path_from_fd(fd: RawFd) -> PathBuf {
|
||||
Path::new("/proc/self/fd").join(fd.to_string())
|
||||
}
|
||||
|
||||
#[derive(Clone)]
pub struct MultiCursorFile {
fd: RawFd,
@@ -18,7 +22,14 @@ pub struct MultiCursorFile {
}
impl MultiCursorFile {
fn path(&self) -> PathBuf {
Path::new("/proc/self/fd").join(self.fd.to_string())
path_from_fd(self.fd)
}
pub async fn open(fd: &impl AsRawFd) -> Result<Self, Error> {
let fd = fd.as_raw_fd();
Ok(Self {
fd,
file: Arc::new(Mutex::new(File::open(path_from_fd(fd)).await?)),
})
}
}
impl From<File> for MultiCursorFile {
@@ -47,8 +58,8 @@ impl AsyncRead for FileSectionReader {
return std::task::Poll::Ready(Ok(()));
}
let before = buf.filled().len() as u64;
let res = std::pin::Pin::new(&mut **this.file.get_mut())
.poll_read(cx, &mut buf.take(*this.remaining as usize));
let res = std::pin::Pin::new(&mut (&mut **this.file.get_mut()).take(*this.remaining))
.poll_read(cx, buf);
*this.remaining = this
.remaining
.saturating_sub(buf.filled().len() as u64 - before);
@@ -59,13 +70,36 @@ impl AsyncRead for FileSectionReader {
#[async_trait::async_trait]
impl ArchiveSource for MultiCursorFile {
type Reader = FileSectionReader;
async fn size(&self) -> Option<u64> {
tokio::fs::metadata(self.path()).await.ok().map(|m| m.len())
}
async fn fetch(&self, position: u64, size: u64) -> Result<Self::Reader, Error> {
use tokio::io::AsyncSeekExt;

let mut file = if let Ok(file) = self.file.clone().try_lock_owned() {
file
} else {
Arc::new(Mutex::new(File::open(self.path()).await?))
#[cfg(target_os = "linux")]
let file = File::open(self.path()).await?;
#[cfg(target_os = "macos")] // here be dragons
let file = unsafe {
let mut buf = [0u8; libc::PATH_MAX as usize];
if libc::fcntl(
self.fd,
libc::F_GETPATH,
buf.as_mut_ptr().cast::<libc::c_char>(),
) == -1
{
return Err(std::io::Error::last_os_error().into());
}
File::open(
&*std::ffi::CStr::from_bytes_until_nul(&buf)
.with_kind(ErrorKind::Utf8)?
.to_string_lossy(),
)
.await?
};
Arc::new(Mutex::new(file))
.try_lock_owned()
.expect("freshly created")
};
@@ -77,8 +111,8 @@ impl ArchiveSource for MultiCursorFile {
}
}

impl From<Section<MultiCursorFile>> for LoopDev<PathBuf> {
fn from(value: Section<MultiCursorFile>) -> Self {
impl From<&Section<MultiCursorFile>> for LoopDev<PathBuf> {
fn from(value: &Section<MultiCursorFile>) -> Self {
LoopDev::new(value.source.path(), value.position, value.size)
}
}
@@ -4,7 +4,6 @@ use crate::prelude::*;
use crate::s9pk::merkle_archive::sink::Sink;
use crate::s9pk::merkle_archive::source::FileSource;
use crate::s9pk::merkle_archive::{Entry, EntryContents};
use crate::util::MaybeOwned;

pub struct WriteQueue<'a, S> {
next_available_position: u64,
@@ -1,5 +1,39 @@
pub mod merkle_archive;
pub mod rpc;
pub mod v1;
pub mod v2;

pub use v1::*;
use std::io::SeekFrom;
use std::path::Path;

use tokio::fs::File;
use tokio::io::{AsyncReadExt, AsyncSeekExt};
pub use v2::{manifest, S9pk};

use crate::context::CliContext;
use crate::prelude::*;
use crate::s9pk::v1::reader::S9pkReader;
use crate::s9pk::v2::compat::MAGIC_AND_VERSION;

pub async fn load(ctx: &CliContext, path: impl AsRef<Path>) -> Result<File, Error> {
// TODO: return s9pk
const MAGIC_LEN: usize = MAGIC_AND_VERSION.len();
let mut magic = [0_u8; MAGIC_LEN];
let mut file = tokio::fs::File::open(&path).await?;
file.read_exact(&mut magic).await?;
file.seek(SeekFrom::Start(0)).await?;
if magic == v2::compat::MAGIC_AND_VERSION {
tracing::info!("Converting package to v2 s9pk");
let new_path = path.as_ref().with_extension("compat.s9pk");
S9pk::from_v1(
S9pkReader::from_reader(file, true).await?,
&new_path,
ctx.developer_key()?.clone(),
)
.await?;
tokio::fs::rename(&new_path, &path).await?;
file = tokio::fs::File::open(&path).await?;
tracing::info!("Converted s9pk successfully");
}
Ok(file)
}
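`load` sniffs the leading bytes of the package for the v2 magic-and-version header, then rewinds so downstream readers see the whole file. A std-only sketch of that peek-and-rewind step, reusing the `[0x3b, 0x3b, 0x01]` constant declared later in this diff (`is_v2` is a hypothetical helper name):

```rust
use std::io::{Cursor, Read, Seek, SeekFrom};

// The v2 header bytes, as declared in s9pk::v2::compat later in this diff.
const MAGIC_AND_VERSION: &[u8] = &[0x3b, 0x3b, 0x01];

// Peek the leading bytes to classify the package format, then rewind so the
// caller can re-read the file from the start.
fn is_v2<R: Read + Seek>(r: &mut R) -> std::io::Result<bool> {
    let mut magic = [0u8; 3];
    r.read_exact(&mut magic)?;
    r.seek(SeekFrom::Start(0))?;
    Ok(&magic[..] == MAGIC_AND_VERSION)
}

fn main() -> std::io::Result<()> {
    let mut v2 = Cursor::new(vec![0x3b, 0x3b, 0x01, 0xde, 0xad]);
    assert!(is_v2(&mut v2)?);
    assert_eq!(v2.position(), 0); // rewound for the next reader
    let mut v1 = Cursor::new(vec![0x3b, 0x3b, 0x00, 0x00]);
    assert!(!is_v2(&mut v1)?);
    Ok(())
}
```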
227
core/startos/src/s9pk/rpc.rs
Normal file
@@ -0,0 +1,227 @@
use std::path::{Path, PathBuf};
use std::sync::Arc;

use clap::Parser;
use itertools::Itertools;
use models::ImageId;
use rpc_toolkit::{from_fn_async, Empty, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use tokio::fs::File;
use tokio::process::Command;

use crate::context::CliContext;
use crate::prelude::*;
use crate::s9pk::manifest::Manifest;
use crate::s9pk::merkle_archive::source::DynFileSource;
use crate::s9pk::merkle_archive::Entry;
use crate::s9pk::v2::compat::CONTAINER_TOOL;
use crate::s9pk::S9pk;
use crate::util::io::TmpDir;
use crate::util::serde::{apply_expr, HandlerExtSerde};
use crate::util::Invoke;

pub const SKIP_ENV: &[&str] = &["TERM", "container", "HOME", "HOSTNAME"];

pub fn s9pk() -> ParentHandler {
ParentHandler::new()
.subcommand("edit", edit())
.subcommand("inspect", inspect())
}

#[derive(Deserialize, Serialize, Parser)]
struct S9pkPath {
s9pk: PathBuf,
}

fn edit() -> ParentHandler<S9pkPath> {
let only_parent = |a, _| a;
ParentHandler::<S9pkPath>::new()
.subcommand(
"add-image",
from_fn_async(add_image)
.with_inherited(only_parent)
.no_display(),
)
.subcommand(
"manifest",
from_fn_async(edit_manifest)
.with_inherited(only_parent)
.with_display_serializable(),
)
}

fn inspect() -> ParentHandler<S9pkPath> {
let only_parent = |a, _| a;
ParentHandler::<S9pkPath>::new()
.subcommand(
"file-tree",
from_fn_async(file_tree)
.with_inherited(only_parent)
.with_display_serializable(),
)
.subcommand(
"manifest",
from_fn_async(inspect_manifest)
.with_inherited(only_parent)
.with_display_serializable(),
)
}

#[derive(Deserialize, Serialize, Parser)]
struct AddImageParams {
id: ImageId,
image: String,
}
async fn add_image(
ctx: CliContext,
AddImageParams { id, image }: AddImageParams,
S9pkPath { s9pk: s9pk_path }: S9pkPath,
) -> Result<(), Error> {
let tmpdir = TmpDir::new().await?;
let sqfs_path = tmpdir.join("image.squashfs");
let arch = String::from_utf8(
Command::new(CONTAINER_TOOL)
.arg("run")
.arg("--rm")
.arg("--entrypoint")
.arg("uname")
.arg(&image)
.arg("-m")
.invoke(ErrorKind::Docker)
.await?,
)?;
let env = String::from_utf8(
Command::new(CONTAINER_TOOL)
.arg("run")
.arg("--rm")
.arg("--entrypoint")
.arg("env")
.arg(&image)
.invoke(ErrorKind::Docker)
.await?,
)?
.lines()
.filter(|l| {
l.trim()
.split_once("=")
.map_or(false, |(v, _)| !SKIP_ENV.contains(&v))
})
.join("\n")
+ "\n";
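The env capture above keeps only `KEY=VALUE` lines whose key is not in `SKIP_ENV`, then rejoins them with a trailing newline. A std-only sketch of that filter (using `Vec::join` in place of itertools; `filter_env` is an illustrative name):

```rust
// Mirrors the env filtering in `add_image`: drop container-runtime noise
// variables before baking the environment into the package.
const SKIP_ENV: &[&str] = &["TERM", "container", "HOME", "HOSTNAME"];

fn filter_env(raw: &str) -> String {
    raw.lines()
        .filter(|l| {
            l.trim()
                .split_once('=')
                // Lines without '=' are dropped; keys in SKIP_ENV are dropped.
                .map_or(false, |(k, _)| !SKIP_ENV.contains(&k))
        })
        .collect::<Vec<_>>()
        .join("\n")
        + "\n"
}

fn main() {
    let raw = "PATH=/usr/bin\nHOME=/root\nTERM=xterm\nAPP_PORT=8080\n";
    assert_eq!(filter_env(raw), "PATH=/usr/bin\nAPP_PORT=8080\n");
}
```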
let workdir = Path::new(
String::from_utf8(
Command::new(CONTAINER_TOOL)
.arg("run")
.arg("--rm")
.arg("--entrypoint")
.arg("pwd")
.arg(&image)
.invoke(ErrorKind::Docker)
.await?,
)?
.trim(),
)
.to_owned();
let container_id = String::from_utf8(
Command::new(CONTAINER_TOOL)
.arg("create")
.arg(&image)
.invoke(ErrorKind::Docker)
.await?,
)?;
Command::new("bash")
.arg("-c")
.arg(format!(
"{CONTAINER_TOOL} export {container_id} | mksquashfs - {sqfs} -tar -force-uid 100000 -force-gid 100000", // TODO: real uid mapping
container_id = container_id.trim(),
sqfs = sqfs_path.display()
))
.invoke(ErrorKind::Docker)
.await?;
Command::new(CONTAINER_TOOL)
.arg("rm")
.arg(container_id.trim())
.invoke(ErrorKind::Docker)
.await?;
let mut s9pk = S9pk::from_file(super::load(&ctx, &s9pk_path).await?)
.await?
.into_dyn();
let archive = s9pk.as_archive_mut();
archive.set_signer(ctx.developer_key()?.clone());
archive.contents_mut().insert_path(
Path::new("images")
.join(arch.trim())
.join(&id)
.with_extension("squashfs"),
Entry::file(DynFileSource::new(sqfs_path)),
)?;
archive.contents_mut().insert_path(
Path::new("images")
.join(arch.trim())
.join(&id)
.with_extension("env"),
Entry::file(DynFileSource::new(Arc::from(Vec::from(env)))),
)?;
archive.contents_mut().insert_path(
Path::new("images")
.join(arch.trim())
.join(&id)
.with_extension("json"),
Entry::file(DynFileSource::new(Arc::from(
serde_json::to_vec(&serde_json::json!({
"workdir": workdir
}))
.with_kind(ErrorKind::Serialization)?,
))),
)?;
let tmp_path = s9pk_path.with_extension("s9pk.tmp");
let mut tmp_file = File::create(&tmp_path).await?;
s9pk.serialize(&mut tmp_file, true).await?;
tmp_file.sync_all().await?;
tokio::fs::rename(&tmp_path, &s9pk_path).await?;

Ok(())
}
#[derive(Deserialize, Serialize, Parser)]
struct EditManifestParams {
expression: String,
}
async fn edit_manifest(
ctx: CliContext,
EditManifestParams { expression }: EditManifestParams,
S9pkPath { s9pk: s9pk_path }: S9pkPath,
) -> Result<Manifest, Error> {
let mut s9pk = S9pk::from_file(super::load(&ctx, &s9pk_path).await?).await?;
let old = serde_json::to_value(s9pk.as_manifest()).with_kind(ErrorKind::Serialization)?;
*s9pk.as_manifest_mut() = serde_json::from_value(apply_expr(old.into(), &expression)?.into())
.with_kind(ErrorKind::Serialization)?;
let manifest = s9pk.as_manifest().clone();
let tmp_path = s9pk_path.with_extension("s9pk.tmp");
let mut tmp_file = File::create(&tmp_path).await?;
s9pk.as_archive_mut()
.set_signer(ctx.developer_key()?.clone());
s9pk.serialize(&mut tmp_file, true).await?;
tmp_file.sync_all().await?;
tokio::fs::rename(&tmp_path, &s9pk_path).await?;

Ok(manifest)
}

async fn file_tree(
ctx: CliContext,
_: Empty,
S9pkPath { s9pk }: S9pkPath,
) -> Result<Vec<PathBuf>, Error> {
let s9pk = S9pk::from_file(super::load(&ctx, &s9pk).await?).await?;
Ok(s9pk.as_archive().contents().file_paths(""))
}

async fn inspect_manifest(
ctx: CliContext,
_: Empty,
S9pkPath { s9pk }: S9pkPath,
) -> Result<Manifest, Error> {
let s9pk = S9pk::from_file(super::load(&ctx, &s9pk).await?).await?;
Ok(s9pk.as_manifest().clone())
}
@@ -1,27 +1,17 @@
use std::collections::BTreeMap;
use std::path::{Path, PathBuf};

use color_eyre::eyre::eyre;
use imbl_value::InOMap;
pub use models::PackageId;
use serde::{Deserialize, Serialize};
use url::Url;

use super::git_hash::GitHash;
use crate::action::Actions;
use crate::backup::BackupActions;
use crate::config::action::ConfigActions;
use crate::dependencies::Dependencies;
use crate::migration::Migrations;
use crate::net::interface::Interfaces;
use crate::prelude::*;
use crate::procedure::docker::DockerContainers;
use crate::procedure::PackageProcedure;
use crate::status::health_check::HealthChecks;
use crate::util::serde::Regex;
use crate::s9pk::manifest::{Alerts, Description, HardwareRequirements};
use crate::util::Version;
use crate::version::{Current, VersionT};
use crate::volume::Volumes;
use crate::Error;

fn current_version() -> Version {
Current::new().semver().into()
@@ -36,13 +26,11 @@ pub struct Manifest {
pub id: PackageId,
#[serde(default)]
pub git_hash: Option<GitHash>,
#[serde(default)]
pub assets: Assets,
pub title: String,
pub version: Version,
pub description: Description,
#[serde(default)]
pub assets: Assets,
#[serde(default)]
pub build: Option<Vec<String>>,
pub release_notes: String,
pub license: String, // type of license
pub wrapper_repo: Url,
@@ -52,24 +40,10 @@ pub struct Manifest {
pub donation_url: Option<Url>,
#[serde(default)]
pub alerts: Alerts,
pub main: PackageProcedure,
pub health_checks: HealthChecks,
pub config: Option<ConfigActions>,
pub properties: Option<PackageProcedure>,
pub volumes: Volumes,
// #[serde(default)]
pub interfaces: Interfaces,
// #[serde(default)]
pub backup: BackupActions,
#[serde(default)]
pub migrations: Migrations,
#[serde(default)]
pub actions: Actions,
// #[serde(default)]
// pub permissions: Permissions,
#[serde(default)]
pub dependencies: Dependencies,
pub containers: Option<DockerContainers>,
pub config: Option<InOMap<String, Value>>,

#[serde(default)]
pub replaces: Vec<String>,
@@ -78,43 +52,6 @@ pub struct Manifest {
pub hardware_requirements: HardwareRequirements,
}

impl Manifest {
pub fn package_procedures(&self) -> impl Iterator<Item = &PackageProcedure> {
use std::iter::once;
let main = once(&self.main);
let cfg_get = self.config.as_ref().map(|a| &a.get).into_iter();
let cfg_set = self.config.as_ref().map(|a| &a.set).into_iter();
let props = self.properties.iter();
let backups = vec![&self.backup.create, &self.backup.restore].into_iter();
let migrations = self
.migrations
.to
.values()
.chain(self.migrations.from.values());
let actions = self.actions.0.values().map(|a| &a.implementation);
main.chain(cfg_get)
.chain(cfg_set)
.chain(props)
.chain(backups)
.chain(migrations)
.chain(actions)
}

pub fn with_git_hash(mut self, git_hash: GitHash) -> Self {
self.git_hash = Some(git_hash);
self
}
}

#[derive(Clone, Debug, Default, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
pub struct HardwareRequirements {
#[serde(default)]
device: BTreeMap<String, Regex>,
ram: Option<u64>,
pub arch: Option<Vec<String>>,
}

#[derive(Clone, Debug, Default, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
pub struct Assets {
@@ -176,36 +113,3 @@ impl Assets {
.unwrap_or(Path::new("scripts"))
}
}

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct Description {
pub short: String,
pub long: String,
}
impl Description {
pub fn validate(&self) -> Result<(), Error> {
if self.short.chars().skip(160).next().is_some() {
return Err(Error::new(
eyre!("Short description must be 160 characters or less."),
crate::ErrorKind::ValidateS9pk,
));
}
if self.long.chars().skip(5000).next().is_some() {
return Err(Error::new(
eyre!("Long description must be 5000 characters or less."),
crate::ErrorKind::ValidateS9pk,
));
}
Ok(())
}
}
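`Description::validate` uses `chars().skip(n).next().is_some()` to enforce limits counted in Unicode scalar values rather than bytes, and it short-circuits instead of counting the whole string. A sketch of the same check (`exceeds` is an illustrative name; `nth(n)` is equivalent to `skip(n).next()`):

```rust
// True iff `s` has more than `max_chars` Unicode scalar values.
// `nth` stops iterating as soon as the limit is crossed, so very long
// strings are not fully traversed just to be rejected.
fn exceeds(s: &str, max_chars: usize) -> bool {
    s.chars().nth(max_chars).is_some()
}

fn main() {
    // "héllo" is 5 chars but 6 bytes: the char-based check passes at 5.
    assert!(!exceeds("héllo", 5));
    assert!(exceeds("héllo", 4));
    assert!(!exceeds("", 0));
}
```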
#[derive(Clone, Debug, Default, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
pub struct Alerts {
pub install: Option<String>,
pub uninstall: Option<String>,
pub restore: Option<String>,
pub start: Option<String>,
pub stop: Option<String>,
}
@@ -1,25 +1,7 @@
use std::ffi::OsStr;
use std::path::PathBuf;

use color_eyre::eyre::eyre;
use futures::TryStreamExt;
use imbl::OrdMap;
use rpc_toolkit::command;
use serde_json::Value;
use tokio::io::AsyncRead;
use tracing::instrument;

use crate::context::SdkContext;
use crate::s9pk::builder::S9pkPacker;
use crate::s9pk::docker::DockerMultiArch;
use crate::s9pk::git_hash::GitHash;
use crate::s9pk::manifest::Manifest;
use crate::s9pk::reader::S9pkReader;
use crate::util::display_none;
use crate::util::io::BufferedWriteReader;
use crate::util::serde::IoFormat;
use crate::volume::Volume;
use crate::{Error, ErrorKind, ResultExt};
use clap::Parser;
use serde::{Deserialize, Serialize};

pub mod builder;
pub mod docker;
@@ -30,217 +12,9 @@ pub mod reader;

pub const SIG_CONTEXT: &[u8] = b"s9pk";
#[command(cli_only, display(display_none))]
#[instrument(skip_all)]
pub async fn pack(#[context] ctx: SdkContext, #[arg] path: Option<PathBuf>) -> Result<(), Error> {
use tokio::fs::File;

let path = if let Some(path) = path {
path
} else {
std::env::current_dir()?
};
let manifest_value: Value = if path.join("manifest.toml").exists() {
IoFormat::Toml
.from_async_reader(File::open(path.join("manifest.toml")).await?)
.await?
} else if path.join("manifest.yaml").exists() {
IoFormat::Yaml
.from_async_reader(File::open(path.join("manifest.yaml")).await?)
.await?
} else if path.join("manifest.json").exists() {
IoFormat::Json
.from_async_reader(File::open(path.join("manifest.json")).await?)
.await?
} else {
return Err(Error::new(
eyre!("manifest not found"),
crate::ErrorKind::Pack,
));
};

let manifest: Manifest = serde_json::from_value::<Manifest>(manifest_value.clone())
.with_kind(crate::ErrorKind::Deserialization)?
.with_git_hash(GitHash::from_path(&path).await?);
let extra_keys =
enumerate_extra_keys(&serde_json::to_value(&manifest).unwrap(), &manifest_value);
for k in extra_keys {
tracing::warn!("Unrecognized Manifest Key: {}", k);
}
let outfile_path = path.join(format!("{}.s9pk", manifest.id));
let mut outfile = File::create(outfile_path).await?;
S9pkPacker::builder()
.manifest(&manifest)
.writer(&mut outfile)
.license(
File::open(path.join(manifest.assets.license_path()))
.await
.with_ctx(|_| {
(
crate::ErrorKind::Filesystem,
manifest.assets.license_path().display().to_string(),
)
})?,
)
.icon(
File::open(path.join(manifest.assets.icon_path()))
.await
.with_ctx(|_| {
(
crate::ErrorKind::Filesystem,
manifest.assets.icon_path().display().to_string(),
)
})?,
)
.instructions(
File::open(path.join(manifest.assets.instructions_path()))
.await
.with_ctx(|_| {
(
crate::ErrorKind::Filesystem,
manifest.assets.instructions_path().display().to_string(),
)
})?,
)
.docker_images({
let docker_images_path = path.join(manifest.assets.docker_images_path());
let res: Box<dyn AsyncRead + Unpin + Send + Sync> = if tokio::fs::metadata(&docker_images_path).await?.is_dir() {
let tars: Vec<_> = tokio_stream::wrappers::ReadDirStream::new(tokio::fs::read_dir(&docker_images_path).await?).try_collect().await?;
let mut arch_info = DockerMultiArch::default();
for tar in &tars {
if tar.path().extension() == Some(OsStr::new("tar")) {
arch_info.available.insert(tar.path().file_stem().unwrap_or_default().to_str().unwrap_or_default().to_owned());
}
}
if arch_info.available.contains("aarch64") {
arch_info.default = "aarch64".to_owned();
} else {
arch_info.default = arch_info.available.iter().next().cloned().unwrap_or_default();
}
let arch_info_cbor = IoFormat::Cbor.to_vec(&arch_info)?;
Box::new(BufferedWriteReader::new(|w| async move {
let mut docker_images = tokio_tar::Builder::new(w);
let mut multiarch_header = tokio_tar::Header::new_gnu();
multiarch_header.set_path("multiarch.cbor")?;
multiarch_header.set_size(arch_info_cbor.len() as u64);
multiarch_header.set_cksum();
docker_images.append(&multiarch_header, std::io::Cursor::new(arch_info_cbor)).await?;
for tar in tars
{
docker_images
.append_path_with_name(
tar.path(),
tar.file_name(),
)
.await?;
}
Ok::<_, std::io::Error>(())
}, 1024 * 1024))
} else {
Box::new(File::open(docker_images_path)
.await
.with_ctx(|_| {
(
crate::ErrorKind::Filesystem,
manifest.assets.docker_images_path().display().to_string(),
)
})?)
};
res
})
.assets({
let asset_volumes = manifest
.volumes
.iter()
.filter(|(_, v)| matches!(v, &&Volume::Assets {})).map(|(id, _)| id.clone()).collect::<Vec<_>>();
let assets_path = manifest.assets.assets_path().to_owned();
let path = path.clone();

BufferedWriteReader::new(|w| async move {
let mut assets = tokio_tar::Builder::new(w);
for asset_volume in asset_volumes
{
assets
.append_dir_all(
&asset_volume,
path.join(&assets_path).join(&asset_volume),
)
.await?;
}
Ok::<_, std::io::Error>(())
}, 1024 * 1024)
})
.scripts({
let script_path = path.join(manifest.assets.scripts_path()).join("embassy.js");
let needs_script = manifest.package_procedures().any(|a| a.is_script());
let has_script = script_path.exists();
match (needs_script, has_script) {
(true, true) => Some(File::open(script_path).await?),
(true, false) => {
return Err(Error::new(eyre!("Script is declared in manifest, but no such script exists at ./scripts/embassy.js"), ErrorKind::Pack).into())
}
(false, true) => {
tracing::warn!("Manifest does not declare any actions that use scripts, but a script exists at ./scripts/embassy.js");
None
}
(false, false) => None
}
})
.build()
.pack(&ctx.developer_key()?)
.await?;
outfile.sync_all().await?;

Ok(())
}
#[command(rename = "s9pk", cli_only, display(display_none))]
pub async fn verify(#[arg] path: PathBuf) -> Result<(), Error> {
let mut s9pk = S9pkReader::open(path, true).await?;
s9pk.validate().await?;

Ok(())
}

fn enumerate_extra_keys(reference: &Value, candidate: &Value) -> Vec<String> {
match (reference, candidate) {
(Value::Object(m_r), Value::Object(m_c)) => {
let om_r: OrdMap<String, Value> = m_r.clone().into_iter().collect();
let om_c: OrdMap<String, Value> = m_c.clone().into_iter().collect();
let common = om_r.clone().intersection(om_c.clone());
let top_extra = common.clone().symmetric_difference(om_c.clone());
let mut all_extra = top_extra
.keys()
.map(|s| format!(".{}", s))
.collect::<Vec<String>>();
for (k, v) in common {
all_extra.extend(
enumerate_extra_keys(&v, om_c.get(&k).unwrap())
.into_iter()
.map(|s| format!(".{}{}", k, s)),
)
}
all_extra
}
(_, Value::Object(m1)) => m1.clone().keys().map(|s| format!(".{}", s)).collect(),
_ => Vec::new(),
}
}
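`enumerate_extra_keys` recursively diffs two JSON trees and reports dotted paths that exist only in the candidate. A dependency-free sketch of the same recursion over a minimal `Value` stand-in (the enum and helper names are illustrative, not `serde_json` or `imbl`):

```rust
use std::collections::BTreeMap;

// A minimal stand-in for a JSON value, enough to demonstrate the recursion:
// walk both trees, report keys present in the candidate but absent from the
// reference, prefixed with their dotted path.
#[derive(Clone)]
enum Value {
    Null,
    Object(BTreeMap<String, Value>),
}

fn extra_keys(reference: &Value, candidate: &Value) -> Vec<String> {
    match (reference, candidate) {
        (Value::Object(r), Value::Object(c)) => {
            let mut extra = Vec::new();
            for (k, v) in c {
                match r.get(k) {
                    // Key missing from the reference: report it.
                    None => extra.push(format!(".{k}")),
                    // Key present in both: recurse and prefix the path.
                    Some(rv) => extra.extend(
                        extra_keys(rv, v).into_iter().map(|s| format!(".{k}{s}")),
                    ),
                }
            }
            extra
        }
        // Reference is a scalar but candidate is an object: everything
        // under the candidate object is unexpected.
        (_, Value::Object(c)) => c.keys().map(|k| format!(".{k}")).collect(),
        _ => Vec::new(),
    }
}

fn main() {
    let obj = |pairs: Vec<(&str, Value)>| {
        Value::Object(pairs.into_iter().map(|(k, v)| (k.to_owned(), v)).collect())
    };
    let reference = obj(vec![("test", Value::Null), ("test2", Value::Null)]);
    let candidate = obj(vec![
        ("test", Value::Null),
        ("test2", obj(vec![("test3", Value::Null)])),
        ("test4", Value::Null),
    ]);
    assert_eq!(extra_keys(&reference, &candidate), vec![".test2.test3", ".test4"]);
}
```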
#[test]
fn test_enumerate_extra_keys() {
use serde_json::json;
let extras = enumerate_extra_keys(
&json!({
"test": 1,
"test2": null,
}),
&json!({
"test": 1,
"test2": { "test3": null },
"test4": null
}),
);
println!("{:?}", extras)
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct VerifyParams {
pub path: PathBuf,
}
@@ -1,4 +1,3 @@
use std::collections::BTreeSet;
use std::io::SeekFrom;
use std::ops::Range;
use std::path::Path;
@@ -10,22 +9,17 @@ use color_eyre::eyre::eyre;
use digest::Output;
use ed25519_dalek::VerifyingKey;
use futures::TryStreamExt;
use models::ImageId;
use models::{ImageId, PackageId};
use sha2::{Digest, Sha512};
use tokio::fs::File;
use tokio::io::{AsyncRead, AsyncReadExt, AsyncSeek, AsyncSeekExt, ReadBuf};
use tokio::io::{AsyncRead, AsyncReadExt, AsyncSeek, AsyncSeekExt, BufReader, ReadBuf};
use tracing::instrument;

use super::header::{FileSection, Header, TableOfContents};
use super::manifest::{Manifest, PackageId};
use super::SIG_CONTEXT;
use crate::install::progress::InstallProgressTracker;
use crate::s9pk::docker::DockerReader;
use crate::prelude::*;
use crate::s9pk::v1::docker::DockerReader;
use crate::util::Version;
use crate::{Error, ResultExt};

const MAX_REPLACES: usize = 10;
const MAX_TITLE_LEN: usize = 30;
#[pin_project::pin_project]
#[derive(Debug)]
@@ -144,7 +138,7 @@ impl FromStr for ImageTag {
}
}

pub struct S9pkReader<R: AsyncRead + AsyncSeek + Unpin + Send + Sync = File> {
pub struct S9pkReader<R: AsyncRead + AsyncSeek + Unpin + Send + Sync = BufReader<File>> {
hash: Option<Output<Sha512>>,
hash_string: Option<String>,
developer_key: VerifyingKey,
@@ -159,103 +153,10 @@ impl S9pkReader {
.await
.with_ctx(|_| (crate::error::ErrorKind::Filesystem, p.display().to_string()))?;

Self::from_reader(rdr, check_sig).await
}
}
impl<R: AsyncRead + AsyncSeek + Unpin + Send + Sync> S9pkReader<InstallProgressTracker<R>> {
pub fn validated(&mut self) {
self.rdr.validated()
Self::from_reader(BufReader::new(rdr), check_sig).await
}
}
impl<R: AsyncRead + AsyncSeek + Unpin + Send + Sync> S9pkReader<R> {
#[instrument(skip_all)]
pub async fn validate(&mut self) -> Result<(), Error> {
if self.toc.icon.length > 102_400 {
// 100 KiB
return Err(Error::new(
eyre!("icon must be less than 100KiB"),
crate::ErrorKind::ValidateS9pk,
));
}
let image_tags = self.image_tags().await?;
let man = self.manifest().await?;
let containers = &man.containers;
let validated_image_ids = image_tags
.into_iter()
.map(|i| i.validate(&man.id, &man.version).map(|_| i.image_id))
.collect::<Result<BTreeSet<ImageId>, _>>()?;
man.description.validate()?;
man.actions.0.iter().try_for_each(|(_, action)| {
action.validate(
containers,
&man.eos_version,
&man.volumes,
&validated_image_ids,
)
})?;
man.backup.validate(
containers,
&man.eos_version,
&man.volumes,
&validated_image_ids,
)?;
if let Some(cfg) = &man.config {
cfg.validate(
containers,
&man.eos_version,
&man.volumes,
&validated_image_ids,
)?;
}
man.health_checks
.validate(&man.eos_version, &man.volumes, &validated_image_ids)?;
man.interfaces.validate()?;
man.main
.validate(&man.eos_version, &man.volumes, &validated_image_ids, false)
.with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Main"))?;
man.migrations.validate(
containers,
&man.eos_version,
&man.volumes,
&validated_image_ids,
)?;

if man.replaces.len() >= MAX_REPLACES {
return Err(Error::new(
eyre!("Cannot have more than {MAX_REPLACES} replaces"),
crate::ErrorKind::ValidateS9pk,
));
}
if let Some(too_big) = man.replaces.iter().find(|x| x.len() >= MAX_TITLE_LEN) {
return Err(Error::new(
eyre!("replaces entry ({too_big}) exceeds the max length of {MAX_TITLE_LEN}"),
crate::ErrorKind::ValidateS9pk,
));
}
if man.title.len() >= MAX_TITLE_LEN {
return Err(Error::new(
eyre!("Cannot have more than a length of {MAX_TITLE_LEN} for title"),
crate::ErrorKind::ValidateS9pk,
));
}

if man.containers.is_some()
&& matches!(man.main, crate::procedure::PackageProcedure::Docker(_))
{
return Err(Error::new(
eyre!("Cannot have a main docker and a main in containers"),
crate::ErrorKind::ValidateS9pk,
));
}
if let Some(props) = &man.properties {
props
.validate(&man.eos_version, &man.volumes, &validated_image_ids, true)
.with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Properties"))?;
}
man.volumes.validate(&man.interfaces)?;

Ok(())
}
#[instrument(skip_all)]
pub async fn image_tags(&mut self) -> Result<Vec<ImageTag>, Error> {
let mut tar = tokio_tar::Archive::new(self.docker_images().await?);
@@ -361,7 +262,7 @@ impl<R: AsyncRead + AsyncSeek + Unpin + Send + Sync> S9pkReader<R> {
self.read_handle(self.toc.manifest).await
}

pub async fn manifest(&mut self) -> Result<Manifest, Error> {
pub async fn manifest(&mut self) -> Result<Value, Error> {
let slice = self.manifest_raw().await?.to_vec().await?;
serde_cbor::de::from_reader(slice.as_slice())
.with_ctx(|_| (crate::ErrorKind::ParseS9pk, "Deserializing Manifest (CBOR)"))
358
core/startos/src/s9pk/v2/compat.rs
Normal file
@@ -0,0 +1,358 @@
use std::io::Cursor;
use std::path::{Path, PathBuf};
use std::sync::Arc;

use itertools::Itertools;
use tokio::fs::File;
use tokio::io::{AsyncRead, AsyncSeek, AsyncWriteExt};
use tokio::process::Command;

use crate::prelude::*;
use crate::s9pk::manifest::Manifest;
use crate::s9pk::merkle_archive::directory_contents::DirectoryContents;
use crate::s9pk::merkle_archive::source::multi_cursor_file::MultiCursorFile;
use crate::s9pk::merkle_archive::source::{FileSource, Section};
use crate::s9pk::merkle_archive::{Entry, MerkleArchive};
use crate::s9pk::rpc::SKIP_ENV;
use crate::s9pk::v1::manifest::Manifest as ManifestV1;
use crate::s9pk::v1::reader::S9pkReader;
use crate::s9pk::v2::S9pk;
use crate::util::io::TmpDir;
use crate::util::Invoke;
use crate::volume::Volume;
use crate::ARCH;

pub const MAGIC_AND_VERSION: &[u8] = &[0x3b, 0x3b, 0x01];

#[cfg(not(feature = "docker"))]
pub const CONTAINER_TOOL: &str = "podman";

#[cfg(feature = "docker")]
pub const CONTAINER_TOOL: &str = "docker";

type DynRead = Box<dyn AsyncRead + Unpin + Send + Sync + 'static>;
fn into_dyn_read<R: AsyncRead + Unpin + Send + Sync + 'static>(r: R) -> DynRead {
Box::new(r)
}

#[derive(Clone)]
enum CompatSource {
Buffered(Arc<[u8]>),
File(PathBuf),
}
#[async_trait::async_trait]
impl FileSource for CompatSource {
type Reader = Box<dyn AsyncRead + Unpin + Send + Sync + 'static>;
async fn size(&self) -> Result<u64, Error> {
match self {
Self::Buffered(a) => Ok(a.len() as u64),
Self::File(f) => Ok(tokio::fs::metadata(f).await?.len()),
}
}
async fn reader(&self) -> Result<Self::Reader, Error> {
match self {
Self::Buffered(a) => Ok(into_dyn_read(Cursor::new(a.clone()))),
Self::File(f) => Ok(into_dyn_read(File::open(f).await?)),
}
}
}

impl S9pk<Section<MultiCursorFile>> {
#[instrument(skip_all)]
pub async fn from_v1<R: AsyncRead + AsyncSeek + Unpin + Send + Sync>(
mut reader: S9pkReader<R>,
destination: impl AsRef<Path>,
signer: ed25519_dalek::SigningKey,
) -> Result<Self, Error> {
let scratch_dir = TmpDir::new().await?;

let mut archive = DirectoryContents::<CompatSource>::new();

// manifest.json
let manifest_raw = reader.manifest().await?;
let manifest = from_value::<ManifestV1>(manifest_raw.clone())?;
let mut new_manifest = Manifest::from(manifest.clone());

// LICENSE.md
let license: Arc<[u8]> = reader.license().await?.to_vec().await?.into();
archive.insert_path(
"LICENSE.md",
Entry::file(CompatSource::Buffered(license.into())),
)?;

// instructions.md
let instructions: Arc<[u8]> = reader.instructions().await?.to_vec().await?.into();
|
||||
archive.insert_path(
|
||||
"instructions.md",
|
||||
Entry::file(CompatSource::Buffered(instructions.into())),
|
||||
)?;
|
||||
|
||||
// icon.md
|
||||
let icon: Arc<[u8]> = reader.icon().await?.to_vec().await?.into();
|
||||
archive.insert_path(
|
||||
format!("icon.{}", manifest.assets.icon_type()),
|
||||
Entry::file(CompatSource::Buffered(icon.into())),
|
||||
)?;
|
||||
|
||||
// images
|
||||
let images_dir = scratch_dir.join("images");
|
||||
tokio::fs::create_dir_all(&images_dir).await?;
|
||||
Command::new(CONTAINER_TOOL)
|
||||
.arg("load")
|
||||
.input(Some(&mut reader.docker_images().await?))
|
||||
.invoke(ErrorKind::Docker)
|
||||
.await?;
|
||||
#[derive(serde::Deserialize)]
|
||||
#[serde(rename_all = "PascalCase")]
|
||||
struct DockerImagesOut {
|
||||
repository: Option<String>,
|
||||
tag: Option<String>,
|
||||
#[serde(default)]
|
||||
names: Vec<String>,
|
||||
}
|
||||
for image in {
|
||||
#[cfg(feature = "docker")]
|
||||
let images = std::str::from_utf8(
|
||||
&Command::new(CONTAINER_TOOL)
|
||||
.arg("images")
|
||||
.arg("--format=json")
|
||||
.invoke(ErrorKind::Docker)
|
||||
.await?,
|
||||
)?
|
||||
.lines()
|
||||
.map(|l| serde_json::from_str::<DockerImagesOut>(l))
|
||||
.collect::<Result<Vec<_>, _>>()
|
||||
.with_kind(ErrorKind::Deserialization)?
|
||||
.into_iter();
|
||||
#[cfg(not(feature = "docker"))]
|
||||
let images = serde_json::from_slice::<Vec<DockerImagesOut>>(
|
||||
&Command::new(CONTAINER_TOOL)
|
||||
.arg("images")
|
||||
.arg("--format=json")
|
||||
.invoke(ErrorKind::Docker)
|
||||
.await?,
|
||||
)
|
||||
.with_kind(ErrorKind::Deserialization)?
|
||||
.into_iter();
|
||||
images
|
||||
}
|
||||
.flat_map(|i| {
|
||||
if let (Some(repository), Some(tag)) = (i.repository, i.tag) {
|
||||
vec![format!("{repository}:{tag}")]
|
||||
} else {
|
||||
i.names
|
||||
.into_iter()
|
||||
.filter_map(|i| i.strip_prefix("docker.io/").map(|s| s.to_owned()))
|
||||
.collect()
|
||||
}
|
||||
})
|
||||
.filter_map(|i| {
|
||||
i.strip_suffix(&format!(":{}", manifest.version))
|
||||
.map(|s| s.to_owned())
|
||||
})
|
||||
.filter_map(|i| {
|
||||
i.strip_prefix(&format!("start9/{}/", manifest.id))
|
||||
.map(|s| s.to_owned())
|
||||
}) {
|
||||
new_manifest.images.push(image.parse()?);
|
||||
let sqfs_path = images_dir.join(&image).with_extension("squashfs");
|
||||
let image_name = format!("start9/{}/{}:{}", manifest.id, image, manifest.version);
|
||||
let id = String::from_utf8(
|
||||
Command::new(CONTAINER_TOOL)
|
||||
.arg("create")
|
||||
.arg(&image_name)
|
||||
.invoke(ErrorKind::Docker)
|
||||
.await?,
|
||||
)?;
|
||||
let env = String::from_utf8(
|
||||
Command::new(CONTAINER_TOOL)
|
||||
.arg("run")
|
||||
.arg("--rm")
|
||||
.arg("--entrypoint")
|
||||
.arg("env")
|
||||
.arg(&image_name)
|
||||
.invoke(ErrorKind::Docker)
|
||||
.await?,
|
||||
)?
|
||||
.lines()
|
||||
.filter(|l| {
|
||||
l.trim()
|
||||
.split_once("=")
|
||||
.map_or(false, |(v, _)| !SKIP_ENV.contains(&v))
|
||||
})
|
||||
.join("\n")
|
||||
+ "\n";
|
||||
let workdir = Path::new(
|
||||
String::from_utf8(
|
||||
Command::new(CONTAINER_TOOL)
|
||||
.arg("run")
|
||||
.arg("--rm")
|
||||
.arg("--entrypoint")
|
||||
.arg("pwd")
|
||||
.arg(&image_name)
|
||||
.invoke(ErrorKind::Docker)
|
||||
.await?,
|
||||
)?
|
||||
.trim(),
|
||||
)
|
||||
.to_owned();
|
||||
Command::new("bash")
|
||||
.arg("-c")
|
||||
.arg(format!(
|
||||
"{CONTAINER_TOOL} export {id} | mksquashfs - {sqfs} -tar",
|
||||
id = id.trim(),
|
||||
sqfs = sqfs_path.display()
|
||||
))
|
||||
.invoke(ErrorKind::Docker)
|
||||
.await?;
|
||||
Command::new(CONTAINER_TOOL)
|
||||
.arg("rm")
|
||||
.arg(id.trim())
|
||||
.invoke(ErrorKind::Docker)
|
||||
.await?;
|
||||
archive.insert_path(
|
||||
Path::new("images")
|
||||
.join(&*ARCH)
|
||||
.join(&image)
|
||||
.with_extension("squashfs"),
|
||||
Entry::file(CompatSource::File(sqfs_path)),
|
||||
)?;
|
||||
archive.insert_path(
|
||||
Path::new("images")
|
||||
.join(&*ARCH)
|
||||
.join(&image)
|
||||
.with_extension("env"),
|
||||
Entry::file(CompatSource::Buffered(Vec::from(env).into())),
|
||||
)?;
|
||||
archive.insert_path(
|
||||
Path::new("images")
|
||||
.join(&*ARCH)
|
||||
.join(&image)
|
||||
.with_extension("json"),
|
||||
Entry::file(CompatSource::Buffered(
|
||||
serde_json::to_vec(&serde_json::json!({
|
||||
"workdir": workdir
|
||||
}))
|
||||
.with_kind(ErrorKind::Serialization)?
|
||||
.into(),
|
||||
)),
|
||||
)?;
|
||||
}
|
||||
Command::new(CONTAINER_TOOL)
|
||||
.arg("image")
|
||||
.arg("prune")
|
||||
.arg("-af")
|
||||
.invoke(ErrorKind::Docker)
|
||||
.await?;
|
||||
|
||||
// assets
|
||||
let asset_dir = scratch_dir.join("assets");
|
||||
tokio::fs::create_dir_all(&asset_dir).await?;
|
||||
tokio_tar::Archive::new(reader.assets().await?)
|
||||
.unpack(&asset_dir)
|
||||
.await?;
|
||||
for (asset_id, _) in manifest
|
||||
.volumes
|
||||
.iter()
|
||||
.filter(|(_, v)| matches!(v, Volume::Assets { .. }))
|
||||
{
|
||||
let assets_path = asset_dir.join(&asset_id);
|
||||
let sqfs_path = assets_path.with_extension("squashfs");
|
||||
Command::new("mksquashfs")
|
||||
.arg(&assets_path)
|
||||
.arg(&sqfs_path)
|
||||
.invoke(ErrorKind::Filesystem)
|
||||
.await?;
|
||||
archive.insert_path(
|
||||
Path::new("assets").join(&asset_id),
|
||||
Entry::file(CompatSource::File(sqfs_path)),
|
||||
)?;
|
||||
}
|
||||
|
||||
// javascript
|
||||
let js_dir = scratch_dir.join("javascript");
|
||||
let sqfs_path = js_dir.with_extension("squashfs");
|
||||
tokio::fs::create_dir_all(&js_dir).await?;
|
||||
if let Some(mut scripts) = reader.scripts().await? {
|
||||
let mut js_file = File::create(js_dir.join("embassy.js")).await?;
|
||||
tokio::io::copy(&mut scripts, &mut js_file).await?;
|
||||
js_file.sync_all().await?;
|
||||
}
|
||||
{
|
||||
let mut js_file = File::create(js_dir.join("embassyManifest.json")).await?;
|
||||
js_file
|
||||
.write_all(&serde_json::to_vec(&manifest_raw).with_kind(ErrorKind::Serialization)?)
|
||||
.await?;
|
||||
js_file.sync_all().await?;
|
||||
}
|
||||
Command::new("mksquashfs")
|
||||
.arg(&js_dir)
|
||||
.arg(&sqfs_path)
|
||||
.invoke(ErrorKind::Filesystem)
|
||||
.await?;
|
||||
archive.insert_path(
|
||||
Path::new("javascript.squashfs"),
|
||||
Entry::file(CompatSource::File(sqfs_path)),
|
||||
)?;
|
||||
|
||||
archive.insert_path(
|
||||
"manifest.json",
|
||||
Entry::file(CompatSource::Buffered(
|
||||
serde_json::to_vec::<Manifest>(&new_manifest)
|
||||
.with_kind(ErrorKind::Serialization)?
|
||||
.into(),
|
||||
)),
|
||||
)?;
|
||||
|
||||
let mut s9pk = S9pk::new(MerkleArchive::new(archive, signer), None).await?;
|
||||
let mut dest_file = File::create(destination.as_ref()).await?;
|
||||
s9pk.serialize(&mut dest_file, false).await?;
|
||||
dest_file.sync_all().await?;
|
||||
|
||||
scratch_dir.delete().await?;
|
||||
|
||||
Ok(S9pk::deserialize(&MultiCursorFile::from(
|
||||
File::open(destination.as_ref()).await?,
|
||||
))
|
||||
.await?)
|
||||
}
|
||||
}
|
||||
|
||||
impl From<ManifestV1> for Manifest {
|
||||
fn from(value: ManifestV1) -> Self {
|
||||
let default_url = value.upstream_repo.clone();
|
||||
Self {
|
||||
id: value.id,
|
||||
title: value.title,
|
||||
version: value.version,
|
||||
release_notes: value.release_notes,
|
||||
license: value.license,
|
||||
replaces: value.replaces,
|
||||
wrapper_repo: value.wrapper_repo,
|
||||
upstream_repo: value.upstream_repo,
|
||||
support_site: value.support_site.unwrap_or_else(|| default_url.clone()),
|
||||
marketing_site: value.marketing_site.unwrap_or_else(|| default_url.clone()),
|
||||
donation_url: value.donation_url,
|
||||
description: value.description,
|
||||
images: Vec::new(),
|
||||
assets: value
|
||||
.volumes
|
||||
.iter()
|
||||
.filter(|(_, v)| matches!(v, &&Volume::Assets { .. }))
|
||||
.map(|(id, _)| id.clone())
|
||||
.collect(),
|
||||
volumes: value
|
||||
.volumes
|
||||
.iter()
|
||||
.filter(|(_, v)| matches!(v, &&Volume::Data { .. }))
|
||||
.map(|(id, _)| id.clone())
|
||||
.collect(),
|
||||
alerts: value.alerts,
|
||||
dependencies: value.dependencies,
|
||||
hardware_requirements: value.hardware_requirements,
|
||||
git_hash: value.git_hash,
|
||||
os_version: value.eos_version,
|
||||
has_config: value.config.is_some(),
|
||||
}
|
||||
}
|
||||
}
|
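The tag-filtering chain in `from_v1` above keeps only tags of the form `start9/<pkg>/<image>:<version>` and recovers the bare image id from them. A minimal standalone sketch of the same `strip_suffix`/`strip_prefix` logic (the `image_key` function name is hypothetical, not part of the diff):

```rust
/// Extracts the image id from a tag shaped like `start9/<pkg>/<image>:<version>`,
/// mirroring the strip_suffix/strip_prefix chain in `from_v1`.
fn image_key(tag: &str, pkg: &str, version: &str) -> Option<String> {
    tag.strip_suffix(&format!(":{version}"))?
        .strip_prefix(&format!("start9/{pkg}/"))
        .map(|s| s.to_owned())
}

fn main() {
    assert_eq!(
        image_key("start9/hello-world/main:0.3.0", "hello-world", "0.3.0").as_deref(),
        Some("main")
    );
    // Tags for other packages or versions fall out of the chain entirely.
    assert_eq!(
        image_key("docker.io/library/alpine:3.18", "hello-world", "0.3.0"),
        None
    );
    println!("ok");
}
```

The `?` after `strip_suffix` short-circuits to `None` for any tag without the expected version suffix, so unrelated images loaded into the container runtime are simply skipped.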
95 core/startos/src/s9pk/v2/manifest.rs Normal file
@@ -0,0 +1,95 @@
use std::collections::BTreeMap;

use color_eyre::eyre::eyre;
use helpers::const_true;
pub use models::PackageId;
use models::{ImageId, VolumeId};
use serde::{Deserialize, Serialize};
use url::Url;

use crate::dependencies::Dependencies;
use crate::prelude::*;
use crate::s9pk::v1::git_hash::GitHash;
use crate::util::serde::Regex;
use crate::util::Version;
use crate::version::{Current, VersionT};

fn current_version() -> Version {
    Current::new().semver().into()
}

#[derive(Clone, Debug, Deserialize, Serialize, HasModel)]
#[serde(rename_all = "camelCase")]
#[model = "Model<Self>"]
pub struct Manifest {
    pub id: PackageId,
    pub title: String,
    pub version: Version,
    pub release_notes: String,
    pub license: String, // type of license
    #[serde(default)]
    pub replaces: Vec<String>,
    pub wrapper_repo: Url,
    pub upstream_repo: Url,
    pub support_site: Url,
    pub marketing_site: Url,
    pub donation_url: Option<Url>,
    pub description: Description,
    pub images: Vec<ImageId>,
    pub assets: Vec<VolumeId>, // TODO: AssetsId
    pub volumes: Vec<VolumeId>,
    #[serde(default)]
    pub alerts: Alerts,
    #[serde(default)]
    pub dependencies: Dependencies,
    #[serde(default)]
    pub hardware_requirements: HardwareRequirements,
    #[serde(default)]
    pub git_hash: Option<GitHash>,
    #[serde(default = "current_version")]
    pub os_version: Version,
    #[serde(default = "const_true")]
    pub has_config: bool,
}

#[derive(Clone, Debug, Default, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
pub struct HardwareRequirements {
    #[serde(default)]
    device: BTreeMap<String, Regex>,
    ram: Option<u64>,
    pub arch: Option<Vec<String>>,
}

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct Description {
    pub short: String,
    pub long: String,
}
impl Description {
    pub fn validate(&self) -> Result<(), Error> {
        if self.short.chars().skip(160).next().is_some() {
            return Err(Error::new(
                eyre!("Short description must be 160 characters or less."),
                crate::ErrorKind::ValidateS9pk,
            ));
        }
        if self.long.chars().skip(5000).next().is_some() {
            return Err(Error::new(
                eyre!("Long description must be 5000 characters or less."),
                crate::ErrorKind::ValidateS9pk,
            ));
        }
        Ok(())
    }
}

#[derive(Clone, Debug, Default, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
pub struct Alerts {
    pub install: Option<String>,
    pub uninstall: Option<String>,
    pub restore: Option<String>,
    pub start: Option<String>,
    pub stop: Option<String>,
}
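`Description::validate` bounds the description lengths with `chars().skip(n).next()` rather than `len()`, so multi-byte characters count once each and the scan stops after at most `n + 1` characters. A standalone sketch of that check (the `within_limit` name is hypothetical):

```rust
/// True if `s` contains at most `max` characters; stops scanning after
/// `max + 1` chars instead of walking the whole string, and counts
/// characters rather than bytes.
fn within_limit(s: &str, max: usize) -> bool {
    // `nth(max)` is Some only if a character exists *after* the first `max`.
    s.chars().nth(max).is_none()
}

fn main() {
    assert!(within_limit("short enough", 160));
    assert!(!within_limit(&"x".repeat(161), 160));
    // A multi-byte char counts as one character, not several bytes.
    assert!(within_limit(&"é".repeat(160), 160));
    println!("ok");
}
```

`chars().nth(max)` is equivalent to the diff's `chars().skip(max).next()`; using byte length (`s.len()`) here would wrongly reject short descriptions containing non-ASCII text.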
@@ -1,23 +1,178 @@
 use std::ffi::OsStr;
 use std::path::Path;
 use std::sync::Arc;

 use imbl_value::InternedString;
 use models::{mime, DataUrl, PackageId};
 use tokio::fs::File;

 use crate::prelude::*;
 use crate::s9pk::manifest::Manifest;
 use crate::s9pk::merkle_archive::file_contents::FileContents;
 use crate::s9pk::merkle_archive::sink::Sink;
-use crate::s9pk::merkle_archive::source::{ArchiveSource, FileSource, Section};
-use crate::s9pk::merkle_archive::MerkleArchive;
+use crate::s9pk::merkle_archive::source::multi_cursor_file::MultiCursorFile;
+use crate::s9pk::merkle_archive::source::{ArchiveSource, DynFileSource, FileSource, Section};
+use crate::s9pk::merkle_archive::{Entry, MerkleArchive};
 use crate::ARCH;

 const MAGIC_AND_VERSION: &[u8] = &[0x3b, 0x3b, 0x02];

-pub struct S9pk<S>(MerkleArchive<S>);
+pub mod compat;
+pub mod manifest;

/**
/
├── manifest.json
├── icon.<ext>
├── LICENSE.md
├── instructions.md
├── javascript.squashfs
├── assets
│   └── <id>.squashfs (xN)
└── images
    └── <arch>
        ├── <id>.env (xN)
        └── <id>.squashfs (xN)
*/

fn priority(s: &str) -> Option<usize> {
    match s {
        "manifest.json" => Some(0),
        a if Path::new(a).file_stem() == Some(OsStr::new("icon")) => Some(1),
        "LICENSE.md" => Some(2),
        "instructions.md" => Some(3),
        "javascript.squashfs" => Some(4),
        "assets" => Some(5),
        "images" => Some(6),
        _ => None,
    }
}
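The `priority` ranking above feeds the `sort_by` call in `deserialize` further down this file, so ranked entries come first in a fixed order and unranked entries trail. A minimal standalone sketch of that comparator over `Option<usize>` (entry list abbreviated for illustration):

```rust
// Abbreviated version of the priority table from the diff.
fn priority(s: &str) -> Option<usize> {
    match s {
        "manifest.json" => Some(0),
        "LICENSE.md" => Some(2),
        "instructions.md" => Some(3),
        _ => None,
    }
}

fn main() {
    let mut names = vec!["instructions.md", "extra.txt", "manifest.json", "LICENSE.md"];
    // Same ordering as the diff's comparator: Some(_) sorts by rank,
    // and None (unranked entries) sorts after every Some(_).
    names.sort_by(|a, b| match (priority(a), priority(b)) {
        (Some(a), Some(b)) => a.cmp(&b),
        (Some(_), None) => std::cmp::Ordering::Less,
        (None, Some(_)) => std::cmp::Ordering::Greater,
        (None, None) => std::cmp::Ordering::Equal,
    });
    assert_eq!(names, ["manifest.json", "LICENSE.md", "instructions.md", "extra.txt"]);
    println!("ok");
}
```

Putting `manifest.json` first means a streaming reader can parse the manifest before the bulk of the archive (squashfs images) arrives.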

fn filter(p: &Path) -> bool {
    match p.iter().count() {
        1 if p.file_name() == Some(OsStr::new("manifest.json")) => true,
        1 if p.file_stem() == Some(OsStr::new("icon")) => true,
        1 if p.file_name() == Some(OsStr::new("LICENSE.md")) => true,
        1 if p.file_name() == Some(OsStr::new("instructions.md")) => true,
        1 if p.file_name() == Some(OsStr::new("javascript.squashfs")) => true,
        1 if p.file_name() == Some(OsStr::new("assets")) => true,
        1 if p.file_name() == Some(OsStr::new("images")) => true,
        2 if p.parent() == Some(Path::new("assets")) => {
            p.extension().map_or(false, |ext| ext == "squashfs")
        }
        2 if p.parent() == Some(Path::new("images")) => p.file_name() == Some(OsStr::new(&*ARCH)),
        3 if p.parent() == Some(&*Path::new("images").join(&*ARCH)) => p
            .extension()
            .map_or(false, |ext| ext == "squashfs" || ext == "env"),
        _ => false,
    }
}

#[derive(Clone)]
pub struct S9pk<S = Section<MultiCursorFile>> {
    manifest: Manifest,
    manifest_dirty: bool,
    archive: MerkleArchive<S>,
    size: Option<u64>,
}
impl<S> S9pk<S> {
    pub fn as_manifest(&self) -> &Manifest {
        &self.manifest
    }
    pub fn as_manifest_mut(&mut self) -> &mut Manifest {
        self.manifest_dirty = true;
        &mut self.manifest
    }
    pub fn as_archive(&self) -> &MerkleArchive<S> {
        &self.archive
    }
    pub fn as_archive_mut(&mut self) -> &mut MerkleArchive<S> {
        &mut self.archive
    }
    pub fn size(&self) -> Option<u64> {
        self.size
    }
}

impl<S: FileSource> S9pk<S> {
    pub async fn new(archive: MerkleArchive<S>, size: Option<u64>) -> Result<Self, Error> {
        let manifest = extract_manifest(&archive).await?;
        Ok(Self {
            manifest,
            manifest_dirty: false,
            archive,
            size,
        })
    }

    pub async fn icon(&self) -> Result<(InternedString, FileContents<S>), Error> {
        let mut best_icon = None;
        for (path, icon) in self
            .archive
            .contents()
            .with_stem("icon")
            .filter(|(p, _)| {
                Path::new(&*p)
                    .extension()
                    .and_then(|e| e.to_str())
                    .and_then(mime)
                    .map_or(false, |e| e.starts_with("image/"))
            })
            .filter_map(|(k, v)| v.into_file().map(|f| (k, f)))
        {
            let size = icon.size().await?;
            best_icon = match best_icon {
                Some((s, a)) if s >= size => Some((s, a)),
                _ => Some((size, (path, icon))),
            };
        }
        best_icon
            .map(|(_, a)| a)
            .ok_or_else(|| Error::new(eyre!("no icon found in archive"), ErrorKind::ParseS9pk))
    }

    pub async fn icon_data_url(&self) -> Result<DataUrl<'static>, Error> {
        let (name, contents) = self.icon().await?;
        let mime = Path::new(&*name)
            .extension()
            .and_then(|e| e.to_str())
            .and_then(mime)
            .unwrap_or("image/png");
        DataUrl::from_reader(mime, contents.reader().await?, Some(contents.size().await?)).await
    }

    pub async fn serialize<W: Sink>(&mut self, w: &mut W, verify: bool) -> Result<(), Error> {
        use tokio::io::AsyncWriteExt;

        w.write_all(MAGIC_AND_VERSION).await?;
-        self.0.serialize(w, verify).await?;
+        if !self.manifest_dirty {
            self.archive.serialize(w, verify).await?;
        } else {
            let mut dyn_s9pk = self.clone().into_dyn();
            dyn_s9pk.as_archive_mut().contents_mut().insert_path(
                "manifest.json",
                Entry::file(DynFileSource::new(Arc::<[u8]>::from(
                    serde_json::to_vec(&self.manifest).with_kind(ErrorKind::Serialization)?,
                ))),
            )?;
            dyn_s9pk.archive.serialize(w, verify).await?;
        }

        Ok(())
    }

    pub fn into_dyn(self) -> S9pk<DynFileSource> {
        S9pk {
            manifest: self.manifest,
            manifest_dirty: self.manifest_dirty,
            archive: self.archive.into_dyn(),
            size: self.size,
        }
    }
}

impl<S: ArchiveSource> S9pk<Section<S>> {
    #[instrument(skip_all)]
    pub async fn deserialize(source: &S) -> Result<Self, Error> {
        use tokio::io::AsyncReadExt;

@@ -36,6 +191,46 @@ impl<S: ArchiveSource> S9pk<Section<S>> {
            "Invalid Magic or Unexpected Version"
        );

-        Ok(Self(MerkleArchive::deserialize(source, &mut header).await?))
+        let mut archive = MerkleArchive::deserialize(source, &mut header).await?;
+
+        archive.filter(filter)?;
+
+        archive.sort_by(|a, b| match (priority(a), priority(b)) {
+            (Some(a), Some(b)) => a.cmp(&b),
+            (Some(_), None) => std::cmp::Ordering::Less,
+            (None, Some(_)) => std::cmp::Ordering::Greater,
+            (None, None) => std::cmp::Ordering::Equal,
+        });
+
+        Self::new(archive, source.size().await).await
    }
}
impl S9pk {
    pub async fn from_file(file: File) -> Result<Self, Error> {
        Self::deserialize(&MultiCursorFile::from(file)).await
    }
    pub async fn open(path: impl AsRef<Path>, id: Option<&PackageId>) -> Result<Self, Error> {
        let res = Self::from_file(tokio::fs::File::open(path).await?).await?;
        if let Some(id) = id {
            ensure_code!(
                &res.as_manifest().id == id,
                ErrorKind::ValidateS9pk,
                "manifest.id does not match expected"
            );
        }
        Ok(res)
    }
}

async fn extract_manifest<S: FileSource>(archive: &MerkleArchive<S>) -> Result<Manifest, Error> {
    let manifest = serde_json::from_slice(
        &archive
            .contents()
            .get_path("manifest.json")
            .or_not_found("manifest.json")?
            .read_file_to_vec()
            .await?,
    )
    .with_kind(ErrorKind::Deserialization)?;
    Ok(manifest)
}