@dalias @eniko Personally, I'll sooner or later have to engage the whole #ARMv5 / #ARM11r7 / #RaspberryPi architecture anyway with OS/1337.
But I know a #readonly OS isn't practical, and where it is, people already use #iPXE & #iSCSI for #diskless setups!
/boot on an SD card, and the rest could be on any USB mass storage device (e.g. SSD or even HDD)... That being said, Raspberry Pis do have the key advantage of being by far the best in terms of #documentation.
Also most corp/org/edu networks only back up the $HOME
directory, and sometimes even allow syncing it across distros & keep it across version updates, so all the settings, addons and stuff remain where they are: in said /home/
subfolders!
In fact most places with a sizeable #Linux #Desktop landscape will just keep the /home/ directory on a redundant #iSCSI SAN and #netboot their #DisklessWorkstation|s via #iPXE. That way, burglars stealing devices will most likely end up with a locked-down machine (anything but booting the preset network targets won't work without the admin password!) that is a paperweight to them, and especially no data, which is crucial when it comes to #ITsec, #InfoSec, #OpSec & #ComSec.
Cuz it's way easier to secure 1-5 server rooms than thousands of publicly accessible machines on multiple campuses.
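For the curious: the boot flow described above fits in a few lines of iPXE script. A hedged sketch only; the portal address and IQNs here are made-up placeholders, not any real site's config:

```text
#!ipxe
# Minimal sketch: boot a diskless workstation from an iSCSI SAN.
# Portal IP and IQNs are hypothetical placeholders.
dhcp                                                      # get an address on the LAN
set initiator-iqn iqn.2025-01.org.example:ws-${mac:hexhyp}
sanboot iscsi:10.0.42.10::::iqn.2025-01.org.example:ws-root || shell
```

`sanboot` hands the iSCSI LUN to the booting OS as its root disk; if the preset target is unreachable, the machine just drops to an iPXE shell instead of booting anything local.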
@uastronomer it's something I did implement in the past (albeit with #KVM + #Proxmox, but the steps are similar enough):
You can separate #Storage and #Compute, given you have a Storage-LAN that is fast enough (and does at least 9k if not 64k Jumbo Frames): keep the "Compute Nodes" entirely #diskless (booting via #iPXE from the #SAN) and then mount the storage via #iSCSI or #Ceph.
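On Linux the node-side plumbing for that is small; a sketch, where the interface name, portal address and IQN are assumptions, not from the thread:

```shell
# Enable 9k jumbo frames on the dedicated storage NIC, then log in to the SAN.
ip link set dev ens1 mtu 9000
iscsiadm -m discovery -t sendtargets -p 10.0.42.10        # list targets on the portal
iscsiadm -m node -T iqn.2025-01.org.example:compute01-root -p 10.0.42.10 --login
# The LUN now appears as a local block device (e.g. /dev/sdb) and can be
# formatted or handed to the hypervisor as VM storage.
```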
Did a bigger project (easily 8 digits in hardware, as per MSRP) where an Employer/Client did a #CloudExit amidst escalating costs, with the #ROI within quarters (if not months at the predicted growth rate)...
@uastronomer now if you separate the compute and storage layers with diskless compute nodes accessing the filesystem via #iSCSI or #Ceph, you can even do super-fast updates by merely rebooting the jail/host...
Another weird idea I'd like to try:
adding a 3rd member to my #zfs mirror, but it's #iSCSI over fast ethernet over #powerline - put it in the cellar, 3 storeys down, see what happens.
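If the cellar box exports its disk as an iSCSI target, the attach itself is short; a sketch with placeholder pool, device and IQN names (ZFS will happily resilver over whatever slow link the LUN sits on):

```shell
# Log in to the (hypothetical) iSCSI target exported by the box in the cellar...
iscsiadm -m node -T iqn.2025-01.org.example:cellar-disk -p 192.168.1.50 --login
# ...then attach the new LUN (say it came up as /dev/sdc) alongside an
# existing member, turning the 2-way mirror into a 3-way one.
zpool attach tank /dev/sda /dev/sdc
zpool status tank        # watch the resilver crawl along at powerline speed
```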
I've got everything I'd need for a test setup - that little Wyse 3040 thin client has 1 USB3 port…
*record scratch, rewind*
I think #virtio NICs can be manually throttled to a fraction of "fast ethernet".
Hmm, virtualised, the test would exclude the weird old powerline bridges, though.
Back to Plan A, then!
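One way to fake "fast ethernet" for such a virtualised test without touching the guest: rate-limit the VM's tap device on the host with `tc`. The device name vnet0 is an assumption:

```shell
# Cap a VM's virtio NIC at ~100 Mbit/s using a token bucket filter on the
# host-side tap interface (vnet0 is a placeholder for the actual device).
tc qdisc add dev vnet0 root tbf rate 100mbit burst 32kbit latency 400ms
tc qdisc show dev vnet0      # verify the shaper is in place
```

libvirt can do the same declaratively via the `<bandwidth>` element on the interface definition, but a throwaway `tc` line is easier to undo after the experiment.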
@SweetAIBelle @OS1337
So instead of relying on some non-#reproducible system images for cheap SBCs, why not make something like #OS1337 that is compact enough that one can easily build everything one wants on top of it, to at least get started with something.
Sometimes one just needs to boot a system, check its hardware and #ddrescue something off the internal hard drive because one doesn't have any other system that can run it...
Answering All Your iSCSI Scanner Questions - iSCSI is a widely used protocol for exposing SCSI devices over a network connectio... - https://hackaday.com/2024/05/12/answering-all-your-iscsi-scanner-questions/ #documentscanner #retrocomputing #flatbedscanner #computerhacks #filmscanner #hardware #scanner #iscsi #scsi
@melissabeartrix then consider #iSCSI or #FCoE (#FibreChannel over #Ethernet) over #OM5-fiber-based #100GBASE-SR1.2 as per 802.3bm-2015 Ethernet.
Just make sure your devices have #QSFP28 ports to plug in the LC-Duplex connectors of the fibers...
How to Install an #iSCSI Storage Server on #Ubuntu 22.04
https://www.howtoforge.com/how-to-install-iscsi-storage-server-on-ubuntu-22-04/
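The howto boils down to a handful of `targetcli` calls; a hedged sketch (the backing file, size and IQN are placeholders, and Ubuntu ships the tool as `targetcli-fb`):

```shell
apt install targetcli-fb                     # LIO userspace tooling on Ubuntu
# Create a 10G file-backed LUN and export it as an iSCSI target:
targetcli backstores/fileio create disk0 /srv/iscsi/disk0.img 10G
targetcli iscsi/ create iqn.2025-01.org.example:disk0
targetcli iscsi/iqn.2025-01.org.example:disk0/tpg1/luns create /backstores/fileio/disk0
targetcli saveconfig                         # persist the config across reboots
```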
WD barely won, but I think I'm ignoring 3 of you anyway and going for Exos. Now I'm deciding between:
5x Exos X16 20T (a single raid-z2 60T pool for bulk smb/nfs and iscsi and jails and..)
or
3x Exos X16 14T (raid-z1, 28T for #k3s iscsi PVs) plus 5x Exos X14 10T (raid-z2, 30T for bulk file storage and db jails)
The difference between them is only about $10 and 2T loss of usable bulk space, but there might be an advantage to moving the #k3s cluster pvs and such away from the bulk nfs/samba archival storage.
I can't use the entire pool with iscsi because it's huge and my clusters aren't, but in general it would be nice to have more variety in storage options. (Z2 vs Z1 depending on value/performance)
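For anyone checking the numbers: raid-z usable capacity is roughly (disks - parity) x per-disk size, before ZFS overhead and TB/TiB shrinkage. A quick sketch of the two options above:

```shell
# Rough raid-z sizing: usable = (disks - parity) * per-disk terabytes.
raidz_usable() { echo $(( ($1 - $2) * $3 )); }     # args: disks, parity, size_T
optA=$(raidz_usable 5 2 20)                        # 5x20T raid-z2 -> 60T
optB=$(( $(raidz_usable 3 1 14) + $(raidz_usable 5 2 10) ))   # 28T + 30T -> 58T
echo "option A: ${optA}T, option B: ${optB}T"
```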
Please jump in and tell me why I'm wrong before I spend all of my money screwing up.
#truenas #k3s #k8s #iscsi #storage #homelab #snarkhome #sata #zfs #nas #spinningrust #seagate
Some #Downscaling is on the cards for me. In the foreseeable future I'll be parting with my one #Server: a #lenovo #ThinkstationC20, rack-capable (maybe I even still have a #Railkit). With two #Xeon processors (16 threads) and an additional four-port (Gigabit) network card WITH its own BIOS (for boot targets / #iscsi).
I'd let the whole thing go for €200 (the kids' piggy banks) plus shipping, paid by the buyer.
If you're interested, please get in touch and bookmark this toot. More photos to follow.
#forsale
@landley *nods in agreement*
#SMB / #CIFS is just predominant because it's the bare minimum, aka the lowest-common-denominator standard, for getting files exchanged and providing transparent access to clients.
#NFS & #iSCSI failed outside of enterprises, where the configuration overhead is justified by the performance and granularity one can get.
#Apple's proprietary replacements for IETF standards were just the same as what Microsoft did, but with documentation.
@nixCraft You forgot #Ceph & #NFS as well as #GlusterFS, but I guess those are seen less as #filesystems and more as protocols providing data access, like #SFTP, #FTP & #iSCSI...
Gotta love the smell of a healthy SAN in the morning. @caseyliss let's talk Synology upgrades and migrations! #synology #SynologyNAS @atpfm #homelab #iscsi #networking