Hosting Lemmy with Traefik as a reverse proxy
Just wrote up a little post for those who want to self-host a Lemmy instance with docker-compose and Traefik.

This Week in Self-Hosted (23 February 2024)
Not my blog, just a good community share :)

With free ESXi gone, not shocking but sad, I am now about to move away from a virtualisation platform I've used for a quarter of a century. Never having really tried the alternatives: is there anything out there that looks and feels like ESXi? I don't host anything exceptional, and I don't need production quality for myself, but in all seriousness, what we run at home ends up at work at some point, so there's that aspect too. Thanks for your input!

Inventorying high value items with receipts
Is there a FOSS program where I can inventory my high value items in case there is an insurance claim? I was thinking of the item, the picture of the item and serial number, maybe the UPC, and then an attachment of the receipt. I'm guessing some kind of database that integrates file attachments per item.

ssh into raspberry without a router
Hi! I hope this is the right community to ask. Next week I will be on the road for 5 days for work. I have quite some spare time, so I thought I would dig up my Raspberry Pi project again and hopefully finish it. I need it with me, because it controls some hardware, so a VPN to home won't do. The only option I could think of is to connect the Pi directly to my laptop via an ethernet cable. As far as I understood from some research, I would need to install and run a DHCP server on my laptop, which was not recommended. Alternatively, it was suggested to just take a router and plug both devices into it. I don't really have a spare router, so that's not an option either. To be honest, it confuses me a little that there does not seem to be a standard for connecting to a device directly over a single cable and logging in with a user account. Any recommendations on how I can work on the Pi over SSH? Thanks a lot!
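For what it's worth, no DHCP server is usually needed: Raspberry Pi OS's dhcpcd falls back to a 169.254.x.x link-local address when no DHCP answer arrives, so a direct cable plus `ssh pi@raspberrypi.local` (via mDNS) often works out of the box. If you'd rather pin addresses, a static-IP sketch for the Pi (the 192.168.50.x subnet here is an arbitrary example, not from the post):

```
# /etc/dhcpcd.conf on the Pi
interface eth0
static ip_address=192.168.50.2/24
```

Then manually give the laptop's ethernet interface 192.168.50.1/24 and connect with `ssh pi@192.168.50.2`.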

cross-posted from: > There's a new version of [Nephele WebDAV server]( (also on [Docker Hub]( that supports using an S3 compatible server as storage and encrypting filenames and file contents. > > This essentially means you can build your own cloud storage server leveraging something like Backblaze B2 for $6/TB/month, and that data is kept private through encryption. That's cheaper than Google Drive, _and_ no one can snoop on your files.

Up-to-date OpenSSL guide or tool for creating a certificate authority and self-signing TLS certificates?
Hello friends, just about every guide that comes up in my Google search for "How to create certificate authority with OpenSSL" seems to be out of date. In particular, they all guide me towards creating a certificate that gets rejected by the browser due to the deprecation of the "Common Name" field and the requirement of the "Subject Alternative Name" field. Does someone know a tool that creates a Certificate Authority and signs certificates with that CA? A tool that follows modern standards and gets accepted by browsers and other common web tools. Preferably something based on OpenSSL. If you know a guide that does this using OpenSSL, even better! But I have low hopes for this after going through dozens of guides that all have the same issue I mentioned above.

### Replies to Some Questions You Might Ask Me

#### Why not just correct those two fields you mention?

I want to make sure I am doing this right. I don't want to keep running into errors in the future. For example, I actually did try that, and the npm CLI rejected my certs without a good explanation (though the browser accepts them).

#### Why not Let's Encrypt?

This is for private services that are only accessible on a private network or VPN.

#### If this is for LAN- and VPN-only services, why do you need TLS?

TLS still has benefits. Without it, any device on the same network could compromise the security of the communication. Examples: a random webcam or accessory at your house, a Meta Quest VR headset, or even a compromised smartphone or computer.

#### Why not smallstep CA (or other ACME tools)?

I am not sure I want the added complexity. I only have 2 services requiring TLS now, and I don't believe I will need to scale much. I would have to set up a way to consume the ACME server. I am happier with a tool that just spits out the certificates, letting me manage them myself, instead of a whole service for managing certs. If I am overestimating the difficulty of this, please correct me.
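For reference, the SAN problem the post describes can be handled with plain OpenSSL. A minimal sketch (all file names, hostnames, and IPs here are placeholders, not from the post):

```shell
# 1. Create a CA key and self-signed CA certificate
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout ca.key -out ca.crt -subj "/CN=My Home CA"

# 2. Create a server key and certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=myserver.lan"

# 3. Sign it with a Subject Alternative Name extension --
#    modern browsers validate the SAN, not the Common Name
printf "subjectAltName=DNS:myserver.lan,IP:192.168.1.10\n" > san.ext
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 825 -sha256 -out server.crt -extfile san.ext

# 4. Check the chain and the SAN
openssl verify -CAfile ca.crt server.crt
openssl x509 -in server.crt -noout -ext subjectAltName
```

Import `ca.crt` into the browser/OS trust store; 825 days is used because some clients (notably Apple's) reject longer-lived leaf certificates. This is a sketch of one workable layout, not the only correct one.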

How “stable” (release cycle) does a server OS need to be? Experiences with CoreOS?
That's a question I always asked myself. Currently, I'm running Debian on both my servers, but I'm considering switching to Fedora CoreOS, since I already use Fedora Atomic on my desktop and feel very comfortable with it.

### The usual arguments for a "stable" host OS being better:

* Things not changing means less maintenance, and nothing breaks compatibility all of a sudden.
* Less chance of breakage.
* Services are up to date anyway, since they are usually containerized (e.g. Docker).
* And, for Debian especially, there's one of the biggest pools of services and documentation, since it's THE server OS.

My question is: how many of these advantages do I lose when I switch to something less stable (with more regular updates), in my case Fedora CoreOS?

---

### My general arguments for it:

* The host OS image is very minimal, and I think most core packages should run very reliably. In the worst case, if something breaks, I can always roll back. Even the, compared to the server image, "bloated" desktop OS (Silverblue) has been running extremely reliably and pretty much bug-free for me.
* I can always use Podman/Toolbx for services that were made for Debian, and for everything else there's Docker and more. So software availability shouldn't be an issue.
* I feel relatively comfortable using containers, and the security benefits in particular sound promising.

### Cons:

* I don't have much experience. Everything I do on my servers, e.g. getting a new service running, troubleshooting, etc., is hard for me.
* Because of that, I often don't have "workarounds" (e.g. using Toolbx instead of installing something on the host directly) in mind, due to the lack of experience.
* Distros other than Debian and a few others aren't the standard, so documentation and availability aren't as good.
* Containerization adds another layer of abstraction. For example, if my webcam doesn't work, is it because of a missing driver, Docker, the service, the cable not being plugged in, or something entirely different? Troubleshooting gets harder that way.

---

On my "proper" server I mainly use Nextcloud, installed as a Docker image. My Raspberry Pi, on the other hand, is only used as a print server, running OctoPrint for my 3D printer. I installed OctoPrint in the form of OctoPi, a Raspbian-based distro with OctoPrint pre-installed, which is the recommended way. With my "proper" server, I'm not really unhappy with Debian. It works and the server runs 24/7. I don't plan to change it for the time being. The Raspi especially looks quite a bit different, though. I think I will just try it and see if I like it.

### Why?

* It runs only rarely. Most of the time, the device is powered off. I only power it on a few times per month when I want to print something. This is actually pretty good, since the OS needs to reboot to apply updates, and it updates itself automatically, so I don't have to SSH into it from time to time, reducing maintenance.
* And, last but not least, I've lost my password. I can't log in or update anymore, so I have to reinstall anyway.

---

What is your opinion on all this?

Does anyone know anything about Solid pods?
I heard about this project years ago. Cool concept: standardized, interchangeable storage + identity that can be plugged into arbitrary apps. The idea is that your identity is tied to your data, and your data can be hosted anywhere so you can retain control over your data or use a simple provider. It was also created by Tim Berners-Lee, creator of the web. However, it doesn't seem to be gaining traction anywhere, even in the already-niche self-hosting community. From the GitHub (which was hard to find on the website!) I could see that it's being actively developed, including a new website redesign, but everything else seems stagnant. Their newsletter has no updates since 2021. There are only a small handful of apps listed on the site and most of them haven't been maintained since 2019 or earlier, and a lot are just things like "solid pod explorer" or "demo app". Anyone had any experience with it? Or know more about the situation? I would love to see this become more widely used.

I'm looking for a media player/OS for an ARM SBC that can stream from my Navidrome (Subsonic-compatible) music server and be controlled via either a web GUI or an Android app. I'd love to hear what you guys came up with! Currently really happy with my setup: I'm using Navidrome as my music server, along with Ultrasonic as my phone client. I've set up a (dumb/analog) speaker system in my workshop, and I'd like to be able to listen to music there, but I don't want to add a whole setup (be it an old laptop, or adding keyboard/mouse, monitor and such), and my phone no longer has a 3.5mm jack. I have a Raspberry Pi 3, an OrangePi Zero, and an OrangePi PC+. I'd rather use the Zero or the PC+, since they're kinda unstable/wonky and I don't trust them anymore for stuff I want to keep running 24/7 (like Pi-hole). I'm open to testing other music servers (Volumio maybe?) on my main homelab if that means being able to change the client/sink from the app/GUI (something like what Spotify does, where you can pick any client to stream to other clients/speakers).

Password Manager that supports multiple databases/syncing?
I currently use KeePass on both my PC and my phone. I like it because I can keep a copy of my DB on my phone and export it through a few different means. But I can't seem to find an option to actually sync my local DB against a remote one. I've thought about switching to Bitwarden, but from what I can see it uses a single DB with multiple connections. Is there a password manager that allows multiple databases (one PC, one phone) with easy syncing between them - specifically from my phone? Or a way to set up KeePass to allow syncing with a machine on my home network?

Please advise how to transfer P2P a 30 GB file
I have two computers with Windows 10. Preferably the simplest option, so that the people at the other end, with minimal IT competence, can figure it out.

Self Hosted IFTTT RSS Replacement
A couple of years ago, IFTTT did a thing where they asked people to sign up for premium: they could pay whatever they liked and keep the service forever. I didn't use many of the services, but thought it made sense to preserve something so useful in case I did need it. In the meantime, I would let it check some RSS feeds and alert me when certain keywords came up. Some time goes by and the ambitions of IFTTT grow; they rename the service I pay for to Legacy. Seems ominous, but I'm only using it for RSS, so nothing to worry about. Fast forward to yesterday: I get an email saying they're moving me to a new premium service and doubling what I pay. It left a bad taste in my mouth. I hate when companies do this, especially when they promised I could keep my old plan at the same price forever. Anyway, since they've clearly lost their minds in the pursuit of AI supremacy, I may as well just host this myself. So is there a self-hosted solution for RSS where I can get notifications whenever some feeds publish anything, and for others only when specific keywords come up? Something I can put in a Docker container on my RPi, set and forget.
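The keyword part of this is simple enough that, whichever self-hosted reader ends up doing the fetching, the matching logic can be scripted. A stdlib-only sketch (the feed XML and keyword below are made-up examples):

```python
import xml.etree.ElementTree as ET

# A made-up two-item RSS feed standing in for a real fetched feed
RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel><title>Demo</title>
<item><title>Plain news</title><link>http://example.com/1</link></item>
<item><title>Self-hosted RSS tools</title><link>http://example.com/2</link></item>
</channel></rss>"""

def matching_items(rss_text, keywords):
    """Return (title, link) pairs whose title contains any keyword, case-insensitively."""
    root = ET.fromstring(rss_text)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title", "")
        if any(k.lower() in title.lower() for k in keywords):
            hits.append((title, item.findtext("link", "")))
    return hits

print(matching_items(RSS, ["self-hosted"]))
# → [('Self-hosted RSS tools', 'http://example.com/2')]
```

In practice a cron job could fetch each feed, run a filter like this, and push hits to a notifier such as Gotify or ntfy; tools like FreshRSS or Huginn bundle the same idea into a full service.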

RIP my photos from 2017 and contacts from 2005
I recently decided to replace the SD card in my Raspberry Pi and reinstall the system. Without any special backups in place, I turned to rsync to duplicate `/var/lib/docker` with all my containers, including Nextcloud.

**Step #1:** I mounted an external hard drive to `/mnt/temp`.

**Step #2:** I used rsync to copy the data to `/mnt/tmp`. See the difference?

**Step #3:** I reformatted the SD card.

**Step #4:** I realized my mistake.

Moral: no one is immune to their own stupidity 😂

How to remotely reboot a Linux host if SSH fails to connect?
Edit 2: Thanks all for your responses! I have checked the logs and, based on that, removed tracker-miner-fs, as it's a search/index tool which I don't need. No idea why it took over all the memory. I'll also get a WiFi smart plug as a kill switch. Hopefully that solves it. Thanks again heaps! ---- I've got an HP ProDesk G3 which I'm using as a home server; I've installed Ubuntu on it. Earlier this week the services I host on it stopped (Immich & Frigate). I tried to SSH in, but it just hung after asking for a password. I could ping it, but it was otherwise unresponsive. I had to force reboot it manually. That's fine, but I'm not always at home. The chip has Intel vPro as far as I know, which could be an option, but I have no idea how it works; the documentation on the Intel site seems focused on enterprises. I tried to connect with RealVNC, which does not work, so I think I've got to install/configure something on the server first. I also asked Bing Chat, but it came up with non-existent packages & commands. Welcome your thoughts! /edit: I just found this, which seems to be exactly what I need:
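Besides vPro and a smart plug, many boards can handle this in hardware: if the machine exposes a watchdog timer (Intel chipsets usually do, via the iTCO driver), systemd can "pet" it and the board resets itself when userspace hangs. A config sketch, assuming systemd and a working `/dev/watchdog`:

```
# /etc/systemd/system.conf
RuntimeWatchdogSec=2min    # hardware reset if systemd stops petting the watchdog
RebootWatchdogSec=10min    # watchdog coverage during the reboot itself
```

(`RebootWatchdogSec` was called `ShutdownWatchdogSec` on older systemd versions.) This would have covered the hung-SSH case described above without any external hardware.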

Why are there so many spam-bot posts here?
Probably a dumb question, but I have to report pretty much the same post (some website link + some mentioned usernames, but always sent from different instances) multiple times a day. The weird thing is that this happens only here in this community, and not in any other one I subscribe to. Is this some targeted attack, because, due to the self-hosting, we're more valuable victims? Or is it just a time-shift thing, because the mods are in a different time zone and asleep when we report the posts? I think the latter isn't the case, since there are many active moderators here :) Is there something we can do about it?

Looking to build my first PC in almost 30 years; What should I be on the look out for?
It looks like the !buildapc community isn't super active, so I apologize for posting here. Mods, let me know if I should post there instead. I built my first PC when I was, I think, 10-11 years old. Built my next PC after that, and then sort of moved toward pre-made HP/Dell/etc. My last PC's mobo just gave out, and I'm looking to replace the whole thing. I've read over the last few years that prefabs from HP/Dell/etc. have gone to shit and don't really work like they used to. Since I'm looking to expand comfortably, I've been thinking of giving building my own another go. I remember when I was a young lad that there were two big pain points when putting a rig together: motherboard alignment with the case (I shorted two mobos by having them touch the bare metal of the grounded case; not sure how that happened, but it did) and CPU pin alignment, so you don't bend any pins when inserting the CPU into the socket. Since it's been several decades since my last build, what are some things I should be aware of? Things I should avoid? For example, I only recently learned what M.2 SSDs are. My desktop has (had) SATA 3.5" drives, only one of which is an SSD. I'll admit I am a bit overwhelmed by some of my choices. I've spent some time on [pcpartpicker]( and feel very overwhelmed by some of the options. Most of my time is spent in code development (primarily containers and Node). I am planning on installing Linux (Ubuntu, most likely) and I am hoping to tinker with some AI models, something I haven't been able to do with my now-broken desktop due to its age. For ML/AI, I know I'll need some sort of GPU, knowing only that NVIDIA cards require closed-source drivers. While I fully support FOSS, I'm not an OSS purist and fully accept that using closed-source drivers on Linux may not be avoidable. Happy to take recommendations on GPUs! Since I also host a myriad of self-hosted apps on my desktop, I know I'll need to beef up my RAM (I usually go for the max, or at least plan for the max).
My main requirements: - Intel i7 processor (I've tried i5s and they can't keep up with what I code; I know i9s are the latest hotness but don't think the price is worth it; I've also tried AMD processors before and had terrible luck. I'm willing to try them again but I'd need a GOOD recommendation) - At least 3 SATA ports so that I can carry my drives over - At least one M.2 port (I cannibalized a laptop I recycled recently and grabbed the 1TB M.2 card) - On-board Ethernet/NIC (on-board wifi/bluetooth not required, but won't complain if they have them) - Support at least 32 GB of RAM - GPU that can support some sort of ML/AI with DisplayPort (preferred) Nice to haves: - MoBo with front USB 3 ports but will accept USB 2 (C vs A doesn't matter) - On-board sound (I typically use headphones or bluetooth headset so I don't need anything fancy. I mostly listen to music when I code and occasionally do video calls.) I threw together this list: It didn't matter to me if it was in stock; just wanted a place to start. Advice is very much appreciated! EDIT: WOW!! I am shocked and humbled by the great advice I've gotten here. And you've given me a boost in confidence in doing this myself. Thank you all and I'll keep replying as I can.

Bad 4K Performance on Jellyfin
Hi all, looking for some help with the Jellyfin Media Player. For background, I've used Plex for years and had it working well. I'm trying out Jellyfin for all of the reasons you're already thinking of. One issue I'm having: I like uncompressed 4K HDR. I'm trying to play a large movie, one that Plex direct plays perfectly fine to my HTPC (2.5Gb networking through and through, direct access, all the basics have been checked). However, Jellyfin Media Player seems to stutter and drop frames. Not like "it stops and buffers", but more like a video game dropping down to 15fps. Is there a setting somewhere I'm missing to enable GPU support or something? I toggled OpenGL on and off, and it didn't seem to have an effect. The video says it's direct play, no transcode. Not sure what else it could be beyond hardware acceleration? Thanks!

Unraid Moves to Annual Subscription Pricing Model
- Unraid is switching to annual subscription pricing, offering Starter, Unleashed, and Lifetime licenses with optional extension fees for updates. - Existing Basic, Plus, and Pro licenses can be upgraded to higher levels of perpetual licenses. - This change may increase revenue for Lime Technology but could also make other NAS providers more appealing to users. Archive link:

Starting from zero
I'm interested in exploring the world of self-hosting, but most of the information I find is incredibly detailed and specific, such as what type of CPU performs better, etc. What I'm really looking for is an extremely basic square-one guide. I know basically nothing about networking and I don't really know any coding, but it seems like there are a lot of tools out there that might make this possible even for a dummy like me. Right now, my cloud computing is pretty typical, I think. I use OneDrive to sync my documents and old files. I need to be able to quickly access files on different devices, such as a PowerPoint created on one device and presented on another. On my phone I use Android, and my backups of downloads, photos, and other data (messages, etc.) are all on Google Drive / Google One. I'm willing to spend the time learning to an extent, but I'm not looking to become a network expert. I'm also willing to spend a little money on hardware or a subscription service if necessary. Ideally I'd like to be out of the subscription-service game, but the main goal is to be in charge of my own files. I have an old laptop running Linux to play around with and a fast, stable home internet connection. Eventually, I would like to not only be syncing my files, photos, and documents in real time, but also maybe try using it as an entertainment server to watch/listen to downloaded media on my home network. Is there such a thing as a guide for a total beginner starting from zero? Is this worth attempting, or will I quickly find myself frustrated and in way over my head? Or do I need to wait a little longer until more idiot-proof tools become available?

cross-posted from: > February 20, 2024 [piefedadmin]( writes: > > > For a very small instance with only a couple of concurrent users a CDN might not make much difference. But if you take a look at your web server logs you’ll quickly notice that every post / like / vote triggers a storm of requests from other instances to yours, looking up lots of different things. It’s easy to imagine how quickly this would overwhelm an instance once it gets even a little busy. > > > > One of the first web performance tools people reach for is to use a CDN, like Cloudflare. But how much difference will it make? In this video I show you my web server logs before and after and compare them. > > Read [How much difference does a CDN make to a fediverse instance?](

[SOLVED] Nextcloud Snap behind Caddy is responding with 301 Moved Permanently
Cross-posted to:

---

# Solution

I'm still not really sure exactly what the root cause of the issue was (I would appreciate it if someone could explain it to me), but I disabled HTTPS on the Nextcloud server

```
nextcloud.disable-https
```

and, all of a sudden, it started working. My Caddyfile simply contains the following:

```
{
    server-LAN-ip:80
}
```

# Original Post

I am trying to upgrade my existing Nextcloud server (installed as a Snap) so that it sits behind a reverse proxy. Originally, the Nextcloud server handled HTTPS with Let's Encrypt at ``; now, I would like Caddy to handle HTTPS with Let's Encrypt at `` and forward the traffic to the Nextcloud server. With my current setup, I am encountering an error where it says `301 Moved Permanently`. Does anyone have any ideas on how to fix or troubleshoot this?

`Caddyfile`:

```
{
    reverse_proxy
    header / Strict-Transport-Security max-age=31536000;
}
```

And here is the output of `curl -v`:

```
* Host was resolved.
* IPv6: (none)
* IPv4: public-ip
* Trying public-ip:443...
* Connected to (public-ip) port 443
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: none
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_CHACHA20_POLY1305_SHA256 / x25519 / id-ecPublicKey
* ALPN: server accepted h2
* Server certificate:
*  subject:
*  start date: Feb 21 06:09:01 2024 GMT
*  expire date: May 21 06:09:00 2024 GMT
*  subjectAltName: host "" matched cert's ""
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
*  Certificate level 0: Public key type EC/prime256v1 (256/128 Bits/secBits), signed using sha256WithRSAEncryption
*  Certificate level 1: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
*  Certificate level 2: Public key type RSA (4096/152 Bits/secBits), signed using sha256WithRSAEncryption
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* using HTTP/2
* [HTTP/2] [1] OPENED stream for
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority:]
* [HTTP/2] [1] [:path: /]
* [HTTP/2] [1] [user-agent: curl/8.6.0]
* [HTTP/2] [1] [accept: */*]
> GET / HTTP/2
> Host:
> User-Agent: curl/8.6.0
> Accept: */*
< HTTP/2 301
< alt-svc: h3="public-ip:443"; ma=2592000
< content-type: text/html; charset=iso-8859-1
< date: Wed, 21 Feb 2024 07:45:34 GMT
< location:
< server: Caddy
< server: Apache
< strict-transport-security: max-age=31536000;
< content-length: 250
<
301 Moved Permanently
<h1>Moved Permanently</h1>
<p>The document has moved here.</p>
* Connection #0 to host left intact
```
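For anyone landing here with the same 301 loop: Nextcloud behind a TLS-terminating proxy usually also needs to be told about the proxy, otherwise it keeps redirecting based on what it believes its own scheme is. With the snap this is done via `nextcloud.occ` (the IP below is a placeholder; this is a sketch of the usual reverse-proxy settings from Nextcloud's admin docs, not a confirmed fix for this exact setup):

```
sudo nextcloud.occ config:system:set overwriteprotocol --value="https"
sudo nextcloud.occ config:system:set trusted_proxies 0 --value="192.168.1.5"
```

With these set, Nextcloud generates `https://` URLs even though the proxied connection from Caddy arrives over plain HTTP.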

Hi. I switched from a few SBCs to a Proxmox server and I really enjoy it. Now - after playing around a little - I plugged an external 8 TB HDD into my server, mainly for backups. I followed this tutorial: Next step is to use UrBackup. I created a folder /urbackup on the 8 TB HDD, and now I would like to assign this folder to the UrBackup Docker container, but I do not understand how to do this. What "content" do I have to choose for this case, and how can I assign the folder to the container? Important EDIT: I forgot to mention that I do not use a VM but LXC! SOLUTION in this case is pretty simple: for example, to make the directory /mnt/bindmounts/shared accessible in the container with ID 100 under the path /shared, add a configuration line such as

```
mp0: /mnt/bindmounts/shared,mp=/shared
```

to /etc/pve/lxc/100.conf. Or alternatively use the pct tool to achieve the same result:

```
pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared
```

Thanks a lot for your help!

XMPP… on a Pi?
So, I've had it up to here (^^^) with the family using WhatsApp, etc., and I'm heading off into the land of XMPP to find a better solution. I've got a Pi 3 hanging off my pfSense firewall acting as a kind of DMZ box, so I thought I could set up an XMPP server on it (Prosody?). Any advice? Will the Pi crumble (see what I did there) under the pressure of 4 people using it? Issues with proxying outside with a Let's Encrypt cert on the pfSense box, but maybe not inside the network? "Better" server software? Thanks
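A Pi 3 is far more than enough for four users; Prosody is very light. A minimal config sketch for one host (the domain and cert paths below are placeholders, and Prosody's config is Lua syntax):

```
-- /etc/prosody/conf.d/example.com.cfg.lua
VirtualHost "example.com"
    ssl = {
        certificate = "/etc/prosody/certs/example.com.crt";
        key = "/etc/prosody/certs/example.com.key";
    }

-- multi-user chat component for the family group
Component "rooms.example.com" "muc"
```

For mobile clients, enabling Prosody's `mod_smacks`, `mod_carbons`, and a push module is what makes the WhatsApp-replacement experience tolerable.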

unRaid is NOT switching to a subscription model
Per the pricing plan, all licenses are forever licenses, but the lowest two tiers only offer 1 year of updates. After that you can _choose_ to renew, or continue with your current version. If you do not like subscriptions, **there is still a lifetime plan**, but at a higher price point. All existing plans are grandfathered in. Full announcement from Lime: _Note: I have mixed emotions about this, but I'm seeing a lot of rage bait, and if we're going to rage we might as well have our facts straight._ If you haven't subbed already and are interested, check out the unraid community at ! We are already discussing it over there too.

Jellyfin getting hourly 502 on Uptime-Kuma after adding music library?
I'm running Jellyfin in Docker on my home server for movies and shows. I recently added a music directory, and apparently since then I'm getting almost hourly notifications from my Uptime-Kuma instance (connected to Gotify) that Jellyfin is down with status code 502. It's quickly up again, but I'm wondering what's causing this. I have Nginx Proxy Manager configured with a local and a public domain pointing to my Jellyfin instance. Any idea what could be causing this?


NAS + Jellyfin + Sunshine/Moonlight?
I'm looking into building my first NAS/Jellyfin server, and one thing I keep wondering about is whether I should try and make it work as a Sunshine server to stream games to my TV via Moonlight. On my current desktop, I mainly do this for emulated games or former console exclusives. That being said, I'd rather not use my desktop as a server, hence wanting to cram it into a NAS/Jellyfin server. Is this a good idea? Or should I drop it and keep the media server separate?

Self hosted NAS + Lightweight Game Streaming Solution?
I have an aging gaming desktop with a GTX 970 that I've previously used to let friends/family stream games. My area has a lot of fiber, so it's surprisingly usable; I even got VR working. Problem is, I'd prefer to use it as a NAS most of the time, as it has plenty of drive bays and I need somewhere better than my desktop to run Jellyfin. I'm somewhat aware of the options, as I've used various hypervisors etc. before, but I also want something as simple as possible. Because of that, I'm looking at TrueNAS. I'm aware my point of difficulty is going to be the GPU. Is there any easy way to use it for a gaming VM at times and Jellyfin encoding at others? If there's no nifty feature in Proxmox or TrueNAS to solve my problem, how dumb would running a Linux VM with both the games and Jellyfin be? Forgive me if this is a more generic question than I realize. I'd be plenty happy to be pointed to some existing resources.

Photofield v0.15.0 released: Google Photos alternative, now zoomier than ever! Plus related image search, map view, arm64, tags (alpha), and more!
Hi all! I'd like to share some slow but steady progress I've made on my self-hosted personal photo gallery - a Google Photos alternative. It's been a while since I last posted any updates - the last time was [about v0.9.2 on /r/selfhosted](, so it's actually my first post here.

### What's new?

Lots of things! Here's a quick summary:

* **New website!** []( - bonus, it's embedded in every install; in fact, even the website is just hosted from the app itself. 😎
* **UX polish** - lots of small improvements, like [better interaction & fixed video controls]( and [better error messages & autoreloading config](
* **Zoomier than ever** - since [v0.15.0]( when you zoom into a photo, it zooms the whole scene! This wasn't the case for a few versions due to a technical detour, but I found a way to get it back without too many compromises.
* **Related image search** - you can [Find Similar Images]( now, using the same AI functionality as the semantic image search.
* **Map view** - you can [see your photos on a map]( Still has some quirks, so make sure to zoom in first. [To be improved](
* **Reverse geolocation** - you can see the [location of a photo in the timeline view]( Completely local, using [tinygpkg](
* **Tags (alpha)** - you can [tag your photos]( now. Quite basic for now, but should be a good foundation for [things to come](
* **ARM Docker images** - since [v0.14.1]( the published Docker images are multiarch - x64 and arm64, including [photofield-ai]( Makes it possible to run on cheaper, ARM-based servers, and faster on M1/M2/M3 Macs.

### Show me the demo

Now hosted on Hetzner's arm64-based CAX11 - 2 vCPUs & 4 GB of RAM - the cheapest one. The photos are © by their authors. Since migrating to the CAX11, it only uses one size of internally pregenerated sqlite-based thumbnails, taking up roughly 4% of the disk space of the originals. Support for Synology Moments thumbnails is still there, but doesn't seem as crucial as before.

### How do I try it out?

It's very low commitment: a single executable or Docker image that you can mount with read-only access to an existing file structure, see [Quick Start]( (also [on GitHub]( if the website is dead).

### Another one??? Why?

It's a conspiracy to increase fragmentation and increase shareholder value of big tech companies. 😄 Jokes aside, I think there is some space for a fast, self-contained, extremely easy to deploy solution. But mainly, it's to scratch my developer itch and I get to learn new things.

### Thanks

Thanks to everyone who's been using it, contributing, and giving feedback! See also [foss_photo_libraries]( for alternatives if this doesn't fit your needs. Let me know what you think and what you'd like to see next! 🙏

Best path to follow
I’ve had an rpi4 running yunohost for a while, and it runs just fine. Last year I got an optiplex running proxmox. Yuno is accessible directly through a couple of domains I own, while I connect to proxmox using Tailscale (fine for me, not so fine when the rest of the family is involved). Here’s the thing: I’m wondering if it would be best to add custom redirects on Yuno pointing to the services I have on proxmox, or if I should point my router to proxmox running nginx, and use it to point to Yuno (if that’s even possible, since I believe Yuno itself runs on nginx). Or maybe I should just ditch the rpi/yuno and try to move everything to the single proxmox machine (but that would take me some time). I even thought of backing up Yuno and loading it inside a VM in proxmox, but I believe that wouldn’t really change the main path-to-service problem.

Is it possible to have a Subsonic server that falls back to the YT Music API when a song isn’t downloaded?
The major thing keeping me from self-hosting my music is that I mostly discover music by using the radio function in ViMusic (Uses the YT Music API to stream music) or by finding a song through friends, social media, etc. and then listening to it in ViMusic to see whether it's good and I want to add it to my playlist. Not having the radio function wouldn't be that big a deal but having to wait for a song to download before being able to listen to it could be very annoying, depending on the situation. So my question is: Is there some way to have a Subsonic server where I'm able to search for any song from a client and have the server stream it from something like YT Music when it's not downloaded?
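No Subsonic server I know of ships this out of the box, but the fallback logic being asked for is simple enough to sketch. A minimal illustration in Python, assuming a local track index and some remote lookup function (both hypothetical names):

```python
def resolve_stream(song_id, local_tracks, remote_lookup):
    """Return a (source, uri) pair: a local file if the song is already
    downloaded, otherwise a remote streaming URI from the fallback API."""
    if song_id in local_tracks:
        return ("local", local_tracks[song_id])
    remote_uri = remote_lookup(song_id)
    if remote_uri is None:
        raise KeyError(f"song {song_id!r} not found locally or remotely")
    return ("remote", remote_uri)

# Tiny demo with an in-memory "library" and a stub remote API.
library = {"abc123": "/music/artist/song.flac"}
remote = lambda sid: f"https://music.example/api/stream/{sid}"

print(resolve_stream("abc123", library, remote))  # → ('local', '/music/artist/song.flac')
print(resolve_stream("zzz999", library, remote))  # → ('remote', 'https://music.example/api/stream/zzz999')
```

A real implementation would sit in front of (or inside) the Subsonic API's search and stream endpoints and do this check per request; the hard part is the remote side's API terms, not the dispatch logic.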

O365 email local cache
Work uses O365 and I'm getting a little frustrated with OWA. Thinking about running a local email server to mirror O365. In the end, I want to keep my email in O365, but have a 2 way sync with a local imap server. Looks like I have a few options on the email server - dovecot/cyrus/stalwart. For the syncing, I just see mbsync. Any experience setting up something similar? Any other options other than what I listed? Edit: IT knows what I'm doing. I'm not going to compromise any compliance requirements we have.
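For the mbsync side, a two-way O365 channel could look roughly like this - account names, paths, and the password command are placeholders. Note that O365 generally requires OAuth2 for IMAP these days (basic auth is retired), which mbsync handles via a token-fetching helper plus the XOAUTH2 SASL plugin:

```conf
# Hypothetical ~/.mbsyncrc sketch - names, paths, and PassCmd are placeholders.
IMAPAccount work
Host outlook.office365.com
Port 993
User you@yourcompany.com
# O365 needs OAuth2; this assumes some helper that prints an access token
# and that the XOAUTH2 SASL plugin is installed.
PassCmd "oauth2-token-helper"
AuthMechs XOAUTH2
SSLType IMAPS

IMAPStore work-remote
Account work

MaildirStore work-local
Path ~/Mail/work/
Inbox ~/Mail/work/Inbox
SubFolders Verbatim

Channel work
Far :work-remote:
Near :work-local:
Patterns *
Create Near
# Expunge None keeps local deletions from propagating back to O365
Expunge None
SyncState *
```

Dovecot (or any local IMAP server) would then serve the Maildir that mbsync maintains, giving the "O365 stays authoritative, local IMAP mirror" setup described above.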

Tailscale/ PiHole
I've been slowly moving access to my self-hosted services from multiple WireGuard VPN connections over to Tailscale for that nice flat network feel. One thing that was holding me back from the switch was that I liked VPN'ing my internet traffic from my phone and laptop back to my network and into the PiHole, to avoid ads/tracking when I was away from home. Then I found the DNS settings on the Tailscale admin console and everything lit up! I added the server that PiHole is running on as a nameserver, changed the global settings, and bingo - no ads! Unfortunately, a few days later, while looking at my PiHole admin console, I realized that the PiHole I set up at my parents' house was one of its biggest clients. Not optimal. Is there a way to make an exception to the global DNS setting? Any suggestions? I don't want to remove their PiHole from my tailnet, as that makes it much easier to maintain.
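The tailnet-wide DNS setting described above is applied per node by the client, so one possible exception mechanism is to opt a single machine out of it - a sketch, assuming a reasonably recent Tailscale client (the exact invocation may vary by version):

```shell
# On the parents' PiHole box: stay on the tailnet, but ignore the
# tailnet-wide DNS settings so it keeps using its own local resolver.
sudo tailscale up --accept-dns=false
```

That way the node remains reachable for maintenance over Tailscale while its own DNS traffic never loops back through the other PiHole.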

Docker configs destroyed after update [resolved!]
Hello selfhosted community, something weird just happened to my setup while running a routine update. I'm running docker containers on a couple Debian LXCs through Proxmox, and a regular apt-get upgrade just wiped all my configurations. Somehow it seems to have gutted my databases and deleted the compose.yml files without a trace remaining. Thankfully all my data seems to be intact as far as I can tell. Did I royally mess something up in all of my configurations or in doing the update? This has never happened to me before. Thankfully I have a backup for the configs that's about 6 days old, but it's still extremely annoying. Any hints? Thanks

Spam posts
So what can we do to combat this Spam posting as a community? Anyone have any ideas?

    A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.


    1. Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.

    2. No spam posting.

    3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it’s not obvious why your post topic revolves around selfhosting, please include details to make it clear.

    4. Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.

    5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

    6. No trolling.


    Any issues on the community? Report it using the report flag.

    Questions? DM the mods!
