
Some design considerations when setting up Nextcloud accessible from the Internet [Part 1]

Honestly, this post will be a bit all over the place. I wanted to focus on some design choices when deploying a Nextcloud instance that is accessible from the Internet, but while writing this and checking various assumptions I had made, I realized that I may need to go back to the drawing board. However, as I haven’t written anything in a while and still think the information below may be useful to someone, I will post it as part 1.

This post and the future part 2 are not focused on deploying a Nextcloud instance, but rather on some design considerations around it, which are omitted a lot of the time in guides that focus specifically on deployment.

Personally, I used the Linode guide for the general setup and the official documentation for server tuning and additional configuration.

  1. Why use VM and build from Nextcloud tar archive?
  2. Networking
    1. XCP-ng and OPNsense VLAN
    2. Restricting network access on OPNsense
    3. Restricting SSH and Web GUI access
  3. Nextcloud and External storage
  4. NFS vs iSCSI
    1. Synology, XCP-ng and iSCSI don’t work well together

Why use VM and build from Nextcloud tar archive?

There are many options on their official website (AIO Docker image, AIO VM image, snap, building from the archive), however, when looking at the official documentation, a LAMP stack using the Nextcloud .tar archive is their recommended option – source. Nextcloud can be configured in dozens of ways, but back in the day, when I was hosting files for fellow students at university, this approach allowed for mostly trouble-free operation, serving hundreds of gigabytes of files each month.
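For reference, the archive-based install boils down to a few commands. A hedged sketch: the release version and the web root are placeholders on my part, so check the current release on the Nextcloud download page first:

```shell
# Download the release tarball plus its published checksum and verify
# it before unpacking (version and paths are illustrative only).
NC=nextcloud-27.1.2.tar.bz2
wget "https://download.nextcloud.com/server/releases/${NC}"
wget "https://download.nextcloud.com/server/releases/${NC}.sha256"
sha256sum -c "${NC}.sha256"

# Unpack into the web root and hand ownership to the web server user
sudo tar -xjf "${NC}" -C /var/www/
sudo chown -R www-data:www-data /var/www/nextcloud
```

After this, the usual LAMP steps from the official admin manual (vhost, database, the web installer) apply.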

The same reasoning was used when choosing the underlying OS. There is no reason to go with anything other than Ubuntu 22.04 or Red Hat Enterprise Linux 8 and risk dependency issues down the road.

Going with Ubuntu also allows the use of the Livepatch service, which I plan on exploring in part 2.
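For the impatient, enabling Livepatch is essentially a two-liner (the token comes from an Ubuntu One account; this is a sketch I have not yet run on this particular VM):

```shell
# Attach the machine to an Ubuntu Pro subscription, then enable Livepatch
sudo pro attach <token>
sudo pro enable livepatch
sudo pro status   # livepatch should now show as enabled
```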

Networking

I updated the logical diagram, which I think makes it clearer how everything is connected. This change adds VLAN 40, an Ubuntu 22.04 VM and an iSCSI connection.

logical network diagram showing multiple vlans in homelab

XCP-ng and OPNsense VLAN

Following the XCP-ng documentation, I decided to go with the multiple VIFs approach to utilize multiple VLANs on the OPNsense VM.
First I created a new network using Xen Orchestra: New -> Network, and as below:

xen orchestra new network interface

Keep in mind that this is not a management interface.

Then I attached it to my OPNsense VM by going into the VM -> Network -> New Device and selecting the just-created network.
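As a side note, the same thing can, as far as I know, be done from the host CLI with xe instead of Xen Orchestra. A sketch with placeholder UUIDs and device number:

```shell
# Create the network, tag it as VLAN 40 on a physical interface,
# then attach a VIF for it to the OPNsense VM (UUIDs are placeholders)
xe network-create name-label=Externalservices
xe vlan-create network-uuid=<network-uuid> pif-uuid=<pif-uuid> vlan=40
xe vif-create vm-uuid=<opnsense-vm-uuid> network-uuid=<network-uuid> device=2
```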

I had to restart the OPNsense VM.

After it restarted, I configured the new interface in OPNsense:

opnsense new interface configuration

I also set up a DHCP server on OPNsense. This is a separate DHCP server from the one I have on my internal network. This can be done in Services / DHCPv4 / [interface_name].

setting up dhcp server on opnsense

I had to restart the OPNsense VM again, as there were some stability problems after completing the above steps: the interface kept connecting and disconnecting. A reboot cleared them and it has not happened since.

A DHCP address was leased. I checked the IP address and MAC address in Services / DHCPv4 Leases, went back to Services / DHCPv4 / [interface_name], scrolled to the bottom and added a reservation in DHCP Static Mappings for this interface. (It’s probably easier to just set a static IP.)
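Under the hood, the static mapping should end up as a standard ISC dhcpd host entry, roughly like the fragment below (the hostname, MAC and IP are made up for illustration):

```
host nextcloud {
  hardware ethernet 00:16:3e:aa:bb:cc;
  fixed-address 192.168.40.10;
}
```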

Restricting network access on OPNsense

After that, it was time to set up firewall rules to only allow traffic to the Internet and block access to any current and future internal networks.

To do this, I created an alias in Firewall / Aliases containing the private (RFC 1918) network address space:

creating alias with private networks on opnsense
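The alias contents are just the RFC 1918 ranges. As a quick, self-contained illustration of what the block rule effectively checks for every packet leaving this interface (nothing OPNsense-specific, purely a sketch):

```shell
# Convert a dotted quad to an integer
ip2int() {
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# in_cidr IP CIDR -> exit 0 if IP falls inside CIDR
in_cidr() {
  ip=$(ip2int "$1")
  net=$(ip2int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

# is_private IP -> exit 0 if IP is in any RFC 1918 range (the alias contents)
is_private() {
  for cidr in 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16; do
    in_cidr "$1" "$cidr" && return 0
  done
  return 1
}

is_private 192.168.40.10 && echo blocked || echo allowed   # blocked
is_private 8.8.8.8       && echo blocked || echo allowed   # allowed
```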

Next, in Firewall / Rules, I set up a Block rule from any source on this interface, where the destination is the newly created alias containing the private network address space.

Applying Alias in opnsense firewall rule

The second rule allows access to any network on any port from the Externalservices net.

picture showing network rule in opnsense firewall allowing network access from any source to any destination

It is important to keep in mind that rule positioning matters: rules are evaluated top to bottom and the first match wins, so a higher rule takes precedence. The “allow any” rule should always be at the bottom.

opnsense firewall - rule positioning is important

Restricting SSH and Web GUI access

While playing around with the above setup, I noticed at one point that before I added the firewall rule blocking traffic to private networks, I could access the OPNsense Web GUI from this new interface. I think it is undesirable to have Web GUI and SSH access on all interfaces by default. I found it can be restricted in System / Settings / Administration. In my case, I restricted the Listen Interface to LAN.

restricting ssh and webgui access on opnsense

Assuming that the Nextcloud VM is ready and configured, it can be made accessible from outside (if desired).

Opening port 80 is also required to get a Let’s Encrypt certificate, as the HTTP-01 challenge is served over plain HTTP.

This can be done in OPNsense / Firewall / NAT / Port Forward:

Protocol: TCP

Destination: WAN address

Destination port range: from HTTP to HTTPS

Redirect target IP: local IP of the Nextcloud server

opnsense port forwarding settings

Additionally, in Firewall / Rules / WAN, an allow rule should be created to allow incoming traffic on ports 80 and 443.

picture showing ready port forwarding rule on opnsense for port 80 and 443
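With 80 and 443 forwarded, the certificate can then be requested from the VM itself. A sketch assuming Apache and certbot on Ubuntu (the domain is a placeholder):

```shell
# Install certbot with the Apache plugin and request/install a certificate
sudo apt install -y certbot python3-certbot-apache
sudo certbot --apache -d cloud.example.com

# Renewal runs automatically; this only verifies the renewal logic
sudo certbot renew --dry-run
```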

This is also a good moment to implement the Spamhaus drop lists to make sure that malicious traffic is blocked before it reaches the servers. OPNsense has a ready guide on how to configure Spamhaus (E)DROP, so I am redirecting to the guide here. Additionally, I added the Talos feed as well. The feed can be found here.

After configuring all of the above, my Nextcloud test VM in VLAN 40 has Internet access, is accessible from my internal network and can’t access anything on my internal network.

Nextcloud and External storage

When looking at recommendations on running Nextcloud on one device while keeping the data separate on a NAS, a lot of the time I saw External Storage presented as a viable option, or mounting an NFS share directly in the VM. While this seems viable for a Nextcloud instance hosted only internally, it doesn’t really work when the Nextcloud VM is kept in a DMZ and accessible from the Internet, while the NAS is only accessible on the internal network. This is why I decided to create the connection between the NAS and XCP-ng and mount the network storage to the VM that way.

NFS vs iSCSI

This raises the question of NFS vs iSCSI. The table below is very simplified:

                 NFS                                                iSCSI
performance      acceptable performance                             slightly better performance than NFS
authentication   requires a Kerberos server                         built-in (CHAP) authentication
Synology         supports thin provisioning                         supports thin and thick provisioning
XCP-ng support   thin provisioning, works reliably out of the box   thick provisioning, buggy experience

Usually, when reading discussions about iSCSI vs NFS, the big advantages given for NFS were ease of use and that the files are directly accessible. The file accessibility is not applicable in this use case, as the NFS connection is between the XCP-ng host and the NAS, which only ever sees VHD files.

VHD files on NFS share in Synology

This is why I decided to go with iSCSI. The possibility of better performance and built-in, password-protected (CHAP) authentication were enough reasons for me to try it, compared to NFS, which authenticates using IP addresses or requires setting up a Kerberos server.

However, this decision is also why this post will have a part 2.

Synology, XCP-ng and iSCSI don’t work well together

While iSCSI supports CHAP authentication, the data in transit is unencrypted and can be sniffed. SOURCE

Additionally, it seems like XCP-ng has some problems with iSCSI using CHAP authentication, where after providing the correct username and password the server gives the error SR_BACKEND_FAILURE_68(, ISCSI login failed – check access settings for the initiator on the storage, if CHAP is used verify CHAP credentials).

There is an issue open for it on GitHub. I posted a workaround there and am adding it here as well:

  1. Remove CHAP on the Synology
  2. Search again in Xen Orchestra
  3. The LUN should be found

image showing xen orchestra panel used to create new iSCSI connection with Synology LUN

  4. Enable CHAP authentication on the Synology
  5. Add the credentials in Xen Orchestra, as in the image above
  6. Click create
  7. The iSCSI SR should be created successfully
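For debugging CHAP outside of XCP-ng, the same login can be exercised from any Linux box with open-iscsi installed. A sketch with placeholder portal IP, target IQN and credentials:

```shell
# Discover targets on the Synology portal
iscsiadm -m discovery -t sendtargets -p 192.168.40.20

# Configure CHAP for the node record, then try to log in
T=iqn.2000-01.com.synology:nas.Target-1
iscsiadm -m node -T "$T" -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T "$T" -o update -n node.session.auth.username -v chapuser
iscsiadm -m node -T "$T" -o update -n node.session.auth.password -v chapsecret
iscsiadm -m node -T "$T" --login
```

If the login succeeds here but fails from XCP-ng, that points at the host-side handling rather than the Synology configuration.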

While restarting or updating the Synology server does not cause issues and the iSCSI connection is re-established, I had a problem with reattaching the SR after updating the host. Without changing anything in the CHAP authentication, it gave me a POST_ATTACH_SCAN_FAILED error with the content Failed to scan SR <redacted> after attaching, error: Input/output error. It resolved itself randomly after the 15th try.

Digging around a little deeper, I found a discussion on the Citrix forum regarding similar issues with Xen, Synology and iSCSI.

Taking all of that into account, I can’t really recommend using XCP-ng + iSCSI + Synology together. I am going back to the drawing board, this time with a Kerberos server, an NFS 4.1 connection and hopefully a less buggy experience.
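Looking ahead to part 2: on a generic Linux NFS server, the Kerberos-protected export would look roughly like the fragment below (the path and subnet are placeholders; Synology generates its own exports from the UI). sec=krb5p gives authentication plus encryption in transit, which also addresses the sniffing concern mentioned earlier:

```
# /etc/exports -- path and subnet are placeholders
/volume1/nextcloud-data  192.168.40.0/24(rw,sync,sec=krb5p)
```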

Alternatively, I will just create a new VLAN and isolate it from the rest of the infrastructure. More on that in part 2.

Thanks for reading.
