Let’s Encrypt with Nginx
Posts: 244
Threads: 1
Joined: Jul 2016
Reputation: 12
[Solved]
Oct 17, 2016, 01:32 PM
Based on your logs, the problem is related to the hardened nginx security settings, namely
Code:
add_header X-Content-Type-Options nosniff;
In most cases it is advised to set this header, as it increases security. However, when it is set, some browsers like Chrome perform strict checking of MIME types, and this gives you the error.
Disable the nosniff header option in your
Code:
/etc/nginx/snippets/strong-ssl.conf
file, restart nginx, clear the browser cache and see if it works. I think that should resolve your problem. You will have a somewhat less secure nginx server, but depending on your use case, sometimes you need to sacrifice some strict security settings to make certain services work properly.
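The change is just commenting out that one line in the snippet:
Code:
# In /etc/nginx/snippets/strong-ssl.conf, comment out to disable:
#add_header X-Content-Type-Options nosniff;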
For example, the recommended
Code:
add_header X-Frame-Options
setting for maximum security would be DENY, but we loosen it a little here and allow SAMEORIGIN to make iframes work, which is needed for the Deluge Web UI, ownCloud, etc.
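Here are the two variants side by side, with the stricter one commented out for comparison:
Code:
# Strictest option, but it breaks iframes (Deluge Web UI, ownCloud):
#add_header X-Frame-Options DENY;
# Relaxed option used here; allows frames from the same origin:
add_header X-Frame-Options SAMEORIGIN;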
If you run the Qualys SSL Labs SSL Server Test you will get some useful information about your current settings, and you will be able to see which browsers, protocols, etc. are supported (at least it will give you a hint).
Each browser handles these things differently under the hood, even if they are quite similar.
Let us know if this solved your problem. Btw, this will still give you a very secure nginx configuration, much safer than the default one.
Posts: 12
Threads: 3
Joined: May 2016
Reputation: 6
[Solved]
Oct 17, 2016, 04:40 PM
Okay, so I have a second file, "Strong-ssl-qnap", and it contains the following
Code:
# By Remy van Elst -- https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
# Modified version by HTPC Guides -- https://www.htpcguides.com
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_ecdh_curve secp384r1;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
# Set Google's public DNS servers as upstream resolver
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
# Modify X-Frame-Option from DENY to SAMEORIGIN, required for Deluge Web UI, ownCloud, etc.
add_header X-Frame-Options SAMEORIGIN;
#add_header X-Content-Type-Options nosniff;
# Use the 2048 bit DH key
ssl_dhparam /etc/ssl/certs/dhparam.pem;
However, I still have the same problem.
Posts: 244
Threads: 1
Joined: Jul 2016
Reputation: 12
[Solved]
Oct 17, 2016, 04:55 PM
(This post was last modified: Oct 17, 2016, 04:56 PM by drake.)
Try Firefox with these settings.
And try disabling Strict Transport Security. I can't really help you much, as I can't test it myself, but I think it will be related to some of these settings. Perhaps even try without the strong SSL settings and see if it works then; that would let us narrow down the cause.
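For that test, something like this is what I mean (the snippet path is the one from earlier in the thread; exactly how your site config includes it is my assumption, so adjust as needed):
Code:
# In your site's server block, temporarily comment out the include
# of the strong SSL snippet, then restart nginx and retest:
#include /etc/nginx/snippets/strong-ssl.conf;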
Posts: 12
Threads: 3
Joined: May 2016
Reputation: 6
[Solved]
Oct 17, 2016, 05:24 PM
Okay, thanks for the tip. It seems it was the add_header Strict-Transport-Security option.
I have re-enabled everything else and it all still works. Chrome seems to cache details about the certs, so I had to use private browsing to get it to update each time, but it was worth it.
Strong SSL now looks like this and is working 100%.
Code:
# By Remy van Elst -- https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
# Modified version by HTPC Guides -- https://www.htpcguides.com
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_ecdh_curve secp384r1;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
# Set Google's public DNS servers as upstream resolver
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
#add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
# Modify X-Frame-Option from DENY to SAMEORIGIN, required for Deluge Web UI, ownCloud, etc.
add_header X-Frame-Options SAMEORIGIN;
#add_header X-Content-Type-Options nosniff;
# Use the 2048 bit DH key
ssl_dhparam /etc/ssl/certs/dhparam.pem;
Thank you so much.
Posts: 1,646
Threads: 2
Joined: Aug 2015
Reputation: 42
[Solved]
Oct 17, 2016, 05:31 PM
Excellent work, everybody!
Posts: 244
Threads: 1
Joined: Jul 2016
Reputation: 12
[Solved]
Oct 17, 2016, 06:53 PM
Excellent, great to hear it is working. I'm sure it can be done with Strict Transport Security enabled, but since I can not recreate your setup, it is very hard to troubleshoot. What STS does is make sure only HTTPS is used between your browser and the server, which prevents man-in-the-middle attacks, etc. I would experiment with the location settings for the QNAP; maybe you need to adjust them for HTTPS, I'm not sure. Now we actually know the cause, so you should do some testing to find the proper setup that still keeps STS enabled.
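For reference, the header we toggled is this line from your snippet:
Code:
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";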
Mike gave you some excellent links; since the MIME error is there with STS enabled, I think you should look into this. Let us know how it turned out! Of course, even without STS you are still far more secure, but it would be good to have it enabled.
Posts: 12
Threads: 3
Joined: May 2016
Reputation: 6
[Solved]
Oct 17, 2016, 07:44 PM
(Oct 17, 2016, 06:53 PM)drake Wrote: Excellent, great to hear it is working. I'm sure it can be done with Strict Transport Security enabled, but since I can not recreate your setup, it is very hard to troubleshoot. What STS does is make sure only HTTPS is used between your browser and the server, which prevents man-in-the-middle attacks, etc. I would experiment with the location settings for the QNAP; maybe you need to adjust them for HTTPS, I'm not sure. Now we actually know the cause, so you should do some testing to find the proper setup that still keeps STS enabled.
Mike gave you some excellent links; since the MIME error is there with STS enabled, I think you should look into this. Let us know how it turned out! Of course, even without STS you are still far more secure, but it would be good to have it enabled.
While trying to fix this, I actually changed the QNAP to serve HTTPS and pointed the NGINX proxy at it, so now the config says "proxy_pass https://10.1.1.2;". That didn't fix anything.
Maybe we could set up a TeamViewer session; since I have NGINX running on its own VM, I could share the SSH session with you after taking a snapshot of the current setup.
I don't really know the coding side, as I have probably shown here. If you think it would help you understand anything better, then PM me and we can arrange it.
Posts: 244
Threads: 1
Joined: Jul 2016
Reputation: 12
[Solved]
Oct 18, 2016, 07:12 AM
(This post was last modified: Oct 18, 2016, 07:30 AM by drake.)
I'm sorry, but I'm so busy with work and family that I really can't help you with a TeamViewer session, I hope you understand.
I see people do have problems with reverse proxying and QNAP.
Try the following: remove the preload directive from the HSTS header. It should look like this
Code:
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
Then try with Firefox, Safari, Edge, and of course Chrome too. But make sure you clear all the browser caches! We will see if it works with browsers other than Chrome.
And I advise asking on the QNAP forum how others (if anybody) made it work with HSTS enabled with nginx.
Looking forward to your update!
(Oct 17, 2016, 07:44 PM)armss001 Wrote: While trying to fix this, I actually changed the QNAP to serve HTTPS and pointed the NGINX proxy at it, so now the config says "proxy_pass https://10.1.1.2;". That didn't fix anything.
Again, I'm not sure, but if you set the QNAP to serve HTTPS, then don't you need to use a different port? So "proxy_pass https://10.1.1.2;" should use the port the QNAP uses when HTTPS is enabled (most likely 443)? Just guessing here, I never had a QNAP.
Edit2: as you can see in the user manual: http://docs.qnap.com/nas/4.1/Home/en/ind...ttings.htm
Since HSTS forces an HTTPS connection and doesn't allow any HTTP (except on localhost), maybe this would be the solution. So I guess you would need something like
"proxy_pass https://10.1.1.2:443;"
But this is just an idea, something I would try.
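A minimal sketch of the kind of location block I mean (the backend address and port are from your post; the proxy_set_header lines are typical additions I am assuming here, so adjust to your actual site config):
Code:
# Hypothetical reverse-proxy location for the QNAP web UI:
location / {
    proxy_pass https://10.1.1.2:443;
    # Pass the original host and client details to the backend:
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}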
Posts: 12
Threads: 3
Joined: May 2016
Reputation: 6
[Solved]
Oct 19, 2016, 04:58 PM
(Oct 18, 2016, 07:12 AM)drake Wrote: I'm sorry, but I'm so busy with work and family that I really can't help you with a TeamViewer session, I hope you understand.
This is completely fine, I don't mind doing it myself, I just thought you would like to see the setup.
As for adding the port: from a networking point of view, an HTTPS request goes to port 443 by default. So from the server side, "proxy_pass https://10.1.1.2:443;" and "proxy_pass https://10.1.1.2;" would both be treated as exactly the same. Or at least that is my understanding from port forwarding and firewalls.
In the end it would seem it was the option below.
Code:
#add_header X-Content-Type-Options nosniff;
I must not have saved, or forgot to clear the cache, when testing it.
This is my new Strong-SSL-QNAP file and it seems to be fully functional in all browsers.
Code:
# By Remy van Elst -- https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
# Modified version by HTPC Guides -- https://www.htpcguides.com
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_ecdh_curve secp384r1;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
# Set Google's public DNS servers as upstream resolver
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
# Modify X-Frame-Option from DENY to SAMEORIGIN, required for Deluge Web UI, ownCloud, etc.
add_header X-Frame-Options SAMEORIGIN;
#add_header X-Content-Type-Options nosniff;
# Use the 2048 bit DH key
ssl_dhparam /etc/ssl/certs/dhparam.pem;
Are there any security concerns with this option not being enabled? I am about to read up on it, but I am not 100% sure.
Thanks again for all your support guys.
Posts: 244
Threads: 1
Joined: Jul 2016
Reputation: 12
[Solved]
Oct 20, 2016, 01:18 PM
(This post was last modified: Oct 20, 2016, 01:23 PM by drake.)
It is great to have access to the actual system to troubleshoot; it is just very hard to find time for it. You did great and actually resolved the problem.
It was surely a cache issue or an unsaved configuration; I was really puzzled why Strict Transport Security didn't work. I was quite sure the nosniff header needed to be turned off, because the logs pointed exactly to this.
How safe the settings are is always very relative. I believe the solution to this lies in the QNAP management interface, as they should make it compatible with the nosniff header. I'm not aware of anything that can be set in nginx or Apache to use nosniff with a site that doesn't properly support it. Hardening security most of the time comes at the cost of compatibility: as you see, it would be good to have nosniff enabled, but since the QNAP doesn't support it, we need to keep it disabled if we want to use the QNAP.
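For reference, this is the header in question, as it sits (commented out) in your snippet; when enabled it tells browsers to trust only the Content-Type the server declares instead of guessing (sniffing) it:
Code:
# Stops browsers from MIME-sniffing responses; breaks backends that
# send wrong or missing Content-Type values, like the QNAP UI here:
#add_header X-Content-Type-Options nosniff;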
It is like when you need to keep compatibility with some older browsers: you can't have the most up-to-date security options enabled, since those who use an older browser (don't ask why anybody would do this) would not be able to access your nginx server properly.
Some hints about nosniff can be seen here
I would first suggest opening a thread on the QNAP forums and asking how to use the management console with the nosniff header enabled; maybe they will be able to help you. Just tell them you use nginx with the nosniff header enabled, and post the MIME error from the logs so they can see it too. Of course, let them know that without the nosniff header it works perfectly fine.
Great work, btw; you have a very secure nginx configuration, far more secure than the majority of servers out there. Again, how safe it is, is very relative. Companies that run very popular services get hacked all the time, even though they employ many professionals who should make their systems secure. You use this on a home server; if a potential attacker finds you, he will see that you have quite an advanced setup. He might be able to break it, but that would most likely take quite a significant effort. And then what? You got hacked, which is really bad, but you are not a bank, and you probably don't store information so sensitive that it is of great value to anybody except, of course, you and the people you know (at least I don't, like most of us). The chances somebody will go after you are very slim, especially with so many insecure servers out there.
What I would suggest: nginx has excellent geoip blocking and support from fail2ban. By implementing geoip restrictions and fail2ban jails, you exclude a bunch of potential attackers.
Geoip: you have access to your nginx home server from the Internet. Say you live in Finland and only ever access your server from Finland. You just limit nginx access to Finnish geoip locations, and your server will only be reachable from IP addresses in Finland. You need access from other countries too? You just allow them in nginx, one by one. Of course, you can also limit access to a given IP range, etc. I personally experienced most attacks and port scans from China, and I have China blocked completely.
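A minimal sketch of what such a restriction can look like (this assumes nginx was built with the geoip module and that a MaxMind country database exists at the path shown; both are assumptions, adjust to your system):
Code:
# In the http {} context: look up the country for each client IP
# (assumed database path) and map it to an allow/deny flag.
geoip_country /usr/share/GeoIP/GeoIP.dat;

map $geoip_country_code $allowed_country {
    default no;
    FI      yes;  # allow Finland; add more country codes as needed
}

# Inside your server {} block: reject everybody else.
if ($allowed_country = no) {
    return 403;
}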
Fail2ban jails are also very good. They will ban IP addresses or ranges based on the number of failed attempts, and the no-proxy jail will ban everybody who tries to use your server as a transparent proxy. We will have guides for both geoip and fail2ban jails in the near future. We already have a guide against brute force on nginx here