This is part of the Semicolon&Sons Code Diary - consisting of lessons learned on the job. You're in the web-development category.
Last Updated: 2024-11-23
The nginx configuration file lives at /etc/nginx/nginx.conf
Reload the server with the new configuration:
$ nginx -s reload
Send all logs to /var/log/system.log and tail that file. Note that starting nginx requires sudo.
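A minimal sketch of the directives involved; the path comes from the note above, and routing the access log there too is my assumption:

# in nginx.conf: route both logs to one file, then `tail -f /var/log/system.log`
error_log  /var/log/system.log;
access_log /var/log/system.log;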
Don't bother running it via services; just run sudo nginx
Kill it with sudo pkill nginx
This placement of upstream within a server block was invalid:
server {
    listen 8080;
    server_name localhost;
    access_log logs/host.access.log main;

    # INVALID: needs to be one level up, OUTSIDE the server block,
    # i.e. at the same level as the server block itself (directly inside http).
    upstream myserver {
        server 192.168.100.10:8010;
    }

    location / {
        root /Users/jack/code/vim-browser/;
        index index.html;
    }
}
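For reference, the corrected placement - the same config with the upstream hoisted out of the server block:

# now at the same level as server, directly inside the http context
upstream myserver {
    server 192.168.100.10:8010;
}

server {
    listen 8080;
    server_name localhost;
    access_log logs/host.access.log main;

    location / {
        root /Users/jack/code/vim-browser/;
        index index.html;
    }
}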
Just put a "/" at the end of the argument given to proxy_pass. The following proxies /chat to the root of an upstream server:
location /chat {
    proxy_pass http://127.0.0.1:9000/;
}
The effect is that a request for /chat arriving on port 80 (the web default) is proxied to / on port 9000.
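For contrast (my addition, not from the original note): without the trailing slash, the matched prefix is preserved when proxying:

location /chat {
    # no URI part on proxy_pass: the full original request URI is passed
    # through, so a request for /chat/foo reaches port 9000 as /chat/foo
    proxy_pass http://127.0.0.1:9000;
}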
# Upstream server listening on port 8081
upstream gotty {
    server 127.0.0.1:8081;
}

# Main server, listening on 8080
server {
    listen 8080;
    server_name localhost;

    # Forward requests for /js/gotty.js to the upstream named gotty
    location /js/gotty.js {
        proxy_pass http://gotty;
    }
}
Example: route regular HTTP traffic to a Django WSGI upstream and web-socket traffic to an ASGI upstream (both defined earlier in the config):

location / {
    proxy_pass http://wsgi; # defined above as an upstream
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # otherwise Django doesn't get the user's IP
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host; # otherwise Django sees the host as just "wsgi" instead of the domain name
}
location /web_sockets {
    proxy_pass http://asgi; # defined above as an upstream
    ...
    proxy_buffering off; # needed to handle streaming for web sockets
}
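The elided lines above typically carry the WebSocket upgrade handshake; a sketch of what commonly goes there (my assumption, not part of the original config):

proxy_http_version 1.1;                 # WebSockets need HTTP/1.1
proxy_set_header Upgrade $http_upgrade; # pass the client's Upgrade header along
proxy_set_header Connection "upgrade";  # mark the connection as upgradable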
I am assuming that mysite.local is under your control. Then add this config:
server {
    server_name mysite.local;

    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
    add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
}
Now, if you GET mysite.local from the browser console, it will work. Without this config you would get "has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present".
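One caveat (my addition, not from the original note): for non-simple requests, browsers first send a preflight OPTIONS request, which you may want nginx to answer directly:

location / {
    # answer CORS preflights immediately rather than hitting a backend
    if ($request_method = 'OPTIONS') {
        return 204;
    }
    root /var/www/mysite; # hypothetical document root
}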
I would have an nginx file like the following, installed into /etc/nginx/sites-available/default, that directs port 80 traffic on / to a socket:
server {
    listen 80;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/ubuntu/SITENAME.sock;
    }
}
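For reference, the proxy_params file included above ships with the Debian/Ubuntu nginx packages; from memory it contains roughly:

# /etc/nginx/proxy_params (roughly, as shipped on Debian/Ubuntu)
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;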
The ufw firewall would be opened for nginx.
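E.g., assuming nginx was installed from the distro packages (which register ufw application profiles):

$ sudo ufw allow 'Nginx HTTP' # opens port 80; use 'Nginx Full' for 80 and 443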
# Upstream lets you proxy to other servers (or clusters thereof)
upstream app {
    # Path to the Puma SOCK file, as defined in my puma setup.
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response. This happens when the puma master
    # nukes a single worker for timing out.
    server unix:/home/deploy/jacksapp/sockets/puma.sock fail_timeout=0;
}
# The public server
server {
    listen 80;

    # This server block (there may be others in the config)
    # should service requests for jacksapp.com
    server_name jacksapp.com;

    # Path for static files: a request for /css/all.css will look for the
    # file /home/deploy/jacksapp/public/css/all.css
    root /home/deploy/jacksapp/public;

    # Basically: check the root folder (the public one), then, if no file is
    # found, fall back to the upstream app. See the description below for
    # more on how this works.
    try_files $uri @app;

    location @app {
        proxy_pass http://app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }

    # Allow uploads up to 4G
    client_max_body_size 4G;

    # Keepalive connections can have a major impact on performance by reducing
    # the CPU and network overhead needed to open and close connections.
    # This keeps connections open for 10 seconds.
    keepalive_timeout 10;
}
root /var/www/main;

# Tries the path as given, then with `.html` appended, then as a directory,
# and finally settles on serving `fallback/index.html` as a fallback.
location / {
    try_files $uri $uri.html $uri/ /fallback/index.html;
}

location /fallback {
    root /var/www/another;
}
If a request comes in for /blahblah, the first location block will initially get the request. It will try to find a file called blahblah in the /var/www/main directory. If it cannot find one, it will follow up by searching for a file called blahblah.html. It will then try to see if there is a directory called blahblah/ within the /var/www/main directory. Failing all of these attempts, it will make an internal redirect to /fallback/index.html. This triggers another location search, which is caught by the second location block, and the file /var/www/another/fallback/index.html is served.
# Note how there are multiple servers here: some on remote domains, others on
# network sockets, others on unix sockets. The weight parameter biases the
# round-robin load balancing toward those servers.
upstream rails_app_three {
    server unix:/tmp/rails_app_three.sock fail_timeout=0;
    server 192.168.0.7:8080 fail_timeout=0;
    server 127.0.0.1:3000 weight=3 fail_timeout=0;
    server backendworker.example.com weight=5 fail_timeout=0;
}
Forward any matching requests for PHP to a backend devoted to PHP processing, using the FastCGI protocol:
# `~` means case-sensitive regular-expression match
location ~ \.php$ {
    # connect over TCP on port 9000 (speaking FastCGI, not HTTP)
    fastcgi_pass 127.0.0.1:9000;
}
Any time that a proxy connection is made, the original request must be translated to ensure that the proxied request makes sense to the backend server. Since we are changing protocols with a FastCGI pass, this involves some additional work.
The above actually won't work because more parameters must be passed (and they cannot go in HTTP headers because FastCGI does not support them)
location ~ \.php$ {
    # The HTTP method requested by the client
    fastcgi_param REQUEST_METHOD $request_method;

    # Tell the backend server what resource to get.
    # $document_root contains the path to the base directory, as set by the
    # root directive; $fastcgi_script_name will be set to the request URI.
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    fastcgi_pass 127.0.0.1:9000;
}
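In practice, stock nginx installs ship a fastcgi_params file that sets REQUEST_METHOD and friends in one go; a common pattern (my addition, not from the original note) is:

location ~ \.php$ {
    include fastcgi_params; # sets REQUEST_METHOD, QUERY_STRING, etc.
    # SCRIPT_FILENAME is not in fastcgi_params (it lives in fastcgi.conf), so set it here
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
}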
If the backend server is on the same machine you probably want to use a unix socket (instead of a network socket) for better security. (Note: A network socket is NOT the same as HTTP)
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}
Don't passphrase-protect your SSL private key: you'd need to type the passphrase whenever starting or restarting nginx, making automated management impossible.
Imagine a user uploaded a PHP file and then requested it. This could cause the server to execute malicious code with nginx's permissions.
One solution is a dedicated location block for folders with untrusted files:
# `^~` means: if this prefix is the best match, skip regex locations entirely
location ^~ /uploads {
    # disable PHP processing
    location ~* \.php$ { return 403; }
}
A 502 Bad Gateway basically means that the upstream server nginx talks to is not giving it a valid response: either the origin server is down, a firewall is blocking, or a domain name is not resolving.
E.g. this was relevant for me when deploying the nginx buildpack to Heroku.
You might get an error that there are not enough worker connections; this causes dropped requests. Before increasing the limit, check that you have sufficient file descriptors and ports:

$ ulimit -a # shows file descriptor limits
$ cat /proc/sys/net/ipv4/ip_local_port_range # shows the usable port range
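Then raise the limits in nginx.conf; a sketch with illustrative values (not recommendations):

# per-worker file descriptor cap; worker_connections must fit within it
worker_rlimit_nofile 4096;

events {
    worker_connections 2048;
}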
Sometimes a variable is available in your local nginx install but not in production. This is because the two servers were built with different modules.
$ nginx -V # shows the version plus the configure arguments and compiled-in modules
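A handy one-liner for listing just the compiled-in modules (nginx -V prints to stderr, hence the redirect):

$ nginx -V 2>&1 | tr ' ' '\n' | grep module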