Blocking IP addresses with Nginx and node-http-proxy

Problem description

First of all, I want to apologize for such a long post!

I've almost figured everything out! What I'm trying to do is use node-http-proxy to mask a series of dynamic IPs that I get from a MySQL database. I do this by redirecting the subdomains to node-http-proxy and resolving them there. I was able to do this locally without any problems.
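To give a concrete picture, a minimal sketch of that idea looks something like the following (the table name subdomain_targets, its columns, and the connection settings are made up for illustration, not my actual schema):

// Sketch only: look up the target IP for the requesting subdomain in MySQL
// and hand the request off to node-http-proxy.
const http = require('http');
const httpProxy = require('http-proxy');
const mysql = require('mysql');

const db = mysql.createConnection({ host: 'db', user: 'proxy', database: 'masking' });
const proxy = httpProxy.createProxyServer({});

http.createServer((req, res) => {
  const subdomain = req.headers.host.split('.')[0];
  db.query('SELECT ip FROM subdomain_targets WHERE subdomain = ?', [subdomain], (err, rows) => {
    if (err || rows.length === 0) {
      res.writeHead(502);
      return res.end('Unknown subdomain');
    }
    // Note: the target needs a scheme, e.g. 'http://' (see the solution at the bottom).
    proxy.web(req, res, { target: 'http://' + rows[0].ip, ws: true });
  });
}).listen(5050);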

Remotely, it sits behind an HTTPS-enabled Nginx web server (I have a wildcard certificate issued through Let's Encrypt, plus a Comodo SSL certificate for the domain). I managed to configure it so that it passes requests through to node-http-proxy without any problem. The only issue is that the latter gives me

 The error is { Error: connect ECONNREFUSED 127.0.0.1:80
     at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
   errno: 'ECONNREFUSED',
   code: 'ECONNREFUSED',
   syscall: 'connect',
   address: '127.0.0.1',
   port: 80 }

whenever I set:

proxy.web(req, res, { target, ws: true })

And I don't know whether the problem is the remote address (highly unlikely, since I was able to connect to it from a secondary device) or a misconfigured Nginx on my side (highly likely). It could also be that it is clashing with Nginx, which is listening on port 80. But I don't know why node-http-proxy would be connecting through port 80 at all.

Some additional information: there is also a Ruby on Rails application running at the same time. node-http-proxy, Nginx, and Ruby on Rails each run in their own Docker container. I don't think this is a Docker problem, since I was able to test it locally without any issues.

Here is my current nginx.conf (for security reasons I've replaced the domain with example.com):

server_name "~^\d+\.example\.co$"; is where I want requests redirected to node-http-proxy, while example.com is where the Ruby on Rails application lives.

# https://codepany.com/blog/rails-5-and-docker-puma-nginx/
# This is the port the app is currently exposing.
# Please, check this: https://gist.github.com/bradmontgomery/6487319#gistcomment-1559180  

upstream puma_example_docker_app {
  server app:5000;
}


server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
    # Enable once you solve wildcard subdomain issue.
    return 301 https://$host$request_uri;
}

server {

  server_name "~^\d+\.example\.co$";

  # listen 80;
  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  # Created by Certbot
  ssl_certificate /etc/letsencrypt/live/example.co/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.co/privkey.pem;
  # include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; 

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  # ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
  # ssl_certificate_key /etc/ssl/private/example.co.key;
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:50m;
  ssl_session_tickets off;

  # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
  # This is generated by ourselves. 
  # ssl_dhparam /etc/ssl/certs/dhparam.pem;

  # intermediate configuration. tweak to your needs.
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
  ssl_prefer_server_ciphers on;

  # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
  add_header Strict-Transport-Security max-age=15768000;

  # OCSP Stapling ---
  # fetch OCSP records from URL in ssl_certificate and cache them
  ssl_stapling on;
  ssl_stapling_verify on;

  ## verify chain of trust of OCSP response using Root CA and Intermediate certs
  ssl_trusted_certificate /etc/ssl/certs/trusted.crt;




  location / {
    # https://www.digitalocean.com/community/questions/error-too-many-redirect-on-nginx
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    proxy_pass http://ipmask_docker_app;
    # limit_req zone=one;
    access_log /var/www/example/log/nginx.access.log;
    error_log /var/www/example/log/nginx.error.log;
  }
}





# SSL configuration was obtained through Mozilla's 
# https://mozilla.github.io/server-side-tls/ssl-config-generator/
server {

server_name localhost example.co www.example.co; #puma_example_docker_app;

# listen 80;
  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  # Created by Certbot
  # ssl_certificate /etc/letsencrypt/live/example.co/fullchain.pem;
  #ssl_certificate_key /etc/letsencrypt/live/example.co/privkey.pem;
  # include /etc/letsencrypt/options-ssl-nginx.conf;
  # ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; 

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
  ssl_certificate_key /etc/ssl/private/example.co.key;
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:50m;
  ssl_session_tickets off;

  # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
  # This is generated by ourselves. 
  ssl_dhparam /etc/ssl/certs/dhparam.pem;

  # intermediate configuration. tweak to your needs.
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
  ssl_prefer_server_ciphers on;

  # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
  add_header Strict-Transport-Security max-age=15768000;

  # OCSP Stapling ---
  # fetch OCSP records from URL in ssl_certificate and cache them
  ssl_stapling on;
  ssl_stapling_verify on;

  ## verify chain of trust of OCSP response using Root CA and Intermediate certs
  ssl_trusted_certificate /etc/ssl/certs/trusted.crt;

  # resolver 127.0.0.1;
  # https://support.comodo.com/index.php?/Knowledgebase/Article/View/1091/37/certificate-installation--nginx

  # The above was generated through Mozilla's SSL Config Generator
  # https://mozilla.github.io/server-side-tls/ssl-config-generator/

  # This is important for Rails to accept the headers, otherwise it won't work:
  # AKA. => HTTP_AUTHORIZATION_HEADER Will not work!
  underscores_in_headers on; 

  client_max_body_size 4G;
  keepalive_timeout 10;

  error_page 500 502 504 /500.html;
  error_page 503 @503;


  root /var/www/example/public;
  try_files $uri/index.html $uri @puma_example_docker_app;

  # This is a new configuration and needs to be tested.
  # Final slashes are critical
  # https://stackoverflow.com/a/47658830/1057052
  location /kibana/ {
      auth_basic "Restricted";
      auth_basic_user_file /etc/nginx/.htpasswd;
      #rewrite ^/kibanalogs/(.*)$ /$1 break;
      proxy_set_header X-Forwarded-Proto https;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_redirect off;

      proxy_pass http://kibana:5601/;

  }


  location @puma_example_docker_app {
    # https://www.digitalocean.com/community/questions/error-too-many-redirect-on-nginx
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    proxy_pass http://puma_example_docker_app;
    # limit_req zone=one;
    access_log /var/www/example/log/nginx.access.log;
    error_log /var/www/example/log/nginx.error.log;
  }

  location ~ ^/(assets|images|javascripts|stylesheets)/   {    
      try_files $uri @rails;     
      access_log off;    
      gzip_static on; 

      # to serve pre-gzipped version     
      expires max;    
      add_header Cache-Control public;     

      add_header Last-Modified "";    
      add_header ETag "";    
      break;  
   } 

  location = /50x.html {
    root html;
  }

  location = /404.html {
    root html;
  }

  location @503 {
    error_page 405 = /system/maintenance.html;
    if (-f $document_root/system/maintenance.html) {
      rewrite ^(.*)$ /system/maintenance.html break;
    }
    rewrite ^(.*)$ /503.html break;
  }

  if ($request_method !~ ^(GET|HEAD|PUT|PATCH|POST|DELETE|OPTIONS)$ ){
    return 405;
  }

  if (-f $document_root/system/maintenance.html) {
    return 503;
  }

  location ~ \.(php|html)$ {
    return 405;
  }
}

Current docker-compose file:

# This is a docker compose file that will pull from the private
# repo and will use all the images. 
# This will be an equivalent for production.

version: '3.2'
services:
  # No need for the database in production, since it will be connecting to one
  # Use this while you solve Database problems
  app:
    image: myrepo/rails:latest
    restart: always
    environment:
      RAILS_ENV: production
      # What this is going to do is that all the logging is going to be printed into the console. 
      # Use this with caution as it can become very verbose and hard to read.
      # This can then be read by using docker-compose logs app.
      RAILS_LOG_TO_STDOUT: 'true'
      # RAILS_SERVE_STATIC_FILES: 'true'
    # The first command, the remove part, what it does is that it eliminates a file that 
    # tells rails and puma that an instance is running. This was causing issues, 
    # https://github.com/docker/compose/issues/1393
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -e production -p 5000 -b '0.0.0.0'"
    # volumes:
    #   - /var/www/cprint
    ports:
      - "5000:5000"
    expose:
      - "5000"
    networks:
      - elk
    links:
      - logstash
  # Uses Nginx as a web server (Access everything through http://localhost)
  # https://stackoverflow.com/questions/30652299/having-docker-access-external-files
  # 
  web:
    image: myrepo/nginx:latest
    depends_on:
      - elasticsearch
      - kibana
      - app
      - ipmask
    restart: always
    volumes:
      # https://stackoverflow.com/a/48800695/1057052
      # - "/etc/ssl/:/etc/ssl/"
      - type: bind
        source: /etc/ssl/certs
        target: /etc/ssl/certs
      - type: bind
        source: /etc/ssl/private/
        target: /etc/ssl/private
      - type: bind
        source: /etc/nginx/.htpasswd
        target: /etc/nginx/.htpasswd
      - type: bind
        source: /etc/letsencrypt/
        target: /etc/letsencrypt/
    ports:
      - "80:80"
      - "443:443"
    networks:
      - elk
      - nginx
    links:
      - elasticsearch
      - kibana
  # Defining the ELK Stack! 
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
    container_name: elasticsearch
    networks:
      - elk
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
      # - ./elk/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - 9200:9200
  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.3
    container_name: logstash
    volumes:
      - ./elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      # This is the most important part of the configuration
      # This will allow Rails to connect to it. 
      # See application.rb for the configuration!
      - ./elk/logstash/pipeline/logstash.conf:/etc/logstash/conf.d/logstash.conf
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    ports:
      - "5228:5228"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.3
    volumes:
      - ./elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  ipmask:
    image: myrepo/proxy:latest
    command: "npm start"
    restart: always
    environment:
      - "NODE_ENV=production"
    expose:
      - "5050"
    ports:
      - "4430:80"
    links:
      - app
    networks:
      - nginx


# # Volumes are the recommended storage mechanism of Docker. 
volumes:
  elasticsearch:
    driver: local
  rails:
    driver: local

networks:
    elk:
      driver: bridge
    nginx:
      driver: bridge

Thank you very much!

Tags: node.js, docker, nginx, reverse-proxy, node-http-proxy

Solution


Whoaaaa. There was nothing wrong with the code!

The problem was that I was passing a bare IP address without prepending http:// to it! With http:// prepended, everything works!!

Example:

I was doing:

proxy.web(req, res, { target: '128.29.41.1', ws: true })

When in fact this is the answer:

proxy.web(req, res, { target: 'http://128.29.41.1', ws: true })
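For what it's worth, this also explains the original error: as far as I can tell, node-http-proxy runs a string target through url.parse, and without a scheme the host comes out empty, so the outgoing request falls back to localhost on the default port 80. That is exactly the ECONNREFUSED 127.0.0.1:80 shown above, even though the real target was reachable. A tiny guard (my own addition, not part of the original answer) makes sure addresses coming out of the database always carry a scheme:

const httpProxy = require('http-proxy');
const proxy = httpProxy.createProxyServer({});

// Guard: prepend a scheme when the stored address is a bare IP or hostname.
function withScheme(address) {
  return /^https?:\/\//.test(address) ? address : 'http://' + address;
}

// Inside the request handler:
//   proxy.web(req, res, { target: withScheme(ipFromDb), ws: true });
console.log(withScheme('128.29.41.1')); // -> http://128.29.41.1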
