Friday, July 22, 2016

Creating CSR with modern cryptography

I had a post regarding SSL installation at http://jettyapplicationserver.blogspot.com/2015/04/applying-ssl-certificate-to-nginx.html, but the procedure for CSR generation there is outdated. If you want to protect your website with modern cryptography, you may find this post useful.

By the way, in Chrome you may click the padlock icon in the address bar to see a website's SSL connection details.


In this example we will generate a private key named sudo2016.key and a CSR file named sudo2016.csr. For your own setup, replace these file names with whatever names you prefer.


Generate an RSA Key

openssl genrsa -out sudo2016.key 4096


Generate CSR

openssl req -out sudo2016.csr -key sudo2016.key -new -sha256
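
To double-check the request before sending it to the provider, you may inspect and verify the CSR with openssl (just a sanity check on my part; not required):

openssl req -in sudo2016.csr -noout -text -verify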


Pre-SSL Certificate Generation

The contents of the CSR will be supplied to the SSL provider, and the SSL provider will generate a number of certificates for you.

Name.com, for example, provides three certificates: the Server Certificate, the CA Certificate and the Root Certificate.

Different web servers have different ways of installing SSL certificates. Usually, the SSL providers give instructions for each web server.
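
Whichever provider you use, you can peek inside any certificate they send back before installing it. The file name below is just an example; use the actual certificate file you received:

openssl x509 -in server_certificate.crt -noout -subject -issuer -dates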





Sunday, June 26, 2016

Creating Start-up Script in Ubuntu

I installed Redmine on an Ubuntu server at Windows Azure and was successful in doing so. However, Azure did some maintenance on my server and had to restart it, so when I accessed Redmine again, Nginx redirected me to its Error 502 page. Upon checking, I confirmed that a restart had indeed happened, and I realized I had to create a start-up script to avoid this, since it is not only me using this Redmine installation but also my clients' testers and BAs.

Here's what my script looks like.

#!/bin/sh
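# Note: the LSB header below was not part of my original script; newer
# Ubuntu releases expect it when you register the script with update-rc.d,
# so you may want to include something like it (the "redmine" name is
# just an example).
### BEGIN INIT INFO
# Provides:          redmine
# Required-Start:    $remote_fs $syslog $network
# Required-Stop:     $remote_fs $syslog $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start and stop Redmine
### END INIT INFO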

REDMINE_START=/home/tsiminiya/redmine-3.3.0/run.sh
REDMINE_STOP=/home/tsiminiya/redmine-3.3.0/stop.sh
REDMINE_USER=tsiminiya
REDMINE_COMMAND=

executeCommand() {
    start-stop-daemon -S -u $REDMINE_USER -c $REDMINE_USER -o -x $REDMINE_COMMAND
}

case $1 in
    start)
        REDMINE_COMMAND=$REDMINE_START
        executeCommand
        ;;
    stop)
        REDMINE_COMMAND=$REDMINE_STOP
        executeCommand
        ;;
    *)
        echo "Usage: $(basename $0) (start | stop)"
        ;;
esac


You may modify the variables above to point to your own start and stop scripts. I did not include the contents of my run.sh and stop.sh here; what is important is the start-stop-daemon line. After constructing your start-up script:

1. Save the file at /etc/init.d/<filename-of-your-choice>.
2. sudo chmod +x /etc/init.d/<filename-of-your-choice>
3. sudo update-rc.d <filename-of-your-choice> defaults

Note: at #3, we don't specify the full path but just the script name.

Restart the server after Step 3 to test whether your start-up script is working.
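
Before rebooting, you can also test the script by hand. Assuming you saved it as redmine (an example name; use whatever file name you chose), the following should start and stop Redmine:

sudo service redmine start
sudo service redmine stop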






Thursday, April 23, 2015

Applying SSL Certificate to Nginx

I. Requirements


This procedure applies to Nginx on Ubuntu (or other Linux servers). For my own setup, I had the following:
  • Ubuntu Server with Nginx installed
  • SSH Client to access the server
Also, to be able to apply an SSL certificate to your server, you should already have purchased a domain and have access to your domain records via your domain provider's control panel.

II. Procedure

1. Generate Server's Private Key and Certificate Signing Request (CSR)

To generate a private key and CSR, you need to be on your server's SSH terminal and logged in as a sudoer to be able to execute the following command:


sudo openssl req -new -newkey rsa:2048 -nodes -keyout sudocode.key -out sudocode.crt


The above command will create sudocode.key (your private key) and sudocode.crt, which despite its .crt extension is actually the certificate signing request.

The command will ask you to provide the following information. Please replace the values with your own.

Country Name (2 letter code) [AU]:PH
State or Province Name (full name) [Some-State]:Rizal
Locality Name (eg, city) []:Antipolo
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Happy Birthday Company
Organizational Unit Name (eg, section) []:R&D
Common Name (e.g. server FQDN or YOUR name) []:sudocodesystems.com
Email Address []:rmaranan@sudocodesystems.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

You may skip the challenge password and optional company name fields by pressing the Enter key.

Copy sudocode.crt and sudocode.key to /etc/nginx/ssl.
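
If the ssl directory does not exist yet, commands like the following should do it (adjust paths and permissions to your liking):

sudo mkdir -p /etc/nginx/ssl
sudo cp sudocode.key sudocode.crt /etc/nginx/ssl/
sudo chmod 600 /etc/nginx/ssl/sudocode.key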

Warning: Make sure you keep a copy of your private key and CSR somewhere safe. Losing either one means you have to create them again. I don't know how much it would cost to re-upload a CSR to the CA provider, but it will probably be very inconvenient to ask your CA provider to re-issue the certificate.

2. Upload CSR to CA Provider

My CA provider is Comodo. They provide a free 90-day trial SSL certificate - not bad for a first-timer in SSL certification. Most CA providers offer only 30 to 60 days of free trial.

The CSR is the one we generated in Step 1 - /etc/nginx/ssl/sudocode.crt. We need to upload its contents to our provider.

On the terminal, we may display the contents of the CSR using the cat command.


cat /etc/nginx/ssl/sudocode.crt


The contents of the CSR should look like the following:


-----BEGIN CERTIFICATE REQUEST-----
MIIDUDCCArkCAQAwdTEWMBQGA1UEAxMNdGVzdC50ZXN0LmNvbTESMBAGA1UECxMJ
TWFya2V0aW5nMREwDwYDVQQKEwhUZXN0IE9yZzESMBAGA1UEBxMJVGVzdCBDaXR5
(more encoded data).......
Rq+blLr5X5iQdzyF1pLqP1Mck5Ve1eCz0R9/OekGSRno7ow4TVyxAF6J6ozDaw7e
GisfZw40VLT0/6IGvK2jX0i+t58RFQ8WYTOcTRlPnkG8B/uV
-----END CERTIFICATE REQUEST-----

We paste the CSR file contents into the Free SSL Certificate request form at Comodo.




3. Domain Validation


Domain Validation in Comodo can be done in several ways.


The easiest and most convenient for me is through the CSR hash, which is configured in the domain provider's control panel. My domain provider is EApps.

We add a CNAME DNS entry.

Comodo gives two hash values: one created with MD5 and the other with SHA-1. The CNAME DNS entry should look as shown below:

<Value of MD5 hash of CSR>.yourdomain.com. CNAME <value of SHA1 hash of CSR>.comodoca.com
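
After adding the record, you can check that it has propagated using dig (the host name below follows the same placeholder pattern as above):

dig <Value of MD5 hash of CSR>.yourdomain.com CNAME +short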



4. Set-up SSL Certificate at Nginx

4.1. Create Certificate Bundle File

Comodo (or your own CA Provider) will send you the certificates you need to verify your server's identity. 

Here are the sample certificates sent by Comodo:

  • Root CA Certificate - AddTrustExternalCARoot.crt
  • Intermediate CA Certificate - COMODORSAAddTrustCA.crt
  • Intermediate CA Certificate - COMODORSADomainValidationSecureServerCA.crt
  • Your Free SSL Certificate - sudocodesystems_com.crt
We need to create a certificate bundle by putting the contents of all the given certs in one file. Note that they must be concatenated in the reverse of the order listed above: the first cert to include is your own SSL certificate and the last one is the Root CA certificate. We may use the cat command again for this.


cat sudocodesystems_com.crt COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > /etc/nginx/ssl/sudocode.crt


Note that you might have to log in as root to execute the command above.
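
To confirm that your own certificate ended up first in the bundle, you can print the subject of the first certificate in the file (openssl x509 reads only the first PEM block it finds):

openssl x509 -in /etc/nginx/ssl/sudocode.crt -noout -subject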


4.2. Configure Nginx

Your nginx server configuration may look like the following:

server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        root /usr/share/nginx/html;
        index index.html index.htm;

        server_name your_domain.com;

        location / {
                try_files $uri $uri/ =404;
        }
}
After applying SSL, it should look like the following:

server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        listen 443 ssl;

        root /usr/share/nginx/html;
        index index.html index.htm;

        server_name sudocodesystems.com;
        ssl_certificate /etc/nginx/ssl/sudocode.crt;
        ssl_certificate_key /etc/nginx/ssl/sudocode.key;

        location / {
                try_files $uri $uri/ =404;
        }
}
Notice we added the following above:

listen 443 ssl;
server_name sudocodesystems.com;
ssl_certificate /etc/nginx/ssl/sudocode.crt;
ssl_certificate_key /etc/nginx/ssl/sudocode.key;


When the configuration is done, restart Nginx and access your website in your favorite browser.
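
On Ubuntu, I suggest validating the configuration before restarting; something like the following should work (the service command may differ on other distributions):

sudo nginx -t
sudo service nginx restart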

Thursday, June 12, 2014

Problem with Java System Prefs and User Prefs

If you are running on Linux and you find any of the following WARNING messages in your Java application or web application logs:

  • FileSystemPreferences syncWorld Couldn't flush user prefs: java.util.prefs
  • Could not create system preferences directory. System preferences are unusable
  • Could not lock system prefs. Unix error code 0
or anything else regarding Java system prefs and user prefs, the reason is that the JDK is installed on your machine but the installer wasn't able to set up the system prefs and user prefs directories properly.

Here's the solution suggested by a user at this link (you will likely need to run the commands as root): https://groups.google.com/forum/#!topic/xnat_discussion/uOd-YyuBhCQ

chmod 755 /etc/.java
chmod 755 /etc/.systemPrefs
touch /etc/.java/.systemPrefs/.system.lock
touch /etc/.java/.systemPrefs/.systemRootModFile
chmod 544 /etc/.java/.systemPrefs/.system*
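
If the /etc/.java/.systemPrefs directory does not exist on your machine yet, you may need to create it first before running the commands above (this step is my own addition, not from the linked thread):

sudo mkdir -p /etc/.java/.systemPrefs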


Everything went well for me afterwards. To be specific, I'm running a web application in Jetty from my IDEA workspace.

I hope this can be useful in the future.

Tuesday, December 10, 2013

Changing Native Jenkins Root Context

I recently had a requirement to change our Jenkins URL from http://our.webdomain:8080 to http://our.webdomain:8080/jenkins. The application is installed on Ubuntu. I read the script at /etc/init.d/jenkins and found out that this can easily be done by modifying /etc/default/jenkins.

The solution is just to set PREFIX=/jenkins and add --prefix=$PREFIX to JENKINS_ARGS.

# servlet context, important if you want to use apache proxying  
PREFIX=/jenkins

# arguments to pass to jenkins.
# --javahome=$JAVA_HOME
# --httpPort=$HTTP_PORT (default 8080; disable with -1)
# --httpsPort=$HTTP_PORT
# --ajp13Port=$AJP_PORT
# --argumentsRealm.passwd.$ADMIN_USER=[password]
# --argumentsRealm.roles.$ADMIN_USER=admin
# --webroot=~/.jenkins/war
# --prefix=$PREFIX

JENKINS_ARGS="--webroot=/var/cache/jenkins/war --httpPort=$HTTP_PORT --ajp13Port=$AJP_PORT --prefix=$PREFIX"
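
After editing /etc/default/jenkins, restart Jenkins so the new prefix takes effect. On Ubuntu, something like the following should do; the curl check simply confirms the new context responds:

sudo service jenkins restart
curl -I http://localhost:8080/jenkins/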

Saturday, November 30, 2013

GitBlit at Nginx Issue on Pushing

I recently bumped into a problem with Gitblit, which is deployed on a Jetty instance on my public virtual machine. I got an error when pushing a change to my repository: "File too large to upload". This is error 413. I first suspected Jetty, but I found out that the issue is in Nginx. It's not actually a bug; I just needed to do some tweaking to my Nginx installation so I wouldn't get the error anymore.


Client Max Body Size (client_max_body_size)

The default value of client_max_body_size is small (1m), so any request body larger than that gets rejected with error 413. Setting this parameter to a higher value allows uploads up to that new limit.

http {

        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##

        gzip on;

        gzip_disable "msie6";

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # nginx-naxsi config
        ##
        # Uncomment it if you installed nginx-naxsi
        ##

        #include /etc/nginx/naxsi_core.rules;

        ##
        # nginx-passenger config
        ##
        # Uncomment it if you installed nginx-passenger
        ##

        #passenger_root /usr;
        #passenger_ruby /usr/bin/ruby;

        client_max_body_size 300M;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

After setting my Nginx's client_max_body_size to 300M, I no longer get the error 413.
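
As a side note, client_max_body_size does not have to live in the http block. If you prefer a narrower scope, you may set it only for the Gitblit location, roughly like the sketch below (an alternative I did not use myself):

server {
        ...
        location ^~ /gitblit/ {
                client_max_body_size 300M;
                ...
        }
}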

That's all! 

Proxy Pass Gitblit and Nexus in Nginx

I did this set-up several weeks ago. I just want to share it with anyone who's looking for steps to set up the same pool of technologies on their machine.

I had been using Apache 2 as my web server for more than a year, but I still didn't understand how to set it up. I'm not sure if I'm just too slow for Apache 2 or if its documentation simply lacks details. When I started using Nginx, I thought I would never go back to Apache 2, and that turned out to be true. Nginx is a lot easier to understand, and it only took me a short time to set up and configure my server again.

What I will show is how I proxy pass my Gitblit and Nexus applications.

Nginx Location Patterns

Before I show you how I proxy pass my applications, please read the following excerpt from the Nginx documentation.

location  = / {
  # matches the query / only.
  [ configuration A ] 
}
location  / {
  # matches any query, since all queries begin with /, but regular
  # expressions and any longer conventional blocks will be
  # matched first.
  [ configuration B ] 
}
location /documents/ {
  # matches any query beginning with /documents/ and continues searching,
  # so regular expressions will be checked. This will be matched only if
  # regular expressions don't find a match.
  [ configuration C ] 
}
location ^~ /images/ {
  # matches any query beginning with /images/ and halts searching,
  # so regular expressions will not be checked.
  [ configuration D ] 
}
location ~* \.(gif|jpg|jpeg)$ {
  # matches any request ending in gif, jpg, or jpeg. However, all
  # requests to the /images/ directory will be handled by
  # Configuration D.   
  [ configuration E ] 
}

You may read this part at http://wiki.nginx.org/HttpCoreModule.

Nginx Configuration Folders

I'm using Linux, so my installation of Nginx might be a little different from one on Windows. But anyway, the way to configure Nginx should still be the same. The following is the content of the Nginx configuration folder (/etc/nginx):

total 80
drwxr-xr-x   5 root root  4096 Nov 12 06:24 ./
drwxr-xr-x 131 root root 12288 Nov 24 00:15 ../
drwxr-xr-x   2 root root  4096 May 10  2013 conf.d/
-rw-r--r--   1 root root   898 Apr 29  2013 fastcgi_params
-rw-r--r--   1 root root  2258 Apr 29  2013 koi-utf
-rw-r--r--   1 root root  1805 Apr 29  2013 koi-win
-rw-r--r--   1 root root  2085 Apr 29  2013 mime.types
-rw-r--r--   1 root root  5287 Apr 29  2013 naxsi_core.rules
-rw-r--r--   1 root root   287 Apr 29  2013 naxsi.rules
-rw-r--r--   1 root root   222 Apr 29  2013 naxsi-ui.conf
-rw-r--r--   1 root root  1644 Nov 27 08:03 nginx.conf
-rw-r--r--   1 root root   131 Apr 29  2013 proxy_params
-rw-r--r--   1 root root   465 Apr 29  2013 scgi_params
drwxr-xr-x   2 root root  4096 Nov 26 23:48 sites-available/
drwxr-xr-x   2 root root  4096 Nov 24 00:27 sites-enabled/
-rw-r--r--   1 root root   532 Apr 29  2013 uwsgi_params
-rw-r--r--   1 root root  3071 Apr 29  2013 win-utf

The main configuration file here is nginx.conf. If you open this file, you will find a few configuration blocks, but we will focus on the http configuration since we are talking about web applications here. Note that Nginx can also proxy other things, such as SMTP.

http {

        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##

        gzip on;

        gzip_disable "msie6";

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # nginx-naxsi config
        ##
        # Uncomment it if you installed nginx-naxsi
        ##

        #include /etc/nginx/naxsi_core.rules;

        ##
        # nginx-passenger config
        ##
        # Uncomment it if you installed nginx-passenger
        ##

        #passenger_root /usr;
        #passenger_ruby /usr/bin/ruby;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

Notice the last line with the keyword include. This statement pulls in additional configuration from the files under the folder /etc/nginx/sites-enabled.

Right after installing Nginx, you will find a file named default under /etc/nginx/sites-enabled. This file contains a server block configuration like the following:

server {
     listen 80;
     server_name my.server.com;

     location / {
          root /usr/share/nginx/html;
          index index.html index.htm;
     }
}

By calling include in nginx.conf, the effective configuration will be something like the following:

http {
     ...
     ...
     server {
          listen 80;
          server_name my.server.com;

          location / {
               root /usr/share/nginx/html;
               index index.html index.htm;
          }
     }
}

For my purposes, I removed the default file and created my own configuration file named myserver at /etc/nginx/sites-enabled. Here are its contents:

server {
        listen          80;
        server_name     my.server.com;

        location / {
                proxy_pass           http://myserver.azurewebsites.net;
        }

        location ^~ /gitblit/ {
                proxy_pass           http://localhost:18080;
                proxy_set_header     X-Real-IP $remote_addr;
                proxy_set_header     X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header     Host $http_host;
        }

        location ^~ /nexus/ {
                proxy_pass           http://localhost:18090;
                proxy_set_header     X-Real-IP $remote_addr;
                proxy_set_header     X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header     Host $http_host;
        }

        location ^~ /application3/ {
                proxy_pass           http://localhost:18600;
                proxy_set_header     X-Real-IP $remote_addr;
                proxy_set_header     X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header     Host $http_host;
        }

        location ^~ /application4/ {
                proxy_pass           http://localhost:18700;
                proxy_set_header     X-Real-IP $remote_addr;
                proxy_set_header     X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header     Host $http_host;
        }
}

I have several location blocks (five are shown above). All of them point to the respective Jetty server instances that I created some weeks ago, except for one: the root location / points to a Windows Azure website that I recently created. I keep that application separate for reasons I will not go into here. All of them are accessed via Nginx on port 80.
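
If you prefer to keep configuration files in /etc/nginx/sites-available (the more common convention), you can enable one with a symlink and then reload Nginx; the myserver name below follows my example above:

sudo ln -s /etc/nginx/sites-available/myserver /etc/nginx/sites-enabled/myserver
sudo nginx -t
sudo service nginx reload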

That's all. I hope you get something from here.
Thanks for reading.