Friday, November 7, 2014

Specify download directory at the time of downloading - Safari

When I download something, I like to save it straight into a relevant directory on my machine. When I moved to a Mac and to Safari, one of the most inconvenient things I noticed was that Safari downloads everything into the Downloads directory, and I could not choose the location at the time of downloading. (Even though I can change the default path from Downloads to some other directory, everything still goes to that one place, which was not what I was looking for.)

After going through quite a few sources, I finally found something reasonable. Since there seem to be many more people out there with the same problem, I thought it would be useful to share this simple tip. (Though the tip is simple, it was very useful to me, and I had to spend a considerable amount of time to find it.)

So, simply, if you need to specify the download directory at the time of downloading, instead of just clicking the link to download:

Right-click on the link and choose ‘Download Linked File As…’

This will do the trick.
Hope this makes your life easier with Safari ☺

Saturday, November 1, 2014

Run a jar file in command line

This is a very simple and short post on running a jar file in command line.
The simplest command that you can try is
java -jar [jarFileName].jar

But to run it simply with the above command, the jar needs to contain a MANIFEST file that specifies the main class.
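For reference, the manifest entry that matters here is Main-Class; a minimal MANIFEST.MF would look something like this ( com.example.MyApp is a hypothetical class name, use your own ):

    Manifest-Version: 1.0
    Main-Class: com.example.MyApp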

So how do you bundle a MANIFEST.MF into your jar?

You can add the following maven plugin to your pom.xml and get it done. Remember to add the main class name of your app here.
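The plugin configuration itself is not shown here, so the snippet below is only a sketch of the maven-jar-plugin adding the Main-Class entry ( com.example.MyApp is a hypothetical placeholder, replace it with your main class ):

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
        <configuration>
            <archive>
                <manifest>
                    <!-- written into META-INF/MANIFEST.MF as Main-Class -->
                    <mainClass>com.example.MyApp</mainClass>
                </manifest>
            </archive>
        </configuration>
    </plugin>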




How to run a jar with external dependencies?

If you need to use the simple command java -jar [jarFileName].jar to run an application which has external dependencies, you can bundle the dependencies it needs into the executable jar itself. Again, use the Maven plugin below to get it done.
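Again, the original snippet is not shown here; as a sketch, the maven-assembly-plugin with the jar-with-dependencies descriptor does this ( the main class is again a hypothetical placeholder ):

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-assembly-plugin</artifactId>
        <configuration>
            <archive>
                <manifest>
                    <!-- bundles the MANIFEST.MF with your main class -->
                    <mainClass>com.example.MyApp</mainClass>
                </manifest>
            </archive>
            <descriptorRefs>
                <!-- packs all dependencies into the executable jar -->
                <descriptorRef>jar-with-dependencies</descriptorRef>
            </descriptorRefs>
        </configuration>
        <executions>
            <execution>
                <phase>package</phase>
                <goals>
                    <goal>single</goal>
                </goals>
            </execution>
        </executions>
    </plugin>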




In the sketch above, the jar-with-dependencies descriptor is the part that bundles the external dependencies for you, and the archive/manifest section bundles the MANIFEST.MF file into that jar as well.

Thursday, October 30, 2014

Hadoop - pseudo distributed mode setup


You can easily switch your standalone hadoop setup to pseudo-distributed mode with the following changes:

  • In HADOOP_HOME/etc/hadoop/core-site.xml, add

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
        </property>
    </configuration>

  • In HADOOP_HOME/etc/hadoop/hdfs-site.xml, add

    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
    </configuration>
  • Make sure that you can connect to localhost with ssh.
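    If ssh localhost prompts for a password, one common way to enable passwordless login is to add your own public key to authorized_keys ( a sketch; adjust the key type and paths to your setup ):

        ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
        cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
        chmod 0600 ~/.ssh/authorized_keys
        ssh localhost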

Start and test your hadoop setup


  • First navigate to HADOOP_HOME
  • Format the hadoop file system
        bin/hdfs namenode -format
  • Start Name node and Data node
        sbin/start-dfs.sh

  • Now you should be able to browse the hadoop web interface through
        http://localhost:50070
        And your hadoop file system under
        Utilities > Browse the file system

  • Add /user/[username] to the hadoop file system
        hdfs dfs -mkdir /user
        hdfs dfs -mkdir /user/[username]
        You will be able to see these directories when you browse the file system now. And you can list
        the files with
        hdfs dfs -ls [path] ( ie: hdfs dfs -ls / )

  • Copy the input file to the hadoop file system
        hdfs dfs -put [local file path] [dfs path]
        ie: hdfs dfs -put myinput input
        and the file will be copied to /user/[username]/input

  • Run the application with
      hadoop jar [local path to jar file] [path to main class] [input path in dfs] [output location in dfs]
        ie: hadoop jar myapp.jar test.org.AppRunner input output

The result file part-r-00000 should be saved in the output directory of dfs ( /user/[username]/output )
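You can inspect the result directly from the terminal, for example ( assuming the default /user/[username] layout ):

        hdfs dfs -cat /user/[username]/output/part-r-00000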

Tuesday, October 28, 2014

Setup Hadoop in Mac

It is really simple to set up hadoop on a Mac. I tried the latest version available at the moment ( hadoop-2.5.1 ). You can set up hadoop in standalone mode or pseudo-distributed mode on your local machine. By following the steps below you will be able to set up hadoop on your machine in standalone mode.

( you need Java installed and ssh enabled beforehand to run hadoop )

  1. Download the version you need to install from here 
  2. Extract the downloaded pack
    The extracted directory will be your HADOOP_HOME ( ie: /Users/username/hadoopDir )
  3. Add HADOOP_HOME to .bash_profile
    export HADOOP_HOME=/Users/userName/hadoop-2.5.1
    export PATH=$PATH:$HADOOP_HOME/bin

  4. Source .bash_profile to apply the new changes
    source ~/.bash_profile

    Now you should be able to echo HADOOP_HOME in terminal ( echo $HADOOP_HOME )
  5. Make sure that you can ssh to localhost
    ssh localhost

Now your standalone hadoop setup is ready to use.
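As a quick sanity check ( assuming $HADOOP_HOME/bin is on your PATH after sourcing .bash_profile ), the following should print the version you just extracted:

    hadoop version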
I will share a sample MapReduce program I found ( the classic word count ) that you can use to test your setup.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: emits (word, 1) for every token in the input
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sums the counts emitted for each word
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Run sample

  • Create a jar from the sample ( a sketch of the compile and packaging commands appears after this list )
  • Create a text file whose words you want to count
  • Run
             hadoop jar [path_to_jar] [path_to_main_class] [path_to_input] [path_to_output]
             ie: hadoop jar wordCount.jar WordCount inputFile output
  • On a successful execution, the output directory will be created at the path you specified, and your result will be in output/part-r-00000
  • When you run the program again you need to remove the ‘output’ directory or give some other path for the output to be written.
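For the first step above, a sketch of compiling and packaging the sample could look like this ( it assumes the hadoop command is on your PATH and that WordCount.java is in the default package ):

        javac -classpath "$(hadoop classpath)" WordCount.java
        jar cf wordCount.jar WordCount*.class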
You will see that this is really simple. 
You can find steps on setting up hadoop in pseudo-distributed mode in this post

Monday, February 3, 2014

A simple overview of the main roles in App Factory and how they are involved in the application process.

With the multi-tenanted App Factory there are some changes to the user model. I am going to give you an idea of the default roles and the main actions those roles are responsible for in the application space in App Factory.

Admin
  • Creates a space for the organization in App Factory.
  • Can add organization level users and assign them roles
    Default roles would be Developer, DevOps, QA, Application Owner, CXO

Application Owner
  • Only the application owners can create applications.
  • After creating an application, he can assign people ( who have already been added to the organization by the organization admin ) to his application. Those people then become members of the application and play the roles ( developer, QA... ) assigned to them ( by the admin ) for that application.

Developer
  • He will see all the applications of which he is a member.
  • He can do git clone, push, trigger builds, etc. ( the work related to developing the application )

QA
  • Will see the applications that he is a member of.
  • Can perform the testing tasks ( testing the deployed artifacts, reporting bugs... ).

CXO

  • Can view dashboards.


Tuesday, January 28, 2014

Configure SAML2 Single Sign-On on WSO2 servers with WSO2 Identity Server.

By following this post you will find out how to configure WSO2 servers for SAML2 SSO with WSO2 Identity Server (IS) as the identity provider. It is really simple to configure SAML2 SSO for carbon servers.
I am going to refer to the server that needs SSO configured as the 'Carbon Server', and just by following the 2 steps below you can configure SSO on your carbon server with WSO2 IS.

1. Configure your carbon server to enable SSO

All the configuration required to enable SSO on your carbon server is in [carbon server home]/repository/conf/security/authenticators.xml

  • Enable the SSO authenticator in authenticators.xml ( a sketch of this configuration appears after this list )

( 1 ) Set disabled="false" on the authenticator

( 2 ) Set a service provider ID that is unique to your carbon server. You will need this value when configuring IS too.

  • Start your carbon server with a port offset ( the offset can be configured in carbon.xml )
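The authenticators.xml snippet itself is not shown above, so here is only a rough sketch of what the SAML2 SSO authenticator entry typically looks like; ( 1 ) corresponds to the disabled attribute, ( 2 ) to the ServiceProviderID parameter, and the values are placeholders you need to adapt:

    <Authenticator name="SAML2SSOAuthenticator" disabled="false">
        <Priority>10</Priority>
        <Config>
            <Parameter name="LoginPage">/carbon/admin/login.jsp</Parameter>
            <!-- ( 2 ) unique identifier for this carbon server -->
            <Parameter name="ServiceProviderID">carbonServer</Parameter>
            <Parameter name="IdentityProviderSSOServiceURL">https://localhost:9443/samlsso</Parameter>
        </Config>
    </Authenticator>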


2. Register a service provider in IS side
  • Start IS on its default port ( 9443 ) and log in
  • Follow Main > Manage > SAML SSO > Register New Service Provider
  • Add the unique identifier ( 2 ) as the Issuer
  • Provide Assertion Consumer URL with your carbon server info as https://[host name]:[port]/acs
  • Tick on Enable Response Signing and Enable Assertion Signing
  • Click on "Register"

Now you are done. You can simply try to log into your carbon server with SSO.
To verify
    - Try to access https://[host name]:[port]/carbon
    - This will direct you to the authentication endpoint of IdentityProviderSSOServiceURL specified in authenticators.xml
      ( here https://localhost:9443/authenticationendpoint )
    - Give the credentials and hit Sign in
    - You will be logged in to your carbon server