
HBase 0.96 + Eclipse + Maven

Since HBase-4336 (and HBase 0.96) the HBase source code has been split into multiple maven modules.
This post is not tied to a specific operating system: you can follow these steps on Linux or Windows.

0. Requirements

You need a JDK, Eclipse, and a Subversion client (Maven support inside Eclipse is installed in step 2).

1. Check out the sources

Use your favorite Subversion client to check out the HBase source code:

$ svn checkout http://svn.apache.org/repos/asf/hbase/trunk hbase

 (see http://hbase.apache.org/source-repository.html for more details)



2. Install M2Eclipse plugin

  • Select the menu : Help / "Install New Software"
  • In the 'Work with' field type : http://download.eclipse.org/technology/m2e/releases (press Enter)
  • Select m2e - Maven Integration for Eclipse

3. Import HBase source code

Select File - Import... - Maven / Existing Maven Projects and pick the directory where the sources were checked out in step 1:


Some Java sources need to be generated: right-click the hbase project, choose Run As, and select "Maven generate-sources":
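The same step can also be run from a terminal (a sketch, assuming Maven 3 is on your PATH and you are in the directory checked out in step 1):

```shell
# Generate the missing Java sources for all HBase modules;
# refresh the projects in Eclipse (F5) afterwards so it picks them up
cd hbase
mvn generate-sources
```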

4. Create a Run configuration

Create a new run configuration, name it 'HBase (start)', select the hbase-server project, and set org.apache.hadoop.hbase.master.HMaster as the main class:


In the Arguments tab, add the program argument start:

Give it a try: click the Run button:


You can also try the HBase web interface at http://localhost:60010:
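From a terminal you can quickly check that the web interface is up (a sketch: 60010 is the default master info port, and the status page path may differ between HBase versions):

```shell
# Fetch the master status page; an HTTP 200 response
# means HMaster started and its web UI is listening
curl -i http://localhost:60010/master-status
```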


5. Create HBase Shell Run configuration

Create a new Run configuration, set the Name to 'Shell', and select org.jruby.Main as the main class:
 In the Arguments tab:
  1. Add the path to the bin/hirb.rb file as the program argument
  2. Set the Java system property hbase.ruby.sources to the src/main/ruby path (e.g. -Dhbase.ruby.sources=D:\HBASE\hbase-trunk\hbase-server\src\main\ruby)
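Put together, the two fields of the Arguments tab look like this (a sketch; replace the /path/to/hbase placeholders with your own checkout location):

```shell
# Program arguments: the JRuby script that implements the HBase shell
/path/to/hbase/bin/hirb.rb

# VM arguments: tell hirb.rb where the HBase Ruby sources live
-Dhbase.ruby.sources=/path/to/hbase/hbase-server/src/main/ruby
```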
