Hi friends, today I'm going to share a very interesting project that I am currently involved in: developing a Distributed Resource Broker. This is an application that is useful for managing large volumes of data in data-intensive applications. The intention of this application is to store and manage a huge number of large files in a distributed system. The resource broker will handle all the hard parts of the work, including replica management, and will present an abstract view in which the user can simply upload, download, delete and search the files he has uploaded.
The network is a peer-to-peer (P2P) network, and we are building on FreePastry, an existing P2P substrate, for this purpose. We will expose an API with the above-mentioned functionalities, so that a Java client or a web client can use our API and customize our service.
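To give a feel for the kind of API surface described above, here is a minimal sketch in Java. All the names (`ResourceBroker`, `InMemoryBroker`, and the method signatures) are my own illustrations, not the project's actual interface, and the stand-in implementation ignores replication and the P2P layer entirely:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical broker interface: the four operations mentioned above.
interface ResourceBroker {
    void upload(String name, byte[] data);
    byte[] download(String name);   // null if the file is not found
    void delete(String name);
    List<String> search(String keyword);
}

// Minimal in-memory stand-in, purely for illustration; the real broker
// would route these calls over the FreePastry network and manage replicas.
class InMemoryBroker implements ResourceBroker {
    private final Map<String, byte[]> files = new HashMap<>();

    public void upload(String name, byte[] data) { files.put(name, data); }
    public byte[] download(String name) { return files.get(name); }
    public void delete(String name) { files.remove(name); }

    // Naive substring match over stored file names.
    public List<String> search(String keyword) {
        List<String> hits = new ArrayList<>();
        for (String name : files.keySet()) {
            if (name.contains(keyword)) hits.add(name);
        }
        return hits;
    }
}
```

A client would then only ever talk to the `ResourceBroker` abstraction, while the distribution details stay hidden behind it.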
I'll share some more details about this Distributed Resource Broker in the future :)
Friday, April 22, 2011
Monday, April 18, 2011
Implementing a Lexical Analyzer and Parser
I was recently involved in implementing a lexical analyzer and a parser. These were implemented to tokenize and parse a defined language, "C-", with specified conventions and grammar. For this I used JavaCC, which is both a lexical analyzer generator and a parser generator. The intention of the lexical analyzer is to tokenize the given input file; it rejects the input file if it does not follow the conventions of the C- language. The tokens generated by the lexical analyzer are then fed to the parser, which checks the sequence of tokens and validates it against the C- grammar.
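To illustrate the tokenizing side, the token section of a JavaCC grammar file looks roughly like this. This is only a sketch of the general shape; the actual token set and conventions in LexAnalyzer.jj for C- may differ:

```
// Whitespace is skipped rather than turned into tokens.
SKIP : { " " | "\t" | "\n" | "\r" }

// Illustrative token definitions: keywords first, then numbers and identifiers.
TOKEN : {
    < INT : "int" >
  | < RETURN : "return" >
  | < NUM : (["0"-"9"])+ >
  | < ID : ["a"-"z","A"-"Z"] (["a"-"z","A"-"Z"])* >
  | < PLUS : "+" >
  | < SEMI : ";" >
}
```

Any input character sequence that cannot be matched by one of these definitions causes JavaCC's generated tokenizer to raise an error, which is how the "reject if not according to the conventions" behaviour falls out for free.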
Here I ended up with just two files: LexAnalyzer.jj, which tokenizes the input file and validates the generated tokens against the specified C- tokens, and MyParser.jjt, which specifies the grammar rules for the defined language and checks whether the input file has been written in accordance with the C- grammar. JavaCC, which generates all the underlying code, made my job much easier.
LexAnalyzer.jj
MyParser.jjt
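For the parser side, productions in a .jjt file are written as Java-like methods whose bodies are grammar expansions. The rules below are illustrative only and assume the token names from the previous sketch; the actual rules in MyParser.jjt follow the full C- grammar:

```
// A program is one or more declarations followed by end-of-file.
void Program() : {} {
    ( Declaration() )+ <EOF>
}

// Illustrative declaration rule: e.g. "int x;"
void Declaration() : {} {
    <INT> <ID> <SEMI>
}
```

If the token stream produced by the lexical analyzer cannot be derived from the start production, the generated parser throws a `ParseException`, which is how grammar validation is reported.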