Tuesday, December 13, 2011

Doing More Cool Things with WattDepot



Alright, since it’s probably taken all the way from the last time until now to finish reading straight through the last post (the technical review of TNT’s Hale Aloha Command Line Interface), I suppose it’s time to tell you about the next steps my team and I have taken with this project. Keeping the original code base, we implemented further functionality on top of the product, to see whether it could be done, and with what effort and challenges. Our goal was to add three basic functions: set-baseline, monitor-power, and monitor-goal. Monitor-power outputs the current power readings of a tower or lounge every user-defined number of seconds until a character is entered, which terminates the task. Monitor-goal takes a baseline (defined by set-baseline) as well as a user-defined percentage reduction goal, calculates whether the tower or lounge is meeting that goal, and, like monitor-power, outputs the result every so many seconds as defined by the user.
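To give a rough idea of what monitor-power does, here is a minimal sketch of such a polling loop. The class name and the getCurrentPower stub are hypothetical stand-ins for the real WattDepot-backed command, not TNT's actual code:

```java
import java.io.IOException;

/**
 * Minimal sketch of a monitor-power style loop: print a reading every
 * few seconds until the user provides input. getCurrentPower() is a
 * stand-in for the real WattDepot client query.
 */
public class PowerMonitor {
  /** Stand-in for the real WattDepot query; returns a fake reading in watts. */
  static double getCurrentPower(String tower) {
    return 1000.0; // hypothetical value
  }

  public static void main(String[] args) throws IOException, InterruptedException {
    String tower = "Lehua";
    int intervalSeconds = 2;
    System.out.println("Monitoring " + tower + "; press Enter to stop.");
    while (System.in.available() == 0) { // stop once input arrives
      System.out.printf("%s: %.1f W%n", tower, getCurrentPower(tower));
      Thread.sleep(intervalSeconds * 1000L);
    }
  }
}
```

Note that on a typical console, System.in only sees input after a newline, which is exactly the carriage-return limitation discussed below.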

Although there were some issues with the implementation, such as getting the timer to work (especially terminating on any character being pressed), the majority of the issues arose from the fact that the bulk of the code-- the base itself-- was built by other developers. Fortunately for us, the code base was built very well, with many advanced and well-implemented techniques; it was much more robust and (in the long run) easier to work with than our own. However, because some of the implementations went over our heads, it was difficult to understand and get used to in the beginning. But I don’t think this was due to a failure of Prime Directive number 3 (an external developer being able to understand and install the system), because it was not caused by poor documentation, unreadable code, or anything of that sort. It simply took a little time to understand, which is probably normal when taking over someone else’s code. As mentioned in my technical review of TNT's product, the Three Prime Directives of Software Engineering were met: the product definitely accomplishes something useful, the documentation provides clear instructions for installing and using the system, and an external developer can easily (or at least relatively easily) develop and enhance the system.

As far as the Issue Driven Project Management went, just as with our own Hale Aloha Command Line Interface, everything was rather smooth sailing and we didn’t run into any major problems. Again, it was great in the beginning for portioning out the workload, but as we had to make modifications to our code, it became less of a necessity and a little more of a nuisance. This time it wasn’t as bad, though, because the command interface and processors had already been created, so we only had to implement the functions themselves (and their associated test classes).

Most of our enhancements to TNT’s Hale Aloha CLI work well, and I’m actually quite surprised at what we were able to accomplish, given the little amount of time we had to spend on it and the complexity of the tasks. One thing that doesn’t quite work properly is the timer behavior in monitor-power and monitor-goal. It took a while just to get the timer properly implemented, and we could not get the loop to terminate the way we wanted. We would have liked for it to end when the user types any character, but instead it only does so upon entering a carriage return. Another defect in the system is the incomplete check of whether a baseline was set before the monitor-goal command is run. In the current setup, the baseline is not tower-specific: if set-baseline is run only for Ilima, for example, the user can run monitor-goal on Lehua without encountering any errors. This is a minor error and could probably have been fixed rather easily had we had more time. Other than this, everything seems to be working well, and I’m very pleased with the quality of the enhancements that we were able to produce in just a couple of weeks. The test cases show that our system is pretty robust (other than the aforementioned flaw), and it is fully documented and working.
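For the tower-specific baseline fix, one possible shape (a minimal sketch with hypothetical class and method names, not the code we shipped) would be to key the stored baselines by tower name, so monitor-goal can refuse to run for a tower whose baseline was never set:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of tower-specific baselines. Names are hypothetical. */
public class BaselineStore {
  private final Map<String, Double> baselines = new HashMap<>();

  /** Records the baseline energy value for one tower. */
  public void setBaseline(String tower, double energy) {
    baselines.put(tower, energy);
  }

  /** True only if set-baseline was run for this particular tower. */
  public boolean hasBaseline(String tower) {
    return baselines.containsKey(tower);
  }

  /** Returns whether current usage meets a percent-reduction goal. */
  public boolean meetsGoal(String tower, double currentEnergy, double percentReduction) {
    if (!hasBaseline(tower)) {
      throw new IllegalStateException("No baseline set for " + tower);
    }
    double target = baselines.get(tower) * (1.0 - percentReduction / 100.0);
    return currentEnergy <= target;
  }
}
```

With this arrangement, running set-baseline only for Ilima and then monitor-goal on Lehua would fail fast instead of silently succeeding.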


Working with someone else's code base-- or anything else, for that matter-- is difficult and can be a headache-- especially in the beginning, when everything is so foreign-- but this process is beneficial in many ways. The obvious one is that it provides you with examples-- good and/or bad-- of implementation techniques, so you can learn new tricks if there are any and fix the places that are substandard. Secondly, however, being put in an environment that not only requires you to work as a team but also has you switching products to compare or improve them forces you to write better code. Especially when you know that others' eyes are going to be looking at the code you wrote, you want it to be clean, correct, and the best it can possibly be. This includes documentation and code formatting, which are things that are often left attention-deprived when you know that only one set of eyes will be looking at the code. Plus, there's the pride aspect of providing really good code-- perhaps implementing a cool trick that you learned that makes a certain function simple, correct, and efficient; you can show it off in a "look at what I know how to do" kind of way, and that makes things a bit more fun, rather than "it's messy, but... ah, it works, so it's good enough for me." Friendly, harmless "competition"-- if I may even call it that-- never hurts anyone, and is one way to intrinsically encourage good code in a pain-free way!

Friday, December 2, 2011

TNT's Explosive Performance

Now that my group and I have finished our Hale Aloha WattDepot implementation, I'm curious to see how other groups tackled the same tasks, and what level of quality their products achieve. So I decided to take a look at the "TNT" group's Hale Aloha WattDepot implementation. In assessing the group's product, I believe it is befitting to review it under the Three Prime Directives of Software Engineering. By doing so, I can gain a broad understanding of the many components of the product, and also sharpen my knowledge of what the Three Prime Directives are and what I need to keep in mind whenever developing a piece of software, to make sure that the product is useful, easy to install, and easy to understand and enhance.

Therefore, to get started with the First Prime Directive: "Does the system accomplish a useful task?", I took a look at the overall functionality of the product to see if all of the components were there. The group was able to retrieve the current power of a tower/lounge, get the energy for a tower/lounge for a specific date, get the energy usage for a tower since a given date, and finally rank the towers from least to most energy consumed between two given dates. These are all functions that we implemented in our own system as well, and they are useful tasks indeed.

The Second Prime Directive is: "Can an external user successfully install and use the system?"
Taking a look at the homepage, I found very precise information about what the system does, although to find sample input and output I had to go to the User Guide wiki page. That page contains sample input and output for the system along with simple instructions on running it. On the download page, the group offers the option to download an executable jar file for those simply wishing to run the application. In addition, they offer the executable jar file along with their source files, so that external users do not need to compile the system before executing it. The distribution file contains a version number along with a timestamp, so external users can monitor changes and see how long ago they took place.
In order to test how robust the system was, I input various commands, both valid and invalid, to see how the product handles them. Here are some of the commands I input, with a description of what was output:          

- “help”: gives the proper output.
- “current-power Lehua”: gives the proper output of Lehua’s power.
- “current-power lehua”: since the tower name is case-sensitive, gives a bad syntax error.
- “Current-power Lehua”: since the command checker is also case-sensitive, gives a bad command error.
- “daily-energy Lehua 2011-11-25”: gives the proper output of Lehua’s daily energy for Nov. 25, 2011.
- “daily-energy Lehua 2012-11-30”: since the date is in the future, gives a bad date error.
- “daily-energy Lehua 11-25-11”: since the date isn't properly formatted, gives a bad syntax error.
- “daily-energy Lehua 11-25-2011”: also gives a bad syntax error, because there is only one accepted date format.
- “daily-energy Lehua 2011-10-20”: gives a date error stating that the date should be after November 22.
- “energy-since Lehua 2011-11-23”: gives the proper output of the amount of energy used in Lehua since November 23.
- “energy-since Lehua 2011-12-25”: gives a date error.
- “rank-towers 2011-11-23 2011-11-25”: gives the proper output ranking the towers from least to most.
- “rank-towers 2011-11-25 2011-11-23”: since it recognizes that the start date is after the end date, gives both a date error and a bad syntax error.
- “quit”: properly closes the application.

Overall, the system does very well and is very useful. It did not crash when given bad inputs, and it provided correct and useful outputs when given valid commands.
 
Finally, the Third Prime Directive: "Can an external developer successfully understand and enhance the system?" was reviewed, and for this, I took a look at the Developer's Guide wiki page to see if it provides sufficient instructions on building the system from its sources. TNT's Developer’s Guide wiki page does indeed provide clear instructions on how to build the system once the sources are downloaded. Quality assurance is automated by running "ant -f verify.build.xml", which runs Checkstyle, PMD, and FindBugs. The wiki instructs the developer to run this command before making any modifications to the code, to ensure the original code doesn't have any errors. In terms of coding standards, the wiki page mentions the Eclipse format and provides a link to the XML document for developers to use. The application wiki also mentions that the project follows Issue Driven Project Management and Continuous Integration. Continuous Integration is a practice in which smaller pieces of code are verified in small increments instead of at the end of construction. Oftentimes each integration is verified automatically (in our case using the “ant” tool), which helps the group find out who broke the build, when, and with what change if there is a build failure. Issue Driven Project Management is a practice in which the overall build is broken down into smaller issues, each usually about two days' worth of work. For this project, we submitted issues in Google Code and used that service to monitor and add our issues. A link is provided to the continuous integration server associated with the application. The only thing the Developer’s Guide is missing is how to generate the JavaDoc documentation. The information contained in the Developer’s Guide is very concise and doesn’t contain any useless information.
 
 In addition to the Three Prime Directives, a number of other components must be reviewed in order to get a complete understanding of the quality of not only the functionality, but also the documentation and the process and progress of the product. The JavaDoc, Build System, and Coding Standards, as well as the Issue Driven Project Management and Continuous Integration, must also be reviewed; for TNT, the results of reviewing these components are as follows:

JavaDoc Review
            I was able to generate the JavaDoc documentation successfully. After reading through the various class documentations, it is clear that the entire application is tied together by the Processor class. The packages are named appropriately: the Main class is contained in a higher-level directory, the command processor is in a subdirectory of the main directory, and the various commands are also in a subdirectory of the main directory.

Build System Review
            Using ant with the provided build.xml, I was able to build the system without any errors. The author(s) of each piece of the code are listed in the source code, so external developers know who wrote each piece. In terms of coverage, after running JaCoCo, the project doesn’t have 100% coverage. In some cases, such as the Main, Processor, and InvalidArgumentException classes, the coverage is 0%, mainly because test cases were not made for those classes. Elsewhere, the sub-100% coverage is due to the fact that none of the test cases assert on invalid inputs. The current set of test cases tests the application pretty well for valid inputs; however, because of the lack of test cases for invalid inputs, there is a chance that an external developer's enhancement could throw some unexpected exception, and the developer may not know why it is being thrown.

Coding Standards Review
            What I found to be really nice is that each of the command classes was formatted in the same way (because of the interface class). What I found very surprising at the same time was that hardly any private methods were used, but this is okay, since WattDepot already provides a lot of the methods. The only files I had small problems reading were the Main class and the CommandManager class. In the Main class, the only thing I had issues with was figuring out when the IOException would be thrown; documentation on what would throw it would have been nice. The CommandManager class was beautifully written using some advanced Java techniques that I had not come across before, and because of this I had some trouble reading it and had to take some time to research a few things. This was not a problem caused by the author.

Issues Page Review
            The issues page associated with this project makes it very clear who worked on which part of the system. It seems that a different person worked on different classes in the system, and because of this, if an external developer had any questions about the system, it would be very easy to contact the author(s) who worked on that piece. In this group of three, it seems that two of the developers did more of the work. The other group member worked on a single command class, its test case, the Main class, and the Help class. Most of these classes were fairly short, and this is clearly reflected in the number of issues posted by that author. However, the overall quality of these classes is great, so it is clear that this member worked hard as well, but maybe just didn't have the experience level of the other two.

Continuous Integration Server Review
            The Jenkins continuous integration server shows the history of failed and successful builds. Other than at the very beginning of the project, between builds three and four, where fixing the project took about a day and a half, all failed builds were fixed within an hour; most were even fixed in as little as 10-15 minutes. As for commits associated with issues, about eight out of ten commits were related to an existing issue, just short of the nine-out-of-ten target.
 
Conclusion
            From this extensive review of TNT's Hale Aloha command line interface system, it is safe to say that the Three Prime Directives of Software Engineering were fulfilled. The product can successfully be developed and enhanced by new programmers, and everything was well organized and presented. This group definitely had some features and functionality that were more advanced than my own group's, but that is a good thing, because I can learn from this and apply my newly discovered techniques to other projects in the future. TNT's product was extremely robust and very impressive, and I see it as a model to remember and refer to when implementing similar features. Although this review was very lengthy and time-consuming, I learned a lot, and felt that the TNT group did an excellent job working together and putting together a solid product.

Monday, November 28, 2011

Issue, fix, issue, fix,...

In my latest experience, I got to know how to work in a completely new environment-- that is, working with others rather than by myself, which is what I have more-or-less been doing up until now. Now, you may think, "what's the big deal with working with others? People do that all the time, even with computer-based projects." This is very true, but the question is, "are they doing it correctly?"-- as in effectively and efficiently. They may or may not be, but I definitely got exposure to a great process for collaborating as a team on a project involving continuously-changing code, as well as an efficient way to utilize it. The process is called Continuous Integration, and the way to use it is through Issue-Driven Project Management.

Basically, Continuous Integration (CI) keeps all the group members up-to-date on the status of the project, notifying members immediately when something is "broken", and when it is "fixed". Issue-Driven Project Management (IDPM) is exactly what it says-- it's a system for managing a project by utilizing "issues". Issues are tasks set up by project members for designated members in order to 1) make sure that members aren't going, "uhhh, what should I work on now?", 2) make sure that members aren't working on the same part of the project at the same time, and 3) keep track of who did what. For our project, we used a combination of SVN + Jenkins (CI) and Google Project Hosting (IDPM) to create a project where we implement the WattDepot application-- the same one I introduced to you last time with my WattDepot Katas.

One thing that was very nice about using Continuous Integration and Google Project Hosting was the sharing of wealth; group members could work on portions of the project that were in their respective areas of expertise, or could even help fix something that another member broke, helping that person to understand the problem and become a better programmer.

On the other hand, having multiple people work on the same project has its downsides, especially when it comes to consistency. One person's thought process, style, and knowledge is different from another's, and the same can be said about coding. No two people code in exactly the same way. So while I used regular expressions to check whether a date was valid, another person used Lists and String matching to verify the location specified. Neither of us is more correct (although speed, code length, and accuracy might differ), and so it was extremely difficult to resist the temptation to try to correct someone else's program and adjust it to fit your own coding style.
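As an illustration of the regular-expression approach to the date check (a sketch along the same lines, not the exact pattern from our project), a yyyy-mm-dd shape test might look like:

```java
import java.util.regex.Pattern;

/** Sketch of a regex-based date-shape check: accepts only the
 *  yyyy-mm-dd shape. It does not validate month/day ranges,
 *  which would be handled separately. */
public class DateCheck {
  private static final Pattern DATE = Pattern.compile("\\d{4}-\\d{2}-\\d{2}");

  /** Returns true if the whole input matches the yyyy-mm-dd shape. */
  public static boolean looksLikeDate(String input) {
    return DATE.matcher(input).matches();
  }
}
```

The List-and-String-matching approach to verifying a tower name would accomplish its own check just as validly, which is exactly the stylistic divergence described above.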

But overall, we were able to work together with this framework and complete this project successfully. Our product is a command line interface program that users can use to view various data from the Hale Aloha residence halls at the University of Hawaii. For now, there are four basic functions for viewing the data-- Current Power, Daily Energy, Energy Since, and Rank Towers-- but the system can easily be expanded to accommodate more in the future. Entering "current-power" followed by a tower name will output the latest power readings for that tower. Entering "daily-energy [tower] [yyyy-mm-dd]" (without brackets, and where yyyy-mm-dd is a date) will give the energy consumption for the specified tower on the specified day. Similarly, "energy-since [tower] [yyyy-mm-dd]" will output the energy consumption for the specified tower, but as the total consumption from the specified date until the current time. Finally, entering "rank-towers [yyyy-mm-dd] [yyyy-mm-dd]" will output a list of all the towers sorted by their total energy consumption for the interval given. Additionally, the command line interface program supports the commands "help", which lists and explains the available commands, as well as "quit", which is used to exit the program. Until the user enters "quit", the program remains in a loop, so you can keep running commands and viewing data to your heart's content.
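The read-dispatch-repeat loop described above can be sketched roughly as follows. The class name and the stubbed responses are hypothetical placeholders standing in for the real WattDepot-backed command handlers:

```java
import java.util.Scanner;

/** Bare-bones sketch of the CLI loop: read a line, dispatch it,
 *  repeat until "quit". Handlers are stubs, not the real commands. */
public class CliLoop {
  /** Maps one input line to a response string. */
  static String dispatch(String line) {
    String[] parts = line.trim().split("\\s+");
    switch (parts[0]) {
      case "help": return "Available commands: current-power, daily-energy, energy-since, rank-towers, quit";
      case "current-power": return parts[1] + ": (latest power reading here)";
      case "quit": return "Goodbye!";
      default: return "Unknown command: " + parts[0];
    }
  }

  public static void main(String[] args) {
    Scanner in = new Scanner(System.in);
    System.out.print("> ");
    while (in.hasNextLine()) {
      String line = in.nextLine();
      System.out.println(dispatch(line));
      if (line.trim().equals("quit")) break;
      System.out.print("> ");
    }
  }
}
```

Keeping the dispatch logic in its own method, separate from the input loop, is also what makes this style of interface easy to unit test and to extend with new commands.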

I believe that while far from perfect, our project turned out to be one of rather high quality. Perhaps we could've structured it differently to allow for better testing, or enhanced it to make it more robust, but it does what it needs to, and is fully functional. Because we got to share our knowledge, we got to see various new methods of accomplishing certain tasks and use those techniques in our own components of the project, rather than using the same old sub-optimal ones that we have been using for the longest time. There are always challenges with working in groups, but in the end, if there are even just a few parts that turned out to be implemented in a better way due to the collaborative efforts of everyone in the group, then I think the project is successful, because each member takes away something learned and can apply it to future projects.

Tuesday, November 8, 2011

WattDepot to the Rescue!


This week, I’ll be sharing with all of you my experiences with WattDepot in the past few days. As you may already know from my previous post (and from other sources), Hawaii is not in a very good position right now in regards to energy reliance and the future prospects of its energy. That’s where WattDepot can be a stepping stone—several, in fact—to building up public awareness of energy as well as bringing Hawaii into a state of renewable energy usage rather than continued reliance on oil.

First off, WattDepot is an open-source project that uses electrical monitors to record and report data to a server, which can then be accessed through client programs and applications. Users can collect data from different locations and time periods, and even manipulate the data to get another perspective on it, such as sorting by consumption amounts.

Because there are so many features that WattDepot has to offer, there was only one way to tackle learning how to use the API and the many components of WattDepot-- Katas, of course! The katas I needed to implement in order to become comfortable with WattDepot were:
  1. Connecting to a server and listing the sources hooked up to it, along with their descriptions.
  2. Calculating and listing the latency (how current the data is) of the sources, sorted by increasing latency.
  3. Displaying the hierarchical listing of the sources and subsources.
  4. Listing the energy consumption of each source from yesterday.
  5. Listing the highest power consumption recorded yesterday along with the time at which it was recorded.
  6. Calculating and listing for each source the average energy consumed over the past two Mondays.
Starting off with these katas was not too bad at all, because I was using a WattDepot simpleapp as a sort of template for the first two. The part that was tricky with kata #2 was the sorting by latency. I had to brush up on my Collections, generic data types, and Comparators, and implement an additional class called Data to help take care of the sorting. This made this kata take much longer than I expected-- maybe about an hour in total, whereas kata #1 took only a couple of minutes to complete.
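The Data-plus-Comparator arrangement for kata #2 looked something like this sketch (the class name matches my helper, but the fields and values are illustrative, not the exact code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

/** Sketch of the kata #2 helper: pair each source with its latency,
 *  then sort the pairs in increasing latency order. */
public class LatencySort {
  static class Data {
    final String source;
    final long latencySeconds;

    Data(String source, long latencySeconds) {
      this.source = source;
      this.latencySeconds = latencySeconds;
    }
  }

  /** Returns a new list sorted by increasing latency. */
  static List<Data> sortByLatency(List<Data> readings) {
    List<Data> sorted = new ArrayList<>(readings);
    Collections.sort(sorted, new Comparator<Data>() {
      @Override
      public int compare(Data a, Data b) {
        return Long.compare(a.latencySeconds, b.latencySeconds);
      }
    });
    return sorted;
  }
}
```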

Then came katas 3 and 4. My goodness, were these tough for me! Without much of a template to follow, I was stuck staring at the API for several hours trying to make sense of it. I could see what I wanted with the subsources and energy consumption, but I just did not know how exactly to get it. Perhaps it is a difficulty common to most programmers, or maybe it's just me, but I tend to have quite a difficult time understanding APIs (other than the Java API). It's like, "ok, I see you, and I want to get you. But how exactly do I do that?! And what kind of data type do you return?!" I guess I've been relying on the Java API too much... But finally, finally, after hours and hours (and hours and hours) of painfully staring at the computer screen (occasionally wanting to just hurl my laptop into the wall), I was able to figure out a way to parse through the list of subsources, figure out what the hierarchical ordering is logically supposed to be, and then implement a way to display it in a list. Phew! That only took 5 hours! Now on to kata 4... Kata 4 took about 3 hours to complete, because I was slightly warmed up to using the API, and I had already designed a Data class and a sorting class that I could tweak a little to get working with the energy consumption values. Figuring out the calendar dates and whatnot was a pain, but other than that, it was not too bad overall.
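The heart of the kata #3 solution boiled down to a recursion over subsources. Here is a stripped-down sketch of that idea, with a plain map standing in for the actual WattDepot source/subsource queries (the source names are invented for illustration):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of a hierarchical source printer: recurse through each
 *  source's subsources, indenting by depth. */
public class SourceTree {
  /** Stand-in for the real WattDepot subsource lookup. */
  static final Map<String, List<String>> SUBSOURCES = new HashMap<>();
  static {
    SUBSOURCES.put("Ilima", java.util.Arrays.asList("Ilima-A", "Ilima-B"));
  }

  /** Renders one source and, recursively, all of its subsources. */
  static String render(String source, int depth) {
    StringBuilder out = new StringBuilder();
    for (int i = 0; i < depth; i++) {
      out.append("  ");
    }
    out.append(source).append('\n');
    for (String child : SUBSOURCES.getOrDefault(source, Collections.<String>emptyList())) {
      out.append(render(child, depth + 1));
    }
    return out.toString();
  }
}
```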

I was already pretty much out of time by the time I got around to finishing kata #4, so currently I only have part of kata #5 implemented, and I did not even get to start on the last one. It was difficult to get these katas done in such a short amount of time, especially since the WattDepot application is one that I am not familiar with, so it took a while to even minimally understand how it works. Especially for me, because I know that I take a really long time to learn new things, more time is always very helpful. I probably could have paced out my scheduling for this assignment a little better, but sometimes you just get really crazy days where you can't do much of anything, and that happens to everyone, so you just have to live with the consequences and do better next time.

Through this exploration, I learned a lot about WattDepot—its capabilities, its potential, and just a few of its many features. WattDepot is an extremely neat and useful tool, and it is great for many applications. I can see it being used not only to monitor and report energy consumption and generation, but also using that information to analyze what is using a majority of the energy and where. This information, as well as many other findings can be used to inspire more reserved energy usage and help solve Hawaii’s energy crisis problem. Energy data manipulation may seem like only a concern for electricians and engineers who have to set up and monitor electrical components, but it can actually be constructed into user-friendly applications to be used by anyone to visually see what they are doing to their environment and their wallets. I sure hope to see a lot of people in the near future using applications as such to really think about their lifestyles and change for the better-- for their health, their finances, and their surroundings.

Tuesday, November 1, 2011

Sustainable Paradise

     Ahh the images of Hawaii: sandy beaches, blue skies, and-- what's this? Photovoltaics, wind turbines, and other renewable energy generators? Well, not so much yet, but hopefully in the years to come, Hawaii will not only be the epitome of paradise, but also of sustainable, renewable energy. For those who call Hawaii “home”, a less expensive, cleaner, sustainable source of energy is not only desirable for economic reasons but also for aesthetic reasons, because keeping Hawaii paradise-worthy is an important aspect of pride for us who are living here.
     I know what you're thinking-- “Hawaii is such a small and isolated place. Why should you guys care about renewable energy sources? The mainland and other bigger areas use more energy, so they should be the ones thinking about renewable energy.” Well, I agree that other areas should consider using more alternative energy sources, but just because Hawaii is small and isolated doesn't mean that renewable energy should be dismissed as an option. In fact, small places like Hawaii may have more reasons to transition towards alternative energy sources than larger areas, because here we face issues such as inefficient power generation, higher reliance on oil, and therefore higher price to pay for energy.
     Due to its small size and water-delimited major land masses, Hawaii does not have the large energy “supergrids” or even “grids” that the U.S. Mainland has to manage the power generation and distribution across the land. This results in having to rely on small, inefficient mini-grids in order to get power to all residents on all islands. Because of this, energy in Hawaii costs three to six times (or even more!) the amount that it does on the Mainland. Furthermore, over three-fourths of Hawaii's energy comes from high-cost oil, as opposed to the measly one percent that the U.S. Mainland relies on as their source of energy. If we do nothing to change our energy sources, Hawaii's expensive energy cost could continue to increase as time progresses.
     Another reason for Hawaii to look towards alternative energy sources is because of the high availability of many different sources of energy. Hawaii has easy access to solar, geothermal, wave, wind, hydroelectric, and many other natural sources due to its geography. In fact, there are so many options that would yield high energy output, that the entire state of Hawaii can run completely on its own generated clean energy. With such a promising possibility, what reason is there to not switch to these alternative energy solutions?
     Unfortunately, there are some negative aspects to converting to renewable energy sources, such as having to pay to set up the facilities and switch over to them. Taxes would increase, and therefore many people may oppose this, especially if they themselves aren't going to be reaping much of the benefits directly. Some organizations are actually against renewable energy sources, due to the cultural and environmental damage that the building of the structures causes to the surrounding area. But not switching to renewable energy sources will cause even more environmental damage in the end, as greenhouse gas pollution causes temperatures and sea levels to rise, greatly impacting Hawaii's landscape. And as for the cost issue, if we don't pay for cleaner energy now, we will certainly pay for it later, in the form of not only higher fuel and oil costs, but for everyday items as well, as prices will have to increase in order to compensate importing costs.
     So there are many reasons for Hawaii to really start getting its act together and really pushing for renewable energy sources. Everyone can and must do their part in order to make Hawaii a hospitable place that can be called home even decades into the future. Conservation of energy is something we can all do to help keep energy costs down, and supporting renewable energy can help us achieve the goal of a cleaner, sustainable paradise. This is our world-- our home. We should want to keep it a place where we can live free of worry, hazardous conditions, and high costs. It is our responsibility to take action and do what is right for our future.

Monday, October 24, 2011

5 Important Points of Software Engineering

While there are countless aspects of Software Engineering that are extremely important-- to the point where you could call many of them "the most important aspect of software engineering", I would like to share with you all just five of those points in the format of a mini quiz. See if you can answer all of them!

Questions:

  1. Why should you use annotations in Java even though they are not necessary?
  2. Automated quality assurance tools are also very important to programming effectively. The following code will cause which automated quality assurance tool (Checkstyle, PMD, or FindBugs) to report an error?: if (true) { x = 6; }
  3. What is JaCoCo used for, and what is one aspect to beware of when using it?
  4. Give an example of a good way to incorporate different types of software reviewing at various stages along the lifespan of a program.
  5. How and why would you recommend using an IDE to someone who normally codes using a text editor?
Think you got all the answers? Check out my answers below and see how much you know about these software engineering components!

Answers:
  1. (Why should you use annotations in Java even though they are not necessary?) Although the Java interpreter ignores them, annotations are useful in that they associate various "metadata" with elements within the program. They allow Java tools such as the compiler and Javadoc, and even automated quality assurance tools such as Checkstyle and PMD, to find errors in your code more effectively. Annotations are sort of like extra tools that help make sure that what you are writing is actually what you mean to write. For example, if you intend to override a superclass method but spell the name wrong or use different arguments, the @Override annotation will give you an error saying that the method does not in fact override a superclass method. After realizing this, you can fix the error in your code right away without having to ponder and search through your code later to figure out what is causing the problem.
  2. (Automated quality assurance tools are also very important to programming effectively. The following code will cause which automated quality assurance tool (Checkstyle, PMD, or FindBugs) to report an error?: if (true) { x = 6; }) This code will result in PMD reporting an error. Because PMD searches for possible bugs, dead and suboptimal code, overcomplicated expressions, and duplicate code, the above will be flagged as redundant; the condition inside the if statement is always true. It is syntactically correct and will not show up as a bug in the Java bytecode, so neither Checkstyle nor FindBugs will raise a concern for this statement.
  3. (What is JaCoCo used for, and what is one aspect to beware of when using it?) JaCoCo is a tool for automatically analyzing how many lines of code in a program are covered by the test cases written for that program. When using JaCoCo, it is important to keep in mind that coverage does not equal perfect testing. Although it is important to make sure that as many lines as possible are tested, just because you have 100% line coverage does not mean that your code will be unbreakable if the tests pass. You could very well have a test that runs through only 50% of the lines but is a much better test than one that runs through 100% of the code, because the test case itself is much more rigorous and exposes more flaws. So when testing code, use JaCoCo to get a sense of how many lines you have covered, and then work on making sure those lines are covered as well as possible. Then you can add more to the test, or make another test in the same way to cover more lines of code, and your program will be extremely robust!
  4. (Give an example of a good way to incorporate different types of software reviewing at various stages along the lifespan of a program.) Following the general software development process-- Requirements Specification, Design, Implementation (Coding), and Testing-- you should incorporate a reviewing process between each of these four stages. One example of how you could go about doing this is to start off with a Walkthrough after analyzing the requirements. Since there is no code or design produced yet, a basic check to make sure the idea is valid, along with thinking about other general issues, may be sufficient at this point. Then, after coming up with a software design, it would probably be good to perform a more in-depth Technical Review, since there will be more technical specifications at this point; this is a good way to check the design, find any defects, and resolve ambiguities before moving on to code the program. Another Walkthrough could perhaps be performed in addition, to informally check the overall integrity of the program. Finally, after the coding is done, some nitty-gritty Inspections ought to be performed, to really detect and remove defects. Technical Reviews and Walkthroughs may also be good to incorporate, but programs that are to be distributed for public use should definitely go through thorough Inspection to get rid of as many bugs as possible, even before moving on to test the code.
  5. (How and why would you recommend using an IDE to someone who normally codes using a text editor?) IDEs are made to do more work for you, so that you don't have to. They handle not only simple tasks such as closing open parentheses and braces and automatically indenting new lines of code, but also offer features such as auto-complete, allowing you to view the available methods right then and there in case you forgot the exact spelling or what parameters are needed. IDEs will also tell you if something is spelled wrong or is invalid syntax, and provide a place for compiling (which is done in the background), running, and debugging code, without having to compile and execute the code manually. All in all, IDEs allow for far more efficient programming that is also much more accurate, so you will probably never have to dig through code to fix an error caused by forgetting to close a brace or by using the wrong method name, because the IDE will take care of that for you.
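To make answer 1 a little more concrete, here is a tiny sketch (the class names are hypothetical ones of my own) of @Override doing its job:

```java
// Hypothetical example of how @Override protects you from a misspelled override.
class Greeter {
    public String greet() {
        return "Hello";
    }
}

class LoudGreeter extends Greeter {
    // If this method were misspelled as "gret()", the @Override annotation
    // would turn a silently-ignored new method into a compile-time error:
    // "method does not override or implement a method from a supertype".
    @Override
    public String greet() {
        return "HELLO!";
    }
}

public class AnnotationDemo {
    public static void main(String[] args) {
        Greeter g = new LoudGreeter();
        System.out.println(g.greet()); // prints HELLO! because the override took effect
    }
}
```

Without the annotation, a typo would just create a second, never-called method, and you would be left wondering why your subclass behaves like its parent.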
Did you get them all correct? I know the answers are rather lengthy, but if you got the general ideas down, then congratulations-- you know quite a bit about the tools that can help you in productive software engineering! If not, then please be sure to learn to love these tools and tips; they will certainly come in handy in your experience as a software engineer. Please be aware, though, that these are not the only vital tools to know for software engineering; there are tons and tons more that I have not covered here that are absolutely essential to programming. So please, search for and play around with them, and read up on my other blog posts to gain more tools to put under your belt, so that you too may be on your way to becoming an excellent software engineer!

Tuesday, October 18, 2011

Getting a Sense of the Real World

This week, I'll be sharing with you my experiences with Configuration Management of a project, specifically my Robocode project that I have been working on so far. My robot is ready to be deployed out into the bigger world and be used and improved by people around the world. This is where Configuration Management comes in, and I have chosen to use Google Project Hosting to house my robot, and Subversion as the version control system. But before we get into making my own hosted project, I had to get a taste of what Configuration Management is like.

I have used TortoiseSVN once before on Windows, but since I do my coding primarily on Ubuntu Linux, I was left to find alternative clients. My first approach was to use Subversion from the command line. After downloading Subversion for the Ubuntu Linux terminal, I checked out the robocode-pmj-dacruzer project, made a minor change (setting the scan color to be white), and committed the changes back to the repository. Everything was going rather smoothly except when it came time to commit the changes. For some reason, my commits were doing nothing; I wasn't even being prompted for a user name or a password. I found this to be very strange, since everything else seemed to be working-- checking out, updating, making changes. I had two options to solve this problem-- either install an SVN client with a GUI (like SmartSVN) to diagnose the issue, or try to figure out what was wrong by myself. I didn't have much time, nor did I know how long it would take to install SmartSVN and figure out how to use it. So I decided to stick with the command-line version and focus on diagnosing the problem myself. After about 45 minutes of frustration, I discovered that the problem was with Eclipse. For some reason, the changes I made weren't being saved, so when I tried to commit back to the server, it would have no reason to do so because there were no changes made. So I opened up the Java file in a text editor, made my changes, and committed them successfully-- done. Just in the nick of time.
(I did eventually download and set up SmartSVN just in case, for easier visual diagnostic purposes.)

Now, with a basic feel for Configuration Management, I was ready to tackle hosting my own project up for the world to see. As aforementioned, I used Google Project Hosting to house my project, and posted the zip file (created using Ant) there. I even included a Developer Guide as well as a User Guide so that developers can verify that it works and can be tweaked, and so that users can add Narchi to their systems and watch how it plays out in a battle. I did not encounter many issues at all-- in fact, the hardest part was actually figuring out how to upload the file onto the Google Project page. It took me quite a bit of searching, only to find that it was just in the Downloads tab. Other than that, all has been going rather smoothly so far, and Narchi is sitting cozily at Google waiting for people to check him out, take him home, and make him a better robot.
The robocode-cht-narchi project can be found here.

What I learned from this was the basics of how Configuration Management works. I previously had a running idea of why it is important to have systems like this, but I had never actually gotten to use one for my own project. Now I am able to host my projects on Google, and use Subversion to keep them up to date with the changes that people commit. This is what happens in the real world with real open-source software, so this is a great skill to know and a big step towards becoming a bona fide software engineer! The future of having a career as a professional is becoming a lot less scary, and I hope that I can learn a lot more so that I am prepared to take on the world of software engineering.

Sunday, October 9, 2011

Getting Ready for the Big Arena

The number of days until the big event is counting down, and quickly! I'm going through a training regime with my robot (named Narchi) to get into tip-top shape for the big Robocode competition taking place this coming Tuesday, October 11th. Each athlete has their own strategy for winning, and Narchi is no different. We are working in collaboration with each other, brainstorming and trying out different strategies, keeping the ones that work, and dismissing the ones that we find aren't fruitful. Unfortunately, there seem to be many more of the latter than the former, but I think we have finally come up with a strategy (at least a general one) that is a keeper. The journey up until here has been long, with lots of dead-ends and backtracking, as illustrated here:


Thursday, October 4th
Our first approach was to start off simple-- just to get moving and/or shooting. Since the corners of the battle field seem to be the best hiding spots, we decided to go there first. Finding and going to the closest corner appeared to be the most efficient plan, but I quickly found this to be troubling, because the many (not-so-simple) calculations made the lines of code grow and grow, and grow and grow. Furthermore, this proved to be inefficient, because when I added tracking and firing capabilities to Narchi, he would forget about moving to the corner once he spotted an enemy, and would just keep firing, only remembering to go to the corner once the enemy was annihilated.


Saturday, October 8th
So, back to the drawing board it was... I needed to work on Narchi's tendency to get distracted! I'm thinking that we might simplify the movements, and just focus on getting a good firing strategy. Previously, Narchi would fire based on the enemy's distance, using less power if the enemy was far away. But as I watched Narchi practice against robots such as Corners, he would miss a lot while Corners was moving away, and once Corners stayed still, Narchi would only fire small bullets if he was far away enough. I realized that this was pretty wasteful, so we'll try to change it so that if Narchi misses a shot, he will decrease his bullet power, and if he hits an enemy, he will increase his power. However, this is proving to be rather difficult, because sometimes Narchi wants to fire again while the previous bullet is still traveling, so he doesn't know whether to use more or less power on the next bullet.
Back to more training! Stay posted for more news!


Monday, October 10th
Well, the final version of Narchi is completed, and ready for the big unveiling tomorrow! It took quite a lot of tweaking to get it ready, but we're finally ready to roll out. For the final specifications of Narchi, we have the following:
  1. Movement - The first thing that Narchi does is move to the closest corner to his starting point (i.e. the one defined by the quadrant that he starts off in). The algorithm is a very simple "follow-along-the-walls" one, which may work better in terms of evading enemy attacks compared to a direct path to the corner. But the movement doesn't stop there! No, for when Narchi gets hit by a bullet, he tries to move to a new (supposedly random) spot to avoid getting hit again.
  2. Targeting - Narchi is an excellent targeting robot, able to lock on to the enemy that it can see. From the beginning of the match, if another robot crosses his view, he will abandon the task of moving to the nearest corner and just focus on locking on to this target instead. One flaw that Narchi has with targeting, however, is that once he starts moving around from being hit, he quickly changes plans and abandons his targeting scheme. In terms of "fight-or-flight", he is definitely a "flight" kind of guy.
  3. Firing - Alongside targeting, Narchi will fire during his initial "lock-on" period of the battle. He is wise about his actions, and decreases his bullet power when he knows that he has previously missed a shot, and increases the power when he knows that he has shot an enemy. However, just like the targeting problem, once Narchi gets hit and goes into "flight" mode, he seldom reverts back to firing, which could spell out losses.
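The increase-on-hit / decrease-on-miss idea from point 3 can be sketched as plain Java, outside the Robocode API (the step size and starting power here are illustrative choices of mine, not Narchi's actual constants; only the 0.1-3.0 clamp matches Robocode's real firepower limits):

```java
// Illustrative sketch of adaptive bullet power, separate from the Robocode API.
// STEP and the starting power are made-up values; the 0.1..3.0 range is
// Robocode's actual minimum/maximum bullet power.
public class AdaptivePower {
    public static final double MIN_POWER = 0.1; // Robocode's minimum firepower
    public static final double MAX_POWER = 3.0; // Robocode's maximum firepower
    private static final double STEP = 0.5;     // hypothetical adjustment step

    private double power = 1.0;                 // hypothetical starting power

    // Called when we learn a previous bullet hit: shoot harder next time.
    public void onHit()  { power = Math.min(MAX_POWER, power + STEP); }

    // Called when we learn a previous bullet missed: conserve energy.
    public void onMiss() { power = Math.max(MIN_POWER, power - STEP); }

    public double power() { return power; }

    public static void main(String[] args) {
        AdaptivePower p = new AdaptivePower();
        p.onMiss();                 // 1.0 -> 0.5
        p.onMiss();                 // would be 0.0, clamped up to 0.1
        System.out.println(p.power());
        p.onHit();                  // 0.1 -> 0.6
        System.out.println(p.power());
    }
}
```

In a real robot, onHit() and onMiss() would be driven by Robocode's bullet events, and the tricky part I describe above remains: attributing a hit or miss to the right shot while other bullets are still in flight.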
Results
As a result, we have a robot that can do some interesting things, but is not quite perfect. It can reliably beat the sample bots that don't fire, such as SittingDuck and Target, but does not fare very well with those that shoot, due to Narchi's tendency to freak out when getting shot. Before implementing the evasion tactics, Narchi could beat Corners and even Walls sometimes, but it seems that adding the extra functionality caused Narchi to be unable to go back to its previous task, or to do them well simultaneously. As such, it probably would be better to remove the evasion portion from the code, but then Narchi would be nearly identical to Corners, and so we'll leave it in and show off a little of Narchi's ninja skills. If this tactic could be incorporated more smoothly into the overall strategy, Narchi could possibly become a tough contender to beat.

Testing
Rather than doing any unit testing, I decided to test Narchi's behaviors, and to run acceptance tests against opponent robots. I tested the movement to a corner, whether bullet power increased upon hitting, whether bullet power decreased upon missing, and evasion when hit by a bullet. For the first two, I tested against a non-moving enemy, so that just the movement and firing could be observed. For the latter two, I tested against enemies which Narchi struggled against. By finding a bullet that had less power than before, I was able to determine that Narchi must have missed, thus turning the dial down on the power. And by seeing that Narchi first moved to a corner and then ended up in a different spot some time later in the round, I was able to determine that Narchi must have tried to move upon getting shot. The two acceptance tests were against SittingDuck and Target, since those were the only two that Narchi could beat, because he would freak out if he got hit by any other robots.

Lessons Learned
Well, I can certainly say that Robocode was quite an adventure, to say the least. It was very difficult to get started and get used to the rules, physics, and flow of Robocode, which made it even more agonizing to figure out how to code my robot effectively. I still do not completely understand some subtle nuances with the execution flow, but overall, my understanding has increased greatly. But more so than Robocode specifically, I learned a lot about build systems and tools such as JUnit, Checkstyle, PMD, FindBugs, and Jacoco, that although sometimes gave me unnecessary headaches, are very useful in writing clean, concise, and effective code, and I am very glad to have learned about this. While there are many things that I could do differently if I were to do this again, I think that the only thing I would do (other than try and spend more time on it) is figure out in more detail about how Robocode works, so that I could be a more efficient programmer when implementing strategies. I feel that I wasted a lot of time simply because I didn't know a lot of the methods and classes that could have been really helpful to me. But, what's done is done, and now it's off to see how Narchi does against the fierce competitors!

Thursday, September 29, 2011

Ant 型 (Kata)


In this exciting week, I've had the opportunity to learn and work with the Ant Build System-- a software tool for automating software build processes. Although I've "used" build systems before, they were only IDE-integrated ones (such as MSBuild in Microsoft Visual Studio), and as such, I did not have to even know that such pieces of code existed. However, with all of its capabilities, it is crucial to actually write code using build system tools, so that when you want to release code to others, everything is neat and complete.

Since my primary focus for now is Java, the build system of choice is Apache Ant, which despite its somewhat not-so-pleasant name, stands for Another Neat Tool. And it is! Ant is quite nice to programmers, although it can get rather complicated at times. It comes packed with feature after feature, so it did seem overwhelming at first. In fact, it is still a little overwhelming to me, but just because of the sheer amount of capabilities that it has. But just like Robocode, I carried out the learning process of Ant through a series of programming martial arts, called Katas.


The 8 Ant Katas that I practiced were:
  1. Ant Hello World- Start from ground zero-- write a script that uses the <echo> task to print out Hello World.
  2. Ant Immutable Properties- Understand the immutability of properties-- create a script that sets a property to one value, and then attempts to change it by assigning it another value. Printing out the value of the property returns the initial value, demonstrating immutability.
  3. Ant Dependencies- Understand the use of the "depends" attribute by creating several targets that have different dependencies, and see in what order the targets end up executing. Also, understand what happens when the dependencies are cyclic (they depend on each other in an infinite loop).
  4. Hello Ant Compilation- Learn how to compile Java code using Ant-- create a script that uses <javac> to compile a simple HelloAnt.java file.
  5. Hello Ant Execution- Run Java code using Ant-- create a script that first compiles Java code (using dependencies and the compiler script from Kata 4), and executes the HelloAnt program, which prints "Hello Ant" to the console.
  6. Hello Ant Documentation- Generate JavaDoc documentation-- create a script that first compiles the HelloAnt program and then generates the JavaDocs for it using the <javadoc> task.
  7. Cleaning Hello Ant- Implement a cleaning task-- add a "clean" target to the compile script that deletes the "build/" directory that stores the .class files and JavaDocs.
  8. Packaging Hello Ant- Zip up the project so it can be shipped out-- create a script that uses the <zip> task to package the project into a zip file.
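To give a flavor of Katas 1 and 2, a minimal build script along these lines (the target and property names are my own, not the exact kata files) might look like:

```xml
<!-- Minimal sketch of Katas 1 and 2 (names here are illustrative). -->
<!-- Setting basedir explicitly in the project declaration also avoids
     directory-resolution surprises like the one I ran into below. -->
<project name="hello-ant" default="hello" basedir=".">

  <!-- Kata 2: Ant properties are immutable; the second assignment is
       silently ignored, so ${greeting} keeps its first value. -->
  <property name="greeting" value="Hello World"/>
  <property name="greeting" value="Goodbye World"/>

  <!-- Kata 1: echo prints the (first, immutable) value: "Hello World". -->
  <target name="hello">
    <echo message="${greeting}"/>
  </target>
</project>
```

Running `ant` in the directory containing this build.xml executes the default "hello" target and prints "Hello World", demonstrating both the echo task and property immutability in one script.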
It was difficult to get started with writing the first couple of scripts because of the "unfamiliar territory" that I was in. However, as I got used to the different attributes and properties and the overall structure of the scripts, things became much easier. To anyone wanting to start using Ant, I would recommend getting familiar with the Ant Manual when implementing tasks. At first, I used it just to copy the examples, but many of them contain more complex features which are not necessary to learn at this point, so definitely learn how the manual works-- especially the Parameters table, which lists and explains all the attributes, and even tells you whether each one is required or not. I wish I had realized this earlier!


The most difficult part of these Katas was managing the directories to and from which files were to be compiled, run, generated, and packaged (for Katas 4-6 and 8). I frequently got an error message reporting that "<ProjectDirectory>/src could not be found", even though I had set up the properties correctly in the compile script. I resolved these errors by explicitly specifying the "basedir" property at the top of the script with the project declaration. Prior to this, I had thought that since it was specified in another dependency script, this would not have to be set, just like how properties need not be set again. I am not sure if this is the correct way to solve this issue, but it averted the crises that I had, so perhaps this is indeed the solution. I shall look into this more, and in the meanwhile hopefully I can learn more of the Ant features!

Tuesday, September 20, 2011

Let's Get Ready to Robocode!

About a week-and-a-half ago, I took my first peek at the Robocode project. "What is Robocode?", you may ask-- well, in a nutshell, it is an open-source project started by Matthew Nelson of IBM in 2001. It was originally intended as an educational game designed to help budding programmers learn Java, but it eventually grew to become a worldwide programming sensation with fierce competitions-- and not to mention competitors who have tweaked their code to become virtually unstoppable.

After downloading the libraries, source code, and other essential components, I ran the program with a couple of sample robots-- just to get a feel for what the interface was like. I was flung back in my seat as I saw little tank-like robots fixing their aim on each other, rapidly firing their guns, ramming into each other, and whizzing around the perimeter of the screen. "How in the world am I ever going to create that?!", I exclaimed in my head when the mayhem was over just a few seconds later. Over the course of a week, I created 13 robots, starting from the bare-bones minimum and slowly working my way up, trying out and adding on different features.

The descriptions of the 13 robots are as follows:
  1. Position 01: Doesn't do anything; just watches the action in front of it.
  2. Position 02: Moves forward 100 pixels each turn, reversing direction if it hits a wall.
  3. Position 03: Moves forward N pixels each turn and then turns right. N starts at 15 and increases by 15 each turn.
  4. Position 04: Moves to the center of the playing field, spins around in a circle, and stops.
  5. Position 05: Moves to the corners in the following order: top-right, bottom-left, top-left, and bottom-right.
  6. Position 06: Moves to the center, then travels in a circle of approximately 100 pixels, ending up where the circular traversal started.
  7. Follow 01: Picks an enemy and follows it around.
  8. Follow 02: Picks an enemy and follows it around, but stops if it gets within 50 pixels of it.
  9. Follow 03: Finds the nearest enemy each turn and moves 100 pixels in the opposite direction.
  10. Boom 01: Sits still, rotates its gun, and fires when it is pointing at an enemy.
  11. Boom 02: Sits still, picks an enemy, and fires only when the gun is pointing at that enemy.
  12. Boom 03: Sits still, rotates its gun, and shoots when it is pointing at an enemy. Bullet power is weaker the farther away the enemy is.
  13. Boom 04: Sits still, picks one enemy and tracks it with its gun, keeping it pointed at the enemy but not firing.
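Boom 03's distance-based power (item 12 above) can be illustrated with a tiny helper. The 400.0 scale factor is a made-up choice of mine for this sketch; only the 0.1-3.0 clamp comes from Robocode's actual firepower limits:

```java
// Illustrative distance-to-firepower mapping for a Boom03-style robot.
// The 400.0 scale constant is hypothetical; 0.1..3.0 are Robocode's
// real minimum and maximum bullet power values.
public class DistancePower {
    public static double powerFor(double distance) {
        double raw = 400.0 / distance;              // closer enemy => bigger bullet
        return Math.max(0.1, Math.min(3.0, raw));   // clamp to Robocode's valid range
    }

    public static void main(String[] args) {
        System.out.println(powerFor(100));  // close: clamped to 3.0
        System.out.println(powerFor(400));  // mid-range: 1.0
        System.out.println(powerFor(8000)); // far away: clamped to 0.1
    }
}
```

The idea is simply that a far-away enemy is more likely to dodge, so an expensive, powerful bullet is wasted on it; weak bullets cost less energy to fire.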
What I Learned
Since most of the methods used were implemented through the Robocode API, I did not learn new Java concepts through this experience. I did, however, brush up on some skills that have gotten rusty-- especially in the Math class. Some of the Position robots required the use of trigonometric calculations to get to the right places on the field, but since I have not used these functions in quite some time, it took a little trial and error before the ball got fully rolling.

Difficulties Encountered
By far the biggest difficulty was getting adjusted to the whole Robocode environment. Figuring out how to even run something took some prodding around, because the run() method acts as the main method, instead of main(). Also, calling methods such as ahead(), scan(), and fire() seemed awkward at first, since in Java programming we typically don't see methods that are called just by themselves like this; they usually follow some sort of class or object (like System.out.println() or object.someMethod()). This made getting accustomed to Robocode coding quite difficult. Finally, another substantial difficulty was getting used to the bearing and heading properties and utilizing them correctly. In mathematics and science, we're used to measuring angles starting at zero on the right side of the x-axis (between quadrants I and IV). In Robocode, however, angle zero lies on the positive side of the y-axis (between quadrants I and II). Headings are measured with this latter system, so in order to calculate angles using trigonometry in the conventional perspective, we have to adjust the values to reflect the Robocode coordinates. This was very confusing and frustrating indeed, but once the solution for one robot was completed (and worked), applying and altering it for other robots was significantly easier.
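That coordinate-system adjustment can be written down concretely. Converting a Robocode heading (0 degrees pointing "up", growing clockwise) into a conventional math angle (0 degrees pointing "right", growing counter-clockwise) is just a subtraction plus normalization (the method name here is my own):

```java
// Converting Robocode's compass-style heading to a conventional math angle.
// Robocode: 0 degrees points "up" (positive y-axis), angles grow clockwise.
// Math:     0 degrees points "right" (positive x-axis), angles grow counter-clockwise.
public class Angles {
    public static double toMathDegrees(double robocodeHeading) {
        double math = 90.0 - robocodeHeading; // swap reference axis and direction
        math = math % 360.0;                  // fold into (-360, 360)
        if (math < 0) {
            math += 360.0;                    // normalize into [0, 360)
        }
        return math;
    }

    public static void main(String[] args) {
        System.out.println(toMathDegrees(0));   // up    -> 90.0
        System.out.println(toMathDegrees(90));  // right -> 0.0
        System.out.println(toMathDegrees(180)); // down  -> 270.0
        System.out.println(toMathDegrees(270)); // left  -> 180.0
    }
}
```

With the angle converted, the usual Math.sin/Math.cos trigonometry works as expected for plotting paths across the battlefield.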

Robocode Robot Behaviors

Trying to balance simplicity with robustness proved to be very difficult, especially for the Position05 and Position06 robots. In order for Position05 to get to most of the corners, calculations are useful for setting the angle and distance to travel, but calculating the angle is not required when traversing from the bottom-left corner to the top-left corner, since it is just traveling "upward". Thus it is trivial to point the robot at angle zero and just traverse the height of the field. However, this is a naïve implementation, since bumping into another robot or missing the bottom-left corner due to the entry angle will cause the robot to move upward even though that is not the direction of the top-left corner.
As for Position06, it was difficult to implement the movement of the robot in a circle. At first, I had it move in a hexagonal path instead, since it seemed much more straightforward to get the robot to move in that shape versus a circle. Eventually, however, I was able to get the robot to move in a circle (or close to one), although it took a significantly longer time.

The only robot that I could not get working entirely correctly is Boom02. It seems to fire at the tracked enemy only when it is within a certain distance of it:

Fig. 1- Boom02 vs. Walls vs. SpinBot. Boom02 is temperamental as to when it wants to fire.

Ideas for Building a Competitive Robot
As far as I can think of for now, there are only two strategies for a robot in Robocode: assault and evasion. Since there is no way to "defend" oneself, a robot can either be on the offensive, or try to dodge enemy attacks (since firing lowers one's own energy). I will not reveal which route I shall be taking in my creation of a competitive robot, but it will most likely include a lot of movement, in order to increase the chances of evading shots.

Thoughts on Code Katas
This project demonstrated the practice of code kata-- separating practice from profession. Since I started with simple tasks and progressed through more advanced territory all the while being in a "practice environment" (as this is not the true competition), I was able to hone my skills before diving into the actual competition, rather than being stuck with them during it. This is an extremely wise way of practicing software engineering, and produces much higher-quality software when it comes to the live development project.

Tuesday, August 30, 2011

FizzBuzz

The FizzBuzz program involves printing out the numbers 1-100, each number on its own line, and with the following conditions:
  • If the number is divisible by 3, print "Fizz" (and nothing else)
  • If the number is divisible by 5, print "Buzz" (and nothing else)
  • If the number is divisible by both 3 and 5, print "FizzBuzz" (and nothing else)
  • Otherwise, print the number
In implementing the FizzBuzz program in Java, I used the Eclipse IDE as my tool of choice. It took me about a minute-and-a-half from the time I opened up Eclipse, and here is the outcome:
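For reference, a Java version along the same lines (a reconstruction for this post, not necessarily my exact code from the screenshot) looks like:

```java
// FizzBuzz: print 1-100, substituting Fizz/Buzz/FizzBuzz per the rules above.
public class FizzBuzz {
    public static String valueFor(int n) {
        if (n % 15 == 0) {          // divisible by both 3 and 5
            return "FizzBuzz";
        } else if (n % 3 == 0) {
            return "Fizz";
        } else if (n % 5 == 0) {
            return "Buzz";
        }
        return Integer.toString(n); // otherwise, the number itself
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 100; i++) {
            System.out.println(valueFor(i));
        }
    }
}
```

Checking divisibility by 15 first handles the "both 3 and 5" case before the single-divisor branches can swallow it, which is the one ordering trap in this little program.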



I did not run into any problems with Java or Eclipse. However, I did notice something about the approach that I took in solving this problem. Although I knew the gist of how to write this program beforehand, instead of planning out the code, I just dove right in and started whacking away at my keyboard. Perhaps this was due to the pressure of "how fast can I do this", but I seem to recall following this same procedure before in other programs that I have made. Whether this approach is a problem or a technique, I am not so sure, but hopefully it will not be the cause of difficulties later. When you don't know where to start, sometimes it is good to just-- well-- start, and fix things up later as you gain more perspective on the program. However, the problem may lie in never getting to this last part, or in changing the format or mindset mid-way through coding. That could definitely cause grave issues indeed...

I appreciate very much how Eclipse has autocompletion and autoformat features; they make coding so much quicker with a lot less room for error. Writing this out by hand or even with a simple text editor like Notepad would take at least 2-3 times longer, and if I had forgotten the exact syntax for some functions, I would have been totally lost-- either having to try to remember what it was or looking it up in a reference. Luckily, Eclipse does all of this automatically and has references built in, thus speeding up the process significantly. I use a Vi keybinding plugin in Eclipse, speeding up the process even more, and the syntax highlighting is also very helpful in that the code is much more easily readable, again adding to the efficiency of the coding process.

From this assignment, I learned that Software Engineering involves using tools to your advantage. There are many different ways to do things, and each way has its faults, but some ways are far superior to others. Using better tools can mean completing a project correctly and in less time, rather than taking longer and writing bad code on top of it. I am eager to learn what this class has to offer in terms of giving me many more tools-- and better ones too-- in order to allow me to put more effort into being creative. In my experiences to come, I look forward to finding the best tools that are available, to become a more efficient and versatile programmer.

Sunday, August 28, 2011

Testing jg3d

In a quest to understand and gain respect for the Three Prime Directives for Open Source Software Engineering, I have chosen to download the project "jg3d" from the website http://sourceforge.net/projects/jg3d/files/jg3d.jar/download and evaluate its contents. Upon downloading this, I come to find that it is just the jar file itself, and does not contain any documentation. Thus I refer to the information on the sourceforge website that says that jg3d is "... an implementation of an interactive 3d graph rendering engine and editor..."

In terms of the first Prime Directive-- "successfully accomplishing a useful task"-- I can only assume that it works, and cannot be entirely sure for myself. The user interface is very simple: just a sample graph of interconnected nodes and edges, where the user can grab and move a node and watch the rest of the nodes follow with a moderate amount of elasticity:



While I am not able to make much sense of this, nor sure how to apply it to an actual graphics-creating application, jg3d itself seems extremely responsive and well made. Unfortunately there are no examples that I can actually render and test out, so I can do little more than play around with the little balls. I suppose this means the product is useful in providing a little entertainment and amusement, but whether it accomplishes what it is actually meant to do cannot be determined without another piece of software and an actual manual explaining how to use the two together.

As for the second Prime Directive (the ability for an external user to use this system successfully), I can say that I was able to install the system successfully, since it required no installation at all; just downloading the jar file and executing it. When it came to using the application, however, I was not at all sure whether I was using it correctly. There was no documentation of any kind, so no help could be obtained without turning to external sources. Therefore, I must say that jg3d fails to meet the expectations of the second Prime Directive.

Again, since there was no documentation at all, it would be extremely difficult for anyone without knowledge of the internal system to develop and enhance jg3d. The distribution even lacks the Java source code for the program, making it impossible to figure out how it works by reading the code. I felt there was certainly no way I could alter the program to improve it or implement it differently. It seems that jg3d, or at least this version of it, was released as a stable build, with no major bugs needing fixes. However, as an open source project, it should include the source code, or at least point to a location where the source can be found, so that it can be downloaded for modification. Since jg3d does not provide this, it fails to meet the third Prime Directive of Open Source Software Engineering.
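Incidentally, the missing-source observation is easy to verify for yourself: a jar file is just a zip archive, so its entries can be listed programmatically. Here is a quick sketch of such a check (the class name JarSourceCheck is my own throwaway choice, and it assumes the file has been downloaded as jg3d.jar in the working directory):

```java
import java.io.IOException;
import java.util.zip.ZipFile;

public class JarSourceCheck {

    /** Returns true if any .java source files are bundled in the jar. */
    static boolean hasJavaSources(String jarPath) throws IOException {
        // A jar is just a zip archive, so ZipFile can list its entries directly.
        try (ZipFile jar = new ZipFile(jarPath)) {
            return jar.stream().anyMatch(entry -> entry.getName().endsWith(".java"));
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(hasJavaSources(args[0])
                ? "source files are bundled"
                : "no .java source files found");
    }
}
```

Running `java JarSourceCheck jg3d.jar` should report no `.java` entries, consistent with what I found: only compiled classes are shipped.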

In assessing the Open Source project "jg3d", I have found that it does not meet all three of the Prime Directives of Open Source Software Engineering. In fact, it may not even meet the first, because the task of the program is not clear. An end user cannot figure out much about how to use this program or even what it is for, and it provides no documentation that could help in figuring these things out. Releasing an Open Source project is not simple, but without these components, the project is difficult to make use of, and is thus rendered useless to most users.