Planet CDOT

September 30, 2014


Edwin Lum

Investigation into aarch64 registers

This week in the SPO600 class, we started looking into different computer architectures, namely x86_64 and AArch64. My presentation was on the AArch64 registers, investigating the special functions and/or uses that some of them have.

First and foremost, AArch64 registers are simply named. AArch64 has 32 registers, referred to as r0 all the way to r31.

These same registers are referred to differently depending on whether they are accessed as 32-bit or 64-bit registers: w0-w31 for 32-bit, and x0-x31 for 64-bit. Simple and straightforward, right?

The above registers are intended for integer operations; there is another set of registers that is more efficient for floating-point operations and SIMD (Single Instruction, Multiple Data). SIMD is especially useful for multimedia workloads, which often apply the same instruction to many data points (such as adjusting the brightness of each pixel on the screen).

During a system call, note the following (1):

Usage during syscall/function call:

  • r0-r7 are used for arguments and return values
  • For syscalls, the syscall number is in r8
  • r9-r15 are for temporary values (may get trampled)
  • r16-r18 are used for intra-procedure-call and platform values (avoid)
  • The called routine is expected to preserve r19-r28
  • r29 and r30 are used as the frame register and link register (avoid)
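
To make the convention concrete, here is a minimal sketch of a Linux “hello” in AArch64 assembly, assuming the standard arm64 syscall numbers (write = 64, exit = 93):

        .data
msg:    .ascii  "hello\n"
        .text
        .globl  _start
_start:
        mov     x0, 1        // argument 1 (fd: stdout) goes in r0
        ldr     x1, =msg     // argument 2 (buffer address) in r1
        mov     x2, 6        // argument 3 (length) in r2
        mov     x8, 64       // syscall number for write goes in r8
        svc     0            // trap into the kernel
        mov     x0, 0        // exit status
        mov     x8, 93       // syscall number for exit
        svc     0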

For the AArch64 architecture, register 31 in particular has the following special properties.

Register ’31’ is one of two registers depending on the instruction context:

  • For instructions dealing with the stack, it is the stack pointer, named sp
  • For all other instructions, it is a “zero” register, which returns 0 when read and discards data when written – named xzr (64-bit) or wzr (32-bit)

 

To me, this is a very useful thing to have. In particular, I would imagine using r31 to “zero” other registers and/or the contents of memory locations without having to provide an immediate value. This matters because of the fixed instruction length of AArch64: sometimes we want to specify an immediate value that is too large to fit into a single instruction, and that has to be worked around. Having a reliable register that always reads as 0 keeps a very common operation simple.
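
For instance, a small sketch of what I mean (two instructions that would otherwise need an immediate):

mov x5, xzr       // zero a register without encoding an immediate
str xzr, [x6]     // store 64 bits of zero straight into memory at x6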

 

References

(1) Zenit.senecac.on.ca (2014). Aarch64 Register and Instruction Quick Start – Open Source@Seneca. [online] Available at: http://zenit.senecac.on.ca/wiki/index.php/Aarch64_Register_and_Instruction_Quick_Start


by pyourk at September 30, 2014 01:02 PM


Frank Panico

JSHint here I come

I just enabled everyone’s favorite (or at least my favorite) text editor Notepad++ with JSHint. This should help to check Node.js javascript code. Check out how to do it here!

http://willperone.net/Code/codejshint.php


by fpanico04 at September 30, 2014 02:26 AM


Kieran Sedgwick

[SPO600] An overview of the x86 architecture and its registers

I was tasked with putting together a presentation on the x86 architecture’s registers, along with an explanatory blog post on the topic. In particular, I had to focus on the 64-bit version of the architecture, called x86_64.

x86 is at once modern and a living history of CPU architecture. Its strength and weakness is found in how it maintains significant backwards compatibility by building on top of, rather than removing, features found in older iterations of the hardware. This is most obvious in the names of the general registers, which we will see in a moment. The architecture uses variable-length instructions and is a CISC design.

General Registers

The x86_64 architecture provides sixteen general-purpose 64-bit registers with five access modes. These access modes allow smaller values to be loaded into them, and allow earlier software to use the modes it was originally written for.

The backwards compatibility’s historical relics are visible in the choice of names for the registers:

  • ax-dx (4)
  • bp, sp (2, base pointer & stack pointer)
  • si, di (2, source & destination indexes)
  • r8-r15 (8, added for x86_64)

It is worth noting that, with the exception of certain operations, the registers can be used interchangeably.

The five access modes are:

  • 64 bit (prefixed with ‘r’), e.g. rax, r8
  • 32 bit (prefixed w/ ‘e’, or suffixed w/ ‘d’)
  • 16 bit (no prefix, or suffixed w/ ‘w’)
  • 8 bit (top of 16 bits, suffixed w/ ‘h’), e.g. ah
  • 8 bit (bottom of 16 bits, suffixed w/ ‘l’ or ‘b’)
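
To see how the modes alias one register, here is a small sketch in GNU as (AT&T) syntax; the values are just for illustration:

mov $0x1122334455667788, %rax   # fill the full 64-bit register
mov %eax, %ebx                  # low 32 bits: 0x55667788
mov %ax, %cx                    # low 16 bits: 0x7788
mov %ah, %dl                    # high byte of those 16 bits: 0x77
mov %al, %dh                    # low byte: 0x88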

XMM Floating Point Registers

In addition, the architecture provides 128-bit registers for floating-point calculations: XMM0 through XMM7 in 32-bit mode, extended to sixteen (XMM0 through XMM15) in x86_64. They are used for storing single- and double-precision floating-point values during floating-point calculations.

Segment Registers

To my understanding, segment registers hold pointers to segments of memory, in an effort to keep programs from overrunning memory that isn’t theirs in a segmented memory management scheme. In today’s 64-bit processors this is considered obsolete and is kept for backwards compatibility.

EFLAGS Register

Each bit in this 32-bit register represents a flag of some kind, and is used to keep track of state between instructions without the use of memory. Interestingly, this register is somewhat futureproofed by keeping some bits reserved for future iterations of the architecture that might require new flags.

Conclusion

This has been exciting to explore, since it is the part of computer science I am least familiar with. There is still much to learn!


by ksedgwick at September 30, 2014 12:33 AM

September 29, 2014


Fadi Tawfig

Implementing du in Filer

So for my OSD600 course the first assignment was to implement the du command in Filer, a node.js POSIX file system. Due to a very busy week and my total newness to this style of programming, I’m submitting this assignment slightly late.

At first everything seemed to be going quite well. I could get the size of a file just fine. Same with a simple directory with a couple of files in it. The issues arose when I tried to call my du function on a nested directory tree.

The major issue I had was wrapping my head around the use of asynchronous callbacks together with a recursive function such as du. Although my first attempt at calculating the disk usage of a directory did arrive at the correct size, by the time the function finished summing the total usage, du had already returned to the test and reported the size of the directory as 0, causing the test to fail.

It seems my head is still stuck in traditional object-oriented languages such as Java. My first attempt at looping through the entries in the directory used a for loop. This didn’t work because, by the time the loop finished executing, the function had already called its callback and the test had failed. After looking to the ls command for guidance (a very helpful command for this assignment due to its use of recursion), the answer turned out to be async.eachSeries(). eachSeries() takes three parameters: a collection, an iterator, and a callback. It calls the iterator on each item in the collection in sequence, which ensures the grand total is calculated before returning to the test.
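
A minimal sketch of that pattern (getSize here is a hypothetical helper for the file-versus-directory recursion, not Filer’s actual API):

var async = require('async');

function du(fs, path, callback) {
  var total = 0;
  fs.readdir(path, function(err, entries) {
    if (err) return callback(err);
    // Visit entries one at a time; the final callback only fires
    // after every entry has been added to the total.
    async.eachSeries(entries, function(entry, next) {
      getSize(fs, path + '/' + entry, function(err, size) {
        if (err) return next(err);
        total += size;
        next();
      });
    }, function(err) {
      if (err) return callback(err);
      callback(null, total);   // the grand total is complete here
    });
  });
}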

It’s clear I have a lot more practicing to do with this style of JavaScript programming. Although it can cause some frustration at times, there is a lot I like about it, most notably its readability. I’ll continue to read up and practice my JavaScript for the next release.

Pull request here.

- Fadi


by ftawfig at September 29, 2014 11:37 PM


Lukas Blakk (lsblakk)

New to Bugzilla

I believe it was a few years ago, possibly more, when someone (was it Josh Matthews? David Eaves) added a feature to Bugzilla that indicated when a person was “New to Bugzilla”. It was a visual cue next to their username and its purpose was to help others remember that not everyone in the Bugzilla soup is a veteran, accustomed to our jargon, customs, and best practices. This visual cue came in handy three weeks ago when I encouraged 20 new contributors to sign up for Bugzilla. 20 people who have only recently begun their journey towards becoming Mozilla contributors, and open source mavens. In setting them loose upon our bug tracker I’ve observed two things:

ONE: The “New to Bugzilla” flag does not stay up long enough. I’ll file a bug on this and look into how long it currently does stay up, and recommend that if possible we should have it stay up until the following criteria are met:
* The person has made at least 10 comments
* The person has put up at least one attachment
* The person has either reported, resolved, been assigned to, or verified at least one bug

TWO: This one is a little harder – it involves more social engineering. Sometimes people are immune to the “New to Bugzilla” cue, or overlook it, and in some cases this has resulted in responses to bugs filed by my cohort of Ascenders where the commenter was neither helpful nor moving the issue forward. I’ve been fortunate to be in person with the Ascend folks and can tell them that if this happens they should let me know, but I can’t fight everyone’s fights for them over the long haul. So instead we should build into the system a way to make sure that when someone who is not new to Bugzilla replies immediately after a “New to Bugzilla” user, there is a reminder in the comment field – something along the lines of “You’re about to respond to someone who’s new around here so please remember to be helpful”. Off to file the bugs!

by Lukas at September 29, 2014 06:24 PM


Andrew Li

Release 0.1

For the first release we were given the option to either find a bug or implement the du shell command in Filer. The following post is a summary of what I learned.

I chose to implement the du command; here is my pull request, and here is my work in progress.

Setting the dev environment

The first step was reading the CONTRIBUTE.md file in GitHub and then forking the project. After forking, a copy was cloned onto my local machine.

git clone git@github.com:liandrew/filer.git

To run the tests, grunt was installed using NPM

npm install -g grunt-cli

After installing grunt, make sure the project’s dependencies are installed. I forgot to do this, so I couldn’t run grunt test.

npm install

Run the tests with

grunt test

After all the tests passed, I was ready to find out more about the du command.

Understanding the bug

In the past I’ve used du to estimate how much space a folder occupies and to do quick sanity checks. After reading more about du, I found that I had to get the following to work:

1. return sum total of all sizes for all files in directory recursively
2. return depth-first list of all entries in the directory followed by the directory itself

Using 1 and 2 as test cases sounded right, so I wrote tests for what I expected to see. At first it seemed really strange to write the test cases first, but doing so actually makes sense - it forces you to understand the problem.

Using the specs outlined in Implement du shell command #277 as a guide, I copied the ls command, since it’s similar, and started to fiddle with it to learn. Putting callbacks and closures into practice took a bit of getting used to. It was hard to follow what was going on in ls since I didn’t have a good understanding of the callback pattern. So I backtracked, went to the web, read a few helpful blog posts on callbacks, and then practiced writing a few examples of my own. Other questions popped up after looking at the ls code as well.
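
Here is the kind of toy example I mean, using Node’s error-first callback convention:

var fs = require('fs');

// Instead of returning a value, hand the result to a callback.
function fileSize(path, callback) {
  fs.stat(path, function(err, stats) {
    if (err) return callback(err);  // errors travel in the first slot
    callback(null, stats.size);     // results travel in the second
  });
}

fileSize('/etc/hosts', function(err, size) {
  if (err) return console.error(err);
  console.log('size:', size);
});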

Using Git is Awesome

Using Git I was able to easily go back and forth between branches and revert code on the fly. I found these commands useful:

Create a new branch from a previous commit

git branch <branch name> <hash>

When things went bad, I found this useful for going back to a clean copy of the last commit

git checkout .

Conclusion

I’ve learned a lot in the process of going from setting up a dev environment to implementing code. Test #2 failed: I was able to return the list in depth-first order, but had trouble adding the current path to the end without messing up the callback. I will revisit this and reach out to the community and fellow students for help. I definitely need more practice with callbacks and closures. For the next release, I will try to reach out to the community more often and be active on IRC, both to help and to get help.

September 29, 2014 12:00 AM

September 28, 2014


Yasmin Benatti

Du command Filer Implementation

The first release we had to do on the Open Source class was to implement the du command on Filer. Filer is “a POSIX-like file system interface for node.js and browser-based JavaScript”. You can find the filer project here and the issue we worked on here.

du is a Unix command that shows the disk space used by a specific file or directory. There are two manual pages that I used for reference, the GNU one and the OpenBSD one. You can also find the link for the Apple implementation of the du command.

My main thought after finishing this first release is: working with open source and programming is hard! It is also really challenging, in a way that puts you up against yourself and makes you want to find the answer no matter what. I am also still learning to program in JavaScript, which makes it really hard to work on something this big. Luckily, I had someone who helped me a lot! Kieran is a classmate who taught me all the procedures to set up my machine, how to look for a piece of code, how to reuse code, how to use Git and GitHub, and everything else I had to do in this project.

While writing the tests, Kieran and I found that the du command gave different results on Matrix and on Mac OS. I read up on file systems, block usage, and how different implementations of du work. The GNU and OpenBSD implementations differ slightly, but what makes the results not match is probably the file system and how directories are created and managed by the OS.

The implementation I did can be found here, but it is not complete. I only did the tests, and even those are not complete. That is because I’m taking this course even though it comes much further along in the program, and I have not yet mastered JavaScript. Anyway, it has been a very good learning experience for me and I’m getting familiar with all these tools.

The tests I did involve verifying that the du command (behaving more like the ls -l command) returns the right size when run without a specified path, on a specific file, on and inside an empty directory, on and inside a directory with a single file, and on and inside a directory with multiple files.

Hope to be able to finish it by myself soon!

Cheers.

by yasminbenatti at September 28, 2014 03:27 AM


Glaser Lo

Release 0.1

As a rookie at node.js programming, I encountered lots of problems at the beginning.  The first thing that confused me was the callback structure of node.js programming.  Most of my previous experience is with procedural programming, so learning asynchronous programming was a new challenge.  For instance, a node.js program doesn’t simply run from top to bottom; instead, it involves lots of asynchronous functions so that disk I/O or network I/O doesn’t slow down the main process.  Therefore, even though there are code files in the project to use as references, it still took me some time to recognize where I should put my custom code.  Another issue came from the build/test tool – grunt.  Honestly, I had no idea about grunt.  It seems to be a build tool like ant or gradle, but I failed to find a way to run a debugger while testing, so I ended up spending a lot of time on testing.  There were also some minor issues with JavaScript: unlike C++ or Java, dynamic typing cost me a bit of time when figuring out variable types.  Overall, after release 0.1, I feel I learned a lot about node.js and open source culture.  Thanks to the help from my professor Dave and my friends who attended the class.


by gklo at September 28, 2014 03:19 AM

September 27, 2014


Tai Nguyen

Mozilla Webmaker – Application Detail View Issue (0.1 Release Milestone DPS909)

I am currently working on an open-source project known as Mozilla Webmaker. Mozilla Webmaker is a project that aims to help non-developers easily make functional applications for the web and mobile phones. Webmaker revolves around the idea of custom application templates that can be remixed (adding or removing app components) in accordance with the creator’s purpose. A template has a set of components (e.g. image, form) that suit a particular use – e.g. instructing. There are templates for teachers, doctors, instructors, etc. Webmaker can be found here.

[Screenshot: Mozilla Webmaker template view]

The project is in an early alpha phase of development. There are still a lot of functionalities and features that need to be added. In terms of size, the code base is relatively small. The application uses Node.js and Vue.js. People who want to get involved with development should visit the project’s GitHub.


As for myself, I am currently working on implementing the UI, more specifically the application detail view. The primary function of this view is to allow users to remix applications. This basically means an individual can find an existing app made by another user and modify it to their requirements when building their own app. Other features include sharing the app to social media and statistics such as the number of views the app has received. More information about my focus can be found in the project’s GitHub issue section (UI – Implement “Application Detail” View #174).


So far I have implemented the top section of the UI for this view. There is still much to do before this view is completely functional and ready for use. For instance, the share function and remix function need to be completed. Below is a snapshot of what I have done so far. If you would like to assist me on this component of the project, please contact me at tylermeetsworld@gmail.com, or you can fork my repository of the Webmaker under the branch I named “Detail“.

[Screenshot: detail view]

This is my first time working on an open-source project and it has been a big learning curve. I learned how to work with Vue.js (which is similar to Angular).




by tylermeetsworld at September 27, 2014 07:06 PM


Omid Djahanpour

Taking a Look at NASM Syntax

Before I begin, I want to point out that for the x86 family, the assembly language used branches off into two categories: Intel syntax and AT&T syntax. A brief overview of the main differences between the two styles can be seen here and here.
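
For a quick taste of the difference, here is the same instruction in each style (each line is valid only in its own assembler):

mov eax, 5        ; Intel (NASM): destination first, bare register names
movl $5, %eax     # AT&T (GNU as): source first, $ on immediates, % on registers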

The Netwide Assembler (NASM) is an open source assembler that uses the Intel-style syntax. NASM is licensed under the Simplified (2-clause) BSD license.

Wrapping things up, here is the link to a small presentation I prepared for the class on this topic.

The very last slide will provide links to more useful resources.


by Omid Djahanpour at September 27, 2014 05:24 PM


Shuming Lin

Release 0.1

For the OSD600 0.1 release, I added a new command, du, to the Filer open source project. It displays a deep listing for a directory. I then sent David the pull request.

Pull Request Links:

  1. Submit OSD600 0.1 release: Implement shell.du
  2. re-submit OSD600 0.1 release: Implement shell.du

There were 4 files that needed to be modified and 1 new file:

  1. create: /test/spec/shell/du.spec.js
  2. files that needed to be modified:
    • /src/shell/shell.js
    • /test/index.js
    • /dist/filer.js
    • /README.md

Before I began the project, I followed the class notes to get started. I forked the Filer open source project from “filerjs/filer” and cloned it to my computer. Then I set up the dev environment.

Setup Dev Environment

We need to install node.js and then run “npm install” to set up, but I got an ERROR – make install failed when I ran the command in the git shell. I googled the error message to fix the issue:

Add “C:\Program Files\nodejs\” to the user’s Path variable in Environment Variables. Problem solved and environment set up successfully. ^_^

Started Project

Now I started to create the new command, du. Following the instructions, reviewing code, and discussing with classmates, I found that the du command is similar to the ls and cat commands. They share the same kinds of parameters (callback, size, path, returning a tree). Therefore, I created the du command based on these two commands.

Test and solved bugs

I tested the project by creating du.spec.js, which tests each function one step at a time. That makes bugs easier to fix. During testing, I got some ERRORs:

[Screenshots of the first, second and third errors]

How did I deal with these bugs and errors? When I didn’t know how to solve one, I asked classmates for help and discussed with them to see whether they had the same error or not.

Finally, it passed without any error.

Cheers.


by Kevin at September 27, 2014 05:20 AM


Ava Dacayo

Release 0.1 submission – Implementation of du command for Filer

After setting up the dev environment for Filer (cloning the project from GitHub, installing Node.js, npm install, …), I started by running the tests by typing “grunt test”, and all of them passed. Time to start gathering the requirements for du! I had to read Dave’s instructions here more than twice, and I researched the du command and its behaviours. Once I had the ideas in my head, I started looking at the source code and checked which files I needed to update for the implementation.

After having everything prepared, I started trying to understand the basic flow of the existing code. I also searched the files for other commands somewhat similar to du so that I could refer to them, and I observed how the contributors code so that I could write in a similar style. The ls command seemed to be the closest, so I used that as the base for my assignment. I had to take time reading it using my basic JavaScript knowledge from the INT222 class and from my co-op work terms. I was also unfamiliar with the other existing code and had no idea what some of it was for. At first, everything seemed so overwhelming. Just like being a little fish released into an ocean located on a different planet in an alternate universe… Okay, I’m exaggerating and I’m not even sure that made sense, but like Dave has been saying in class, never panic.

In the end, I can’t say I mastered and wrote the most efficient code and covered all the functionalities of du. But it is definitely a start and will help me on the future releases. Pull request link.


by eyvadac at September 27, 2014 04:23 AM


Gideon Thomas

My first release for the Open Source class

We were tasked with implementing a command called `du` in Filer. Now, I am already a contributor to Filer, so I know how stuff works in there and what would be expected in this assignment.

My first task was to research what `du` actually did. Well, to be honest, I already knew what it did, but I did not know its “extra” features such as flags, etc. Straight to Google I go. A lot of what I found I didn’t understand. For example, while looking up the `du` docs, I found that it takes flags such as “-a” and “-x”, but I couldn’t make heads or tails of them (at least in the context of Filer). So I decided to keep it simple.

By keeping it simple, I mean implementing two extra features, since it would be unfair for me to do the same assignment as others who have little to no previous experience with NodeJS, JavaScript, and Filer itself. The two extra features were to allow symlinks to be treated as links or files (you can get the disk usage with respect to the space taken by the link itself, or by the file to which it links) and to allow different size formats (kb, mb or gb) to be specified instead of the default bytes.
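
As a rough sketch, usage might look something like the following; the option names here are my own illustration, not necessarily what the pull request implements:

// Hypothetical option names, for illustration only.
sh.du('/photos', { symlinks: 'file', units: 'kb' }, function(err, usage) {
  if (err) throw err;
  console.log(usage + ' kb');
});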

So I started to code. It wasn’t that hard. My experience working with rsync code (in makedrive) using recursion certainly helped. Overall it took me maybe two hours to write all the code and the tests. Then, all my tests passed on the first try. Usually, Dave tells me, when that happens there’s definitely something wrong with your tests. Well, that’s where I spent my next hour…combing through each line of both the implementation and the tests to find a problem. Turns out there wasn’t one. So I was relieved.

So I create a PR and leave it to rest :)

Meanwhile, I helped one of my other classmates out and also worked on another bug. The bug I worked on was to disallow `unlink`s on directories. That didn’t take me long, but I did not follow the Node standard for dealing with the error, and that was a recommendation in my PR. Unfortunately, before I could change it (I took way too long to do so), my PR was merged into the master repo. Hmm, well that sucked. Oh well, I made the fix in a follow-up bug and that was merged in.

Thanks to these events, I ended up with 3 bugs that I fixed for release 1. I am thus a happy man.


by Gideon Thomas at September 27, 2014 02:40 AM


Jordan Theriault

Release 0.1

I have forked the Filer repository from GitHub in order to contribute to the project. Filer is a library which allows you to manage a filesystem within your browser.

du is a Unix command used to estimate the disk usage of a file or directory. I have taken on the task of implementing this shell function in Filer with basic functionality, presenting directory and file sizes in bytes. The version I have produced gives, for a specified directory, the size of the files and subdirectories within it, without detailing the files inside each subdirectory.

Working with an asynchronous programming approach is a concept I wasn’t yet versed in when starting this task. I made heavy use of the Node API, the async README and, of course, the Filer README. Throughout development I learned a lot about function nesting and using modularity to make code more readable. An asynchronous function does not halt other tasks; instead, execution continues on to the next lines of code while the function runs in the background. This is common in JavaScript and especially in Node. Further, using callbacks to pass values out of functions once their work has completed took a lot of getting used to.
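
A tiny example of that non-blocking behaviour (plain Node, not Filer-specific):

var fs = require('fs');

console.log('before');
fs.stat('.', function(err, stats) {
  // Runs later, once the I/O has finished.
  console.log('callback:', err ? err : stats.isDirectory());
});
console.log('after');   // prints before the callback does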

However, once I started programming and testing, I slowly learned the best practices and the workflow. I started by learning from commands that were already implemented, such as Shell.ls and Shell.cat, exploring both the implementation and the test cases. Development was largely based on a test-first approach: I would write a test for the command, then program accordingly. This allowed me to easily check whether what I had written was correct.

Finally, Grunt was an eye-opener. Grunt is a JavaScript task runner, and by using it to run tests I learned how easy development can be with it. Grunt can automate tasks that developers otherwise run endlessly, like minification, compilation, linting, and unit testing. Combined with the test-first approach, it makes sure you’re actually working on the code, not on the process of building everything.

You can check the status of my Du command in Filer in this pull request.

by JordanTheriault at September 27, 2014 02:27 AM

September 26, 2014


Yoav Gurevich

The 0.1 Milestone: Implementing the 'du' linux command in Filer's Shell

For the first major task this semester, I had major logical challenges implementing this on my own. The weeks I've been away from JavaScript work, and programming work in general, heavily hindered me from making the sometimes obvious logical connections in code flow and structure that I should have been able to make with relative ease. Many thanks to fellow classmate and prior co-worker Gideon Thomas for his reference solution, which my own has inevitably become extremely akin to. Time constraints due to other obligations also played a large part in my half-baked attempt at delivering this task in satisfactory fashion. An immediate future goal of mine is to play catch-up yet again and invest more effort into comprehending complex code bases such as Filer, if I am to continue contributing in a productive and meaningful way going forward.

While I am still familiar with the unit test infrastructure and conventions, and can easily understand the high-level purposes and functions needed to complete the task itself, my original design was completely bereft of very basic and standard node.js conventions, including proper error and function-argument handling. I also had trouble remembering how to properly access method properties (syntactically), for what I can only describe as absolutely no reason whatsoever.

Overall, without external sound logic there would have been no way for me to meet the deadline and I'm quite disappointed at my general performance for this milestone, but am now doubly inspired to bring myself back up to speed and uphold whatever little reputation that I already previously built back into these projects.

by Yoav Gurevich (noreply@blogger.com) at September 26, 2014 11:52 PM


Ryan Dang

Working progress on 0.2 Release

For release 0.2, I am working on the issue UI – Update profile style/content for Mobile Appmaker.

My tasks are to update the layout of the profile page, add the list of the user’s created apps to the profile, and change how editing the profile works.

So the first thing I did was create 2 sections for the profile page.

The top section holds all the information about the user and can be switched to edit mode if the user clicks the edit button. The bottom section holds the list of the user’s created apps.

After breaking the page down into two sections, I started by adding the list of the user’s created apps. There is a similar list on the app page, so I tried to duplicate it in the profile page by including the list-of-apps component there. I ran the server and could see the list of created apps showing. I was happy because it was a lot easier than I thought. I created a few more apps to make sure it worked. After I created 3 more apps, I went back to the profile page and the server froze! I spent the next 2 days trying to figure out the issue and a way to make this work. I learned how to inspect the data using indexedDB in order to debug it. I finally found a workaround after 2 days looking into the issue. I also found a bug that kept this from working from the start: the detached function in index.js didn’t work. I filed an issue on their GitHub repo and hopefully it will get fixed soon.

After I got a workaround for displaying the list of the user’s created apps, I started work on the top section. It took me a while to position all the elements correctly. I also tried to make the input fields responsive depending on the browser width. I had pretty much everything working after 4 days on this issue. The only thing I need now is the .svg icon for the edit button. I have requested the image and hopefully they will provide me with one.


by byebyebyezzz at September 26, 2014 03:01 PM


James Laverty

Release 0.1

Implementing du in Filer was a challenging learning experience.

It all started with,

NPM

It was a beautiful day outside, the birds were chirping, the sun was shining, and I was in a room with no windows.


Oh, no big deal, I'll just go download it and install it. BOOM, second error. After that, I decided to talk to a friend of mine who is also taking the course, he said he had no problems and that everything installed very easily. I threw a keyboard.

I eventually got it installed and then tried to run the npm install command to get grunt. BOOM, no folder called npm in the user/<user name>/App Data/Roaming/npm folder... WHAT?! Windows installer bug. Bought new keyboard, threw it out.

I then created an npm folder, ran it again, got a page full of errors, but at least something happened. By now, a few hours had passed, so I decided I needed some sun and took a walk.

Grunt

By now, several hours had passed and I was feeling rather successful; NPM was installed (kinda) and Node.JS was installed. Next I thought I'd tackle Grunt; how hard could that be? Two commands: install, then globally install the cli. Easy, right? Absolutely not. I spent the next hour intensely pounding on my laptop, getting only error after error, reading the manuals, and then getting more errors. I eventually decided to switch environments and use SSH to connect to my school.

I'd prefer not to talk about this dark time in my life that was filled with passion, regret, and sadness. Needless to say, I eventually went back to windows and used my new found knowledge to properly install everything.

Filer

This seemed a daunting task, but using the top-down method I broke everything down, starting with the documentation, then moving on to the test cases, and finally the implementation. It was the easiest part, even though I've never really used JavaScript before. I cloned it from GitHub, pushed my changes, and now need to send a pull request to the teacher!

I called this a roller coaster of success.

Cheers,

James Laverty

by James L (noreply@blogger.com) at September 26, 2014 01:47 PM


Linpei Fan

OSD600: Release 0.1

I finished the project 0.1 release and sent David the pull request (https://github.com/humphd/filer/pull/5) today. Project 0.1 release was to add the du command to the Filer project. Following the project instructions, I did not have much difficulty finishing it.

First of all, I got the code ready by forking filer to my GitHub account and cloning it to my local computer.

Then I installed the node.js in order to have npm package installer in my computer.

Next, I followed the instructions to install grunt, both locally in the project (filer) folder and globally on my computer. At this step, I had trouble installing grunt locally and got errors during the installation. After discussing with classmates, I found out that I needed to use a Linux-style command line instead of the Windows command line to run “npm install”. I used git bash to run the command, and the installation eventually succeeded.

Then I ran “grunt test” to test the code. I ran into problems here as well. After asking David, it was resolved by updating to the latest code: the problem had been fixed upstream quite recently, and I had not yet updated my copy.

At this point, I had the required environment ready, so I started to write the code. I reviewed the code for the cat and ls commands because du is similar to both. du and cat both take two parameters – data and callback – while du and ls both need to return the deep contents of a directory tree, with file size, file path and file content. With these two blocks of code as references, I built the du command without much difficulty. There were 5 files that needed to be modified and/or added:
  • MODIFY: /README.md
  • MODIFY: /test/index.js
  • MODIFY: /src/shell/shell.js
  • MODIFY: /dist/filer.js
  • ADD: /test/spec/shell/du.spec.js

At last, it passed the tests with no errors.

by Lily Fan (noreply@blogger.com) at September 26, 2014 03:52 AM

September 25, 2014


Ali Al Dallal

Detect browser language preference in Firefox and Chrome using JavaScript

There are multiple ways to detect a user's language preferences, but doing it purely client-side was not easy when you had to deal with Chrome. The reason?

Well, Google Chrome does things differently when it comes to navigator.language. You would expect it to return the user's language preference, but that's not the case: Chrome returns the language of the downloaded Chrome build, so if you downloaded the English Chrome it will probably return en-US. Firefox, though, returns the actual browser language preference set by the user, so if I set my language preference on the settings page to Thai (th), Firefox will return th even though my Firefox interface is in English.

Now, how did I solve this when I had to work on localization before? Well, there wasn't any other pure client-side solution at the time that I could think of, and I had to rely heavily on the server side, extracting the language from Accept-Language in the request headers.

But! Finally, thanks to the open web and the super awesome web standards process, where people who make awesome things for the web come together, browsers have agreed on implementing navigator.languages.

navigator.languages returns a read only array of the user's preferred languages, ordered by preference with the most preferred language first. So, if I have my language in my setting in this order Thai (th), English (Canada) and English then navigator.languages should return ['th', 'en-CA', 'en'].

So, if you want to get the user's preferred language, all you have to do now is:

navigator.languages ? navigator.languages[0] : (navigator.language || navigator.userLanguage)  

The above code will return "th" in my setup. So now you have pure client-side JavaScript to get the user's language preference!
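
Wrapped up as a small helper, that one-liner might look like this:

function preferredLanguage() {
  // navigator.languages is the standards-based, ordered list
  // (Chrome 37+, Firefox 32+, Opera 24+); fall back to the older
  // single-value properties elsewhere (e.g. IE11).
  return (navigator.languages && navigator.languages[0]) ||
         navigator.language ||
         navigator.userLanguage;
}

console.log(preferredLanguage()); // "th" with the preferences described above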

As of now this works on Chrome (v37), Firefox (v32) and Opera (v24), but not on IE11.

I hope this is helpful for people looking for a pure client-side solution, and happy localization to all of you :D

by Ali Al Dallal at September 25, 2014 02:02 AM

September 24, 2014


Gabriel Castro

Integer Division CPU Instructions



This post will look at how integer division is implemented in x86, AArch64, and ARMv7.

Integer division


In this post, integer division refers to whole-number, non-floating-point division with truncation (not rounding). That is, 13 / 5 = 2, not 13 / 5 = 3 or 13 / 5 = 2.6. All the following examples use 13 / 5 unless otherwise noted.



x86


In x86 it takes four steps to divide two integers:

1. Put the dividend 13 into the rax register

2. Put the divisor 5 into any other general purpose register, in this case let’s pick r10

3. Put 0 into the rdx register

4. Call div with the register from step 2

This results in the quotient being placed in rax and the remainder in rdx.



mov $13, %rax   // move dividend into rax
mov $5, %r10    // move divisor into r10
mov $0, %rdx    // put 0 into rdx
div %r10        // divide rdx:rax by r10
// %rax == 2 (quotient)
// %rdx == 3 (remainder)



AArch64


AArch64 makes things much simpler because, unlike x86, it does not require the use of designated registers.

1. Put the dividend and divisor into any available registers.

2. Call sdiv with three operands: where you want the result, the dividend, and the divisor



mov w0, 13         // Put 13 in w0
mov w1, 5          // Put 5 in w1
sdiv w2, w0, w1    // Divide w0 by w1 and put the result in w2 (w2 = w0 / w1)

Unlike x86, this instruction does not automatically produce the remainder. Instead the remainder is computed as a % n = a - (n * (a / n)). This means that if you do need mod on AArch64, it takes a couple of instructions rather than just one as on x86.
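
For example, pairing sdiv with the multiply-subtract instruction msub gives the remainder (a sketch):

sdiv w2, w0, w1       // w2 = w0 / w1 (13 / 5 = 2)
msub w3, w2, w1, w0   // w3 = w0 - (w2 * w1) = 13 - 10 = 3, the remainder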



32-bit ARM


Not all 32-bit ARM processors have hardware support for integer division. The ones that do, like ARMv7-A and ARMv7-R, use the same sdiv instruction as AArch64. On the ones that don’t, integer division is implemented by a function called __aeabi_idiv, which is statically compiled into the binary.

Reference

As usual, any code used in this post is available here.

by Gabriel Castro (noreply@blogger.com) at September 24, 2014 02:00 AM


Linpei Fan

Static linking vs. Dynamic linking

A linker is a system program that takes relocatable object files and command-line arguments and generates an executable object file. The linker is responsible for locating the individual parts of the object files in the executable image, ensuring that all the required code and data are available to the image, and that any addresses required are handled correctly.



Static and dynamic linking are two processes of collecting and combining multiple object files in order to create a single executable. 

Static linking is the process of copying all library modules used in the program into the final executable image. This is performed by the linker and it is done as the last step of the compilation process.

During dynamic linking, only the name of the shared library is placed in the final executable file; the actual linking takes place at run time, when both the executable file and the library are in memory.
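
With gcc, for example, the difference is a single flag:

gcc main.c -o hello            # dynamic linking (the default): the library loads at run time
gcc -static main.c -o hello    # static linking: library code is copied into the binary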

Differences between static linking and dynamic linking:

Sharing external programs
  • Static: externally called programs cannot be shared; duplicate copies of the programs are required in memory.
  • Dynamic: several programs can share a single copy of an executable module.

File size
  • Static: statically linked files are significantly larger, because external programs are built into the executable.
  • Dynamic: dynamic linking significantly reduces the size of executables, because only one copy of the shared library is used.

Ease of updating
  • Static: if any external program changes, the executable has to be recompiled and re-linked, or the changes won’t be reflected.
  • Dynamic: individual shared modules and bug fixes can be updated and recompiled independently.

Speed
  • Static: programs that use statically linked libraries are usually faster than those that use shared libraries.
  • Dynamic: programs that use shared libraries are usually slower than those that use statically linked libraries.

Compatibility
  • Static: all code is contained in a single executable module, so there are never compatibility issues.
  • Dynamic: programs depend on having a compatible library; if a library changes, applications might have to be reworked for the new version.

Advantages:

Static linking
  • Efficient at run time.
  • Fewer system calls.
  • Binaries are easier to distribute to diverse user environments.
  • Lets code run in very limited environments.

Dynamic linking
  • More flexible.
  • More efficient resource utilization: less memory, cache space and disk space.
  • Easy to update and fix bugs.

by Lily Fan (noreply@blogger.com) at September 24, 2014 01:42 AM

September 23, 2014


Ava Dacayo

0.1 Release progress

TDD, or Test-Driven Development, is new to me. I have read about it in a book I haven’t finished yet (The Clean Coder: A Code of Conduct for Professional Programmers), but I never thought I would actually be working on a project following this process so soon! Since it’s new to me, I keep mixing up the steps in my head. I should first and foremost focus on writing my tests, but a part of me wants to figure out how I’m gonna code the du shell command for Filer first. I hope I get used to this VERY SOON.

I had problems setting up my dev environment for Filer early this week. I felt lost, probably because open source development is completely new to me, and had to ask for help. After downloading Node.js and typing npm install, error messages spewed out of my command line: it was looking for a package.json file, and I had no idea what it was for or what to put in it. Naturally, I got errors for the other commands too. So I asked David Humphrey, my teacher, what was happening, and the bottom line is I ran npm install in the wrong directory – funny mistake, I know. Just in case someone out there does the same thing: you should execute it while you are in your project’s directory, which in this case is my cloned version of Filer. I also saw a “not found: git” error, which got fixed when I restarted my system.

Dave said I should blog about this to help other students who might be struggling too. I don’t know if it’s just me, but if not, hope it helps!


by eyvadac at September 23, 2014 11:24 PM


Brendan Donald Henderson

Elaborations on GNU as syntax for x86_64 ISA

GNU as Syntax and using assembly directives:

I first want to clarify that the preprocessor capabilities provided by the gcc and g++ compilers are FAR BEYOND what we have access to with GNU’s as assembler.

In fact, we will be showing assembler directives here, not preprocessor directives.

A lot of the common functionality that most people take advantage of still exists though. Here is a table comparing as’s assembler directives and their gcc/g++ preprocessor counterparts:

assembler directive      preprocessor counterpart
.equ / .set              #define
.equiv / .eqv            (see note below)
.macro                   #define (for macro functions)
.include                 #include

** .equiv and .eqv provide extended functionality over the .equ directive; neither allows symbol redefinition.

Macro constants and functions:

The .set and .equ directives define macro constants, just as #define does for gcc/g++, whereas the .macro directive is akin to #define function-style macros.

macro constant:

.equ NULL, 0x0

macro function:

.macro SUM a,b
.if \a == 0
.exitm
.else
mov \a, %eax
add \b, %eax
.endif
.endm

Declaring variables:

promptStr1: .string “Hello There World\n”

Count: .long 0x1000

FloatNum: .double 1234.567

Types to choose from:

Numeric: .long, .byte, .double, .single/.float, .word/.short/.hword, .octa, .quad

String: .string, .ascii, .byte

Variable Scope:

Variables or labels can declare their scope with either of these directives: .local or .global. The local specifier ensures that the variable/label only exists within the current module. The global specifier makes the variable/label visible to all modules, as well as to ld (GNU’s linker).
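
For example:

.global start_here    # visible to other modules and to ld
.local  scratch_buf   # confined to the current module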

GNU as Syntax Rules:

***Side Note: as supports both AT&T and Intel syntax; for the purposes of this discussion I will use AT&T syntax only.

General statement syntax: mnemonic source, destination – where a memory operand takes the form displacement(base, index, scale).

Registers are prefixed with ‘%’:   mov %bx,%ax

Immediate values are prefixed with ‘$’:   mov $0x10012002,%eax

Extended syntax example:

mov -4(%esi,%ebx,8),%edi   -> loads %edi from the address %esi + (%ebx * 8) - 4

 

Other Very Useful Directives:

 

Directive                        Purpose
.text                            Assemble the following statements into the .text (code) section.
.data                            Assemble the following statements into the .data section.
.align                           Pad the location counter to the given alignment.
.if / .elseif / .else / .endif   Conditional assembly: the statements are assembled only if the condition is met.
.space / .skip                   Emit a specified number of bytes with a specific value, zero by default (great for zeroing out arrays).

Creating structs:

Syntax:

.struct expression
label:

Example:

.struct 0
guid:
.struct 100
Player_H:

Alternatively:

PLAYER:
.long 0x1111abc0
.word 0x100

The problem with both of these approaches is that, at least in the source code, when using these “members” of a struct there is no syntax that marks them as members of anything, let alone of a specific struct. In contrast, MASM assembler syntax looks like:

mov eax, PLAYER.guid

-> Here we see the guid member of the PLAYER struct being moved into eax. It is assumed that guid is a 32-bit variable.

Solution: if you are developing on a Linux platform and want a better syntax for structs (and probably for other parts of the syntax), I would recommend using the NASM assembler!!


 

Resources:

https://sourceware.org/binutils/docs/as/

https://www.tortall.net/projects/yasm/manual/html/nasm-stdmac.html


by paraCr4ck at September 23, 2014 10:28 PM


Gabriel Castro

Hello World disassembly

Hello World!

In this post I will take the famous Hello World program in C and analyse the binary GCC creates. I’ll then make some changes to the compiler flags and explain their results.

The code

main.c

#include <stdio.h>

int main() {
    printf("Hello World");
}

MakeFile

CC = gcc

all: normal

normal:
	${CC} -g -O0 -fno-builtin main.c -o hello-normal

clean:
	rm -rf hello-normal

Results

  • make
    builds the hello-normal binary file… it doesn’t do much.
  • file hello-normal
    hello-normal: ELF 64-bit LSB executable,
    x86-64, version 1 (SYSV),
    dynamically linked (uses shared libs),
    for GNU/Linux 2.6.24,
    BuildID[sha1]=cf5d717deb5d6fd26df835e42c9a461b012cb41c,
    not stripped
  • objdump -fs --source hello-normal
    objdump is used to disassemble executables and gives a lot of output;
    below are the parts we care about, but the full output can be seen here

hello-normal:     file format elf64-x86-64
architecture: i386:x86-64, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x0000000000400440


Contents of section .rodata:
 4005d0 01000200 48656c6c 6f20576f 726c6421  ....Hello World!
 4005e0 0a00                                 ..

Disassembly of section .text:

000000000040052d <main>:
#include <stdio.h>

int main() {
  40052d: 55                      push %rbp
  40052e: 48 89 e5                mov %rsp,%rbp
    printf("Hello World!\n");
  400531: bf d4 05 40 00          mov $0x4005d4,%edi
  400536: b8 00 00 00 00          mov $0x0,%eax
  40053b: e8 d0 fe ff ff          callq 400410 <printf@plt>
}
  400540: 5d                      pop %rbp
  400541: c3                      retq
  400542: 66 2e 0f 1f 84 00 00    nopw %cs:0x0(%rax,%rax,1)
  400549: 00 00 00
  40054c: 0f 1f 40 00             nopl 0x0(%rax)
  1. The first block tells a lot of what file told us: this is a 64-bit ELF executable file,
    but it also gives us the start address of the code, which is 0x0000000000400440.
  2. The .rodata section contains constant read-only (ro) data, in our case the “Hello World!” string literal.
    To prove that the data is truly read-only, we could try modifying the string at run time and watch the program fault.

Compiler Flags

If you look back at the Makefile you’ll find that we gave gcc four flags:
* -g produces debugging info
* -O0 turns off all optimizations
* -fno-builtin don’t use certain gcc function optimizations
* -o hello-normal name the resulting binary “hello-normal”
But we didn’t give:
* -static statically links the libraries used
* -O3 maximum optimization
Now here’s a slightly more complicated Makefile to see what each flag does.

Makefile

CC = gcc
DUMP_CMD = objdump -fs --source

all: dump-normal dump-static dump-built-ins dump-no-g dump-O3

build-normal:
	${CC} -g -O0 -fno-builtin main.c -o hello-normal
build-static:
	${CC} -g -O0 -fno-builtin -static main.c -o hello-static
build-built-ins:
	${CC} -g -O0 main.c -o hello-builtin
build-no-g:
	${CC} -O0 -fno-builtin main.c -o hello-no-g
build-O3:
	${CC} -g -O3 -fno-builtin main.c -o hello-O3

dump-normal: build-normal
	${DUMP_CMD} hello-normal > hello-normal.objdmp
dump-static: build-static
	${DUMP_CMD} hello-static > hello-static.objdmp
dump-built-ins: build-built-ins
	${DUMP_CMD} hello-builtin > hello-builtin.objdmp
dump-no-g: build-no-g
	${DUMP_CMD} hello-no-g > hello-no-g.objdmp
dump-O3: build-O3
	${DUMP_CMD} hello-O3 > hello-O3.objdmp

clean:
	rm -rf hello-normal hello-static hello-builtin hello-no-g hello-O3
	rm -rf *.objdmp

This results in five binary dumps.

Removing -g (debugging info)

A diff will show that the following sections are missing: .debug_aranges, .debug_info, .debug_abbrev, .debug_line, .debug_str. They contain all the information a debugger needs to work, which means the debugger can no longer match the disassembled code to our source file.

Removing -fno-builtin

The no in this flag tells gcc to turn off something called builtins; by removing the flag we are turning them back on, as they are on by default. Now, looking at the main() function, we see printf@plt has been replaced by puts@plt:
000000000040052d <main>:
#include <stdio.h>

int main() {
  40052d: 55                      push %rbp
  40052e: 48 89 e5                mov %rsp,%rbp
    printf("Hello World!\n");
  400531: bf c4 05 40 00          mov $0x4005c4,%edi
  400536: e8 d5 fe ff ff          callq 400410 <puts@plt>
}
  40053b: 5d                      pop %rbp
  40053c: c3                      retq
  40053d: 0f 1f 00                nopl (%rax)

Changing -O0 to -O3

We are now telling gcc to make all the optimizations it can, even if it means trading size for speed. As a result, the hello-O3 binary is 12% bigger than the hello-normal binary. It’s not hard to conclude that there isn’t much optimizing to be done on Hello World. A look at the disassembled binary gives us:
__fortify_function int
printf (const char *__restrict __fmt, ...)
{
  return __printf_chk (__USE_FORTIFY_LEVEL - 1, __fmt, __va_arg_pack ());
  400470: be f4 05 40 00          mov $0x4005f4,%esi
  400475: bf 01 00 00 00          mov $0x1,%edi
  40047a: 31 c0                   xor %eax,%eax
  40047c: e9 df ff ff ff          jmpq 400460 <__printf_chk@plt>

gcc has replaced main directly with a call to __printf_chk@plt, which according to the disassembly is a function that printf calls internally. But this still doesn’t explain the bigger size.

Making things static

The binary is now HUGE, 9200% bigger huge. Also, in main we no longer call printf@plt; we call _IO_printf, a function that is now inside our binary. For that matter, all of the <stdio> functions are, and that’s why it’s so big.

More arguments to printf

Let’s go crazy and add more arguments to printf than we have accessible registers.
printf("Hello World!");
is now
printf("Hello World! %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d\n", 0,1,2,3,4,5,6,7,8,9,10,11,12,13);
The compiled code now has to do a bit more work: the first 6 arguments are placed directly in registers. For the remaining arguments, the code takes the address of the stack in %rsp and writes each successive argument at an increasing offset from that location. Then <printf@plt> is called as usual.

Moving printf() into output()

We are now going to make Hello World! a tiny bit more complicated. The new main-with-output.c is now:
#include <stdio.h>

void output() {
    printf("Hello World!");
}

int main() {
    output();
}
000000000040052d <output>:
#include <stdio.h>

void output() {
  40052d: 55                      push %rbp
  40052e: 48 89 e5                mov %rsp,%rbp
    printf("Hello World!");
  400531: bf e4 05 40 00          mov $0x4005e4,%edi
  400536: b8 00 00 00 00          mov $0x0,%eax
  40053b: e8 d0 fe ff ff          callq 400410 <printf@plt>
}
  400540: 5d                      pop %rbp
  400541: c3                      retq

0000000000400542 <main>:

int main() {
  400542: 55                      push %rbp
  400543: 48 89 e5                mov %rsp,%rbp
    output();
  400546: b8 00 00 00 00          mov $0x0,%eax
  40054b: e8 dd ff ff ff          callq 40052d <output>
}
  400550: 5d                      pop %rbp
  400551: c3                      retq
  400552: 66 2e 0f 1f 84 00 00    nopw %cs:0x0(%rax,%rax,1)
  400559: 00 00 00
  40055c: 0f 1f 40 00             nopl 0x0(%rax)
In this version <main> now calls <output> where the call to <printf@plt> used to be, and the address of “Hello World!” is no longer used there. All the code moved: <output> now looks just like <main> used to.

by Gabriel Castro (noreply@blogger.com) at September 23, 2014 03:31 PM


Adam Nicholas Sharpe

Assembly Generated from Function Calls on x86-64

Two weeks ago in SPO600 we were given a task: compile a hello world C program, look at the Assembly code that gets generated then modify the code in small ways and notice how the Assembly code changes.

A second and separate task we were given was to learn about some feature of Assembly, teach it to the other students in the class in the form of a short presentation, and blog about what we discovered. I chose to investigate what happens when a function gets called in C, in x86-64 Assembly, and in particular what happens to the arguments passed into the function.

These two tasks are two separate labs, but since they are similar (and I'm lazy :P), I will combine them into a single blog entry.

WARNING/DISCLAIMER! What follows is a combination of personal research from reading material I found on the web, trial and error with compiling and objdump-ing, and, at times, wild speculation based on what I’m observing. Don’t trust anything I say as authoritative!

A function that only references local variables and arguments is a standalone entity. At compile-time, it has no knowledge of where the arguments came from, or what values they should have. Therefore, when a function begins execution, it must look elsewhere to obtain the value of its arguments, as they are not defined within the function itself. Functions need to make assumptions about where to look for arguments, where to look for return values, and what stuff remains the same after a different function gets called and then returns. For a given computer system, such a set of rules governing the placement of arguments, return values, and other expected behavior, is called the "calling conventions" for that specific system.

A stack frame (I've also seen this called an "activation record") of a function at some moment in time, is the region of memory where the function stores local variables, its arguments, and information needed to restore the state of the caller upon returning. I gave the definition with respect to "some moment in time", because according to my understanding, it is possible for a stack frame to grow and shrink throughout the duration of the function's execution.

The way I, and most of the material I read, visualize memory makes the following assumptions about orientation. This is important to clarify, so that when I write about one location being 'above' or 'below' another, or about the direction of memory growth, your mental picture is the same as mine. Throughout the rest of this post I will assume that:

1. Higher memory addresses are visualized as being above lower memory addresses.

2. The stack frame is a stack data structure, that grows downward, towards lower memory addresses.

This implies that if I have two local variables, X and Y, and Y was declared after X, then Y will have an address that is less than X.

I read some basic tutorials, and watched some videos about what happens when a function gets called. Typically, most explanations I saw told a story about what 'would' happen in an 'ideal case', but the tedious details of what actually happens are very specific to an instruction set architecture and an operating system. The 'ideal case' scenario would go something like this:

A function begins execution. There are registers that hold what are called the stack pointer (SP) and base pointer (BP) of the function's activation record. Above the base pointer is the address that would have been held by the program counter (PC), had the function not been called. This is where the function will return to, by loading this value back into the PC.

Assuming that the size of pointer types is 8, then just above these 8 bytes should be the arguments of the function. How much space each argument takes is inferred from its type. So, for example, if my function is passed an int, an int, and a long double, in that order, and assuming that the size of int and long double is 4 and 10 respectively, then these arguments can be addressed by BP + 8, BP + 12, and BP + 16 respectively. Remember! We are adding 8 to account for the saved PC of the function that called us!

How did those values get there? It was the responsibility of the function that called our function to put them there, as well as to set the SP to point to the right place. So, suppose we wanted to call another function: it would be OUR responsibility to decrement the SP by enough to store the values of the arguments to the function, and to put the right values in there. Whenever a local variable is declared, the stack pointer moves down as many bytes as is needed to make room for that local variable. So, with the numbers we've been using so far, that would be 4 bytes for an int declaration, 8 for a pointer declaration, and 10 for a long double declaration.

But alas! Things are not really this simple on x86-64 and most 'real' architectures, mostly because we can optimize this behavior, and because of alignment issues.

1. First, we must manually store the value of the caller's BP, by pushing it onto the stack. Then we must set our BP to be equal to our SP, which was decremented by the caller.

2. Compilers are smart enough not to have to move the SP for every declaration and function call, but can move it by the right size just once at the beginning of the function. So, if I declare four ints, and then call a function that accepts two ints, then (ignoring alignment) the stack pointer would be decremented by 24 bytes at the very beginning.

3. The stack pointer must always point to an address that is a multiple of 16. Also, the compiler may allocate more memory than you would expect to improve efficiency by aligning some variables in such a way as to waste memory but improve speed.

4. Perhaps most relevant to the code below, the caller will, whenever possible, pass the values of the arguments to a called function using registers directly as opposed to pushing them onto the memory stack. The called function can infer whether to look in the registers or above the base pointer for a particular argument from its type.

So let's start compiling functions to see what actually happens! :D First, let's compile 6 simple functions which are exactly the same except for the types of the arguments and return value:


int i(int a, int b, int c, int d, int e, int f)
{
return (a + b + c) * (d + e + f);
}

char c(char a, char b, char c, char d, char e, char f)
{
return (a + b + c) * (d + e + f);
}

long long ll(long long a, long long b, long long c, long long d, long long e, long long f)
{
return (a + b + c) * (d + e + f);
}

float f(float a, float b, float c, float d, float e, float f)
{
return (a + b + c) * (d + e + f);
}

double d(double a, double b, double c, double d, double e, double f)
{
return (a + b + c) * (d + e + f);
}

long double ld(long double a, long double b, long double c, long double d, long double e, long double f)
{
return (a + b + c) * (d + e + f);
}
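
To reproduce the disassembly below, the compile-and-dump commands would be along these lines (a sketch; the source file name is inferred from the "ex1.o" header in the output):

$ gcc -c -O1 ex1.c
$ objdump -d ex1.o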

Let's take a look at the Assembly output. I turned on some basic optimization ("-O1" flag) because it makes the calling convention more readily transparent. For example, I noticed that without optimization, the compiler would 'always' store the arguments from their registers onto its stack frame, even if it was not necessary. The Assembly output:


ex1.o:     file format elf64-x86-64


Disassembly of section .text:

0000000000000000 <i>:
0: 01 f7 add %esi,%edi
2: 01 fa add %edi,%edx
4: 41 01 c8 add %ecx,%r8d
7: 45 01 c1 add %r8d,%r9d
a: 89 d0 mov %edx,%eax
c: 41 0f af c1 imul %r9d,%eax
10: c3 retq

0000000000000011 <c>:
11: 01 f7 add %esi,%edi
13: 01 fa add %edi,%edx
15: 41 01 c9 add %ecx,%r9d
18: 45 01 c8 add %r9d,%r8d
1b: 44 89 c0 mov %r8d,%eax
1e: 0f af c2 imul %edx,%eax
21: c3 retq

0000000000000022 <ll>:
22: 48 01 f7 add %rsi,%rdi
25: 48 01 fa add %rdi,%rdx
28: 49 01 c8 add %rcx,%r8
2b: 4d 01 c1 add %r8,%r9
2e: 48 89 d0 mov %rdx,%rax
31: 49 0f af c1 imul %r9,%rax
35: c3 retq

0000000000000036 <f>:
36: f3 0f 58 c8 addss %xmm0,%xmm1
3a: f3 0f 58 d1 addss %xmm1,%xmm2
3e: f3 0f 58 e3 addss %xmm3,%xmm4
42: f3 0f 58 ec addss %xmm4,%xmm5
46: f3 0f 59 d5 mulss %xmm5,%xmm2
4a: 0f 28 c2 movaps %xmm2,%xmm0
4d: c3 retq

000000000000004e <d>:
4e: f2 0f 58 c8 addsd %xmm0,%xmm1
52: f2 0f 58 d1 addsd %xmm1,%xmm2
56: f2 0f 58 e3 addsd %xmm3,%xmm4
5a: f2 0f 58 ec addsd %xmm4,%xmm5
5e: f2 0f 59 d5 mulsd %xmm5,%xmm2
62: 66 0f 28 c2 movapd %xmm2,%xmm0
66: c3 retq

0000000000000067 <ld>:
67: db 6c 24 18 fldt 0x18(%rsp)
6b: db 6c 24 08 fldt 0x8(%rsp)
6f: de c1 faddp %st,%st(1)
71: db 6c 24 28 fldt 0x28(%rsp)
75: de c1 faddp %st,%st(1)
77: db 6c 24 48 fldt 0x48(%rsp)
7b: db 6c 24 38 fldt 0x38(%rsp)
7f: de c1 faddp %st,%st(1)
81: db 6c 24 58 fldt 0x58(%rsp)
85: de c1 faddp %st,%st(1)
87: de c9 fmulp %st,%st(1)
89: c3 retq

The important thing to notice is that the integer and floating point arguments are put into particular registers consistently. The order is always the same. For integer types, it's %rdi, %rsi, %rdx, %rcx, %r8, %r9. For floats and doubles, it's %xmm0, %xmm1, ... %xmm7. The return value is always stored in the 'A' register for integer types, and the %xmm0 register for floats and doubles.

However, for long doubles, I am a little bit confused by what I am seeing (maybe someone who understands better can chime in?). After reading the calling convention portion of the System V ABI for x86-64, I assumed that long double arguments should be pushed onto the FPU stack, if they can fit into those registers. On my system they can: sizeof(long double) == 10, CHAR_BIT == 8, and the FPU stack registers are 80 bits wide. Instead, what I am seeing is the long double being put 16 bytes above the base pointer. (Those 16 bytes are where the saved program counter and the caller's base pointer are stored.) Perhaps long doubles must be padded to be 16 bytes? But then why is the return value pushed onto the %st register (top of the FPU stack)? Weird...

In any case, there were four interesting cases that came to mind:

1. There are arguments of different types in different combinations.

2. There are lots of arguments. Specifically, when there are more arguments than there are registers of the appropriate type to store them.

3. The size of the type of some of the arguments or the return value is too wide to fit into registers (a structure type with many fields, for example).

4. When the function accepts a variable number of arguments.

I will write about the fourth case, functions of a variable number of arguments, in a separate blog entry.

Let's start with the case when there are arguments of different types:


float diff_arg_types(int i, char c, long long ll, float f, double d, long double ld, int x, int y, int z)
{
return (i + c + ll + x + y + x) * (f + d + ld);
}

This function produces the following assembly (this time, with no optimizations, since I want to be very explicit about which registers correspond to which arguments):


0000000000000000 <diff_arg_types>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 89 7d fc mov %edi,-0x4(%rbp)
7: 89 f0 mov %esi,%eax
9: 48 89 55 f0 mov %rdx,-0x10(%rbp)
d: f3 0f 11 45 ec movss %xmm0,-0x14(%rbp)
12: f2 0f 11 4d e0 movsd %xmm1,-0x20(%rbp)
17: 89 4d e8 mov %ecx,-0x18(%rbp)
1a: 44 89 45 dc mov %r8d,-0x24(%rbp)
1e: 44 89 4d d8 mov %r9d,-0x28(%rbp)
22: 88 45 f8 mov %al,-0x8(%rbp)
25: 0f be 55 f8 movsbl -0x8(%rbp),%edx
29: 8b 45 fc mov -0x4(%rbp),%eax
2c: 01 d0 add %edx,%eax
2e: 48 63 d0 movslq %eax,%rdx
31: 48 8b 45 f0 mov -0x10(%rbp),%rax
35: 48 01 c2 add %rax,%rdx
38: 8b 45 e8 mov -0x18(%rbp),%eax
3b: 48 98 cltq
3d: 48 01 c2 add %rax,%rdx
40: 8b 45 dc mov -0x24(%rbp),%eax
43: 48 98 cltq
45: 48 01 c2 add %rax,%rdx
48: 8b 45 e8 mov -0x18(%rbp),%eax
4b: 48 98 cltq
4d: 48 01 d0 add %rdx,%rax
50: 48 89 45 c8 mov %rax,-0x38(%rbp)
54: df 6d c8 fildll -0x38(%rbp)
57: f3 0f 10 45 ec movss -0x14(%rbp),%xmm0
5c: 0f 5a c0 cvtps2pd %xmm0,%xmm0
5f: f2 0f 58 45 e0 addsd -0x20(%rbp),%xmm0
64: f2 0f 11 45 c0 movsd %xmm0,-0x40(%rbp)
69: dd 45 c0 fldl -0x40(%rbp)
6c: db 6d 10 fldt 0x10(%rbp)
6f: de c1 faddp %st,%st(1)
71: de c9 fmulp %st,%st(1)
73: d9 5d d4 fstps -0x2c(%rbp)
76: f3 0f 10 45 d4 movss -0x2c(%rbp),%xmm0
7b: f3 0f 11 45 c0 movss %xmm0,-0x40(%rbp)
80: 8b 45 c0 mov -0x40(%rbp),%eax
83: 89 45 c0 mov %eax,-0x40(%rbp)
86: f3 0f 10 45 c0 movss -0x40(%rbp),%xmm0
8b: 5d pop %rbp
8c: c3 retq

There's a lot of stuff here, but we don't care about most of it at the moment. First look at the run of mov instructions at offsets 4 through 22, where the arguments are saved from their registers. It looks like the compiler is just using the next available register for that type! For example, it uses the integer registers until it hits a float and a double. So it puts those arguments in %xmm0 and %xmm1, and continues to put the final three int arguments into registers %rcx, %r8, and %r9. And the long double gets put 16 bytes above the base pointer, since at offset 6c we see that location's value being pushed onto the FPU stack with fldt 0x10(%rbp).

Now! Let's see what will happen if we pass in more arguments than there are registers to store those arguments. My C code:


int i(int a, int b, int c, int d, int e, int f, int g, int h, int i, int j)
{
return (a + b + c + d + e) * (f + g + h + i + j);
}

char c(char a, char b, char c, char d, char e, char f, char g, char h, char i, char j)
{
return (a + b + c + d + e) * (f + g + h + i + j);
}

double d(double a, double b, double c, double d, double e, double f, double g, double h, double i, double j)
{
return (a + b + c + d + e) * (f + g + h + i + j);
}

The Assembly (this time with optimizations turned on again):


0000000000000000 <i>:
0: 01 f7 add %esi,%edi
2: 01 fa add %edi,%edx
4: 01 d1 add %edx,%ecx
6: 41 01 c8 add %ecx,%r8d
9: 44 03 4c 24 08 add 0x8(%rsp),%r9d
e: 44 89 c8 mov %r9d,%eax
11: 03 44 24 10 add 0x10(%rsp),%eax
15: 03 44 24 18 add 0x18(%rsp),%eax
19: 03 44 24 20 add 0x20(%rsp),%eax
1d: 41 0f af c0 imul %r8d,%eax
21: c3 retq

0000000000000022 <c>:
22: 41 01 f8 add %edi,%r8d
25: 44 01 c6 add %r8d,%esi
28: 01 f2 add %esi,%edx
2a: 01 d1 add %edx,%ecx
2c: 44 02 4c 24 20 add 0x20(%rsp),%r9b
31: 44 89 c8 mov %r9d,%eax
34: 02 44 24 08 add 0x8(%rsp),%al
38: 02 44 24 10 add 0x10(%rsp),%al
3c: 02 44 24 18 add 0x18(%rsp),%al
40: 0f af c1 imul %ecx,%eax
43: c3 retq

0000000000000044 <d>:
44: f2 0f 58 c8 addsd %xmm0,%xmm1
48: f2 0f 58 d1 addsd %xmm1,%xmm2
4c: f2 0f 58 da addsd %xmm2,%xmm3
50: f2 0f 58 e3 addsd %xmm3,%xmm4
54: f2 0f 58 f5 addsd %xmm5,%xmm6
58: f2 0f 58 fe addsd %xmm6,%xmm7
5c: f2 0f 58 7c 24 08 addsd 0x8(%rsp),%xmm7
62: 66 0f 28 ef movapd %xmm7,%xmm5
66: f2 0f 58 6c 24 10 addsd 0x10(%rsp),%xmm5
6c: f2 0f 59 e5 mulsd %xmm5,%xmm4
70: 66 0f 28 c4 movapd %xmm4,%xmm0
74: c3 retq

Here we can see that the compiler uses as many registers as it can, and when it runs out, it starts to place the arguments above the base pointer of the callee function. Also note that all the arguments smaller than 8 bytes get aligned to exactly 8 bytes. So in function 'c', for example, where all the arguments are characters, the seventh, eighth, ninth, and tenth arguments get stored at 0x8, 0x10, 0x18, and 0x20 above the stack pointer, respectively. These are eight byte chunks. (Note: With optimization turned on, the function does not push %rbp onto the stack and assign a new value to it, so it reaches 8 bytes above the STACK POINTER, and NOT 16 bytes above the BASE POINTER as in example 2. I apologize for the confusion.)

Similarly, with the double arguments, the first eight are stored in %xmm0 - %xmm7, and the last two are stored at %rsp + 0x8 and %rsp + 0x10.

Now, the last case of interest is when we pass to the function, or return from it, values that are too wide for registers:


typedef struct {
char c;
int i;
long long ll;
float f;
double d;
long double ld;
} big_struct;

big_struct fun(big_struct b1, big_struct b2)
{
big_struct b1b2 = {.c = b1.c + b2.c,
.i = b1.i + b2.i,
.ll = b1.ll + b2.ll,
.f = b1.f + b2.f,
.d = b1.d + b2.d,
.ld = b2.ld + b2.ld };
return b1b2;
}

And the Assembly:


0000000000000000 <fun>:
0: 48 89 f8 mov %rdi,%rax
3: 8b 4c 24 0c mov 0xc(%rsp),%ecx
7: 03 4c 24 3c add 0x3c(%rsp),%ecx
b: 48 8b 54 24 10 mov 0x10(%rsp),%rdx
10: 48 03 54 24 40 add 0x40(%rsp),%rdx
15: f3 0f 10 4c 24 18 movss 0x18(%rsp),%xmm1
1b: f3 0f 58 4c 24 48 addss 0x48(%rsp),%xmm1
21: f2 0f 10 44 24 20 movsd 0x20(%rsp),%xmm0
27: f2 0f 58 44 24 50 addsd 0x50(%rsp),%xmm0
2d: db 6c 24 58 fldt 0x58(%rsp)
31: d8 c0 fadd %st(0),%st
33: 0f b6 74 24 38 movzbl 0x38(%rsp),%esi
38: 40 02 74 24 08 add 0x8(%rsp),%sil
3d: 40 88 37 mov %sil,(%rdi)
40: 89 4f 04 mov %ecx,0x4(%rdi)
43: 48 89 57 08 mov %rdx,0x8(%rdi)
47: f3 0f 11 4f 10 movss %xmm1,0x10(%rdi)
4c: f2 0f 11 47 18 movsd %xmm0,0x18(%rdi)
51: db 7f 20 fstpt 0x20(%rdi)
54: c3 retq

From this we can infer that the two structs are laid out on top of each other, above the stack pointer. Each field from the struct is added to its corresponding field in the other struct, and stored in a register. For example: 0xc + %rsp is added to 0x3c + %rsp and stored in %ecx, 0x10 + %rsp is added to 0x40 + %rsp and stored in %rdx, and so on. What's interesting is how the struct is returned. The calling function is expected to put into the register %rdi the base address of a memory location in which the called function is supposed to store the return value. Thus, at offsets 40 through 51 we see the values in the registers where the results of our previous calculations were put, being stored at an offset from the address in %rdi.

I felt I learnt a lot from investigating the x86-64 calling conventions on my machine. However, I now have more questions than when I started :) Many of these can probably be answered by a combination of further experimentation and reading documentation and standards, but alas, that is a topic for another blog post! The questions at the front of my mind at the moment are:

1. Why aren't arguments of type long double passed through the FPU register stack?

2. What happens if the size of the struct, and the types of the fields are changed? Are structs ever passed in registers?

3. Tricky alignment questions (really, I just want a set of explicit alignment rules).

4. In the last example, I am having trouble understanding the two instructions at offsets 38 and 3d. I know from reading parts of the ABI standard that the address of where a struct is to be put is stored in %rdi. But here, it looks like %rdi is being manipulated in some way. Also, the first field of the first struct begins 0xc bytes above the stack pointer. But here it looks like the computer is grabbing data at 0x8 bytes above the stack pointer? That leaves only 4 bytes of meaningful data between 0x8 and 0xc. What is this data, and what the heck does it have to do with %rdi (the address of where to store the return value)?

by Adam Sharpe (noreply@blogger.com) at September 23, 2014 06:30 AM

September 22, 2014


Dylan Segna

GeckoApp and its Extensions

After a panic-filled end to last week involving partitioning errors and the dread of data loss, I was able to smooth things over and now have the recommended build system for Fennec as described here:

https://wiki.mozilla.org/Mobile/Fennec/Android#Linux

With Fennec now successfully built, I have moved on to investigating the GeckoApp class that I mentioned in a previous post.
After some searching, I was able to find the two classes that extend GeckoApp:

http://hg.mozilla.org/mozilla-central/file/5e704397529b/mobile/android/base/BrowserApp.java

BrowserApp is the main activity defined in Fennec’s Android manifest, meaning it is what opens by default when the application runs. This is the Firefox mobile browser.

http://hg.mozilla.org/mozilla-central/file/5e704397529b/mobile/android/base/webapp/WebappImpl.java

WebappImpl is an activity that seems to be created when a web app wants to run using Fennec, and doesn’t have any of the additional features of the full Fennec browser.

I am currently looking for a way to use this activity to initiate a custom web app, and will report my findings in the next post.


by Dylan Segna at September 22, 2014 06:11 PM


Gary Deng

Get to know Redis in 10 minutes

Redis (Remote Dictionary Server) is an in-memory, key-value database, commonly referred to as a data structure server. It is open source software released under the terms of the three-clause BSD license. Most of the Redis source code was written and is copyrighted by Salvatore Sanfilippo and Pieter Noordhuis. Many companies, including Twitter, Stack Overflow, and GitHub, use Redis. Common use cases are caching, pub/sub, queues, and counters.

Data types supported

  1. Strings: can contain any kind of data, for instance a JPEG image or a serialized Ruby object. A String value can be at most 512 megabytes in length.
  2. Lists: lists of strings, sorted by insertion order. You can add elements to a Redis List by pushing new elements onto the head (on the left) or the tail (on the right) of the list.
  3. Sets: unordered collections of Strings that do not allow repeated members.
  4. Hashes: maps between string fields and string values, so they are the perfect data type to represent objects.
  5. Sorted sets: similar to Redis Sets, non-repeating collections of Strings. The difference is that every member of a Sorted Set is associated with a score, which is used to keep the sorted set ordered, from the smallest to the greatest score.

Install Redis

Download, extract, and compile Redis with the following commands:

$ wget http://download.redis.io/releases/redis-2.8.17.tar.gz
$ tar xzf redis-2.8.17.tar.gz
$ cd redis-2.8.17
$ make

To check that the Redis server is working, send a PING command using redis-cli:

$ redis-cli
redis 127.0.0.1:6379> ping
PONG

Basic Redis Commands

  • DBSIZE: Return the number of keys in the currently-selected database.
  • EXISTS key: Returns 1 if key exists, else returns 0.
  • KEYS pattern: Returns all keys matching pattern.
  • FLUSHDB: Delete all the keys of the currently selected DB. This command never fails.
  • SHUTDOWN SAVE: Force a DB saving operation even if no save points are configured.
  • GET key: Get the value of key. If the key does not exist, the special value nil is returned.
  • SET key value: Set key to hold the string value. If key already holds a value, it is overwritten, regardless of its type.
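
A quick redis-cli session shows a few of these in action (the key name is just an example):

$ redis-cli
redis 127.0.0.1:6379> SET name "Alice"
OK
redis 127.0.0.1:6379> GET name
"Alice"
redis 127.0.0.1:6379> EXISTS name
(integer) 1
redis 127.0.0.1:6379> DBSIZE
(integer) 1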

by garybbb at September 22, 2014 01:34 AM


Kieran Sedgwick

[OSD600] An overview of Browserify

Browserify is a dependency bundling tool for JavaScript libraries written with the CommonJS module system, most recognizable in node.js.

The CommonJS module loader is intended to read local files as required, similar to include statements. For instance, a JavaScript library might make use of a third-party module, represented here as mathLib, in this fashion:

// main.js
var mathLib = require('mathLib.js');

return mathLib.multiply(2, 2) + 5;
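
For completeness, the mathLib module itself would export its functions in CommonJS style (a sketch of the hypothetical module used above):

// mathLib.js
module.exports = {
  // multiply two numbers; exported so require('mathLib.js') can use it
  multiply: function (a, b) {
    return a * b;
  }
};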

Using this natively in a browser environment, meaning loading dependencies as multiple files, requires the use of a browser-based module loading framework. RequireJS was written for this purpose and does the job well; for purely browser-based libraries, it isn't a bad option.

When sharing a codebase between Node.js and the browser however, there are two main downsides. The first is that the extra scaffolding that the framework requires isn’t natively supported in Node.js, requiring hacky workarounds to retain this format:

// main.js
define(function(require, exports, module) {
  var mathLib = require('mathLib.js');

  return mathLib.multiply(2, 2) + 5;
});

The second and far more important downside is the complications that arise when a dependency is only suited for one environment or the other. In the case of a library written to work in node.js and the browser, if the codebase uses any of the node.js framework’s core libraries, the browser is often left without an alternative.

Browserify solves these problems in three ways. First, it bundles all dependencies for a library into a single file, eliminating the need to use a hacky workaround. The code is written in standard CommonJS format, and "compiled" into a single JavaScript file for use in the browser.
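
In practice, bundling is a single command (a sketch; file names are illustrative):

$ npm install -g browserify
$ browserify main.js -o bundle.js

The resulting bundle.js can then be loaded in the browser with an ordinary script tag, e.g. <script src="bundle.js"></script>.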

Second, whenever Browserify detects a node.js native library being used as a dependency for the project, it replaces it with a browser-safe version in the bundle it creates.

Lastly, in a case where a dependency is an environment-exclusive third-party library, the user can specify an alternative to be included in the bundled version. This makes Browserify ideal for allowing cross-environment compatibility for JavaScript libraries.

The community

Browserify was initially developed by James Halliday, also known as substack. It operates under the MIT license, whose main implications are:

  1. Redistribution and modification of source code
  2. Sale of the source code

Work on the project is tracked on Github, using the Github issues system. Communication is largely done through IRC & twitter and, to date, over 100 people have contributed code to the project.

by ksedgwick at September 22, 2014 12:01 AM


Hunter Jansen

Single Steppin' with GDB

SPO Lab3 Single Steppin’ With GDB

Written by Hunter Jansen on September 22, 2014

The third lab for my SPO600 course involves investigating a single specific aspect of GDB, and presenting my findings in a short, informative session to my fellow classmates. I've used this opportunity to cover how to step through a program step by step and also view the values in the registers.

I also used this opportunity to tease/create an early alpha presentation using the presentation platform I'm working on, Show Me The Thing, but that's for a different post.

Lab Description

The lab can be found here.

The Slides

The slides are available on google drive here.

I’ve also made them available on calmlycoding @ calmlycoding.com/spo-debug/#solo

Let’s start Debugging

Alright, so in order to begin debugging with the GNU Debugger (GDB), we need to compile our program using the -g flag.

For our examples today, we’re going to use the super simple step.c program below:

#include <stdio.h>

void print_string(int num){
    printf("Printing: %d\n", num);
}

int main(){
    int i;
    for(i=0; i<10; i++){
        print_string(i);
        printf("After");
    }
    return 0;
}

All this does is loop 10 times, calling a separate function to print out 'Printing: 0' through 'Printing: 9'. Basic, but it'll work for our needs.

We’ll compile it using:

    gcc -g step.c
    

You’ll still receive your expected a.out file from this which you can run as normal, but you’ll also be able to run it using the gdb command to begin debugging and stepping through. To begin using GDB you simply need to call gdb with the desired program. So for our example we use:

    gdb a.out
    

You'll know that you're running with gdb because your command-line prompt is now (gdb).

Commands

So, obviously there’s a bunch of commands and stuff you can do with gdb, but I’m just covering the basic functionality here. The most basic thing you can do is use the run command to invoke the program as though you’d called ./a.out normally.

    run
    

This would run through the program normally, and provide us with 'Printing: 0' through 'Printing: 9'. It's also worth mentioning that if you're running a program that expects arguments to be passed in, you can do this by providing those arguments following the run command:

    run arg1 arg2
    

Super simple so far - but then again we haven’t really done anything…

Breaking Points

One of the key aspects of any debugger is the ability to add break points to lines of code that you'd like the program to stop on, so you can check everything out. Luckily, adding a breakpoint in gdb is simple.

To add a breakpoint to a specific line of code we just use the following command in gdb:

    break step.c:5
    

In this example, we're adding a break point to the 5th line in the source. More often than not, though, you'll just want to add a breakpoint to a function, pausing when the function is invoked from any source. To do this we call break followed by the desired function name. So if we wanted to add a break point to the print_string function:

    break print_string
    

After adding a break point, the program will stop at that point when you run it. So for our previous example, we'd get the following output after issuing the 'run' command:

    Breakpoint 1, print_string (num=0) at step.c:4

Stepping Through

Another key bit of functionality when it comes to debuggers is the ability to step through a program one line at a time. There are two different commands when it comes to walking through with GDB - step and next. While similar, they are indeed different. Step will enter a function call, whereas next will skip over it. But what does that mean?

Consider the following snippet:

1    for(i=0; i<10; i++){
2        print_string(i);
3        printf("After");
4    }
    

If you were to step on line 2, you’d find yourself with a break on the first line in the print_string function - however if you use the next command, it’ll execute the print_string function and then break on the following line.

To step in GDB you simply run the step command:

    step

or, if you want to step through the next machine instruction as opposed to the next line of source you can use:

    stepi

Similarly, to use the next command you run next in gdb:

    next

or, if you want to advance by the next machine instruction as opposed to the next line of source, use:

    nexti

Checking register values

Cool, so now we can set break points and step through instructions and whatnot. While that's all fun, it's not really useful without one last part - checking the values of variables. There's really no point in stepping through a program without being able to see what your variables hold.

As with everything else in this walkthrough, checking variable values is pretty simple. The command to check a variable by name is ‘print’.

So, for example if we were stopped in our print_string function and wanted to check the current value of the num variable in gdb we’d run:

    print num

This would give us the output along the lines of:

    $1 = 4

In this case, the '$1' is GDB's value history number, which auto-increments as you print more values. And since I printed this on the fifth execution of the print_string function (num counts up from 0), the value of num is 4.

Another value you might want to check on is the contents of the registers, or of a specific register. The command you use to do this is 'info'. To view all the registers and their values you can enter:

    info reg

or:

    info registers

If you know the specific register whose value you'd like to check, you can call the info reg command with the name of the register:

    info reg rax

provides the output:

    rax 0xc 12
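
Putting it all together, a typical session might look something like this (output abbreviated; exact wording varies by GDB version):

    $ gdb a.out
    (gdb) break print_string
    (gdb) run
    Breakpoint 1, print_string (num=0) at step.c:4
    (gdb) print num
    $1 = 0
    (gdb) info reg rax
    (gdb) next
    (gdb) continue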

That’s about all the points that I’ll be touching on this time, and that should be plenty to get you up and running with GDB to help you with debugging!

Until Next time
-Hunter

September 22, 2014 12:00 AM

September 21, 2014


Andor Salga (asalga)

Gomba 0.1

Play demo

I was reading the Processing book Nature of Code by Daniel Shiffman and I came to a section dealing with physics. I hadn't written many sketches that use physics calculations, so I figured it would be fun to implement a simple runner/platformer game that uses forces, acceleration, velocity, etc. in Processing.

I decided to use a component-based architecture and I found it surprisingly fun to create components and tack them on to game objects. So far, I only have a preliminary amount of functionality done and I still need to sort out most of the collision code, but progress is good.

This marks my 0.1 release. I still have quite a way to go, but it's a start. You can take a look at the code on github or play around with the demo.

I got a bunch of inspiration from Pomax. He's already created a Processing.js game engine you can check out here.

BTW “gomba” in Hungarian is mushroom :)


Filed under: Game Development, Open Source, Processing, Processing.js

by Andor Salga at September 21, 2014 01:17 AM

September 20, 2014


Fadi Tawfig

Brushing Up On My Javascript

Up until now I had only ever used Javascript for simple scripts to add basic dynamic elements to webpages. Although this was perhaps the original intent of the language, Javascript has grown to serve a much wider variety of purposes and is now more relevant than ever as the web is becoming an increasingly flexible and powerful platform for applications.

Using Javascript was previously the source of quite a bit of frustration for me. I found it finicky and cumbersome to accomplish anything with it. The reason for this was that I had never taken the time to sit down and properly learn the language. I was trying to use Javascript the same way I would Java or similar object-oriented languages and then get frustrated when it behaved differently from my expectations. This was a big mistake.

Upon learning that I would be using Javascript to some significant extent this semester, I figured it was about time I actually learn how to use it correctly. I did some searching for recommended Javascript learning resources. One name I saw come up a few times was Douglas Crockford, a man widely known for his proficiency with the language.

I was pointed to a book by Crockford called Javascript: The Good Parts, as well as a series of video lectures available for free here. I picked up a copy of his book and have been watching the lectures, and both have so far been massively helpful.

Probably the most important thing anyone new to the language should know is that even though Javascript is referred to as an object-oriented language, the way it implements OO is significantly different from other languages with which you might be familiar. First off, just about everything in Javascript is an object. This includes arrays, regular expressions, even functions. Another important distinction from traditional OO languages is the nature of objects themselves. Javascript doesn't have classes. Objects in Javascript can be created on the fly using object literal notation. E.g.:

//Create an object called student with the properties name and id
//and give those properties a value. 
var student = {
	"name": "Fadi", 
	"id": 20
};
 
//These properties can now be accessed as such:
console.log(student.id);
console.log(student["name"]); 
//Both of these methods of access are entirely interchangeable.

Although this seems a bit strange at first, as I got used to it it became a very intuitive way of creating objects.

This is only the tip of the iceberg when it comes to what sets Javascript apart from other languages. Things such as prototype-based inheritance, function objects, and a lack of block scope should also be considered when using Javascript, and using these features effectively can help you get the most out of the language. I highly recommend anyone who's interested in (or confused by) Javascript to check out the book/videos I linked above.
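
For instance, prototype-based inheritance lets one object delegate to another directly (a minimal sketch using ES5's Object.create, not taken from the book):

var person = {
    greet: function () {
        return "Hi, I'm " + this.name;
    }
};

//student delegates to person through its prototype chain
var student = Object.create(person);
student.name = "Fadi";

console.log(student.greet()); //"Hi, I'm Fadi"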

- Fadi


by ftawfig at September 20, 2014 09:16 PM

September 19, 2014


Ryan Dang

Release 0.1

So after looking around the code and the list of existing issues for Mobile Appmaker, I decided to pick a simple issue to start with. I believe it's better to start with small and easy bugs and then slowly work your way up to more complicated ones. Starting with small bugs will help you slowly and steadily learn about the system.

So the first bug I picked was UI-Update tabBar. 

My task was to update the current 3 tab icons to the ones they provided. At first I thought this was going to be an easy task, but when I started working on it, I noticed that the current system uses .svg images for all the icons. What they provided me was just one single .svg file of how the tabBar should look. I had to get Inkscape, an .svg image editor, to extract the 3 icons and then export each icon to a separate .svg file. I also needed to learn how to include an .svg file in an html page. I had some trouble with the configuration, and the images didn't show up where they were supposed to. The big icon in the middle also needed to be manually adjusted to fit the specs. It took me a while to get it right.

I finally got everything done after about 8 hours of working on this issue. I did a review of the code to make sure I followed their coding style and submitted a pull request to fix the bug on the Mobile Appmaker repo. The pull request was merged and the UI-Update tabBar issue is closed :). My first contribution to the project is a success!


by byebyebyezzz at September 19, 2014 07:27 PM


Linpei Fan

SPO600: Lab2

Brief description: 
Wrote a simple C program to display “Hello World!”, and compiled it using the command “gcc -g -O0 -fno-builtin”. Then used the “objdump” command with the options -f, -d, -s, and --source to display the information in the output file.
Then made the following changes to see the differences in the results.
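
Reconstructed, the commands were along these lines (the source file name is assumed):

$ gcc -g -O0 -fno-builtin hello.c
$ objdump -f a.out
$ objdump -d a.out
$ objdump -s a.out
$ objdump --source a.out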

5) Move the printf() call to a separate function named output(), and call that function from main().

Original output file: a.out
Output file after change: hello_all5.out

Before the change, the disassembly has only a <main> section for the source code. After the change, it shows both <main> and a separate <output> section. (The screenshots of the objdump output are omitted here.)

6) Remove -O0 and add -O3 to the gcc options. Note and explain the difference in the compiled code.
-O3 optimizes more aggressively for execution time. It can reduce execution time, but it increases memory usage and compile time.

Output file before change: hello_all5.out
Output file after change: hello_all6.out

I used the “time” command to check the execution time of the above files. (The screenshot of the results is omitted here.)
hello_all6.out is compiled with the option -O3, so it is supposed to have less execution time. However, it takes much longer in real time than the previous one, although it does take less sys time.

I also compared the sizes of the output files with -O0 and -O3. hello_all5.out, which is compiled with -O0, is smaller than hello_all6.out, compiled with -O3. Apparently, compiling with -O3 does not reduce the file size; instead, it increases it.


The following comparison is based on running the “objdump --source” command on both files (screenshots omitted here).

Comparing the two results, I found:
1 --- The sequence of the <main> and <output> sections differs between the two results. For hello_all5.out, compiled with the -O0 option, the <main> section appears after the <frame_dummy> section, and the <output> section comes after <main>. By contrast, for hello_all6.out, compiled with the -O3 option, the <main> section appears right after the line “Disassembly of section .text”, while the <output> section still appears after <frame_dummy>.

2 --- The contents of the <main> and <output> sections differ between the two results. For hello_all6.out, both sections are shorter than those of hello_all5.out. There are 6 instructions in the <main> section of hello_all5.out and 9 instructions in its <output> section. By contrast, there are only 3 instructions in the <main> section of hello_all6.out and 4 instructions in its <output> section.

When I ran “objdump -s” on both files, I found more differences. The contents of section .debug_line and section .debug_str of hello_all5.out are shorter than those of hello_all6.out. Moreover, the result generated from hello_all6.out has one more section: .debug_ranges. (The screenshot of section .debug_str from hello_all5.out is omitted here.)

It is good to know that with different compiling options, the compiler compiles the program in different ways. Each option serves a different purpose. Accordingly, the assembler contents of each object file are different as well.

Using the “objdump” command, it is easy to see the assembler contents of an object file. It's a good start to learning assembly language. However, I still don't fully understand what the assembler contents stand for. As I learn more assembly language, I think it won't be a problem for me anymore.

by Lily Fan (noreply@blogger.com) at September 19, 2014 02:54 PM

September 18, 2014


Edwin Lum

Contributing to an open-source community.

Of the courses I am taking this semester at Seneca College, one stands out in particular. SPO600 (Software Portability and Optimization) is a very special course that involves real projects being worked on in the industry. It focuses on open-source projects, and as such, we are tasked with investigating how the contribution process works for two different communities.

Three.js

The first project I decided to take a look at is called three.js, which in the author's words strives “to create a lightweight 3D library with a very low level of complexity — in other words, for dummies.”

What this means is that more people can create 3D web content without an extensive understanding of the math and back-end machinery that makes it work. The beauty of this is that three.js requires no additional plugins on most browsers that support HTML5 (most alternatives do require plugins).

Three.js uses GitHub for most of its issue tracking as well as bug reporting. There is also a wiki page that talks about how to contribute to three.js. I feel like they tried to make it as simple and centralized as possible so that more individuals can help and contribute. Of course, there are still coding style guidelines that detail how the code should be formatted, but this is almost required in order to not have a headache when reading code that so many people could be writing.

Three.js adopts the MIT license.

 Openstack

For the other project to investigate, I chose Openstack: Open Source Cloud Computing Software.

While Openstack also uses Github, they have a much more extensive process overall. They have summarized it in 7 steps, which include joining their mailing list, joining their IRC channel, learning how to work with the “Gerrit” review system, and signing a Contributor License Agreement.

Conclusion

In comparison, Openstack seems to require a lot more steps just to get started. As with all processes, there are pros and cons to each. In my opinion, three.js takes a much more agile approach, eliminating many restrictions and keeping things simple and centralized. There are also lots of people commenting, helping, and mentoring new contributors right on Github. Openstack takes a much more robust approach: it requires more reading and understanding before submission, utilizes a different review system entirely, and enforces its code styling quite strictly. They also have their fair share of resources for helping new contributors, as seen in their IRC channel and mailing lists.


by pyourk at September 18, 2014 07:50 PM

September 17, 2014


Glaser Lo

Firefox OS

Firefox OS is the topic I am going to research for my OSD600 case study. The project was started a few years ago under the codename “Boot2Gecko”. Mozilla, the non-profit organization behind Firefox, brought their successful browser engine to the mobile platform and created a whole new operating system from Linux and open web standards. For app developers, Firefox OS makes it easy to create apps with HTML, CSS, JavaScript, and WebAPIs. Given the open source nature of the project, manufacturers can produce very low cost smartphones running Firefox OS. This benefits people who cannot afford expensive smartphones and helps them join the era of mobile computing.

Since I like how Mozilla makes Firefox flexible and stable, and how they keep pushing the open web forward, I am quite interested in their new OS project and curious to see what it will bring.


by gklo at September 17, 2014 08:12 PM


Ryan Dang

Mobile Appmaker, first bug found!

Mobile Appmaker, an app that makes apps!

As a web developer, I am always fascinated by new technologies that can provide users with new experiences through the web browser. I figured this would be an awesome project for me to get involved in.

My first ever experience trying to run the app on my local machine was this error: Arguments to path.join must be strings. At first I thought something in my setup was not configured correctly. I started searching for the reason that might cause this error. I tried updating my node.js version to the latest, updating my gulp version to match the one the project uses, and even changing from the Windows 8 operating system to Windows 7. I tried everything I could think of, and the error was still there. After 3 hours of trying to fix the issue without any results, I decided to post the issue Arguments to path.join must be strings on the Mobile Appmaker repository. I got a reply shortly after, asking me for more information about my setup configuration. The issue was then marked as a critical bug. Apparently there was a problem in webmaker-download-locales that caused the app to be incompatible with the Windows operating system. The bug was fixed one day after the issue was filed, and I was able to run and explore the app locally :).

The next blog post will be about my first bug fix for Mobile Appmaker. Stay tuned.


by byebyebyezzz at September 17, 2014 03:18 PM

September 16, 2014


Dylan Segna

Using Firefox Mobile with Cordova

Is it possible to package Firefox mobile, known as Fennec, with a PhoneGap application?
After looking closely at Android-specific source code for both Fennec and PhoneGap’s underlying system, Cordova, these are the things that I have found that may help answer this question.

Cordova runs using the CordovaActivity class in Android :

https://github.com/apache/cordova-android/blob/master/framework/src/org/apache/cordova/CordovaActivity.java

This contains the CordovaWebView which displays the HTML-based PhoneGap application using the native web-view capabilities.
CordovaActivity helps create the bridge between the HTML and JavaScript of the PhoneGap application, and the native Android code by handling input and receiving/relaying Android messages to the HTML application.

It is in this class that the substitution of Fennec would have to take place, along with any specific implementations of functions found in the CordovaWebView class:

https://github.com/apache/cordova-android/blob/master/framework/src/org/apache/cordova/CordovaWebView.java

Similarly, Fennec is initialized through native Android code by the GeckoApp class :

http://hg.mozilla.org/mozilla-central/file/3b7921328fc1/mobile/android/base/GeckoApp.java

This class creates the Fennec application and has the ability to load URLs in Fennec.

My hope is that by consolidating the functionality of both of these classes, the end result will allow Cordova to use Fennec as its web client, rather than the native web client.


by Dylan Segna at September 16, 2014 07:35 PM


James Laverty

(Cr)eating Chromium; an introduction.

Hey everyone!

Today I've decided to take a peek at the open source project Chromium from www.chromium.org.

Essentially, Chromium is Chrome before it gets built, packaged, and distributed by Google. It is an open source web browser for those of us who don't trust Google enough to use their prepackaged build, or who want to be part of the building process.

Chromium can be used on:

  • Windows
  • OS X
  • Linux
  • Chrome OS
  • Android 
  • iOS

It's actually a lot lighter on resources than Chrome, which I like a lot. I've only used it a little bit so far, but I'm excited at the prospect.

More to come soon!

Cheers, 

James Laverty

by James L (noreply@blogger.com) at September 16, 2014 04:04 PM


Gabriel Castro

Contributing to The Android Open Source Project (AOSP)

As my first blog post here, I will be looking at what is involved in contributing code to the Android Open Source Project (AOSP). Android is an open source mobile operating system developed by Google and licensed under the Apache License, Version 2. Being open source has allowed every device manufacturer and many other communities to create versions of Android customized for their devices and personal needs. While contributing to most of the other open source versions, such as CyanogenMod, OmniROM, and Paranoid Android, is technically the same, this post will look at the master branch of AOSP at Google.

To find something to fix, or a feature to add, Google has an open public issue tracker for Android at https://code.google.com/p/android/issues/ which is used to track issues with all the open source components of Android and its build tools and SDK.

On the technical side, Android is an operating system made of many components which are developed to work together, but each lives in its own git repository. This would normally cause headaches for developers when a single change involves code in multiple repositories. For this there is a tool called repo, whose job it is to keep multiple git repositories synced to the correct branch. Repo has a very similar syntax to git (https://source.android.com/source/using-repo.html), so I won't go much into it. To submit a fix you can run `repo upload`, which will push all your changed repositories and create a new merge request in Android's code review system, called Gerrit. What is created is one change (example) which can contain one or many git commits in many projects (example). Each commit must be individually approved, but cannot be merged until all commits in the change are approved and merged together.
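
For reference, a typical workflow with repo looks something like this (a sketch; the manifest URL shown is AOSP's public manifest):

$ repo init -u https://android.googlesource.com/platform/manifest
$ repo sync
# make your changes, then commit them with git in each affected repository
$ repo upload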

For a single commit to be merged it must pass three checks.

  1. Verification
    • usually automated; a person or computer downloads the change to their local machine and verifies that:
      • The project still builds successfully
      • All automated tests still pass
      • The change works as described 
  2. Code Review
    • code is reviewed to check for
      • bugs
      • code style 
  3. Approval
    • can only be done by the project manager or someone else who has approval to merge changes
These steps do not have to be done in any particular order, but if one fails, the author must make the appropriate changes. Once a change is made, all three steps must be done again.

by Gabriel Castro (noreply@blogger.com) at September 16, 2014 03:13 PM


David Humphrey

CDOT Planet Blog Feed Cleanup

In recent days our CDOT Blog Planet has begun having issues. Chris was able to determine that we have many feed URLs that are no longer active. Today I wrote and ran a script to check all of the feeds in the list, and removed any that were not found, returned errors, etc. If your blog was accidentally caught up in the sweep, my apologies. Please add it back manually.
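
A minimal sketch of such a feed checker in node.js (hypothetical; not the actual script):

var http = require('http');

//feed URLs to verify (illustrative)
var feeds = ['http://example.com/blog/feed.xml'];

feeds.forEach(function (url) {
  http.get(url, function (res) {
    if (res.statusCode >= 400) {
      console.log('REMOVE (' + res.statusCode + '): ' + url);
    } else {
      console.log('OK: ' + url);
    }
    res.resume(); //drain the response so the socket is freed
  }).on('error', function (err) {
    console.log('REMOVE (' + err.message + '): ' + url);
  });
});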

by David Humphrey at September 16, 2014 02:16 PM


Catherine Leung

More Data Structures Animations

After refactoring my data structure animation code at the start of the year, it has become significantly easier to add more animations.  So far I have added:

  • merge sort
  • singly linked lists
  • queues (both array and linked list implementations)
  • stacks (both array and linked list implementations)

I have also fixed many bugs in my basic animation objects and spent some time to style the animations.

The other big feature I added was the ability to put in controls and interactive elements.  The animations are written in Processing.js.  The interactive elements are standard HTML elements.  To accomplish this I used the method suggested on the Processing.js site.  That is, I built a number of JavaScript functions that make function calls into my Processing.js object.  This method is fairly easy to use.  In the canvas tag for your sketch you need to add an id.

<canvas id="dsanim" data-processing-sources="..."></canvas>

You will then want to add the following bindJavaScript() function (note the use of the id from the canvas tag to get access to the appropriate sketch):

// javascript reference to our sketch
var pjs = undefined;
var bound = false;
// bind JS to the p5 sketch for two way communication
function bindJavaScript() {
  if(Processing) { pjs = Processing.getInstanceById("dsanim"); }
  if (pjs !== undefined && pjs.bindJavaScript !== undefined) {
    pjs.bindJavaScript(this);
    bound = true; }
  if(!bound) setTimeout(bindJavaScript, 250);
}
bindJavaScript();

You can then  call any function from your sketch by accessing it through the pjs object.

For example, in my linked lists sketches I have a function called insert() that will perform the animation of adding a node to the front of the linked list.  I did not want an interactive element to be able to disrupt the animation part way through, as that can cause errors.  Thus, I also added a function called midStep() that returns true if it is in the middle of an animation routine (for insertion, for example).  Both midStep() and insert() are Processing.js functions.  With the above bindJavaScript() function, the pjs variable is associated with the sketch, and thus I can call these from a JavaScript function:

function insert(){
  if(!pjs.midStep()){
    var v = document.getElementById('val').value;
    if(v != ""){
      pjs.insert(v);
    }
  }
}

All I need to do is then associate this function with an interactive element and it will trigger the appropriate function call in my processing.js sketch.
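
For example, the interactive elements could be as simple as this (markup assumed, matching the 'val' id used in the function above):

<input type="text" id="val">
<button onclick="insert()">Insert</button>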

You can check out the sketches here.  I plan to add more over the course of the next year as I’m teaching Data Structures and Animations this year.

http://cathyatseneca.github.io/DSAnim/index.html

Please file an issue if you spot a bug.  I tried to get as much out as I could.  In particular, I tend not to test on Windows at all, so if you are looking at it on a Windows machine and it's not working, please let me know.

Also, I know that I had initially chosen some colours that were not at all visible on some monitors.  If it looks like part of the animation is missing something, please let me know.  Thanks in advance.


by Cathy at September 16, 2014 06:53 AM

September 15, 2014


Kieran Sedgwick

[SPO600] Contributing to open-source projects

In a stroke of luck, I managed to get into a class taught by Chris Tyler this semester. He's a member of the Fedora Project Board and the author of at least two books that I'm aware of. Suffice it to say, I'm excited to be in his class.

Our first lab involved investigating how contributor standards and procedures differ between projects which, as it happens, is something I’ve already explored to some degree with Mozilla’s CDOT team. I’ll be comparing that experience to the contributor policy of the Weechat project, an extensible, terminal-based, cross-platform IRC client.

I’ll be examining five things for each project:

  1. License type
  2. Assignment of work & bug tracking
  3. Code standards
  4. Code tracking
  5. Review process

Webmaker

Webmaker is a social initiative by the Mozilla Foundation that aims to increase web literacy by providing free tools and resources to the average person. The Webmaker project’s software tools operate under the Mozilla Public license, whose most important implications are:

  1. Freedom to modify and distribute source code and compiled code
  2. Freedom to sell the code, while retaining the original license

Work on Webmaker is tracked with Bugzilla, an open-source issue tracker and collaboration tool. Pieces of work (e.g. "fix this crash" or "implement this feature") are represented as bugs, an analogue to an issue. Initially, contributors are assigned by a senior member of the Webmaker team to a bug that suits someone new to the project. As time goes on, they may be given the authority to self-assign bugs.

Webmaker's code standards were loosely defined during my time on the project. The general rule was to stick to the style of the file you're editing, or to match the overall style of the project as closely as possible in new files. Since then, the project has matured and the focus on a uniform style seems to have become more important.

For tracking their codebase, the Webmaker team exclusively uses Github. Because of its iterative release cycle, code is merged directly into the master repository after the review process is completed. The review process itself consists of opening a pull request against the master repository and getting at least one other senior developer to evaluate the code. Often, a third developer will be called on to test the changes, and this process can repeat many times before a patch is authorized to land.

The Webmaker public dev mailing list is free to join.

Weechat

Weechat is a terminal-based extensible IRC client, and was released under the GNU General Public License, version 3. The license’s main implications are:

  1. Freedom to modify and use the software, so long as modifications are tracked with dates.
  2. Freedom to use this software in other software projects, but only if those projects are then licensed under GNU GPL v3.
  3. Inability to sublicense software under this license.

Weechat uses Github’s issue tracking system to track, define and assign work on the project. Each “issue” represents a single unit of work to be completed, and contributors are asked to file Github pull requests for each issue they take on. They are also very specific about commit message format and content. For example, they specify that commit messages may only contain English words.

Likewise, they're very specific about their coding standards. Some examples include:

  • Requiring “nick_count” as a variable name, vs “nc”
  • Requiring block comment syntax (/* */) versus single line (//)
  • 4 space indenting, never with tabs.

Like the Webmaker project, Weechat tracks its codebase using Github. Patches are filed against the master repository, and reviewed by senior devs on the team before merging the code in.

The Weechat dev mailing list is also free to join.

Conclusion

The two projects I examined had many similarities in their contributor processes, and much of it appears to be centered around Github.


by ksedgwick at September 15, 2014 06:40 PM


Yoav Gurevich

Transcript of Open-Source Case Study Presentation

Here is a slightly summarized transcript of my presentation earlier today centered around my open-source case study - Atom.io:

BACKGROUND

“A hackable text editor for the 21st Century”
Created and maintained by GitHub and its communities
MIT-Licensed

FEATURES AND USAGE

Core is built with web technologies, making Atom extremely customizable
Primarily made for and used by web developers, but supports a variety of languages
Is currently found to be slow in some cases, including text search in the editor and string manipulation

THE COMMUNITY

GitHub made and GitHub-based
Currently over 100 contributors in the main repository and over 45,000 downloadable packages
Atom’s Twitter page presently has over 35,000 followers
A few ways to contribute to atom.io are:
Creating a new package and adding it to the already vast library of modifications to the editor
Contributing directly to the source code by heading to the main repository’s webpage on GitHub.
Conventions and practices used by this community to add or modify code are well documented on the atom.io webpages

THE LICENSE

An end user has the right to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, with the following stipulations:
The MIT copyright notice must be included in all copies or substantial portions of the Software
The owner of the codebase is not liable for any hardware or software malfunction that occurs either directly or indirectly as a result of using this product



by Yoav Gurevich (noreply@blogger.com) at September 15, 2014 04:14 PM


Omid Djahanpour

Researching Open Source Communities

This blog post will cover a task that I was assigned in my SPO600 class.

Briefly describing the task, I am required to research open source communities to understand how a community works with regard to implementing changes to existing open source software, and to pick two open source software packages that have different licenses.

After doing my research, I have concluded that the two packages I will be discussing are OpenSSH and Mosh.

OpenSSH

What is OpenSSH?

OpenSSH, as described on their website:

is a FREE version of the SSH connectivity tools that technical users of the Internet rely on. Users of telnet, rlogin, and ftp may not realize that their password is transmitted across the Internet unencrypted, but it is. OpenSSH encrypts all traffic (including passwords) to effectively eliminate eavesdropping, connection hijacking, and other attacks. Additionally, OpenSSH provides secure tunneling capabilities and several authentication methods, and supports all SSH protocol versions.

In other words, OpenSSH is a secure shell targeting technical users and provides them with a more secure remote environment.

Finding Your Way Around the Community

OpenSSH has its own mailing list where anyone can discuss anything related to the development of OpenSSH. This list can be found here. There is also a gateway that can be used to access all of the OpenSSH lists.

There are also other lists that exist outside of the OpenSSH domain, such as this one hosted on Bugzilla.

The Community – At a Glance

I did an advanced search on the Bugzilla instance for OpenSSH and found this bug report regarding sftp exiting on a bad tab completion, as stated on the bug report page.

There wasn’t much activity on this report other than a way of reproducing the error. However, it brought the issue to the attention of the OpenSSH development team, and they committed a patch which will be included in the version 6.7 release of OpenSSH.

Licensing

OpenSSH is licensed under BSD 3-Clause.

Mosh

What is Mosh?

Mosh, as described on their website:

Remote terminal application that allows roaming, supports intermittent connectivity, and provides intelligent local echo and line editing of user keystrokes.

Mosh is a replacement for SSH. It’s more robust and responsive, especially over Wi-Fi, cellular, and long-distance links.

Mosh is free software, available for GNU/Linux, FreeBSD, Solaris, Mac OS X, and Android.

Why Mosh?

I specifically checked out Mosh for this assignment because I stumbled across a post on Reddit which introduced me to it. When I checked out their website and saw all the features it provides, it really grew on me, and I hope to use it one day.

One of the features of Mosh that really attracted me was how it handles network lag as well as the way it keeps you connected while roaming and during dropped connections.

The Source and Community

Mosh has its source code hosted on GitHub, where anyone can view or download it.

As I have no personal experience with using GitHub, I can’t say much about it; however, I really like how the UI is laid out. It makes things clean and easy to read.

I cannot understand most of the things being worked on by the community involved with Mosh on GitHub; however, I started with this pull request, which was later closed and referenced by this.

The Mosh community on GitHub comes off as friendly and willing to help. It doesn’t seem difficult at all to push patches through to this community.

Licensing

Mosh is licensed under GPLv3+.


by Omid Djahanpour at September 15, 2014 04:32 AM


Habib Zahoori

AngularJS

The topic I picked for my case study is “AngularJS”. Since I had heard a lot about it, when I saw it among the Case Study projects I wanted to pick this topic so I could learn more about it.

As we can see, the topic has a JS suffix, which is self-explanatory: it is a JavaScript library. It is an open source JavaScript framework, supported by the community as well as Google. As per some websites like “w3schools.com”, AngularJS extends HTML attributes, which means that using this library we can add more attributes to HTML tags. It is very good for SPAs (Single Page Applications) and it is easy to use as well.

For example, the code below, after including the library in our page, acts as a live repeater of our text: it keeps showing the text while we are writing in the text box.

<div ng-app="">
  <p>Name: <input type="text" ng-model="name" value="John"></p>
  <p ng-bind="name"></p>
</div>

I am still researching on this topic and I am sure it will be an interesting presentation both for me and my other colleagues in OSD600.


by HZahoori at September 15, 2014 04:04 AM


Yoav Gurevich

Setting Up Development Environment on Windows

As a vehement and outspoken anti-Apple preacher for all things hardware and software related, I have been mocked time and time again, or warned about the limitations of a Windows-based development environment. The main complaints and tribulations that I needed to address (for myself included) were -

* Find a way to circumvent having to deal with windows command prompt/line syntax in favor of more familiar and comfortable unix console syntax
* Find a good package manager that can compete with the likes of brew or apt-get

The first conundrum was rather easily solved by installing and activating Git (the process being virtually identical to any other platform) and using the Git shell instead of the Windows command prompt. The Git shell accepts all unix-like commands, so you can feel right at home "cd"-ing and "ls"-ing around your file system. I have yet to try more complex bash scripts or commands, but for my current necessities this works like a charm.

Coincidentally, GitHub also helped me procure the next solution through my desire to install and use their code editor, Atom (which I will be presenting on tomorrow). The suggested method of download and installation for Windows 8 is to use a package manager called Chocolatey. This is turning out to be a marvel of a package manager that has also helped me install Node.js, Grunt, and Bower - all paramount to the Mozilla projects that I will be focusing on this semester.

Stay tuned for next week's updates.

by Yoav Gurevich (noreply@blogger.com) at September 15, 2014 01:07 AM


Andrew Li

A Quick Glance at Polymer

For our open source case study project, I will be exploring the future of the web - “a new kind of library taking advantage of web components” called Polymer.

Polymer is a pioneering library that makes it faster and easier than ever before to build beautiful applications on the web.

Polymer’s main goal is to make it “easier and faster” to develop applications for any device. That means in addition to web applications in desktop browsers you could use Polymer to build mobile applications in mobile browsers as first-class citizens.

Web Components is a set of specs which let web developers leverage their HTML, CSS and JavaScript knowledge to build widgets that can be reused easily and reliably.

Currently browsers do not support Web Components, since the specifications are still being carved out. This is where Polymer fills the gap: it brings the future of Web Components to developers today. Thus, Polymer is a polyfill that provides web component support.

Using Polymer, developers can create Web Components for sharing and building applications faster.

In the next post, I will dive into what Polymer’s all about, including how it is licensed and where to go to get involved.

September 15, 2014 12:00 AM

September 14, 2014


Linpei Fan

SPO600: Procedure of bug fixing on open source software - Bug 1041788@ Bugzilla@Mozilla

I used the advanced search at Bugzilla@Mozilla with the following criteria: Status: Resolved, Product: Firefox, Resolution: Fixed, and Classification: Client Software. It showed me a list of bugs meeting those criteria. I sorted them by the Changed date and found Bug 1041788 - Unable to close tab after slow script warning at chrome://browser/content/tabbrowser.xml:1989. I chose this bug because it is unusual and there was a lot of conversation about it, which makes it easy for readers to understand what went on during the process.

This bug was reported on 2014-07-21 and last modified on 2014-08-06. There were 11 users involved in the comments. The user bull500, who reported the bug, has little experience in the community, while Mike Conley, who was assigned to solve the bug, has a lot of experience in it. Some other experienced users and a QA contact (Paul Silaghi) also took part in the reviews.

During the bug fixing process, bull500 reported the bug (being unable to close a tab after opening a large number of tabs and getting a slow script warning) together with details like the OS and software version, steps to reproduce, and the results. Then Paul and Mike tried to reproduce it. Paul failed, but Mike did, and found what caused the bug. Mike then asked another experienced user, Tim, for his opinion about the bug. Tim suggested a patch and asked for bull500’s and Mike’s feedback. Neither of them had the issue after installing the patch, so the bug was resolved.
The whole process of resolving this bug took 16 days. Bull500 got a response the day after he reported the bug. When he reported that he had hit the same issue five days later, he received an immediate response. The participants then actively discussed the issue and found out where it came from. Nine days after the bug was reported, the solution came up. In the following days, they discussed how the solution worked.

Moreover, I also browsed Bugzilla@Eclipse. It follows essentially the same procedure as Bugzilla@Mozilla.

After reviewing the procedure for the above bug, I have some ideas about how open source projects resolve bugs. However, I am still confused about who assigns the tasks and how developers take them on.

by Lily Fan (noreply@blogger.com) at September 14, 2014 04:57 PM

OSD600/DPS909: First Glance at Node.js

I first heard about Node.js from my friend, who worked at CDOT @ Seneca. He told me Node.js is server-side JavaScript and that his main job was working with it. That gave me my first impression of Node.js: a JavaScript library on the server side.

Now I take the course Open Source Development. Node.js is one of the topics in the list of the Case Study. To be honest, Node.js is the only topic that I have heard about. It is an opportunity for me to dig out what is Node.js and how it works on the server.

This is what Node.js is, according to Wikipedia:
“Node.js is a cross-platform runtime environment for server-side and networking applications. Node.js applications are written in JavaScript, and can be run within the Node.js runtime on OS X, Microsoft Windows and Linux with no changes.”

And according to Wikipedia, Node.js is gaining popularity and is adopted as a high-performance server-side platform by Groupon, SAP, LinkedIn, Microsoft, Yahoo!, Walmart, and PayPal.

On nodejs.org, Node.js is described as follows:
“Node.js® is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.”

Here it provides more information about Node.js: it uses an event-driven, non-blocking I/O model, and I will find out later what these words mean.
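For a concrete taste of that model, here is the canonical hello-world server from the front page of nodejs.org; a single callback handles every request, and the process never blocks waiting on I/O:

var http = require('http');

// The callback runs once per request; I/O never blocks the event loop
http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');

console.log('Server running at http://127.0.0.1:1337/');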

The first project I will work on in OSD600 is to give Filer a “du” functionality like Unix has. Filer is a POSIX-like file system interface for node.js and browser-based JavaScript. It seems that my research on Node.js will be helpful for my first project. I may keep working on Filer in my later projects, or I may switch to MakeDrive (a JavaScript library and server for node.js that provides an offline-first, always available, syncing filesystem for the web), which I am interested in.


Source: 

by Lily Fan (noreply@blogger.com) at September 14, 2014 04:56 PM


Shuming Lin

Open Source: Brackets

Brackets is a free open-source editor written in HTML, CSS, and JavaScript with a primary focus on web development. It was created by Adobe Systems and is currently maintained on GitHub. Brackets is available for cross-platform download on Mac, Windows, and Linux.

Design In The Browser

Brackets doesn’t get in the way of your creative process. It blends visual tools into the editor while pushing HTML and CSS to the browser as you code.

Live HTML Development

As you code, HTML changes are instantly pushed to the browser without having to save or reload the page.

To learn more about Brackets, you can visit their website: brackets.io


by Kevin at September 14, 2014 04:55 PM


Stanley Moote

All About The Bower

DPS909 Presentation

For my DPS909 presentation I have chosen Bower from the list of options. I had never heard of Bower before, but after reading up on it I can understand its usefulness.

So what is Bower?

Bower is open-source software for managing all of the different frameworks, libraries, assets and utilities on your web server. Anyone who has been doing web development in recent years knows there are a multitude of these that must be drawn from to make the site run smoother and to ensure you aren’t doing any extra work yourself that could be accomplished via a library.
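Most of the time Bower is driven from the command line, but it also exposes a programmatic API from Node. Here is a minimal sketch (the API shape is assumed from the project's README, and the package choice is just an example):

// A minimal sketch of Bower's programmatic API (shape assumed from its README)
var bower = require('bower');

bower.commands
  .install(['jquery'], { save: true })   // roughly: bower install jquery --save
  .on('end', function (installed) {
    // installed packages now live under bower_components/
    console.log(Object.keys(installed));
  });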

A couple of questions about Bower:

  • What is Bower?
  • Why do I want to use Bower?
  • How will it make my life easier?
  • What is the future of Bower?

These are a few of the questions I will be asking myself and answering for the class during my presentation. I look forward to installing and messing around with Bower in my own environment.


by golddiggity at September 14, 2014 04:15 PM


Frank Panico

Looking for a JSHint????

So I’ve decided to pick JSHint from the leftovers of research options for our case study.

JSHint is a community-driven tool to detect errors and potential problems in JavaScript code. Developed and maintained by Anton Kovalyov (who holds the Grandmaster title of chess in Canada), it is very flexible so you can easily adjust it to your particular coding guidelines and the environment you expect your code to execute in. JSHint is open source and will always stay this way.

Its license is fairly standard in the open source sense: all that’s really required is to keep the copyright notice in all copies and substantial portions of the software. Also, because it includes the JSLint library, Douglas Crockford (the creator of JSLint) stated in his license that anything with his software must also include this little tidbit just for the lulz…
- “The Software shall be used for Good, not Evil.”

Any code base becomes huge at some point, and simple mistakes that would not show themselves when written can become show-stoppers and waste hours of debugging. This is when static code analysis tools come into play and help developers spot such problems. JSHint scans a program written in JavaScript and reports commonly made mistakes and potential bugs: a syntax error, a bug due to implicit type conversion, a leaking variable or something else.
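To make that concrete, here is a minimal sketch of running JSHint programmatically through its Node API (the source string being checked and the option choices are mine, not from the project):

// A minimal sketch using JSHint's Node API; the source being checked is made up
var JSHINT = require('jshint').JSHINT;

var source = 'var total = 1\nif (total == null) total = 0';

// Options: enforce === comparisons (missing semicolons are flagged by default)
JSHINT(source, { eqeqeq: true });

// JSHINT.errors holds one object per problem found
JSHINT.errors.forEach(function (err) {
  if (err) { console.log('line ' + err.line + ': ' + err.reason); }
});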

The codebase in github currently has over 150 contributors and is used by over 30 different companies and projects including Mozilla, Facebook, Twitter, SoundCloud and Yahoo!.

Getting involved is very easy. There’s a Git repository with all the code (https://github.com/jshint/jshint). If a developer would like to get involved, they just need to clone that repository, make changes* and open a pull request to get their code in.
* To find what changes one can make, they can go to the list of bugs and fix one of them.

Anton’s handles are:
email: anton@kovalyov.net
twitter: https://twitter.com/valueof
github: https://github.com/valueof


by fpanico04 at September 14, 2014 04:51 AM


Fadi Tawfig

MongoDB

I’ve chosen to research and present on MongoDB for a case study assignment for my OSD600 course. This topic was selected from a list of open source projects from which I could choose to research and present on. By the time I signed myself up for a topic, there were only a handful that weren’t yet taken, including MongoDB. I am, however, quite pleased with my topic of choice due to a genuine curiosity in this project.

I’ve seen the name MongoDB several times on programming forums, blogs, etc. but I’ve never taken the time to research what exactly it is. I suppose this case study is my opportunity to do so.

After doing some reading on MongoDB’s site and the Wikipedia page for MongoDB, I learned that MongoDB is the world’s most successful NoSQL database. What is a NoSQL database? A database which eschews the traditional relational database model. What advantages does this provide? www.mongodb.com claims that NoSQL databases feature increased scalability and superior performance compared to relational databases. Of course, this is a biased source, but I’m interested in testing the validity of that claim in further research.
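To make the document model concrete, here is a minimal sketch in the mongo shell (which speaks JavaScript); the collection and field names are hypothetical:

// Hypothetical "users" collection; run in the mongo shell (JavaScript)
db.users.insert({ name: "Fadi", interests: ["databases", "open source"] });

// No schema and no JOINs: related data is nested inside the document itself,
// and queries match array members directly
db.users.find({ interests: "open source" });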

Some questions I’d like to answer in my presentation include:

  • What is MongoDB?
  • What is the history of MongoDB as an open source project?
  • What is a NoSQL database?
  • Why choose a NoSQL database over relational database?

I look forward to answering these questions for my own curiosity, and for any others in the class which were wondering the same things.

- Fadi Tawfig


by ftawfig at September 14, 2014 01:46 AM

September 13, 2014


Jordan Theriault

Less.js – CSS Pre-Processing to Make it Modern


Less.js is a preprocessor extension for CSS written in JavaScript. It is used to make writing CSS more efficient and allows you to traverse multiple files more easily. Less accomplishes this by introducing many elements of conventional programming such as variables, mixins, nested rules, media query bubbling, operations, and functions. Essentially, Less aims to bring CSS into the modern era and becomes an invaluable tool in a full web development stack. Less is, however, not meant for production and does best within the development environment. For deployment, it is wise to pre-compile code written using Less; this will create lean, fast-loading pages.

For beginners, Less does have a steep learning curve but the result is nothing short of magnificent when you are dealing with large, CSS rich pages. Once mastered, Less becomes easy to use with familiar programming paradigms.

An example for variable use:

@nice-blue: #5B83AD;
@light-blue: @nice-blue + #111;

#header {
color: @light-blue;
}
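Since Less itself is written in JavaScript, the pre-compile step mentioned above can be scripted with the less npm package. A minimal sketch, assuming the 1.x-era render API:

// Pre-compiling Less to plain CSS with the "less" npm package (1.x-era API assumed)
var less = require('less');

var source = '@nice-blue: #5B83AD; #header { color: @nice-blue; }';

less.render(source, function (err, css) {
  if (err) throw err;
  console.log(css); // plain CSS, ready to serve as a static file in production
});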

To get started you can visit LessCSS.org.

by JordanTheriault at September 13, 2014 11:46 PM


Ava Dacayo

Introductory blog post on Bootstrap – OSD600

I picked Bootstrap for the open source case study as I was doing some reading about it anyway after seeing it in one of the files included in Visual Studio 2013.

So what is Bootstrap anyway? Here are some of the highlights I found out from http://getbootstrap.com/

  • It is an HTML, CSS, and JS framework for developing websites
  • It automatically scales websites depending on the device they are viewed on
  • It is mobile-first

You can view the project here on GitHub: https://github.com/twbs/bootstrap

A quick test in ASP.net MVC VS2013 and the Bootstrap v3.0.0 that came with it:

1. Normal size


Nothing fancy. I simply ran this in Chrome after creating a new project. But take note of the contents.

2. Resized 


Ta-dah! Without actually coding ANYTHING, it figured out how to display the page when resized. The menu is now hidden and expandable and the contents are all shown without scrolling.

I’ll probably show a mobile view on my presentation day on different devices and different browsers to test how it will be displayed. And that’s it! End of my quick overview of Bootstrap!


by eyvadac at September 13, 2014 08:57 PM

September 12, 2014


Ali Al Dallal

TIL: How to search on Google and exclude word

Sometimes when I’m trying to perform a Google search for things related to my work, or even random stuff on the web, it’s annoying to see results that aren’t related to what I’m looking for at all.

Today I learned that when you search on Google you can simply exclude a word that you don’t want to see in your results, like this:

alicoding -twitter  

The above will return results for the keyword alicoding, but with no occurrence of the word twitter in the search results at all.

I hope you find this blog post useful, and if you have a better way or suggestion please feel free to leave a comment! :)

by Ali Al Dallal at September 12, 2014 06:14 PM


Yasmin Benatti

React.js

Another task that I have for the Open Source class is to learn about a project and then make a presentation to the rest of the class. I decided to research about a project called React. For now I’m just looking at what is possible to do using it. Because I don’t know JavaScript very well I cannot explain the codes right now, but for my presentation (in November) and for later posts I’ll be able to.

Basically, it is a JavaScript library for developing user interfaces. This post on React’s web site explains in an easy way how React works, showing step-by-step the creation of an app. It has some methods used to take input data and display it later in the app, to create a todo list, a counter, and others, as shown on the first page of the website.

React is developed by people from Facebook and Instagram’s teams. If you want to look for more information and details, here are some links: GitHub, IRC and Twitter.

Cheers!

by yasminbenatti at September 12, 2014 05:16 PM


Brendan Donald Henderson

GNU GCC Compiler Options and C Programming:

Recently I was asked to explore the output that the GNU GCC compiler generates when different command-line options are used. At this I thought: no sweat, how powerful could these compiler options really be? And that is where this story starts…

The tools I used were:

  • GNU’s GCC compiler, version 4.8.3
  • objdump with the -f -s -d options
  • A simple makefile (just for good practice)
  • A very simple C program

I chose to keep the same, relatively simple C program throughout this process because I didn’t want to spend too much time needlessly reversing dead listings (disassembly), and I wanted to be sure that any change in gcc’s output was based on the command-line options and not on the source code changing.

Here is the source code of the C program:


#include <stdio.h>
int main()
{
    printf("Hello World\n");
    return 0;
}


The makefile is very basic, but with all options being used it looks like:
# Makefile
#
w2: w2.c
	gcc -g -O0 -fno-builtin -static -o w2 w2.c
(Note that make requires the command line of a rule to be indented with a tab.)


Compile statement: gcc -g -O0 -fno-builtin w2.c
After a call to objdump we find the following disassembly of our main function under the <main> label of the .text section. The .text section contains the executable code; it is analogous to the code segment of a Windows PE binary. By the way, if you are wondering where the “Hello World” string constant ended up, take a look in the .rodata section.

disassembly:


<main>

push %rbp                               ;The first 2 lines set up a conventional rbp stack frame,
mov %rsp,%rbp                       ;which aids debugging and stack walking
sub $0x10,%rsp                      ;Create space on the stack (16 bytes, keeping it 16-byte aligned)
mov %edi, -0x4(%rbp)            ;These have to do with the parameters argc and argv
mov %rsi, -0x10(%rbp)
mov $0x4005f0, %edi             ;Moving the address of the string constant into edi
mov $0x0, %eax                     ;Clear eax: for variadic calls the x86-64 ABI passes the number of vector register arguments in %al
callq 400410 <printf@plt>
mov $0x0, %eax                     ;Return values are commonly stored in eax
leaveq                                    ;High-level procedure exit, paired with enter usually, cleans up the stack frame
retq                                        ;Pops the top of the stack into the rip register
nopw %cs:0x0(%rax,%rax,1)    ;I believe these nops are used for padding to boundaries
nop


GAS, the GNU assembler, uses the AT&T syntax seen above; for those used to MASM (Intel) syntax this can seem quite ugly.
The main things to note from objdump’s output are that there are a few debugging-related headers within the executable due to the -g option, and that the code is not optimized whatsoever (-O0); this will become very noticeable once I show the optimized code a little later.

Finally, the -fno-builtin option instructs the compiler not to optimize built-in functions and instead to keep the library function call. GCC provides built-in versions of many Standard C Library functions; these built-in versions use different, optimized operations to accomplish the same task. However, this option does not affect functions prefixed with __builtin_, which to the compiler are analogous to the Standard C functions.

Side Note: Our code passes arguments in registers, following the System V AMD64 calling convention (similar in spirit to fastcall).

Adding the -static option:

Compile Statement: gcc -g -O0 -fno-builtin -static -o w2 w2.c

The -static option disables dynamic linking. What this means for our code is that the call to printf(), which lives in the C standard library, causes that library to be embedded into our executable at link time.

This option should not be taken lightly! In this case it grew the executable from roughly 8500 bytes to 1.04 megabytes, and it took the output of objdump from a few hundred lines to approximately 186,000 lines! In the case of objdump this is not only code; there are a lot of other headers generated as a result (remember our -g debug info option). And how often does your application only use one library?

A few interesting things to note:

  1. the call was changed from <printf@plt> to <_IO_printf>
  2. there were quite a few headers added that began with either _dl or _nl

Removing the -fno-builtin option:
Compile Statement: gcc -g -O0 -static -o w2 w2.c

The main change here is with the call operation:

  • with -static: from <_IO_printf> to <_IO_puts>
  • without -static: from <printf@plt> to <puts@plt>

puts is the simpler library call that GCC’s built-in optimization substitutes for printf() when the format string contains no conversions and ends with a newline.

Also, the three lines that previously followed the call line:

mov $0x0, %eax
nopw %cs:$0x0(%rax,%rax,1)
nop

were removed.

GCC C built-ins explanation: https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html

Removing the -g option:

Compile Statement: gcc -O0 -o w2 w2.c
The size of the executable shrinks slightly because there is no debug information included in it. The debug section headers are no longer present, and neither is the debug setup code within objdump’s output.

This debug information is invaluable during development of your code. However, when you transition to production it is not only unnecessary bloat for your executable, it can also leave some very interesting information for anyone trying to hack or reverse engineer your application.

Adding arguments to printf() call:

Here I am adding 10 simple integer arguments to the call to printf() to try to determine the order in which registers will be used to pass arguments to a function.

disassembly of <main>:


push %rbp
mov %rsp,%rbp
sub $0x60,%rsp                 ;Creating 96 bytes of stack space, within the frame, for locals
movl $0x1,-0x4(%rbp)        ;These instructions are just moving the numeric constants into the
movl $0x2,-0x8(%rbp)        ;stack space we just created for the locals.
movl $0x3,-0xc(%rbp)
movl $0x4,-0x10(%rbp)
movl $0x5,-0x14(%rbp)
movl $0x6,-0x18(%rbp)
movl $0x7,-0x1c(%rbp)
movl $0x8,-0x20(%rbp)
movl $0x9,-0x24(%rbp)
movl $0xa,-0x28(%rbp)
mov -0x14(%rbp),%r8d
mov -0x10(%rbp),%edi
mov -0xc(%rbp),%ecx               ;Here is where the parameters for the printf call begin getting
mov -0x8(%rbp),%edx              ;loaded into a combination of stack space and registers.
mov -0x4(%rbp),%eax
mov -0x28(%rbp),%esi
mov %esi,0x20(%rsp)
mov -0x24(%rbp),%esi
mov %esi,0x18(%rsp)
mov -0x20(%rbp),%esi
mov %esi,0x10(%rsp)
mov -0x1c(%rbp),%esi
mov %esi,0x8(%rsp)
mov -0x18(%rbp),%esi
mov %esi,(%rsp)
mov %r8d,%r9d
mov %edi,%r8d
mov %eax,%esi
mov $0x400660,%edi        ;This is the string constant, the first argument in the printf call is
mov $0x0,%eax                 ;the last argument moved into a register.
callq 400410 <printf@plt>
mov $0x0,%eax
leaveq
retq


I have only commented the differences in this new disassembly. One thing to note is the order in which registers are used to pass arguments, and that arguments are loaded into registers from right to left (if you are looking at the function prototype).

The register priority that I was able to determine:

The code stores the last five arguments (in reverse) on the stack, then passes the first six in registers: edi (the format string), esi, edx, ecx, r8d and r9d.

Some interesting side notes:

  • The seemingly unnecessary clearing of eax immediately before the procedure call only seems to arise in non-optimized code.
  • The compiler inserts nops of varying lengths to pad the code (in most cases to DWORD boundaries).

Moving the call to printf():
Moving the call to printf() to a  function outside of main() and then calling that function from main() didn’t generate any unexpected output.

Optimizing Our Code!:
Compile Statement: gcc -O3 -o w2 w2.c

To me this part is the most interesting: these optimizations are as simple as supplying a command-line option to the compiler! No extra work; the compiler will optimize your code for you, as best it can, without changing the code’s intent. But how does the compiler actually “optimize” your code?

Here I changed the -O0 option to -O3:


<main>

sub $0x8,%rsp
mov $0x4005f0,%edi
callq 400410 <puts@plt>
xor %eax,%eax
add $0x8,%rsp
retq
nopl (%rax)


Main differences to note:

  • There is no longer an rbp stack frame, but there is a seemingly needless creation of 8 bytes of local space within the function (in fact this keeps the stack 16-byte aligned across the call).
  • “xor %eax,%eax” replaces “mov $0x0,%eax” (this is something that always gives away compiler-optimized code to me). The difference is turning a 5-byte instruction into a 2-byte one; it doesn’t seem like much now, but when many of your functions return 0, false, or NULL, it adds up.
  • A lot of the odd-looking code that showed up in previous test runs is gone.

Important: Quite a significant part of the optimizations the compiler can perform relate to arithmetic operations, conditional branching, and unrolling loops into consecutive statements. None of these take place in this program, so do more exploration into this if you are interested!!

Ipsa Scientia Potestas Est ~ Knowledge itself is power


by paraCr4ck at September 12, 2014 12:13 AM

September 11, 2014


Brendan Donald Henderson

Open Source Projects: Patches

Recently I was asked to choose 2 open source projects and look into the patch submission process, from the reporting of a bug up until the acceptance of a patch by the maintainers of that project. There are certainly some similarities between how these two projects operate. This makes sense, as well-established methods can and should be replicated in most cases. An example is that many open source projects use Bugzilla as a bug tracking system. But I digress; here are the 2 projects that I researched:

The following is a link to a Google Doc containing an example of a successfully submitted and accepted patch for both of the projects below.

https://drive.google.com/folderview?id=0By2dimikrupscnZaRjVERFREdFk&usp=sharing

GNU GCC:

License: GNU GPL 3.0
Legal Prerequisites: for “small changes” (the difference between a small and a large change is not defined very clearly) there is no legal aspect beyond agreeing that your work is being submitted to the project under their license. However, for “larger changes” there is a requirement to sign a copyright assignment or, if a contributor is uncomfortable with that option, a copyright disclaimer.
Coding Standards: Your patch must conform to the GNU Coding Standards. If the patch is large enough they may also require that you submit documentation, test cases (individually or integrated into their test suite by the patch submitter), and proper code formatting (such as line indentation, etc.). Note: Sometimes the maintainers provide a script to test your code’s conformance to speed up the process.
Testing your patch: Some general good practices are to test your code thoroughly and to the best of your abilities, and to test your patch on as many implementations as possible (Win32/64, Linux 32/64, Intel, ARM, etc.). For this project (and most other large projects) there are specifications for:

  • the minimum level of testing
  • provided test suites that must complete successfully
  • any mandatory bootstrapping

These aspects will all depend on what component you are contributing to and its scope within the package/project.
Example: Obviously, if you are patching one of the lower-level optimization components of the gcc compiler, then you would need to ensure that higher-level components don’t break as a result.

Tip: Just as in science, when you are conducting a test (or experiment) you should never change too much before testing the results of those changes. In fact, the fewer changes in a single test, the better and more accurately assessable the result is (this really matters when other people are looking for reasons to reject your patch; don’t give them any!).

Submission: Once your patch is ready to submit for assessment, and assuming that you don’t have maintainer status and aren’t a special/frequent developer (and thus don’t have write access to the SVN repository), you will need to package the patch and other required deliverables into a mail message and submit it to the appropriate mailing list.
Required Deliverables:

  • Description of the problem/feature and what it’s fixing/why it’s necessary.
  • Reference a specific bug on https://gcc.gnu.org/bugzilla/ .
  • Make mention of any existing test cases for the bug within the GCC test suite.
  • If you stray from the GNU Coding Standards you must justify why.
  • Add a plaintext entry into the ChangeLog.
  • List the host and target combinations used to test your code as well as the results of those tests.

link to their contribution specifications page: https://gcc.gnu.org/contribute.html

Eclipse:
License: Eclipse Public License(EPL)
Legal Prerequisite: Regardless of the size of the contribution, the contributor must sign a Contributor License Agreement (CLA). The 1000 lines rule: if your contribution exceeds 1000 added (new) lines (this includes code, comments, and whitespace), it must go through a mandatory IP review process, unless it can be committed via a committer from the same company as you.
Submission: A bug report must exist for the issue/improvement: https://bugs.eclipse.org/bugs/ .
Requirements for your patch:

  • Start with the newest available build of the component to which your patch applies.
  • Add your patch and comment the source code which you have added.
  • Within the file’s header there will be a contributors’ list; add your name, company, etc. to it.
  • Ensure that your patch follows the guidelines of the project to which you are contributing; for the linked example it is the CDT guidelines.
  • Do not change anything that you are not patching; if you would like to, create another bug report and submit the change separately.
  • Create jUnit test cases for your code and include them with the patch submission (this is optional).
  • There are maintainer committees that monitor Bugzilla and Gerrit (where applicable) and will see your submission.
  • If you have submitted a patch and no project member (maintainer, moderator, other staff) has gotten back to you, then follow up with them.

Side Note: Some projects use Gerrit instead of Bugzilla and/or mailing lists for patch submission.

link to the contributing specifications page http://wiki.eclipse.org/CDT/contributing


by paraCr4ck at September 11, 2014 05:08 AM


Ryan Dang

Socket.io for beginner

Socket.io is an amazing JavaScript framework built for Node.js. It “enables real-time bidirectional event-based communication. It works on every platform, browser or device, focusing equally on reliability and speed.”

What is real-time bidirectional event-based communication? It refers to the communication between the server and the client. With socket.io, the server can push changes to the client without the page reloading. Traditionally, if you wanted to update a table of data in real time, you could make an ajax call every second to constantly refresh the data. That approach puts a toll on the client CPU and on the server. With socket.io, we can send updates to all the clients connected to the socket whenever a change is made in the database or a server-side function is called.
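As a small sketch of that push model (the event name and the update hook are made up; the socket.io calls follow its 1.x docs):

// Server side: push an update to every connected client, no polling required
var io = require('socket.io')(3000); // standalone socket.io server on port 3000

io.on('connection', function (socket) {
  console.log('a client connected');
});

// Hypothetical hook: call this whenever the database changes
function onDatabaseChange(rows) {
  io.emit('table:update', rows); // broadcast to all connected clients
}

// Client side (browser), after loading /socket.io/socket.io.js:
// var socket = io('http://localhost:3000');
// socket.on('table:update', function (rows) { /* redraw the table */ });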

Some of socket.io’s uses are: real-time chat applications, document collaboration, real-time data updates…

To learn more about socket.io, you can visit their website:

http://socket.io/


by byebyebyezzz at September 11, 2014 04:54 AM