Planet CDOT

October 31, 2014


Yasmin Benatti

FSOSS Day Two

This week I posted about FSOSS and, as I said, I had to split it into two different posts due to the length of the first one. You can see the first post here.

The event’s second day was also nice. I went to three different talks and two keynotes. The first keynote was given by Bob Young, Red Hat co-founder. He didn’t address one specific topic, but he touched on some really nice things. I liked the way he spoke about learning and life experience, and how open licenses were good and bad for him depending on the position he was in (user/business). What really caught my attention was when he said that in the beginning he couldn’t understand how it was possible to sustain technology with altruism, and that after all, that became what he was doing too. The idea of giving away a piece of code and receiving much more back is something he addressed, and other people did so too during the event: what you receive back is much more than you could code by yourself. A last thing I could link to the Open Source classes I take was when he talked about maintenance. Someone in the audience asked what to do when you see an Open Source project that is not being updated but that you really want to improve, and Bob answered that you should do it yourself. That reminded me of this post showing the contrast between the cathedral and the bazaar styles, where Raymond describes his trajectory in taking over an “abandoned” project and maintaining it himself.

The second talk I attended was given by Kieran Sedgwick and was called “Web Maker’s Tech”. He covered some really nice topics in web development, such as audio, 2D and 3D canvas, coloring and others, in a very simple way. I’m not a web developer yet, but I could understand pretty much everything. A nice point he addressed was that apps written in HTML5, CSS and JavaScript are powerful enough to run on mobile and desktop computers, even across different systems and hardware configurations.

The second keynote had Chris Aniszczyk talking about open source at Twitter. His presentation is available online. I really liked it, and the way he laid out, topic by topic, how they do open source. It was also really interesting to see how Twitter grew and how problems like access overload were solved.

The fourth talk I attended was called “Creativity with Firefox OS” and was given by Regnard Raquedan, who works for Mozilla. It went differently than I thought it would, but it was interesting. Regnard ran some activities to show us how to accept new things and how not to worry about mistakes, instead using them as a source of new ideas. Then he showed some devices running Firefox OS, Mozilla’s operating system that targets emerging markets. (I will talk more about this in a post about my translation projects.)

The last talk I went to was about cloud computing and the software OpenStack. Kent Poots showed how to use this tool step by step. I’m pretty new to this area and I was expecting more technical information about cloud computing itself, which left me a little lost.

As a conclusion, I think there is a set of core ideas shared by people who use or work with open source: for instance, that the amount of code you receive back is much larger than what a small team could write; that there is no way an enterprise could pay for the number of people who collaborate on Open Source; that open doesn’t mean free; and so on. I couldn’t see any point of big divergence between the speakers. I think everybody who is into Open Source shares a similar view on the points I cited above, but on points like licenses, price, tools, availability, release time and the “secret sauce” (cited in Chris’s presentation), each person has a different point of view. Most of the opinions I heard at the event are similar to mine, though I’m still building my own, since I only recently started studying and using open source consciously. Anyway, I really like the idea of opening code and sharing it with the public, who will be able to reveal the many different uses and bugs a program can have. I agree with licenses that give the right to use, modify and do whatever you want with the code, on the condition that what you do with it stays open source, and I don’t think that everything needs to be free. Beyond that, I still don’t have a concrete opinion.

Cheers!

by yasminbenatti at October 31, 2014 10:07 PM


Kieran Sedgwick

[SPO600] Diving in

An important side-effect of this course is, presumably, expanding the compatibility of Linux packages for use on other architectures (ARM64 in particular). This practical side is part of what drew me to the course, and now we’re getting right into it.

I was required to be a detective this week! A code detective! We were given a list of packages to investigate, each of them having clues, red herrings and a paper trail. The goal? Determine exactly what would be necessary to make them aarch64 compatible.

I’ll briefly outline what it was I was looking for:

  1. What major Linux distribution families does the package support?
  2. Which architectures does the package support?
  3. What (if any) architecture specific code exists in the package?

My (General) Observations

It’s harder than it looks…

The main rule is that there is no rule. The similarities I found between projects tended to be project-level rather than code or tool-level. I’ll talk about these similarities as I explain the common steps in my investigations:

1. Find where the code lives

“Where the code lives” has a strict definition in my mind. The code lives where the people responsible for maintaining it congregate and communicate with one another about the code itself. In my previous experience this was easy! Github github github. The code actually resided there, the work to be done was tracked there and information on how to contact the main contributors was usually on the project’s README file.

Not so with these packages. Some were on Github, but most had a private instance of an issue or code tracker attached to their project’s website. A good example is the DRBD package, whose website points to a dedicated git server.

I found a number of projects that tracked their code through SourceForge, and others through mailing lists. There is no one-size-fits-all answer here, and before finding where the code lived I couldn’t progress any further.

2. Find all the builds

Next was determining which builds exist for a particular package. Are there Fedora builds? Debian? Ubuntu? For this I had to crawl both the individual Linux distributions’ package systems and the site where the code lives. It was not nearly as straightforward as I would have liked.

Often, packages would be separated by version rather than by architecture. This meant digging around for references to architectures on a version-by-version basis. Sometimes this was detailed where the code lives, but I found different information in different places. Confusing? Yes. A reflection of the open-source nature of the work? Definitely. It seems there are logistical costs to a completely open approach to software development, at least for new contributors like myself.

I wouldn’t call the search intuitive, or contributor friendly, though I admit that my n00bness probably played a big part in that.

3. Crawl the code

At this point I’m determining whether the code needs work, or testing, or both. This is fairly easy at first, requiring me to run two commands:

## Search the source code for
## any inline reference to assembly
~/drbd8/ $ egrep -R 'ASM|__ASM__|_ASM_|asm|__asm__|_asm_' .

## Search the directory tree
## for any files with common assembly extensions
~/drbd8/ $ find . -name '*.[sS]' -o -iname '*.asm'

The trouble starts when interpreting the results, since that is where my assessment of the work to be done begins. The first question to ask is always: do I see architecture-specific instructions? That is an immediate tell that compatibility work needs to be done.

This was where my investigations usually stopped, since I’m no expert on assembly language. Seeing any at all was a sign that changes were needed.

Conclusions

This was a fun lab, and exposed me to a number of open source realities. In particular, it showed me how different the communities are. I’m glad I’ll never stop learning!


by ksedgwick at October 31, 2014 06:05 PM


Brendan Donald Henderson

Plan of Attack

This post is a follow-up to my previous post on my investigation into the packages I will be working on. Here I will explain my plan of attack for completing these projects and the knowledge I will need to obtain before effectively starting work on these packages.

The packages in question:

  • aircrack-ng
  • pyrit

aircrack-ng:

  • I will need to research the SHA1 hashing algorithm, how its different phases work, and why SSE2 would benefit it so much.
  • What the Intel SSE2 feature in x86_64 processors is and what it is typically used for.
  • There is a ‘full memory barrier’ implemented for ARM32 platforms because GCC did not properly handle the stack during each SHA1 round. I need to determine whether this has been fixed in more recent versions of GCC, or whether there is a built-in function for this type of memory barrier.
  • I will need to look into the concept of memory barriers and the GCC built-ins related to them, as well as built-ins related to SSE2 if they exist.

pyrit:

  • Again, knowledge of the SHA1 hashing algorithm, but also of the MD5 hashing algorithm.
  • Again, knowledge of Intel SSE2 and how it benefits SHA1 hash cracking.
  • Again, I need to check for the existence of GCC built-ins for SSE2.
  • Why byte swap operations are relevant in this software.

Plan of attack:

***These dates are subject to change as progress-affecting factors change. The work plan schedule below will either be updated here and referred to from future posts, or copied into those posts and updated there.***

Time Frame for completion: October 31st to December 20th

October 31st:

  • Make final decision on packages to work on.
  • Do investigation into their overall purpose as well as where the assembly exists within the package.
  • Post notes from this investigation and decision to blog.
  • Look into what knowledge I will need to work on the 2 packages and construct a work plan.
  • Post the work plan and list of prerequisite knowledge to the blog.

By November 7th:

  • Open a dialog with the upstream community and inform them of my intentions.
  • Have a lot of the prerequisite knowledge researched and understood.

By November 14th:

  • Begin work on the packages. I am going to work on both at the same time, as the porting effort will probably be quite similar.

By November 21st:

  • Ongoing work on both packages (includes profiling, documentation, etc.).

By November 28th:

  • Ongoing work on both packages (includes profiling, documentation, etc.).

By December 5th:

  • Expecting to be done with the work at this point, leaving buffer room for any small fixes that upstream requests.

By December 12th:

  • Have profiling data, and any other required deliverables submitted to the upstream community, waiting for approval.

By December 19th:

  • Successfully submitted a patch to the upstream community.

by paraCr4ck at October 31, 2014 05:25 PM

Package Decision and Investigation

This was the package list I decided on, based on the packages listed on the Linaro performance challenge site, the ones that were already being worked on by my classmates, and the ones I thought would be completable within the remainder of my school semester.

List:

  • erlang
  • mpfr4
  • zlib
  • gmp4
  • cryptopp (security-related)
  • pyrit (security-related)
  • aircrack-ng (security-related)
  • john (security-related)

From this list I narrowed down further, based on whether a full aarch64 asm port seemed to be required or whether only a simple build option/source code macro would need changing. I also took my interests into account; being into security, I favored security-related packages.

My 2 choices:

  • aircrack-ng
  • pyrit

These packages are both heavily related to software security, and well known within the security/penetration testing field, especially aircrack-ng.

 

aircrack-ng: Is a suite of tools for wireless (802.11) network auditing and WEP/WPA-PSK key cracking. It is commonly found in the more security-oriented flavors of Linux, such as the infamous BackTrack or Kali Linux distributions.

  • 1 preprocessed asm source file: src/sha1-sse2.S
    • asm for x86_64 Apple, i386, and x86_64 (Intel and AMD separately, with no optimization for AMD)
    • No asm for ARM32 or ARM64; a C fallback exists in src/sha1-sse2.h
  • 2 C source files with embedded asm blocks:
    • src/aircrack-ptw-lib.c: one ‘rc4test’ function has an embedded asm block. It seems to be a special SSE2 optimization; no other archs have similar optional optimizations.
    • src/sha1-git.c: the first embedded asm block sets up rotate-left and rotate-right operations on an x86_64/SSE2 arch for SHA1 optimization, and it has a C fallback. The second embedded asm block is actually for ARM32: it implements what they call a full memory barrier, because apparently GCC mishandles the stack during each SHA1 round. There is a C fallback, and it seems no performance is lost here, but further investigation will be required.
  • I was unable to find any GCC built-ins within any of the above-mentioned files. This could be worth exploring, as some level of porting and optimization is likely (the code looks like it hasn’t been touched since 2008).

pyrit: Is a GPGPU-driven WPA/WPA2-PSK hash cracker written in Python and C (for extensibility). Interestingly, it takes the space-time trade-off of pre-computing portions of the authentication phase.

  • 1 preprocessed asm source file: cpyrit/_cpyrit_cpu_sse2.S
    • This asm source file exports procedures used in _cpyrit_cpu.c
    • The 2 overall purposes of the 4 asm procedures are to: 1. determine whether SSE2 is enabled, and 2. provide ‘update’ and ‘finalize’ operations for the SHA1 and MD5 hash algorithms.
  • 1 C source file with embedded asm blocks:
    • cpyrit/_cpyrit_cpu.c: There are 4 embedded asm blocks. 3 of them are for cpuid; the fourth is a ‘bswap’ instruction that is exclusive to certain cpu archs.
      • The 3 cpuid blocks pertain to a specific series of processors from Centaur, “CentaurHauls”, which probably don’t exist in any modern consumer desktop and maybe not even in the mobile marketplace.
      • The bswap asm block, in the bswap routine, is only used on a “padlock-enabled” cpu, which appears to mean VIA’s x86 processors with the PadLock engine.

That concludes my preliminary investigation of the 2 software packages that I will be working on for the remainder of my semester. The next post, which will be released very soon, will go into more detail about my work plan/timeline for the projects, as well as any prerequisite knowledge I will have to look into before properly understanding and modifying the code in either package.


by paraCr4ck at October 31, 2014 03:48 PM

Crypto Package Investigation

Hello! This post builds on my previous Fedora Package Investigation post. Here I will show my results of exploring the following packages and what I uncovered about the assembly within:

Packages to explore:

  • cryptopp
  • polarssl
  • john
  • pyrit

I have found that these 4 packages all already exist on aarch64 Fedora, but in what capacity compared to their x86 counterparts?

john:

  • asm files(.s/.S) found:
    • src/x86-sse.S: approx. 1300 lines of asm, all of which would need porting to aarch64. Procedures to look for in C source files:
      • DES_bs_crypt
      • DES_bs_crypt_LM_loop
      • DES_bs_crypt_25
      • DES_bs_crypt_25_next
      • DES_bs_crypt_25_swap
      • DES_bs_crypt_25_start
      • DES_bs_crypt_25_body
      • DES_bs_finalize_keys_LM_loop
      • DES_bs_crypt_LM
      • DES_bs_finalize_keys_expand_loop
      • DES_bs_finalize_keys_main_loop
      • DES_bs_finalize_keys
      • DES_bs_finalize_keys_25
    • src/x86-mmx.S
    • src/x86-64.S
    • src/x86.S
    • src/alpha.S
    • src-mmx/x86-sse.S
    • src-mmx/x86-mmx.S
    • src-mmx/x86-64.S
    • src-mmx/x86.S
    • src-mmx/alpha.S
  • All the asm is contained in asm source files (no embedded asm)!
  • Can’t find where the asm procs are being called in the C code to check for C fallbacks, but I did see a bunch of intrinsics!?

pyrit:

  • asm files(.s/.S) found:
    • cpyrit/_cpyrit_cpu_sse2.S
    • The above file exports procedures that are used in:
      • cpyrit/_cpyrit_cpu.c(4)
      • The 2 overall purposes of the asm procedures are to: 1. determine whether SSE2 is enabled, and 2. provide ‘update’ and ‘finalize’ operations for the SHA1 and MD5 hash algorithms.
  • embedded asm:
    • cpyrit/_cpyrit_cpu.c: has 4 embedded asm blocks. 3 are for cpuid; the fourth is a ‘bswap’ instruction that is exclusive to certain cpu archs.
    • The 3 cpuid blocks pertain to a specific series of processors from Centaur, “CentaurHauls”, which probably don’t exist in any modern consumer desktop and maybe not even in the mobile marketplace.
    • The bswap asm block, in the bswap routine, is only used on a “padlock-enabled” cpu, which appears to mean VIA’s x86 processors with the PadLock engine.
  • Remember that pyrit is a Python-based password cracker that interacts with C libraries to extend its functionality. The assembly contained within this package could be performance-ported to aarch64 by re-implementing all of the SSE2 asm procedures with ARM NEON instructions instead. This would mean a noticeable amount of assembly work (approx. 300 lines for the SSE2 procs in cpyrit/_cpyrit_cpu_sse2.S) but only a small amount of work in cpyrit/_cpyrit_cpu.c.
  • Or: the GCC intrinsics for NEON may provide the required functionality, and they are much more portable than static assembly.

cryptopp:

  • No asm(.s/.S) files!
  • embedded asm:
    • rijndael.cpp
    • vmac.cpp
    • integer.cpp (has both syntaxes of embedded asm)
    • cpu.cpp
    • gcm.cpp
    • salsa.cpp
    • whrlpool.cpp
    • tiger.cpp
    • misc.h
    • sosemanuk.cpp
    • cpu.h
    • sha.cpp: not sure yet what this embedded asm is doing!

polarssl:

  • No asm(.s/.S) files!
  • embedded asm:
    • include/polarssl/bn_mul.h (has both syntaxes of embedded asm)
    • library/timing.c
    • library/padlock.c
    • library/aesni.c

Update: These packages are preliminary. I am not sure if my upcoming work will be on these packages or different ones.


by paraCr4ck at October 31, 2014 01:22 AM

October 30, 2014


Gary Deng

Open Source As A Developer-Recruiting Tool

Time is flying. This is the fifth semester of my education at Seneca College. There is only one more semester to go before I graduate from Computer Programming and Analysis. Every soon-to-be graduate starts to look for a job, but are you really ready? As a computer programmer, how do you get those hiring managers’ attention? Yes, a well-prepared resume is the key to getting a job interview; however, most employers have difficulty assessing a candidate’s programming skill in a 30-minute interview. Interviewees are usually nervous, and they can’t code and solve problems the way they normally would. As a result, some employers complain that they can’t find the right people for a position, and interviewees fail to demonstrate their programming skills because they lack good interview skills. In reality, there is always a gap between employers and job seekers.

Recently, more and more employers have started to recruit talent from the open source community. Web companies like Netflix, Twitter and Facebook understand that open source can be more: a powerful weapon for recruiting and retaining top engineering talent. Twitter runs monthly queries on contributors to their open source projects and projects of interest. That’s very good news for open source contributors. As a job seeker, your contributions to open source projects are your new resume. If you want to find your dream job, why not start working on one of the 10 million open source projects hosted on the popular code repository GitHub? It allows developers to demonstrate coding skills, collaboration abilities and technology interests. For hiring managers, open source communities may offer a better perspective on technical and soft skills than a reference.


by garybbb at October 30, 2014 07:04 PM


Shuming Lin

Presentation: Open Source – Bracket

What is Brackets?

“Brackets is an open source code editor for web designers and front-end developers.”

It has been almost 3 years since the first commit for Brackets landed. It is written in JavaScript, HTML and CSS, and it has good tooling for JavaScript, HTML, CSS and related open web technologies.

Brackets is a different type of editor. Here are the reasons why it is a better editor to use:

  1. Quick Edit/Add for CSS rules
  2. Quick Edit for JavaScript
  3. Peek Definition
  4. Live Development for HTML & CSS
  5. Live Highlight for HTML elements

 

Keyboard Shortcut Cheat Sheet


 

Good News:

Brackets is ready to declare 1.0.

Brackets is available for cross-platform download on Mac, Windows, and Linux.

 

Reference:

http://html.adobe.com/opensource/

http://brackets.io/index.html


by Kevin at October 30, 2014 03:36 AM

October 29, 2014


Edwin Lum

Package and project choice in SPO600

Recently, in the software porting and optimization course I am taking at Seneca College, we have started looking into the main project we will be working on for the majority of the course. As a refresher, Linaro’s Performance Challenge is the focus of this course, in which we learn to port and optimize platform-specific assembler code for other platforms, with hopes of performance gains through the use of better algorithms or by taking advantage of compiler optimizations.

For my projects, we were told to pick packages in which we had genuine interest, yet we were also warned that some packages may be too large and beyond the scope of this course. There were some really interesting packages available on Linaro’s site, and some really did generate a lot of interest in the class. In particular, I remember one that had to do with the Java JIT (just-in-time compiler), to give you guys an idea of some of the more intense and interesting packages available.

I, on the other hand, seem to have found two that I would like to look at this semester. When looking at the Linaro list, Eigen2 and Eigen3 popped up and looked strangely familiar. I remember learning about this a while back in linear algebra! It was somewhat of a challenge back then, and I feel like I could definitely use a refresher on the topic. To my delight, that Eigen is the same as this Eigen (eigenvalues and eigenvectors). Simplified, it boils down to this:

Av = λv

A is a matrix and v is some non-zero vector. We want to calculate the eigenvalue λ, a scalar which, when multiplied by v, has the same effect as multiplying v by the matrix. In essence, it can simplify the matrix multiplications we have to perform often… by a lot.
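A small worked example, as a refresher (my own, not from the package):

```latex
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \qquad
\det(A - \lambda I) = (2-\lambda)^2 - 1 = (\lambda - 1)(\lambda - 3) = 0
\;\Rightarrow\; \lambda_1 = 1,\ \lambda_2 = 3.

\text{Check for } \lambda_2 = 3:\quad
v = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad
Av = \begin{pmatrix} 3 \\ 3 \end{pmatrix} = 3v.
```

So instead of multiplying by the matrix, along an eigenvector you just scale by λ.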

As such, my primary package of interest is going to be Eigen3. Preliminary analysis shows some inline assembly in src/Core/util/Memory.h; analysis still has to be done on what it actually does, but it definitely doesn’t look too bad.

[elum1@red eigen3]$ egrep -R __asm
eigen-eigen-1306d75b4a21/Eigen/src/Core/util/Memory.h:         __asm__ __volatile__ ("xchgl %%ebx, %k1;cpuid; xchgl %%ebx,%k1": "=a" (abcd[0]), "=&r" (abcd[1]), "=c" (abcd[2]), "=d" (abcd[3]) : "a" (func), "c" (id));
eigen-eigen-1306d75b4a21/Eigen/src/Core/util/Memory.h:        __asm__ __volatile__ ("xchg{q}\t{%%}rbx, %q1; cpuid; xchg{q}\t{%%}rbx, %q1": "=a" (abcd[0]), "=&r" (abcd[1]), "=c" (abcd[2]), "=d" (abcd[3]) : "0" (func), "2" (id));
eigen-eigen-1306d75b4a21/Eigen/src/Core/util/Memory.h:         __asm__ __volatile__ ("cpuid": "=a" (abcd[0]), "=b" (abcd[1]), "=c" (abcd[2]), "=d" (abcd[3]) : "0" (func), "2" (id) );

As for the second package, there was another one on Linaro’s list that really caught my eye. I come from a networking and hardware background in Seneca’s Computer Systems and Technology program, so DRBD8, which is basically network-based RAID-1, seems to be right up my alley.

Upon looking into it some more, it turns out I was able to get it to build on Red (our aarch64 machine). I basically had to download the source code from GitHub, then run autogen, configure and make.

make -C drbd drbd_buildtag.c
make[1]: Entering directory '/home/elum1/packages/drbd8/drbd-8.3/drbd'
make[1]: Leaving directory '/home/elum1/packages/drbd8/drbd-8.3/drbd'
make[1]: Entering directory '/home/elum1/packages/drbd8/drbd-8.3/user'
cp ../drbd/drbd_buildtag.c drbd_buildtag.c
gcc -g -O2 -Wall -I../drbd -I../drbd/compat -c -o drbd_buildtag.o drbd_buildtag.c
gcc -o drbdadm drbdadm_scanner.o drbdadm_parser.o drbdadm_main.o drbdadm_adjust.o drbdtool_common.o drbdadm_usage_cnt.o drbd_buildtag.o drbdadm_minor_table.o
gcc -o drbdmeta drbdmeta.o drbdmeta_scanner.o drbdtool_common.o drbd_buildtag.o
gcc -o drbdsetup drbdsetup.o drbdtool_common.o drbd_buildtag.o drbd_strings.o
make[1]: Leaving directory '/home/elum1/packages/drbd8/drbd-8.3/user'
make[1]: Entering directory '/home/elum1/packages/drbd8/drbd-8.3/scripts'
make[1]: Nothing to be done for 'all'.
make[1]: Leaving directory '/home/elum1/packages/drbd8/drbd-8.3/scripts'
make[1]: Entering directory '/home/elum1/packages/drbd8/drbd-8.3/documentation'
To (re)make the documentation: make doc
make[1]: Leaving directory '/home/elum1/packages/drbd8/drbd-8.3/documentation'

Userland tools build was successful.

And it seems the build completed successfully!

More next time :)


by pyourk at October 29, 2014 10:15 PM


James Laverty

A video bawse at FSOSS

Hey Everyone,

I volunteered as a videographer at FSOSS recently and it was a wonderful experience. I feel pretty happy that I was able to be a part of it and that I got to meet some 'big wigs' in the industry.

Apart from that, I also videotaped three presentations and watched five. The first one I watched was given by Bob Young, the founder of Red Hat. His talk was really interesting to listen to; I really enjoyed the way he spoke and his use of body language. He repeatedly labelled himself as just a typewriter salesman, but in essence he is much, much more.

The next talk I watched was by a classmate named Kieran Sedgwick, who talked about Webmaker’s Tech: The Future of Web Development. He did an excellent job, his presentation was nice to watch, and he handled the questions well.

After that, lunch happened, and I chatted with a man named Dan Hodge, a senior middleware solutions architect at Red Hat. It was a fun and inspiring conversation.

After lunch, Chris Aniszczyk presented the Friday keynote. His talk was quite insightful and gave me a better understanding of the people inside the machine. I also found his description of the fail whale during the 2010 World Cup quite humorous.

After that I watched two more presentations then I got a free t-shirt! How Exciting!

Cheers,

James Laverty

by James L (noreply@blogger.com) at October 29, 2014 09:14 PM


Ryan Dang

Problem with uploading media file in Angular

As we all know, Angular is designed for SPAs – single page applications. Every request to the server is handled using $http. However, this creates an issue for uploading media files. Traditionally, to upload a media file we create something like

<form id="formId" name="formName" enctype="multipart/form-data" method="post" action="urlUploadFileHandler">

<input type="file" id="file" name="file" />

<input type="submit" value="Submit" id="submitForm" />

</form>

With this method, you can easily get the uploaded file by handling it in some backend code behind urlUploadFileHandler.

We could use this method for uploads in Angular; however, it reloads the page when the form is submitted. We don’t want that, because it defeats the whole idea of an SPA. We want to use $http to handle the file upload, but this is not an easy task to accomplish. I’ve been stuck on this issue for one whole working day. I looked into some external modules, like angular-file-upload, and tried to make them work for our project. I will make another post after I figure out how to handle file uploading in Angular.


by byebyebyezzz at October 29, 2014 01:31 PM


Jordan Theriault

Thoughts on Keynote Speakers at Seneca’s Free Software and Open Source Symposium

On Friday October 24th I attended Seneca’s Free Software and Open Source Symposium. Bob Young and Chris Aniszczyk were two keynote speakers for the symposium. Both speakers are industry experts who provide a great deal of insight into the world of open source in successful businesses.

 

Red Hat Linux Creator – Bob Young

Friday’s first keynote speaker was Bob Young, founder of Red Hat, now a company with 1.5 billion dollars in revenue that creates a distribution of the Linux operating system. His talk focused on the idea of open source and its development, from his perspective, over the past 16 years. Mr. Young started by creating ACC Corp, which sold books and disks for Linux, and realized the potential of the public domain and open source. After this venture, Young decided to create Red Hat, risking his children’s education funds in order to pursue his dream of an open source based business.

Young discussed how, prior to the GPL, there was no such thing as public domain in software. Programmers would have their work traditionally copyrighted and allow people to use it. This promissory method of sharing software is risky for a business, as the author could legally revoke its free status at any moment and dismantle a business or project that uses the technology.

The GPL and open source represent a bartering system. For example, if you contribute 1MB of code worth a couple hundred thousand dollars, as was the case with the developer of some of Linux’s early network drivers, in return you receive a full operating system for free, created by like-minded individuals. By sharing, we share not only the wealth but also our knowledge through writing code. This is why open source is not altruistic per se: you gain more than the work you put into it; the gain is just not monetary.

Bob Young, although not exactly active in the community anymore, provides insight into the history of open source and the ideals it began with. Bob represents the beginning of a movement, and an achievement few can match. His speech was a history lesson, which helps us define purpose, and a lesson in staying mindful of open source’s humble beginnings.

 

Head of Open Source at Twitter – Chris Aniszczyk

Chris Aniszczyk is the head of Twitter’s open source department. He previously worked at Red Hat, porting Eclipse to Red Hat Linux.

At Twitter, Aniszczyk has seen large shifts in how development is handled. They started a movement to always consider open source options before “reinventing the wheel”, unless it can be proven that creating something new will be more viable. This has led to Twitter using a large number of open source libraries. In order to handle the large amount of traffic that Twitter sees, it has been important to move away from the monolithic “monorail” schema and separate responsibilities, while maximizing the use of resources through open source means.

Aniszczyk’s keynote punctuated something Twitter is famous for: acquihires. Companies that acquire another company for its talent often forget about that company’s code, because the actual project is not needed. Open sourcing code obtained through acquihires, and having that discussion with its creators, can keep the code alive, which may prove important in the future and help other people.

He also stressed the importance of contributing to existing projects, which compounded the ideal that you truly gain more from open source than what you put into it. Publicly publishing features or fixes betters the product for others, and you gain the value of many more people contributing in a like-minded fashion.

Chris suggested following what he calls the “open source craft”:

  • Use Open
  • Assume Open
  • Define Secret Sauce
  • Measure Everything
  • Default Github
  • Default to Permissive
  • Acquire and Open
  • Pay it Forward

These points are a valuable checklist to approach open source development within a business.

The one point that seems at odds with the rest is “defining the secret sauce”. In Twitter’s case, the secret sauce is the part of its code base that is not revealed to the public. This is where a business gains its advantage and is able to compete in the industry.

 

Comparison

Both keynote speeches revolved around the lack of altruism in open source development and the idea of open source as a bartering system. Far from being a flaw, this is actually a positive aspect of open source.

Contributing to a project or open sourcing your own project provides you with much more return than the time you spend on the project. This is an interesting view on the open source directive, as it’s commonly viewed as an altruistic venture that benefits others.

Young focused on the beginnings of open source development and how they led to the creation of Red Hat. Aniszczyk discussed the modern-day applications of open source and gave us a window into how companies, in his case a very large one, view and utilize the open source paradigm.

 

Conclusion

The most resonating message from the speakers was the realization that open source truly is a bartering system, but one where you often receive more than the value of what you contribute. Throughout my time developing software, I have always enjoyed the extensive amount of open source software and libraries available on the Internet. They have helped me learn and grow my programming skills. The benefits I have gained far outweigh the contributions I have made to the community, as is the case with most developers. But by continuing to contribute, I can keep adding value to the open source barter.

Further, Aniszczyk’s “open source craft” is a valuable checklist for beginning development on a project, and I will be considering it in my future projects in order to best align them with how a modern company approaches the open source paradigm.

by JordanTheriault at October 29, 2014 12:02 AM

October 27, 2014


Yasmin Benatti

FSOSS Day One

Over the last two days (October 23rd and 24th) I went to one of the best events of my academic life so far. FSOSS, which stands for Free Software and Open Source Symposium, is an event that brings together developers, businesses, academics and students to talk about open source: its characteristics, tools, enterprises and the research done on it. I’ll talk a little bit about the talks I attended, who the presenters were, their ideas, the differences and similarities between them, and my overall view. I was a volunteer on the first morning, so I had the opportunity to talk with many people, including speakers, while handling their registrations.

October 23rd – First day of event.

The first presentation I attended on Thursday was David Humphrey‘s keynote about the Heartbleed bug. David is a teacher at Seneca who leads the open source class I take, and he has been contributing to open projects for many years. I would like to highlight how much I liked it: David presented in such a simple way that I was really impressed. FSOSS’s audience is not just programmers; there are also businesspeople, academics and others, and even when he talked about bytes and code, I’m really sure everybody could follow. Anyway, about Heartbleed: OpenSSL (an implementation of the secure sockets layer) was written in 1998, and eventually 66% of all servers were using it. In 2011 a heartbeat extension was added, which made it possible to verify that the other end of a connection was still alive. By 2013, 17% of servers had added the extension. In 2014 a Google researcher discovered a bug: when sending memory contents back in this “keep alive” extension, the amount of memory requested was never bounds-checked. Two really interesting things David addressed were that a tool is often only noticed when it breaks, and that in this case the problem was only seen after tons of people had their privacy compromised. He also spoke about how a relatively small open source project, with about 15 people working on it, can be so big and reach so many people.
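The class of bug David described can be sketched in a few lines. This is not OpenSSL’s actual code (which is C), just a toy model of the missing bounds check: the requester claims a payload length, and a naive reply echoes that many bytes from a shared buffer.

```javascript
// Toy model of the Heartbleed-style over-read (invented data). "memory"
// stands for the process heap, where the 4-byte payload sits right next
// to unrelated secrets.
const memory = Buffer.from("bird\x00secret-key:hunter2\x00");

function heartbeatReply(actualLen, claimedLen) {
  // Vulnerable: trusts the claimed length and over-reads the buffer.
  const leaky = memory.slice(0, claimedLen).toString("binary");
  // Fixed: never echo more bytes than were actually received.
  const safe = memory.slice(0, Math.min(claimedLen, actualLen)).toString("binary");
  return { leaky, safe };
}

const reply = heartbeatReply(4, 24); // attacker sent 4 bytes, claims 24
console.log(reply.safe);                      // bird
console.log(reply.leaky.includes("secret"));  // true: the over-read leaks
```

The real fix in OpenSSL was essentially the `safe` line: discard heartbeat requests whose claimed length exceeds the bytes actually received.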

The second presentation I attended was conducted by Mekki MacAuley, whom I already knew, so I had already had the chance to hear some of his opinions about open source. His presentation was about Mozilla Intellego, a machine-translation system under development. This was a presentation I was really interested in, since I’m getting involved with translation at Mozilla. It is an open project with contributions from different communities, and it integrates other projects, such as amaGama, a web interface. One big goal of machine translation is to break down the language barriers that exist on the internet today, trying to make information available even in languages spoken by a small portion of people. You can read more about the presentation here. There are many ways to help, in technical and non-technical areas, such as API hacking, web services, the feedback machine, community relations, evangelism, research and corporate maintenance documentation. You can find the bugs here and you can find people on IRC (#intellego on irc://irc.mozilla.org/).

The third and last presentation I attended on Thursday, conducted by Gaber Lasalo, was named “Open Source, what does it stand for?”. I was expecting something different from what I saw; I think he tried to embrace a lot of subjects, making the talk a little shallow when it could have focused on one aspect. Gaber talked about the difference between free software and open source, and how the two don’t always come together, and a bit about the balance that open source projects should strike between security, functionality and ease of use. He also talked about the Cloud Computing Manifesto and how it should be open, and about Prezi and open alternatives such as Sozi and Emaze. Another interesting point he addressed was that people often use open source without even knowing it.

Due to the length of this post, I decided to divide it into two parts. In the second one, I’ll talk about the event’s second day and give my final conclusions. You can see it here.

Cheers!

 

 

by yasminbenatti at October 27, 2014 03:23 PM


Ryan Dang

NPM and Bower

NPM is the node.js package manager, used to manage dependencies for applications using node.js. NPM deals with server-side dependencies, while Bower is a package manager for client-side components. Most node.js-related open source projects use npm and Bower together to manage their server-side and client-side dependencies respectively. Sometimes you might run into an issue where your local application crashes or doesn’t render properly after you pull down the latest code from the master branch. One very common cause is that the code you pulled down has one or more new dependencies. If this happens to you, try running both “npm install” and “bower install” from the command line in your project directory. These two commands automatically install all the dependencies specified in the package.json and bower.json files respectively. Most of the time, this fixes a local app that crashes or doesn’t render.
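In other words, a typical post-pull routine looks like this (a sketch: it assumes npm and Bower are already installed and that both manifest files sit in the project root):

```shell
git pull origin master   # grab the latest code
npm install              # server-side deps from package.json -> node_modules/
bower install            # client-side deps from bower.json -> bower_components/
```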


by byebyebyezzz at October 27, 2014 01:59 PM


Shuming Lin

FSOSS 2014@Seneca

I attended FSOSS at Seneca on Friday. The Free Software and Open Source Symposium (FSOSS) is a yearly event at Seneca College focused on free and open source software. The event has been held since 2001. It is run by Seneca College faculty. Attendees include both students and IT industry professionals.

I went to five presentations, listed below:

  1. Keynote Presentation by Bob Young
  2. Webmaker’s Tech: The Future of Web Development by Kieran Sedgwick
  3. Keynote Presentation by Chris Aniszczyk
  4. How to Implement the Bootstrap CSS Framework by Nelson Ko
  5. Building a CI system with free tools, and duct tape by Julian Egelstaff

Open source as a development model promotes universal access, via a free license, to a product’s design or blueprint, and universal redistribution of that design or blueprint, including subsequent improvements to it by anyone.

As Bob Young mentioned, you can give away a little project source code and get back a complete project. I agree with what he said: it is amazing that people around the world work together on these open source projects.


by Kevin at October 27, 2014 02:19 AM

October 25, 2014


Andor Salga (asalga)

Gomba 0.15

gomba_015

Play demo

I’m releasing a 0.15 version of Gomba, a component-based Processing platform game. I’m trying to be consistent about releases, so that means making a release every 4 weeks. I didn’t get everything I wanted into this release, so it’s not quite a 0.2. In any event, here are some of the changes that did make it in:

- Added platforms!
- Added audio channels for sound manager
- Many of the same component type can now be added to a gameobject
- Added goombas & squashing functionality
- Added functionality to punch bricks
- Fixed requestAnimationFrame issue for smoother graphics

I’m excited that I now have a sprite that can actually jump on things. But adding this functionality also introduced a bunch of bugs I now have to address. I have a list of issues I’m going to be tackling for the next 4 weeks, which should be fun.


Filed under: Game Development, Open Source, Processing, Processing.js

by Andor Salga at October 25, 2014 12:15 PM

October 24, 2014


Yoav Gurevich

SENECA FSOSS 2014 REPORT

Abstract

Comparing and analyzing two symposium talks from industry leaders of open source technology in the world of IT.

“How Companies Use NoSQL Open-Source Technologies like Couchbase” – Don Pinto, Product Marketing Manager (Couchbase)

Don Pinto (M.Sc., Computer Science – University of Toronto) has previously worked as the director of product management at GridCentric Inc. (now owned by Google), with additional experience as a SQL Server/Azure program manager at Microsoft.

The Problem – “There is lots and lots of data. More users than ever before and the interactive complexity of apps.”

“Consumers & Employees Demand Highly Responsive Apps.”

Old relational stores were rigid, lacking flexibility and the ability to scale out data easily. Those two factors, along with performance costs, made up some of the most common client complaints about such tools.

This calls for a new backend technology – NoSQL.
So what could be a candidate to be the right tool?

·        The JSON Data Model Fits today’s developer needs better
o   Aggregates & denormalizes data into single document (Document data model).
o   Handles structured & unstructured data equally well (Docs are distributed evenly across servers)
o   Inferred schema requires no migration
o   JSON rapidly being adopted
o   Access both JSON and binary data as key-value pairs

·        RDBMS needs a bigger, more expensive server to scale up architecture.
·        Auto-sharding vs. Manual sharding (data partitioning).
·        Open-source obviously implies lower costs for maintenance, and usage.

·        Availability – Relational systems use clustering as an afterthought.
o   RDBMS must take database down for “maintenance windows”
o   They struggle to support XDCR (Cross data center replication) across many DCs (data centers).

·        Couchbase offers a full range of Data Management solutions
o   High Availability Cache (Zero downtime administration and upgrades)
§  Always-on functionality for a potentially global user base
§  Couchbase Lite – Mobile application that includes a sync gateway for mobile work to update server.
o   Consistent High Performance
§  Built-in object level cache
§  Fine grained locking
§  Hash partitioning to uniformly distribute data across the cluster
o   Elastic Scalability –
§  Shared-nothing architecture with a single node type
§  True XDCR
§  Push button scale-out

Some use cases for NoSQL:
·        Heavily accessed web landing pages
·        Application objects
·        Popular search query results
·        Session values or cookies (key-value pair store), eg. Shopping carts, flights selected, etc.
·        User profile with a unique ID, user settings/preferences, user application state
·        Content metadata stores (articles, text)
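To make the key-value pattern concrete, here is a small sketch (names and data invented; a plain Map stands in for a Couchbase bucket): the whole session is one JSON document fetched by a single key, with no joins.

```javascript
const bucket = new Map(); // stand-in for a Couchbase bucket

const key = "session::u1001";
bucket.set(key, {
  user: "u1001",
  cart: ["sku-42", "sku-7"],            // shopping cart embedded in the doc
  flight: { from: "YYZ", to: "SFO" },   // selected flight, no extra table
  lastSeen: "2014-10-24T10:00:00Z"
});

// One key lookup returns the whole aggregate:
const session = bucket.get(key);
console.log(session.cart.length);  // 2
```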

Some known users of Couchbase:
·        Orbitz – 11 clusters with a total of 100 nodes
o   3 TB of data with over 430 million objects
·        McGraw-Hill Education Labs – Content and metadata stores
o   “Building a self-adapting, interactive learning portal”
o   Scale to millions of learners
o   Self-adapt via usage data
·        AOL – Ad-targeting using a Couchbase server
o   40 milliseconds to respond with the decision.
o   User profiles, real time campaign stats
o   Affiliate, event, profile, and campaign data

Mike Hoye, Engineer Community Manager at Mozilla: “Social Engineering – Building Communities With, And On, Purpose”

·       "Process reifies and reinforces values"
·        If you don’t measure it, don’t pretend you care about it
·        The ROI on timely gratitude is ridiculous.
·        Karma is a wheel (courtesy, saying thank you for the things you are given)

“The way you conduct and execute your process is a direct reflection of your values.”

Access, Engagement, Retention - “If you let a patch sit for a week from a first contributor, it is very unlikely you will see them contribute again.”

Accessibility
·        What is in front of a user, if they want to commit a one-line change to your project?
·        The importance of comprehensive documentation
·        The “miraculous” benefits of an easy-to-set-up build environment

Engagement
·        “Throw the little fish back in the water for the new entrants to the game.”
·        Label good first bugs for beginner contributors and give a concise, yet thorough explanation of how to go about fixing them

Retention
·        “A single toxic contributor can harm an entire community. If people feel unequally welcome in the community, many will inevitably shy away from it. Don’t be a jerk and don’t let others become jerks.”
·        Gratitude. Saying thank you. “This bug and your fix matters.” Telling them what to do next.

Why does open source matter? Am I the first person to have this problem?

-        Mythology is what fills the absence of real numbers and real data about what works and what doesn’t. Stories are powerful; they get into people’s heads and stay there.

-        Open source is meritocratic (we need to stop talking about ourselves like we’re “magic”)

-        Diminishing returns: After 3 sets of eyes looking at a piece of code to figure out a bug, the rest are wasting their time…

-        Strong FSOSS and FSOSS-like communities grow organically

What are the most basic, fundamental things we need to embark upon an open source project?

1)      Source Control
2)      Issue Tracking
3)      Automatic Testing

Do you care?
“There is no regression test for somebody’s mood.”

“Your community is an API to your software.”
What is the state of our community? What problems does our community have?
Are we actively fostering community engagement?

“Have a code of conduct. Have a code of conduct. Have a code of conduct.”

Choosing this particular pair of talks yielded a very comprehensive picture of the open-source process, since one focused on community-building and project-goal philosophies (the preliminary prerequisites) and the other presented the benefits of a completed, deployed product that resulted from many of the same ideals and principles being implemented. There was a clear testament not only to an implied agreement between the two speakers’ points and values, but to a symbiosis as well. The focuses and comparisons were of a completely different nature and centered on relatively unrelated processes due to the subject matter, but were ultimately two sides of the same coin.

The open-source paradigm and its implications have not so much changed for me as flourished over time. Admiring the philosophies behind open source came naturally, but I was initially confused and doubtful about the financial feasibility of companies and institutions that fully embrace free software and open source processes and values with their intellectual property. The state of this maturing industry after nearly a quarter-century is clear evidence of its resounding and continued success, seen in giants like Microsoft or Google investing in more open source startups and releasing more open source code and products, as well as in companies like Red Hat and Mozilla, which are almost entirely based on open source ideologies in every aspect, rising over the years to become Fortune 500 companies.

by Yoav Gurevich (noreply@blogger.com) at October 24, 2014 03:31 PM


Gary Deng

FSOSS 2014@Seneca

As more and more people get involved in open source projects, open source software gains tremendous strength from the power of a network. On the first day of FSOSS 2014, speakers demonstrated different ways in which open source is being used around the world to enhance various sectors of industry, such as education, emerging hardware, and software.

To show what the current open source world looks like, Professor David Humphrey, today’s keynote speaker, used a series of numbers to demonstrate how open source affects us as users, business owners, organizations, and governments. I have been an open source software developer for about 2 years, but I didn’t realize that there are billions of people actually involved in the open source world. Now I can feel the power of open source. If you know Archimedes’ law of the lever, it’s not hard to understand how open source technologies can actually move the world.
Archimedes_lever_(Small)


by garybbb at October 24, 2014 02:38 AM

October 23, 2014


Glaser Lo

Experience of collaboration with Webmaker team (Release 0.2)

Picking a bug

The first thing to do is pick a bug. This is harder than I imagined. Having no understanding of the project details, I was quite confused about which bug was appropriate for me to work on. Dave did teach us to pick something with the “first bug” tag. However, most of the first bugs were already taken because I started quite late.

The projects

In order to pick a proper bug, I started trying out some of the projects, like Webmaker Mobile, Appmaker, and MakeDrive. It still took me a day to set up the environment because there were errors and issues when I built certain projects. As I fixed more issues, I found myself more comfortable with tools like Bower, gulp, and npm. It was a great chance to get to know them really quickly. After trying out each of the projects, I ended up taking Webmaker Mobile for my assignment, because MakeDrive and Filer are low-level libraries, which are more difficult to understand, while Appmaker has a much larger code base compared to Webmaker Mobile. Finally, I picked a bug implementing a name and icon chooser, thanks to Habib’s mention.

Beginning

I finally had a bug to work on, but I ran into another issue! For some reason, Webmaker always displayed an empty page after I logged in. No matter what I did (cleaning up cookies, deleting my account, re-cloning the repository), nothing worked until the team fixed the issue. This made me realize that there are sometimes uncertainties in an open source project; I should not assume everything is fine and leave starting my work too late.

Webmaker

Finally, I started implementing the name & icon chooser. The bug seemed quite simple because it resembled an implementation I did for my Filer contribution. However, a problem turned up. The icon chooser requires a function that generates a new icon from a given icon image and a custom circle color, but there was no information about how an icon is generated or where it is stored. I spent too much time writing icon-generation code and thinking about how I should do it. Then I started asking on the IRC channel because I thought I was doing it wrong. Thanks to thisandagain, who told me that the custom icon details were actually missing: they are going to use the Noun Project for generating icons. In the end, I finished the implementation partially and made a pull request.

Conclusion

In the process of contributing, I realized that communication is really important, not only for the whole team but also for myself. The community plays a big role in helping you and helping others in order to keep the project moving forward. If I had asked people earlier and more proactively, I would not have been so confused about the bug in the first place. By the way, I’d like to say thank you also to jbuck for guiding me to use canvas for icon generation.


by gklo at October 23, 2014 02:44 AM

October 22, 2014


Shuming Lin

webmaker-App: Working with a bug

Mozilla-Webmaker-logo

Working on a webmaker-app bug is pretty interesting, and I learned a few things from it (such as knowing more about GitHub, JS, CSS, etc.).

I took a bug early on and worked on it, but when I went to ask a question on GitHub after two days, the bug had unfortunately already been solved by someone else. The owner gave me another bug, but someone had taken that one too. So don’t just watch a single bug when working in open source. I needed to find a new bug, and finally found one that no one had taken.

There were four tickets in this issue, and I asked for tickets 1 and 2. The bug looks very easy, but when you work on it you may find it complicated if you don’t know the project well.

To solve this bug, I spent most of my time reading the files related to it and finding an approach. The hard part was understanding how Webmaker works: the project is built with JS, and since I am not good at JS I needed to do some research on it. Luckily, I resolved the bug soon and made a merge request. It only changed a few lines, and it’s really easy once you know the project. :)

webmaker-app: QA: UI bug #394 pull request

After I resolved the bug, I continued with the remaining issues. While I was working on them, Kate asked me to remove and add some code before the pull request was good to merge. But I was working on the other issues in the same branch. Once I realized this, I felt bad, because pushing would push all my changes to that branch. I couldn’t find any solution for this, so I just pushed all the changes together.

So when you work on a new issue, you need to check out the main branch and create a new branch just for that issue.
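The branch-per-issue habit looks like this (a sketch using a throwaway repository so the commands are self-contained; the branch name is invented):

```shell
# Set up a disposable repo standing in for your clone.
cd "$(mktemp -d)" && git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "stand-in for existing history"

git checkout -q -b issue-394-ui      # one fresh branch per issue
git rev-parse --abbrev-ref HEAD      # prints the current branch name
# ...commit the fix here, then push this branch for its own pull request:
# git push origin issue-394-ui
```

Because each fix lives on its own branch, review changes requested on one pull request never drag along commits from an unrelated issue.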

It’s fun to work on open source, and it feels great to fix a problem. If they merge your request, you will feel awesome :). You will also learn a lot while fixing bugs.

 


by Kevin at October 22, 2014 08:46 PM

October 21, 2014


Tai Nguyen

Release Milestone 0.2 – DPS909 (Mozilla Webmaker-App)

In my previous post regarding my proposal for release milestone 0.2 for my DPS909 course, I mentioned that I was implementing a UI toggle issue; however, things have changed. The issue I was initially working on was taken by another person (I was late in requesting it). There was also a big communication problem that arose with the person who took my issue, but things were resolved in the end (I’ll talk more later in the post about what happened and how to avoid it). Fortunately for me, I found a similar UI issue that required the implementation of segmented controls. I also want to note that for my release milestone 0.2, my plan was to focus only on the CSS implementation of the segmented control, and that I had to modify someone else’s code to create my version. You can find more about the issue here: UI – Segmented Control #168.

Firstly, I would like to talk about the issue for my release 0.2. As I mentioned above, it required the implementation of segmented controls. A segmented control is basically a UI bar with a number of options that lets you switch between them by selecting one (the selected option is highlighted to indicate the selection). Usually, segmented controls are used for navigation, letting people change from one view to another easily. My challenge was to create a pure CSS segmented control (meaning no JavaScript): an animated UI built with CSS alone. The good thing is that CSS3 lets people easily create animations with the transition property. I just want to note that I am relatively new to CSS, so figuring this out required a lot of research.

Since I am relatively inexperienced with CSS, especially the new aspects of CSS3 (like transitions and animation capabilities), the first thing I did was search for help. I decided to see if there were any examples of what I was trying to achieve. I did a Google search for segmented control examples and came across a really good one on CSSDeck.com, which I used to help me create my own version of the segmented control. CSSDeck is a web platform that lets developers test out their CSS and JS components; developers can publish their work for other people to view. It has a great collection of creations, which lets people like me learn how a particular CSS or JS component can be built.

To decide whether you can use another person’s work as a foundation for your own, you need to make sure it has the components that satisfy your project’s requirements and that it can be extended if necessary. You should also first understand the gist of the person’s implementation so you can add to or modify the code to meet your needs. That’s exactly what I had to do. The original pure CSS segmented control by ftsgerm was exactly what I needed: it met the fundamental requirement of functioning as a segmented control; the only issue was that it didn’t fit the required appearance, which could be modified easily. Below are the implementations of the pure CSS segmented control. The first is the original by ftsgerm and the latter is the one I modified. I would like to mention that I requested the author’s permission to use his code, and he was happy to help as long as I did not take credit for it. So, make sure you check the copyright associated with any code you would like to use. If you are unsure, you can always ask the owner for permission.

cs_author

http://cssdeck.com/labs/pure-css-segmented-controls

cs_me

http://cssdeck.com/labs/sqefmuxu

I would like to briefly talk about what I learned from how ftsgerm implemented his segmented control, which was very difficult for me to understand at first, and note some interesting CSS tricks he used. First of all, ftsgerm created the UI interaction entirely with CSS3. The segmented control is built from a set of radio input buttons, one per option. The radio inputs are hidden via left: -1000px, which moves them off the screen. Label elements are associated with the radio inputs, and selecting a label switches to that option. The implementation uses the :after and :before selectors to create pseudo-elements for each label: one for the background container and the other for the actual label text box. These pseudo-elements are revealed with the :checked selector when the corresponding radio input is checked. When an option is checked, the pseudo-element components of the unchecked (hidden) options transition to the position of the checked option: the previously checked option transitions from unhidden to hidden, giving a fading effect, while the newly checked option transitions from hidden to unhidden, producing the swivel effect and highlighting the option. The next part is how I adapted the code to my requirements: essentially, for the bug, I needed to change the appearance to match the design, so I changed the colors and the shape of the buttons.
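A minimal sketch of the hidden-radio technique described above (class names are illustrative, not ftsgerm’s actual code):

```css
/* Park the real radio buttons off-screen. */
.segment input[type="radio"] {
  position: absolute;
  left: -1000px;
}
/* The labels are what the user actually clicks. */
.segment label {
  display: inline-block;
  padding: 0.5em 1em;
  cursor: pointer;
  transition: background 0.3s, color 0.3s; /* CSS3 fade between states */
}
/* Sibling selector: highlight the label of the checked radio. */
.segment input[type="radio"]:checked + label {
  background: #0095dd;
  color: #fff;
}
```

In the markup, each radio input sits immediately before its label, so the `:checked + label` sibling selector can restyle the selected option without any JavaScript.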

Finally, I want to stress that when you’re working in an open source community, clear and concise communication is crucial. Regarding my incident with the other contributor, it was a lack of clear communication on my part that caused confusion and a sense of hostility.

Links:

https://github.com/mozillafordevelopment/webmaker-app/issues/168#issuecomment-59543183 – issue link

http://cssdeck.com/labs/pure-css-segmented-controls – ftsgerm segmented control

http://cssdeck.com/labs/sqefmuxu – my implementation

 

 

 

 

 


by droxxes at October 21, 2014 08:26 PM


Aaron Train

Android MediaCodec use in Firefox for Android

James 'snorp' Willcox has landed support for hardware decoding via the public MediaCodec Java class in Nightly for Android, for devices running Android 4.1+ (Jelly Bean). This replaces OMXCodec and the Stagefright library, which were themselves introduced as a replacement for OpenCore for media decoding. This relatively new public Java class is used for decoding H.264/AAC in MP4 for playback in the browser, with the benefit of allowing direct access to the media codecs on the device through a "raw" interface.

This should correct a number of playback issues which have been reported to us regarding problems on Android 4.1+ devices.

Victory!

How to Help

  • Install Nightly (available on 10/21/2014's build)
  • Test video playback on your Android 4.1+ device (e.g, test page)
  • Talk to us on IRC about your experience or problems

October 21, 2014 12:00 AM

October 20, 2014


Gideon Thomas

My first contribution to Webmaker App…

Hey everyone,

So I’m a little late in writing my blog and I apologize for my horrible time management.

I was working on the bug I planned on for release 0.2 in my open source class. Turns out, it was easier than I thought it would be.

The issue basically was to write tests that make sure the file containing app templates as simple JSON data is in fact valid JSON and conforms to a proper format. One of the contributors even suggested an excellent library, tv4, for getting this done. Luckily, my previous experience with JSON schemas proved very helpful. Even so, I often found myself referencing the JSON Schema docs for refreshers.

I began with some prep work: installing the library, going through its docs, writing the basic skeleton for my tests, and determining what my tests would be. I ended up with two tests in mind. The first makes sure that everything in the JSON file has the necessary properties that templates need, along with some simple data type validation, format validation, etc. The second checks that every template name is localized, i.e. present as a key in another JSON file.

I wrote the schema piece by piece, building it with a breadth-first approach. This meant I would first examine the outermost object and create a ‘shallow’ schema for each property with simple checks. If a property was an object, I would assign its schema to a variable to be defined later. In that way, I created the outermost object’s schema. If I ran the test at that point, it wouldn’t work, since there were several references to variables that did not yet exist; those were the variables I would create next to hold the schemas for the object-valued properties. I kept using this approach until I was completely done writing the schema. After running the test a few times, and a couple of small bug fixes, voila…my first test was finished.

The second one needed some thinking. I had to do something tricky to validate only one property of each object, viz. the name. I had to create the schema skeleton for the JSON object and effectively provide ‘empty’ validation for the properties I did not care about. Then I was able to provide custom validation for the name by checking whether it was in the localization file (the tv4 docs allowed me to provide my own validation function… yay)!
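The idea behind that second test can be sketched the same way, with invented stand-in data for the template and localization files: every template name must appear as a key in the localization JSON.

```javascript
// Invented stand-in data, not Webmaker's actual files.
var locale = { "Blog": "Blog", "Chat": "Chat" };      // localization strings
var templates = [{ name: "Blog" }, { name: "Poll" }]; // template entries

// Collect any template names lacking a localization entry.
var missing = templates
  .map(function (t) { return t.name; })
  .filter(function (name) { return !(name in locale); });

console.log(missing); // names that would fail the localization test
```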

I submitted my PR on Saturday very very early in the morning (like at 2am). Unfortunately, by the time I came to write this blog post, it was already merged into the main repo…whoops :P


by Gideon Thomas at October 20, 2014 11:31 PM


Gary Deng

OSD600 Project 0.2 Release

Mozilla-Appmaker is a tool that helps anyone, not just developers, create mobile applications. Appmaker apps are composed of web components (custom, reusable HTML tags) connected with events and listeners.

In my Open Source Development course project release 0.2, I chose to work on mozilla-appmaker issue #2253. At the very beginning, I had difficulty setting up my development environment; fortunately, people in the Appmaker community are always very helpful. Thanks for all the help from the #appmaker IRC channel and my CDOT colleagues. The following will briefly describe how I accomplished this task:

  • Pick a bug to work on: go through all the issues in the list, read the comments, and ask questions to make sure I can work on the issue
  • Play with Appmaker on my local machine in order to reproduce the bug
  • Use my browser’s debugging tools to find out which files I should look into; in this case, the file is “ceci-channel-menu.html”
  • Learn some basic knowledge about Polymer elements so that I can understand the logic of the original source code
  • Learn the Element interface; this was completely new to me. For example, I had never heard of Element.shadowRoot before.
  • Finally, fix the issue and test it to make sure I get the desired result, then send my pull request to the mozilla-appmaker:develop branch

Test Result:

1. There are 3 listener channels for a counter
Capture1
2. After channel C is disabled
Capture2
3. After channel A is disabled
Capture3
4. After all channels are disabled
Capture5


by garybbb at October 20, 2014 05:23 AM


Brendan Donald Henderson

Fedora Packages: Further Inspection

This post builds on the content of my previous post by taking a more in-depth look into a few of the packages that were already discussed.

The following is a package-by-package analysis to determine where assembly existed, either in the form of assembly source files, or embedded assembly in higher-level language source files.

php-pecl-apcu:

  • No assembly files(.s/.S)
  • Files with embedded(inline) asm:
    • pgsql_s_lock.c: Contains assembly code for test-and-set operations if spinlocks are enabled. This assembly only pertains to rare conditions: being on a sun3 (Sun Microsystems) system, or being on a Motorola 68K processor not running Linux.
      • pgsql_s_lock.h: contains a HUGE amount of assembly for many different architectures (some that would arguably no longer be used except for nostalgic purposes), all with the purpose of implementing test-and-set spin locks (lock and unlock). There is assembly for ARM, but I believe it’s only for 32-bit ARM platforms. I would be truly astounded if this assembly is still valid as a “performance optimization” over what gcc could offer. There is also detection code for different compilers, such as Intel’s C/C++ compiler on the ia64 arch; presumably this is because those compilers provide non-assembly solutions? The fallback when no system-dependent implementation exists is SysV semaphores (in spin.c), which are pre-POSIX semaphores. But I strongly believe that gcc intrinsic functions could provide not only a more portable but also a better-performing solution.
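To illustrate what those spinlock primitives do, here is a hedged sketch of test-and-set in JavaScript using Atomics on a SharedArrayBuffer, rather than the architecture-specific assembly or gcc intrinsics discussed above:

```javascript
// A test-and-set spinlock sketch: one 32-bit word, 0 = free, 1 = held.
var buf = new SharedArrayBuffer(4);
var lock = new Int32Array(buf);

function tryLock() {
  // Atomically: if lock[0] is 0, set it to 1; returns the previous value.
  return Atomics.compareExchange(lock, 0, 0, 1) === 0;
}
function unlock() {
  Atomics.store(lock, 0, 0);
}

console.log(tryLock()); // true  -- acquired
console.log(tryLock()); // false -- already held
unlock();
console.log(tryLock()); // true  -- acquired again
```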

mpfr:

  • No assembly files(.s/.S)
  • Files with embedded(inline) asm:
    • src/mpfr-longlong.h:
      • Notes mention: inline asm macros(There are C macro fallbacks for unspecified architectures)
      • Notes also mention: the use of newer gcc built-ins (from newer gcc releases); the fallback for older gcc versions is either ‘generic C’ (assuming the ANSI standard) or an inline asm block.
      • Not very obvious when built-ins vs. inline asm blocks are preferred, but it seems that on certain gcc/architecture combinations (probably more recent ones) optimized built-ins are known to exist and thus used, while where they are known not to exist, inline asm is used instead.
      • Because of this, it seems porting to aarch64 could already be possible, as the only versions of gcc that will exist on aarch64 are recent ones with the preferred built-ins. It may just need an #ifdef for aarch64.
        • There is already a lot of inline asm for arm32, plus extra performance asm for something called the ARM M series?
    • No inline asm in the c source(implementation) files

llvm: For this particular package I was brief in my analysis of some of the asm found, as there was quite a bit of it scattered all over the package and I was looking in a very indiscriminate way.

  • Found irrelevant asm:
    • src/lib/ExecutionEngine/IntelJITEvents/ittnotify_config.h: a single block of asm to set the default add operation on non-Intel ia64 architectures; the entire directory seems Intel-specific, so there is probably no reason to bother with ARM here.
    • src/lib/Support/Unix/Memory.inc: 3 very short inline assembly blocks for the PowerPC arch; other archs and platforms have C code. Not sure if this is relevant to performance.
  • Found Asm:
    • src/lib/Support/Host.cpp: contains several multi-line asm blocks all focused on cpuid for various x86 archs both 32 and 64-bit.
    • src/lib/Target/ARM/ARMJITInfo.cpp: A single asm block, implemented for 32-bit ARM archs, that seems to be used to override how gcc typically handles the stack during a procedure call?
    • src/lib/Target/Mips/MipsJITInfo.cpp: single asm block to implement JIT compilation callback as gcc doesn’t properly handle the prologue/epilogue.
    • src/lib/Target/PowerPC/PPCJITInfo.cpp: single asm block to implement JIT compilation callback as gcc doesn’t properly handle the prologue/epilogue.
    • src/lib/Target/Sparc/SparcJITInfo.cpp: single asm block to implement JIT compilation callback as gcc doesn’t properly handle the prologue/epilogue.
    • src/lib/Target/X86/X86JITInfo.cpp: single asm block to implement JIT compilation callback as gcc doesn’t properly handle the prologue/epilogue.
    • src/lib/Target/X86/X86Subtarget.cpp: a single asm block that is used to check the OS’s AVX support if the toolchain is gcc and the arch is one of the major Intel/AMD x86 releases.
  • More asm that I did not have time to analyze:
    • A lot of assembly source files(.s/.S) in:
      • src/test/*various archs*
      • src/test/Object
      • src/test/CodeGen
      • src/test/DebugInfo
      • src/tools/clang/test/Driver
      • src/projects/compiler-rt/lib/*various archs*
    • A lot of embedded asm in:
      • src/tools
      • src/projects
  • Note: Seems to be asm used frequently for JIT compilation callback implementation as gcc doesn’t meet requirements!

Future Work:

These packages are ones that I have found interesting, and I will be doing further investigation to determine whether it is feasible to complete work on them within a few months’ time. They are all from the LPC Code Module List, and I will include the notes from there below. I will be explaining any asm (files or embedded in source code) found in these packages in my next post.

  • pyrit: This is a cryptographic hash cracking tool. It apparently uses x86 asm for performance with a C fallback. This isn’t surprising as cryptographic hashing is a very expensive arithmetic operation and on many x86 processors, especially more recent ones, there are special instructions and hardware to speed up operations related to cryptographic hashing.
  • john (the ripper): This is another (very infamous) cryptographic hash cracking tool, which apparently has complete asm versions of the crypto functions. Again, this makes sense due to the expensive nature of the necessary operations.
  • cryptopp: C++ class library for cryptographic schemes. Apparently includes asm for performance in crypto and checksum code (not sure which specific asm, probably some x86).
  • polarssl: a lightweight crypto and SSL/TLS library. It apparently includes assembly in quite a few places:
    • x86 asm for low-level driver support (via PadLock)
    • timer access on various archs
    • bignum maths on various archs (including arm 32-bit), with a C fallback

by paraCr4ck at October 20, 2014 03:59 AM

October 18, 2014


Linpei Fan

OSD600: Release 0.2

I took issue #319 in Webmaker for this release and have already completed it. Please find it here.

This issue is about the UI: aligning the line on the homepage with the text instead of centering it.

Now it looks as following:



by Lily Fan (noreply@blogger.com) at October 18, 2014 04:59 AM


Yasmin Benatti

Release 0.2

I changed my second release to a project with no programming language, due to the fact that I just finished my first year.

Since I know two languages and Webmaker needs translations, I’m working on Transifex, and I translated some of the strings that were missing. It is a very nice project; I’m still getting acquainted with the project and Open Source, and I’m also training my English/Portuguese skills.

That was a very nice opportunity that David offered me. It fits perfectly to what I’m able to do and I can still attend the classes.

Cheers!

by yasminbenatti at October 18, 2014 01:15 AM


Ava Dacayo

My first contribution – Mozilla Webmaker App

As a refresher, here is a link to the issue in GitHub I picked for Release 0.2.

Here’s how I tackled it:

  1. Inside the issue description in GitHub was a link to the document on how to localize views in Webmaker Mobile, as well as details on how I could see the page and where the code is.
  2. Seeing actual code, for me at least, is different from just reading the documentation. So I looked for an actual implementation in other files by going to the original repo and searching for i18n. I also checked which files I needed to change.
  3. Updated index.html and locale\en_US\mobile-appmaker.json to enable localization. Saved.
  4. Gulp dev — NOTHING HAPPENED!

Did I do something wrong?!  Why aren’t the texts changing? Do I need additional files? Is it really being translated? I changed the English translation to gibberish and the changes still did not get reflected.

So what’s happening?

I copy-pasted the exact working code and replaced it with the key name I wanted to use. I tested it by putting the gibberish back again and typed the gulp dev command. SUCCESS!!! Apparently, I had typed il8n instead of i18n. LOL. I verified the fix by confirming that, first, the texts change if I modify the US-English mobile-appmaker.json file. Second, I updated the French mobile-appmaker.json file using the same keys and checked that the page translates when I add ?locale=fr in the address bar and goes back to English when I type ?locale=en_US, which it did.
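A minimal sketch of this kind of key lookup, with invented key names and catalogs (not Webmaker's actual files), shows why a misspelling simply leaves text untranslated rather than raising an error:

```javascript
// Invented catalogs for illustration only.
var catalogs = {
  en_US: { "app.title": "My App" },
  fr:    { "app.title": "Mon appli" }
};

function t(locale, key) {
  var catalog = catalogs[locale] || catalogs.en_US; // unknown locale -> English
  // A missing or misspelled key falls through untranslated -- which is
  // roughly how a typo shows up: nothing changes on screen.
  return catalog[key] || catalogs.en_US[key] || key;
}

console.log(t("fr", "app.title"));    // "Mon appli"
console.log(t("en_US", "app.title")); // "My App"
console.log(t("fr", "app.titel"));    // "app.titel" (typo stays visible)
```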

So I’m ready to go! Almost at the exact moment I pushed a commit which referenced the issue number, k88hudson commented asking if I could get a patch up because they need it next week for Mozfest. (I was gonna say I’m psychic but she beat me to it saying “WOW ESP”). I had to change some of the descriptions based on her instructions and pushed again… And then she replied that it looks like I need to rebase. I remember “rebase vs merge” during class but I’ve never tried rebasing before. So I quickly googled how to do it, but I didn’t get lucky in executing it. In the end I think I unintentionally did a merge while attempting to rebase because I was trying some commands. I was expecting them to get back to me and suggest how to do it properly, but when I looked, thisandagain merged my commits into mozillafordevelopment:master.

Hurrah!!!!!!!!

Now I’m more comfortable with the process and really excited for the next task!

pull request

Oh and I got a rainbow colored .gif from thisandagain.

Can’t describe it so here is a link : https://github.com/mozillafordevelopment/webmaker-app/pull/354#issuecomment-59563677


by eyvadac at October 18, 2014 12:13 AM

October 17, 2014


Andrew Smith

How to stop using webfonts from Google without breaking your wordpress theme

I finally had enough of the old theme on this blog. I would have kept it, but with WordPress 4 the fonts looked even smaller than they did before. I tried to fix it but found so many problems (starting with a default font size set to 62.5%) that I decided replacing it entirely would be easier. It was.

The nice new bootstrap-based, mobile-friendly theme installed without much trouble, except for one annoying issue. I turned on Firebug to see how long it takes to load and found more than 10 calls to google.com and gstatic.com. Not cool.

GrumbleGoogleFonts

Notice also how much extra time it takes to finish loading the page just because of those bloody fonts on a server I don’t control. Where are the requests coming from?

cd wp-content/themes/cara # (or whatever your theme name)

includes/tamatebako.php: * return clean and ready to use google open sans font url
includes/tamatebako.php:function tamatebako_google_open_sans_font_url(){
includes/tamatebako.php:        $font_url = add_query_arg( 'family', 'Open+Sans:' . urlencode( '400,300,300italic,400italic,600,600italic,700,700italic,800,800italic' ), "//fonts.googleapis.com/css" );
includes/tamatebako.php: * return clean and ready to use google merriweather font url
includes/tamatebako.php:function tamatebako_google_merriweather_font_url(){
includes/tamatebako.php:        $font_url = add_query_arg( 'family', urlencode( 'Merriweather:400,300italic,300,400italic,700,700italic,900,900italic' ), "//fonts.googleapis.com/css" );
includes/tamatebako.php:         * @link https://developers.google.com/fonts/docs/webfont_loader
includes/tamatebako.php:         * @link http://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js
includes/tamatebako.php:        wp_register_style( 'theme-open-sans-font', tamatebako_google_open_sans_font_url(), array(), tamatebako_theme_version(), 'all' );
includes/tamatebako.php:        wp_register_style( 'theme-merriweather-font', tamatebako_google_merriweather_font_url(), array(), tamatebako_theme_version(), 'all' );
js/webfontloader.min.js:
js/webfontloader.js:    var sa = "//fonts.googleapis.com/css";
js/webfontloader.js:    $.u.w.google = function(a, b) {

The problem is the theme is using webfontloader to load some (presumably nice) font from another server.

If you don’t like that (and you shouldn’t, unless you already use google analytics or other big brother system :)) here’s what you can do to avoid pinging another server for every query to yours:

Find where the font is actually used. In my case my theme is using TAMATEBAKO 1.2.2 (tamatebako.php) which asks for two fonts from fonts.googleapis.com: Open Sans and Merriweather.

Google says “All of the fonts are Open Source. This means that you are free to share your favorites with friends and colleagues.” Great, I will share them with myself by putting them on my server.

I could try to decode how this works in tamatebako.php:

add_query_arg( 'family', 'Open+Sans:' . urlencode( '400,300,300italic,400italic,600,600italic,700,700italic,800,800italic' ), "//fonts.googleapis.com/css" );

but that would just give me an unnecessary headache. Instead I will look at the source for my blog page (or Firebug) to find:

http://fonts.googleapis.com/css?family=Open+Sans%3A300italic%2C400italic%2C600italic%2C300%2C400%2C600&subset=latin%2Clatin-ext&ver=4.0

and

http://fonts.googleapis.com/css?family=Merriweather%3A400%2C300italic%2C300%2C400italic%2C700%2C700italic%2C900%2C900italic&ver=1.0.0

Ok, those are two CSS files I can download, referencing a bunch of .woff files I can also download. Clickedy click click click… I now have:

$ ls -lh webfonts/
total 388K
-rw-r--r-- 1 andrew andrew  33K Oct 16 15:49 DXI1ORHCpsQm3Vp6mXoaTRa1RVmPjeKy21_GQJaLlJI.woff
-rw-r--r-- 1 andrew andrew  22K Oct 16 15:53 EYh7Vl4ywhowqULgRdYwIFrTzzUNIOd7dbe75kBQ0MM.woff
-rw-r--r-- 1 andrew andrew  22K Oct 16 15:53 EYh7Vl4ywhowqULgRdYwIG0Xvi9kvVpeKmlONF1xhUs.woff
-rw-r--r-- 1 andrew andrew  24K Oct 16 15:53 EYh7Vl4ywhowqULgRdYwIL0qgHI2SEqiJszC-CVc3gY.woff
-rw-r--r-- 1 andrew andrew  34K Oct 16 15:49 MTP_ySUJH_bn48VBG8sNSha1RVmPjeKy21_GQJaLlJI.woff
-rw-r--r-- 1 andrew andrew  32K Oct 16 15:50 PRmiXeptR36kaC0GEAetxmWeb5PoA5ztb49yLyUzH1A.woff
-rw-r--r-- 1 andrew andrew  32K Oct 16 15:49 PRmiXeptR36kaC0GEAetxrsuoFAk0leveMLeqYtnfAY.woff
-rw-r--r-- 1 andrew andrew  17K Oct 16 15:52 RFda8w1V0eDZheqfcyQ4EHhCUOGz7vYGh680lGh-uXM.woff
-rw-r--r-- 1 andrew andrew  22K Oct 16 15:53 So5lHxHT37p2SS4-t60SlHpumDtkw9GHrrDfd7ZnWpU.woff
-rw-r--r-- 1 andrew andrew  17K Oct 16 15:53 ZvcMqxEwPfh2qDWBPxn6ngi3Hume1-TKjJz2lX0jYjo.woff
-rw-r--r-- 1 andrew andrew  18K Oct 16 15:52 ZvcMqxEwPfh2qDWBPxn6nmFp2sMiApZm5Dx7NpSTOZk.woff
-rw-r--r-- 1 andrew andrew  18K Oct 16 15:53 ZvcMqxEwPfh2qDWBPxn6nnl4twXkwp3_u9ZoePkT564.woff
-rw-r--r-- 1 andrew andrew 2.1K Oct 16 15:52 merriweather.css
-rw-r--r-- 1 andrew andrew 1.6K Oct 16 15:49 opensans.css
-rw-r--r-- 1 andrew andrew  33K Oct 16 15:49 u-WUoqrET9fUeobQW7jkRT8E0i7KZn-EPnyo3HZu7kw.woff
-rw-r--r-- 1 andrew andrew  32K Oct 16 15:50 xjAJXh38I15wypJXxuGMBtIh4imgI8P11RFo6YPCPC0.woff

388kB, eh? Wouldn’t have thought it adds up to that much, well that’s what happens when you use someone else’s web resource without thinking very much about it.

Now I put this webfonts directory of mine on my server, for example here: http://littlesvr.ca/grumble/wp-content/webfonts/ (I can protect it from external downloads if it becomes an issue, which it won’t). Replace the Google references inside the CSS files with my local server as well:

sed -i 's|http://fonts.gstatic.com/s/merriweather/v8/|http://littlesvr.ca/grumble/wp-content/webfonts/|g' merriweather.css
sed -i 's|http://fonts.gstatic.com/s/opensans/v11/|http://littlesvr.ca/grumble/wp-content/webfonts/|g' opensans.css

And now I will replace the two calls to “add_query_arg(…)” with:

"http://littlesvr.ca/grumble/wp-content/webfonts/opensans.css" and "http://littlesvr.ca/grumble/wp-content/webfonts/merriweather.css"

Checking what Firebug says.. done! Oh wait, why is there still one call to
http://fonts.googleapis.com/css?family=Open+Sans…? Diggidy dig dig dig.. hm.. I think it’s a bug in the theme, one of the add_theme_support() args said “open-sans” instead of “theme-open-sans-font”. How the hell did I figure that out? I don’t even know but I think I might be good at this programming business :)

Now all done, perfect!

by Andrew Smith at October 17, 2014 07:14 PM


James Laverty

A 'bug' story: Part II

Hey everyone,

The Webmaker bug was a success! The biggest problem I encountered was definitely setting up the dev environment. I ran into several errors on different computers; most of the time it was with the npm install to get gulp up and running. After I overcame those obstacles, I ran into a weird one (another bug perhaps?) where the localized version of my project, running under gulp dev, would not log me in. I tried it on Chrome, then moved on to Nightly, both to no avail. I eventually created a new login and BAM, success.

After that I talked to a few people and got some help on where to begin; xmatthewx on GitHub suggested that I use the how-to template to help put it together, and a few other sources put me on the right track. I learned a little bit about JSON and eventually got a template put together. If you want, you can take a look at my pull request! This is my initial request; I plan on getting a little feedback and updating my patch after.

This is a fun process and I get to have communication with professionals, how exciting!

IRC was also an interesting experience; for the most part it's pretty quiet, but it seems useful if you ask the right questions.

That's all for today,

Cheers,

James Laverty


by James L (noreply@blogger.com) at October 17, 2014 03:31 PM


Yoav Gurevich

0.2 Milestone Completed - My 1st Appmaker Pull Request is in!

First and foremost, the link to the PR - Fixed tray hover tooltip position offset #2317

As expected, this bug ended up being nothing short of an absolute blessing because the time it took to fix it was an iota of the amount of time it took to successfully set up my Appmaker development environment on my Windows 8 machine in order to be able to properly see and test my work.

In the beginning, besides a raised eyebrow at the odd npm install error log emitted after the download of the required dependencies partially failed, it was relatively smooth sailing until I needed to install and run MongoDB. The Appmaker documentation doesn't really let on much as far as the nitty gritty of my particular use case, save for a general link to the official website tutorial for installation and usage steps. Those proved to be little more than useless in the end. I GUI-installed MongoDB just fine, but as soon as I opened up a command prompt to try and run it as per the instructions in the Appmaker README, it would fail and often close the window altogether. When I managed to isolate the error it reported, it looked something like this:

********************************************************************
 ERROR: dbpath (\data\db\) does not exist.
 Create this directory or give existing directory in --dbpath.
 See http://dochub.mongodb.org/core/startingandstoppingmongo
********************************************************************

But much like Tweety Bird constantly reaffirming that it did, it did in fact see a pootie tat... I too clearly made a \data\db folder and later even set and assigned that location to the dbpath explicitly in the config file. After trying to make the most of my google-fu skills to find a step-by-step how-to on Windows related MongoDB installation and execution, I finally found the most relevant and comprehensive one titled Running MongoDB on Windows. If you follow this with a fine-tooth comb, you will be able to both start mongo on both the command prompt, and have it perpetually keep running as a local service via the service manager.
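For reference, a minimal Windows mongod configuration of the kind described above might look like the following sketch; the paths are examples only, not the ones any particular setup expects:

```
# mongod.cfg -- example only; point these at directories that exist
dbpath=C:\data\db
logpath=C:\data\log\mongod.log
logappend=true
```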

NOTE: For the love of all that is good and righteous in this world, the Service Controller (sc.exe) command you will try to run to create a local service out of MongoDB can ONLY be executed by the administrator account -- not your own custom user account, even if it has the "administrator" label/type. If you want to be extremely thorough, follow the steps mentioned in this answer article to the same problem for a different sc command.

--------------------------------------------------------------------------------------------------------------------------

Finally, the bug itself proved to be initially daunting, but with inspiration from a casual comment given to me in-class on Wednesday from a fellow student, once I managed to get the node app/server to run, the procedure I followed ended up being close to my original strategy with a few additions:

1) Open my Appmaker localhost browser instance.

2) Find the "tray hover tooltip" element as described in the bug. It wasn't called that, so after thinking about it, I actually just used my own vision to infer where its location on the DOM was by looking at the image posted on the issue site in Github and compared it with my Appmaker site. I know, it's horribly crass, but hey, it seems to have worked out this time...

3) I used the Chrome Developer Tools to try and inspect the element by hovering over it. This was where my biggest issue was: the tray hover tooltip only appears when you hover over the tray's inquiry icon. Because life was never meant to be this easy. After a bit of thought, I figured I might be able to pinpoint it in the source code by finding where the logic for its parent element was - a div with an ID named "showInfo".

4) In Atom, I initiated a global search for that particular string. I ended up finding it in a place I would never have guessed from past experience with CSS properties - an .html file. What? Fine, I'll indulge in new flavors. So I started parsing the rest of the file and ended up landing on this interesting-looking property nugget:

floatingMoreInfoPopup.style.top = pos + 54 - (height/2) - offset + "px";

I'll be honest. I don't have any background context on the specifics of this statement. It looks more like JavaScript than CSS, so my assumption is that it's some sort of new library or framework for CSS injection that I'm not yet aware of, or something much more comprehensive and cool altogether. What I did see is the name of the property itself, which by the looks of it seemed like just about exactly what I was looking for. So I did what any Curious Carl would do at this point...

5) Start playing around with values, and see what changes. Turned out to be pretty much as easy as that. I changed the pixel value after the "pos" variable by 14 pixels and voila!

The original positioning:


My fix's positioning:


And for now, that's all she wrote ladies and gentlemen. Stay tuned for next week update on FSOSS!

by Yoav Gurevich (noreply@blogger.com) at October 17, 2014 05:26 AM


Andrew Li

Release 0.2

For the second release, I got the opportunity to collaborate with the Appmaker community to help fix issue #2235.

The community made me feel welcomed, I was given lots of background information about the bug as well as tips on where to look to get started.

1. Understanding the bug

2. Approaching the bug

3. Pushing changes to origin

4. Pull Request

Understanding the bug

When a button is pressed, the background changes giving feedback that the button is now in a “pressdown” state. Once the press is released, a “pressup” event gets triggered and the button changes back to the original color.

But the ‘pressup’ event never gets triggered when pressing a button that links to another card tab. Thus the button never changes back to the original state.

Approaching the bug

I started by looking into the suggestions and the links given like the pressdown and pressup functions in the button component.

I read up on how bricks are built to try to understand how the project is structured.

Then I explored around to find out where things were and how things get triggered, using JavaScript alert functions combined with Firebug and Chrome DevTools.
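The stuck-button behaviour described above can be sketched with a toy state machine (invented names, not Appmaker's actual brick code): if navigation to another card swallows the "pressup" event, the button stays pressed until something resets it.

```javascript
// Toy model of a button's press state.
function makeButton() {
  var state = "up";
  return {
    pressdown: function () { state = "down"; },
    pressup:   function () { state = "up"; },
    reset:     function () { state = "up"; }, // e.g. called on card change
    state:     function () { return state; }
  };
}

var button = makeButton();
button.pressdown();          // user presses; card navigation happens now,
                             // so the matching pressup never fires
console.log(button.state()); // "down" -- visually stuck
button.reset();              // a reset hook on navigation fixes the state
console.log(button.state()); // "up"
```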

Pushing changes to origin

I followed this guide on submitting a pull request.

I also followed this guide on updating a forked repo to make sure my forked origin had the latest changes from upstream from Appmaker.

Pull Request

Pull request link.

October 17, 2014 12:00 AM

October 16, 2014


Jordan Theriault

Release 0.2

For release 0.2 I completed Issue #308 for Mobile Webmaker which is an implementation of checking to determine if a username is taken while signing up. From my previous blog post, I have changed the functionality to check as a user types.

Screen Shot 2014-10-16 at 7.31.50 PM

This issue was solved by using Vue’s functionality for listening to events inside an HTML element. Using the v-on directive to listen for a key-up, I am able to trigger a JavaScript function that uses the Webmaker Authentication Client to determine whether a username is taken, then I display the result to the user in a less robotic way.

The pull request can be found here.

I also worked on implementing a map brick, however that will involve much more time to complete and is therefore for another release. Currently, the map is displaying correctly but the architecture for how the map brick should be implemented needs further work.

by JordanTheriault at October 16, 2014 11:32 PM


Glaser Lo

Install the latest version of Node.js on Ubuntu 14.04

Since I am using Elementary OS (Freya, based on Ubuntu 14.04) on my laptop, the Node.js version in the repository is a bit old. When I tried setting up MakeDrive, I got something like this:

sh: 1: node_modules/.bin/bower: not found
npm WARN This failure might be due to the use of legacy binary "node"
npm WARN For further explanations, please read /usr/share/doc/nodejs/README.Debian

It’s kinda weird that it said “not found” while the file actually exists, but usually this is related to binary incompatibilities. In order to fix it, we need to add the NodeSource repository:

curl -sL https://deb.nodesource.com/setup | sudo bash -

Then install Node.js

sudo apt-get install -y nodejs

Enjoy!

Source: Official guide on GitHub


by gklo at October 16, 2014 12:49 PM


Andrew Smith

Announcing Everyone’s Timetable

After more than a year of work I finally got this app into a stable, usable state and published it.

Everyone’s Timetable is an Android app to help people in a school share their timetable. It’s particularly useful for finding a professor’s timetable though I’ve discovered it’s actually quite a handy way to look at your own timetable as well.

The timetable data can come from the professors themselves, but I’m not expecting every prof to use it, so one of the neatest features of the app is that the timetables can be crowd-sourced, meaning I can put in the timetables for people I hang out with (like John Selmys) and I don’t need to wait for them to do it.

phone-home

If you’re a Seneca or Sheridan student or prof – please give the app a shot! If you’re from another school – please contact me so I can add it to the list!

by Andrew Smith at October 16, 2014 11:58 AM

Fritzing for FSOSS: Designing a PCB in Linux

Next week I’m going to the Free Software and Open Source Symposium. It’s always worth going, and especially so this year; there are several great speakers for sure and many more with potential.

One of the things running during the symposium is a Robots competition. My humble contribution to this competition is the design of the PCB – a printed circuit board to hold in place the ultrasonic sensors, connectors for the bumper switches and motors, and the resistors needed to make sure the sensors don’t fry the Raspberry Pi.

A very simple circuit, but the last time I made my own circuits I had to design them using pencil and paper and make them by painting my circuit on with oil-based paint, so it would protect the copper I wanted to keep as the acid dissolved the rest of it.

At Seneca we have a machine that will very precisely cut out the PCB for us. But that machine needs instructions and those instructions are created by software. At first I thought I’d never get to give that a shot because they suggest proprietary software on Windows but then John Selmys found that you can use Fritzing, and it can export into the same Etched Gerber RS-274X which the lab needs. Woo!

There was definitely a learning curve, but given that this is my first encounter with this kind of software, I’m quite happy with the results. It took 6 tries but finally I got the design right. Here it is, a double-sided PCB design with a ground fill:

Fritzing-1

Fritzing-2

Next week we’ll get 5 of these printed and assemble them and then I’ll post some more photos.

by Andrew Smith at October 16, 2014 11:47 AM


Linpei Fan

OSD600: Release 0.2 – Webmaker – issue#319

In project release 0.2, I would like to take issue #319 in Webmaker. I already left a message on GitHub to ask and am still waiting for the response. This issue is about the UI: making the line align with the text on the homepage. It is a small bug and easy to fix. However, it will be a good start for me to go through the whole project and get an idea of how it works. Once I get confirmation, I will send the pull request to make my first contribution to a real open source project.

I had been hesitating to take a bug for a while. Inspired by David's article "Minimum Viable Product" on Oct. 8, I decided to start with small bugs. Taking a small bug makes me feel more confident about open source development. And I am willing to take more bugs.

Also, there is another small bug (#315) I am interested in. After I am done with this one, I will most likely work on that if it’s still available.

by Lily Fan (noreply@blogger.com) at October 16, 2014 02:07 AM

October 15, 2014


Frank Panico

Lesson Learned… [ -__- ]

Never will I put my personal email as a git repo email again… I'm at about 80 left to delete, out of the 200 I woke up to this morning…


by fpanico04 at October 15, 2014 10:49 PM


Yoav Gurevich

0.2 Milestone Progress Report

A few weeks into the workload at hand, after finding a bug and making a few initial attempts to set up the development environment, a problem that seems to keep repeating itself is the constraint of work from my other full-time courses, combined with a cursed history of less-than-stellar time management when dealing with more than one substantial task at a time. Unfortunately, with this particular upcoming Friday being riddled with more than just this deadline, I will be scrambling to procure a pull request in the next day or two. 

At the very least, I'm on slightly familiar ground with CSS, and armed with a broad but tested strategy of element/CSS property inspection using web browser development tools to pinpoint where the snippet of code I'm looking for is located.

This epic cliffhanger will conclude with a blog post at the end of the week.

by Yoav Gurevich (noreply@blogger.com) at October 15, 2014 09:07 PM


Jordan Theriault

Fixing Issue #308 on Mobile Webmaker

Another bug I am working on is issue #308 for Mobile Webmaker. This issue is to add functionality to the sign-up form so users can see, before they submit, whether a username is taken. In order to detect changed content, I’ve used the v-on="changed: " directive within the HTML element. Once focus leaves the element and a change has been made to the content of the input, a method is triggered that checks the authorization server via POST.

Alternatively, this functionality could easily be changed to check on each key press, but that may put a larger load than necessary on the device and the authentication server, which would have to process the many POST requests.

Ultimately, the server responds with whether or not the name is taken. How this response will be displayed is still to be determined.
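The flow described above can be sketched in plain JavaScript. This is only an illustration, not the actual Mobile Webmaker code: the postJSON helper, the /check-username endpoint, and the response shape are all made up.

```javascript
// Illustration only: react to a change event by asking the server
// whether the candidate username already exists. postJSON stands in
// for whatever transport the app uses to POST to the auth server.
function makeUsernameChecker(postJSON) {
  var lastChecked = null;
  return function onChange(username, done) {
    if (username === lastChecked) return; // focus left the input, but nothing changed
    lastChecked = username;
    postJSON("/check-username", { username: username }, function (res) {
      done(res.exists ? "taken" : "available");
    });
  };
}

// Example with a stubbed transport that says only "admin" is taken:
var check = makeUsernameChecker(function (url, body, cb) {
  cb({ exists: body.username === "admin" });
});
check("admin", function (r) { console.log(r); }); // "taken"
check("bob", function (r) { console.log(r); });   // "available"
```

Guarding on lastChecked is what keeps the blur-triggered approach cheaper than checking on every key press.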

To see the code, you can view the branch here and learn about the webmaker authentication client here. Additional information on Vue directives, which drive Mobile Webmaker’s visuals, can be read here.

by JordanTheriault at October 15, 2014 06:01 PM


Ryan Dang

ocLazyLoad dynamically loading Angular module

So recently, a project I was working on required me to change the way all the existing Angular modules are loaded. Instead of loading all Angular modules at once when the user visits the page, we want to load only the modules required by the page the user is currently on. The reason is that loading only specific modules reduces the initial loading time. We had over 20 Angular modules for our web page, and they were all loaded whenever the user visited a page. The goal is to load one module at a time.

After doing some research on the web, I found ocLazyLoad is what I need to get the job done. It is very simple to use. You can install it by running bower install ocLazyLoad or npm install ocLazyLoad. Once you have it installed, you can load any module with

$ocLazyLoad.load({
    name: 'TestModule',
    files: ['testModule.js', 'testModuleCtrl.js', 'testModuleService.js']
});

You can learn more about ocLazyLoad at https://github.com/ocombe/ocLazyLoad. Everything is documented there, and they also have a few examples to help you get started.


by byebyebyezzz at October 15, 2014 02:40 PM


Andrew Smith

Development/production setup for work on live Android app with a server backend

One of the interesting challenges working on Everyone’s Timetable is that it’s a live application with a server backend. That means that any one of the following can cause a very serious problem:

  • A change to the Android app that’s not compatible with the PHP server code
  • A change to the PHP server code that’s not compatible with the Android app
  • A change to the PHP server code that’s not compatible with the MySQL database schema
  • A change to the MySQL database schema that’s not compatible with the PHP server code
  • A loss of real user data in the MySQL database.

It’s hard enough to promote such a limited-reach mobile app. If on top of that the app stopped working all of a sudden, or the users found their accounts deleted or their data missing – they would probably leave complaining and never give the app another shot.

But I need to do development, which means I need to touch all of the points listed above – the app, the server code, the database schema, and the data in the database. How do I do it without risking a catastrophic bug?

This may be something that web people deal with all the time; I imagine it’s quite a common problem. But it’s the first time I’ve run into it, so I will write up my experience. Since I am a beginner with online services, perhaps my experience will serve other beginners as well.

1. Version Control

First of all you need version control. I chose to use Git for this project mainly because it’s time for me to learn it. For many reasons it’s a dumb system for small developers who aren’t already Git experts, but its popularity is undeniable, so I might as well get used to it. Whatever I’ve done with it you should be able to do in SVN just fine.

1.1 Release Tags

I have one repository for my Android code, and one repository for my server PHP code. At first I did all the development in master. Then I got to the first release, 1.0. For this I created a tag, since a branch seemed unnecessary.

1.2 Branches

Now with the release properly versioned I had to set up the development branches. I chose to create a “devel” branch in both repositories.

You will find a thousand pieces of advice online about how to do branching “properly”. I am following none of them. You can pick and choose whatever advice you want if you’re so inclined; in this guide I’ll only explain how to make the simplest development setup.

The idea is this: after the first release:

  • All development work will be done on the devel branches, unless some major emergency happens in master and then I’ll deal with it as a one-off.
  • The code in the client devel branch will only talk to the code in the server devel branch.
  • The code in the server devel branch will not touch the production database.
  • When it’s time for a new release (and not earlier) a merge will be done from devel to master.

Read on to see how I actually managed to accomplish that.

2. Development database

Bad code can cause not only cosmetic problems (a crash, a failed request) but more serious data corruption problems. The database could get corrupted because of bad client requests, or a mismatch between the client and server, or bad server code. All of that is fair game during development and none of it is an acceptable risk for production.

So just as we need separate branches for the code – we need a separate database for the data. Not a separate server or anything, just a separate database on the same server.

I didn’t have a script to create the original database so I had to relearn how I created it, and do the same for the new one. It wasn’t very hard, just a CREATE DATABASE and a couple of GRANTs. Then to populate the devel database (with the schema and data) I did something like:

mysqldump -u root -p et > et.dump
mysql -u et -pMyPassword etdevel < et.dump

Notice that I am using the same user (et), I don’t see a problem with that.

If in the future I decided I need more test data or more current data – I could simply rerun those two commands.

3. Server: two copies & post-checkout hook

The best idea I could come up with for the code on the server was to have two copies of the server repository in two directories: the release code from the master branch checked out in the et directory, and the devel code from the devel branch checked out in the et-devel directory.

That way I could have both exist at the same time and not step on each other’s toes, except that they would both access the same database. To make sure each branch uses the correct database, I set up a git hook that generates a PHP file with a variable definition in it. My .git/hooks/post-checkout looks like:

#!/bin/bash

BRANCH=`git rev-parse --abbrev-ref HEAD`

echo -n '<?php $branch = "' > branch.php
echo -n $BRANCH >> branch.php
echo '"; ?>' >> branch.php

Which generates a very handy branch.php with just this in it:

<?php $branch = "master"; ?>

And with that variable defined I can now make sure that when I connect to the database I connect to the correct one, for example:

if ($branch === "master")
  $db = @mysqli_connect("localhost:3306", "user", "pass", "et");
else
  $db = @mysqli_connect("localhost:3306", "user", "pass", "etdevel");

4. Client: pre-build hook

I did not want to have multiple copies of the client repository on my laptop. I wanted an easy way to switch between devel and master, and an easy way to merge devel into master. Creating a branch was the easy part – the hard part was making sure that the code in the devel branch would only use the server devel branch and the code in the master branch would only use the server master branch.

It so happened that I was already only referencing the URL of the web service in a single place in my client (java) code. A single string that looked like this:

public static final String wsURL = "https://littlesvr.ca/et/et.php";

If I had references to my server production code (et/) in multiple places, that would make the process slightly more complicated, but not by much.

I replaced that one line with this:

public static final String wsURL = MainActivity.context.getString(R.string.et_php_url);

I won’t bother explaining the static MainActivity.context, you can find your own way to deal with that java shit. The interesting part is R.string.et_php_url. Where does it come from? It comes from the XML file res/values/auto.xml. Where does that come from? It is automatically generated by my pre-build script, something like this:

$ cat pre-build.sh
#!/bin/bash

BRANCH=`git rev-parse --abbrev-ref HEAD`
AUTOXML=res/values/auto.xml

echo '<?xml version="1.0" encoding="utf-8"?>' > $AUTOXML
echo '<resources>' >> $AUTOXML
echo -n '  <string name="et_php_url">' >> $AUTOXML
if [ $BRANCH = 'master' ]
then
  echo -n 'https://littlesvr.ca/et/et.php' >> $AUTOXML
else
  echo -n 'https://littlesvr.ca/et-devel/et.php' >> $AUTOXML
fi
echo '</string>' >> $AUTOXML
echo '</resources>' >> $AUTOXML

Note that this auto.xml is not versioned, in fact it’s in the .gitignore. The whole point was to have the server string switch seamlessly when I checkout master/devel, without merging anything.

To make sure the pre-build.sh script gets called before a build I did this in Eclipse: right-clicked the project -> Properties -> Builders -> New -> Program. Filled that in and moved the new builder all the way to the top (above Android Resource Manager).

Could I have called the script from a post-checkout hook? Yeah, probably, but I did this before I learned about that hook, so I might as well show you two ways to do it :)

There’s more I could write on the topic but this post is already way too long, so there you go, I hope it was helpful.

by Andrew Smith at October 15, 2014 04:30 AM


Andrew Li

Open Source Case Study on Polymer

I posted an introduction to Polymer a while back; here is the follow-up post with more info on how it is licensed, the community, and where to go to get involved.

1. What is Polymer

2. License, Code and Community

3. How to get involved?

4. Download Presentation Slides

What is Polymer

Polymer is a library that utilizes Web Components. Everything in Polymer is an element, so HTML, CSS and JavaScript can be bundled together to create an application. Once bundled, you can use it by declaring it just like any regular HTML tag.

Currently, Polymer implements a set of polyfills to make current browsers compatible. Eventually the polyfills will be eliminated as browsers get native support.

License, Code and Community

The code is licensed under the BSD license. You can modify or distribute the code as long as the copyright information is included, the disclaimer message is provided, and the names of its contributors are not used to endorse or promote derived products.

You can get the code here and find documentation here. The Polymer developers hang out on IRC in the #polymer freenode channel. To get in the loop join the Google Groups mail list. For just the highlights you can follow them on Google+ or Twitter.

How to get involved?

Explore the code, read the contributor’s guide, test and file bugs, create elements and share them with the community.

Download Slides

dps909-polymer-presentation.pdf

October 15, 2014 12:00 AM

October 14, 2014


Ali Al Dallal

How to enable new Firefox Preferences page now

In the current Firefox Stable, Beta or Aurora, when you open the Firefox Preferences page (CMD+, or Firefox Menu → Preferences) you will see a popup window.

In Firefox Nightly, by default, accessing your Preferences page shows a new dedicated page (Preferences in content).

If you want this new page in any other version of Firefox now, you can easily enable it: go to about:config, search for browser.preferences.inContent, and change the value from false to true.

You will now have the new Firefox Preferences page :)

by Ali Al Dallal at October 14, 2014 01:12 PM


Shuming Lin

Webmaker

During the weekend, I tried to find an open source project bug to work on for the release. I am pretty interested in the three projects below.

Webmaker App

Mozilla Webmaker is all about building a new generation of digital creators and webmakers, giving people the tools and skills they need to move from using the web to actively making the web. To this end, the Webmaker App is an entry point to the Webmaker community that provides a radically simple interface for creating mobile application directly on device.

Mozilla Appmaker

Appmaker is a free Webmaker tool for creating mobile apps without learning to code. Using building blocks called Appmaker Bricks, users can create and share mobile apps quickly right in their browser.

Appmaker is the first experiment in a series of tools, platforms, programs, and studies designed to provide a mobile experience which encourages free, decentralized, functional user content creation.

Brackets

Brackets is an open source code editor for web designers and front-end developers. It is a pretty cool editor for web development because of Live HTML development, which means that as you code, HTML changes are instantly pushed to the browser without having to save. I have posted a blog about Brackets.

It’s hard to choose one of them, but I finally picked Webmaker. In Week 5, Ms. Kate Hudson talked about the Webmaker Mobile project in class, so I know more about this project, and I am starting with it. I may try the others in the future.


by Kevin at October 14, 2014 01:28 AM


Gideon Thomas

UI is not my thing!

Hello everyone,

Sorry for being away for a week. Have had a lot going on and just couldn’t find the time to voice my thoughts.

So we had to find a bug to work on for release 0.2, which is due in less than a week. And as much as I would have loved to, I could not work on MakeDrive for this. So I found it hard to approach the search for an appropriate bug to work on. I looked at the different options available to me: Mobile Webmaker, Cordova/Firefox OS, Appmaker and Brackets.

Brackets was something that I decided to possibly work on for release 0.3, by finding a bug that was not UI related. As far as Cordova is concerned, a lot of my peers have told me to stay away from it due to its sheer complexity. I was thus left with two options: Appmaker and Mobile Webmaker. From a GitHub standpoint, I wanted to get a ‘Repositories Contributed To’ entry for one of these, to show my versatility for web development. However, based on previous experience with Appmaker, I decided to search in Mobile Webmaker instead, as I knew that otherwise I would be stuck dealing with something that primarily dealt with UI.

So I began searching through bugs in the Mobile Webmaker repository. I was disappointed. Contrary to my expectations, the bugs were primarily UI-related bugs/features. I tried several bug filtering techniques to find a bug that was appropriate for me. Searching by keywords such as ‘feature’ or ‘work’ and searching by labels did not help. So I decided to filter by their milestones. As I searched through the milestones in ascending date order, I finally found a bug in their Mozfest milestone that seemed to be sort of back-end related.

This issue pertained to testing their localization files, which were in JSON, for adherence to a JSON schema. Luckily, I know a decent amount about JSON schemas, and since there was enough information in the bug, I decided to take it on. I did so by asking (in the issue itself) two of the lead developers for permission to tackle the bug, which was promptly granted.
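As a rough sketch of what such a check does (this is not the project's actual test code; the reference locale and its keys are invented, and a real JSON schema validator would be used in practice), a validator can verify that each locale object carries the same string-valued keys as a reference locale:

```javascript
// Sketch only: check that a locale object mirrors the reference
// locale's keys and that every value is a string. The reference
// data below is made up for illustration.
var reference = { "app-title": "Webmaker", "sign-up": "Sign up" };

function validateLocale(locale) {
  var errors = [];
  Object.keys(reference).forEach(function (key) {
    if (!(key in locale)) {
      errors.push("missing key: " + key);
    } else if (typeof locale[key] !== "string") {
      errors.push("non-string value for: " + key);
    }
  });
  Object.keys(locale).forEach(function (key) {
    if (!(key in reference)) {
      errors.push("unknown key: " + key);
    }
  });
  return errors;
}

console.log(validateLocale({ "app-title": "Webmaker", "sign-up": "Inscription" })); // []
console.log(validateLocale({ "app-title": 5 }));
```

A JSON schema expresses the same constraints declaratively (required keys, value types), so the test suite only has to loop over locale files and run the validator.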

Hence, after some hunting, I was able to find a good bug for me in the UI haystack. Now to begin working on it…


by Gideon Thomas at October 14, 2014 01:26 AM

October 13, 2014


Ava Dacayo

Mozilla Webmaker App

I’ve decided to work on Mozilla’s Webmaker app for Release 0.2! What is it? It’s a web app which users can use on their Firefox OS phones, and later Android, to build stuff like stores, blogs, etc. Firefox OS smartphones aren’t available here in Canada (correct me if I’m wrong), but here’s a picture I took when one was being passed around during class:

firefox os

I started by looking for bugs in their GitHub repo and found one about building one of the templates. Unfortunately, I didn’t get to work on that one because the milestone wasn’t done yet, and I think I might have gotten more confused if I had continued.

So I went on and looked for another bug and found Issue #294 – Localize sign-up page. Currently the sign-up page is not localized (i.e., the text cannot be viewed in a different language).

I’ll be posting more about it next time when I finish working on it!


by eyvadac at October 13, 2014 11:50 PM


Brendan Donald Henderson

Exploring packages to determine aarch64 compatibility

This post is based on my research into linux packages to determine:

  • Their existence in the Fedora, Debian, and Ubuntu Linux Distributions
  • Their existence on aarch64 Fedora
  • If the packages contain any platform specific code and if so what is the code’s purpose?
    • If platform specific assembly does exist, is there assembly to support aarch64?
    • Can this assembly be replaced by C code and can performance be optimized as a bonus?

I do want to note that this post does not completely cover analysis of the platform-specific code or the optimization opportunities; however, I do point out things that I believe would prove, on further inspection, to be great performance optimization opportunities. For a few of these packages, that further inspection will be in the next post!

List of packages that will be discussed:

  • pcre3
  • unrar-free
  • vflib3
  • php-apc
  • mysql-5.5
  • fwts
  • llvm-3.1
  • smlsharp
  • mpfr4
  • gccxml
  • puf
  • insserv

Investigation Notes for each package:

pcre3:

  1. Available on: Fedora as pcre, Ubuntu as libpcre3, Debian as libpcre3
  2. Not Available on: N/A
  3. Purpose: Perl-compatible regex library that also includes a posix api front-end.
  4. Priority: Low as seems to already exist on aarch64 as pcre.aarch64 package.
  5. Opportunities: the embedded asm is used for JIT code for performance, interesting JIT opportunity!

unrar-free:

  1. Available on: Debian and Ubuntu(same package name)
  2. Not Available on: Fedora, aarch64 Fedora
  3. Purpose: Compression/decompression tool.
  4. Priority: Low priority as there are other software packages with the same purpose.
  5. Opportunities: embedded x86 asm is used for decompression performance, very likely that modern gcc will beat this and so could be very great performance optimization opportunity!

vflib3:

  1. Available on: Ubuntu and Debian(same package name)
  2. Not Available on: Fedora, aarch64 Fedora
  3. Purpose: Font-rasterizer library.
  4. Priority: Low priority as not crucial package.
  5. Opportunities: embedded x86 asm is used for performance, very likely that modern gcc will beat this and so could be very great performance optimization opportunity!

php-apc:

  1. Available on: Fedora as php-pecl-apc, Ubuntu and Debian as php-apc. aarch64 Fedora as php-pecl-apcu.aarch64
  2. Not Available on: N/A
  3. Purpose: Alternative PHP Cache module for php5
  4. Priority: High priority.
  5. Opportunities: asm for various platforms for atomic operations, seems like great opportunity to let modern gcc do a better, more portable job?

mysql-5.5:

  1. Available on: Fedora as mysql, Ubuntu and Debian as mysql-server-5.5. Might exist on aarch64 Fedora as community-mysql.aarch64
  2. Not Available on: Not sure if the aarch64 package above is the same as the ones mentioned for x86 distributions.
  3. Purpose: mysql database, version 5.5.
  4. Priority: Highish priority.
  5. Opportunities: x86 asm for performance with crypto and checksum as part of embedded yassl code. Very interesting performance/security optimization opportunity!

fwts:

  1. Available on: Couldn’t find it for any of the x86 distros or aarch64 Fedora
  2. Not Available on: info above.
  3. Purpose: Firmware Test Suite
  4. Priority: Marked as “already being worked on” on the code module page on the Linaro site. x86 asm is trivial, possibly not important?
  5. Opportunities: N/A

llvm-3.1:

  1. Available on: Fedora as llvm, Ubuntu has newer llvm-3.3, Debian as llvm-3.1. aarch64 Fedora as llvm.aarch64(not sure if same package)
  2. Not Available on: N/A
  3. Purpose: Low Level Virtual Machine.
  4. Priority: Low Priority.
  5. Opportunities: N/A

smlsharp:

  1. Available on: Ubuntu
  2. Not Available on: Fedora, aarch64 Fedora, Debian.
  3. Purpose: Standard ML Compiler.
  4. Priority: Low priority.
  5. Opportunities: Contains asm source code file, may be old procedures? Embedded asm for atomics and checksum performance(both have C fallback) but great opportunity to find out if gcc can bring performance optimization!

mpfr4:

  1. Available on: Fedora as mpfr, Ubuntu and Debian as libmpfr4. aarch64 Fedora as mpfr.aarch64
  2. Not Available on: N/A
  3. Purpose: Multiple precision floating point math library.
  4. Priority: Medium Priority as it is a toolchain dependency.
  5. Opportunities: arm32 asm done but not aarch64. Is aarch64 still new enough that gcc doesn’t provide competitive floating-point optimization options?

gccxml:

  1. Available on: Fedora, Debian, and Ubuntu as gccxml.
  2. Not Available on: aarch64 Fedora
  3. Purpose: XML description generator for C++ programs.
  4. Priority: Low Priority, not widely used.
  5. Opportunities: N/A

puf:

  1. Available on: Ubuntu and Debian
  2. Not Available on: Fedora, aarch64 Fedora
  3. Purpose: Parallel url fetcher.
  4. Priority: Low priority for porting.
  5. Opportunities: trivial x86 asm for bitops performance with C fallback. Great opportunity to see if gcc can optimize performance.

insserv:

  1. Available on: Ubuntu and Debian
  2. Not Available on: Fedora, aarch64 Fedora
  3. Purpose: Boot sequence organizer
  4. Priority: Low Priority.
  5. Opportunities: No asm, preprocessor definition needs to be changed.

 

These are the basic notes from my research of the packages.

I have not yet dived into the packages and started searching for the assembly myself, but that will be the topic of the next blog post!

I will also detail porting and optimization related info much more in the next post.


by paraCr4ck at October 13, 2014 06:21 PM


Frank Panico

Ramping up on Webmaker-app

So after adventuring through David’s choices for a project to work on for our next release, I’ve chosen Webmaker-app.

I found this appealing mostly due to the fact that I’m also in York University’s Tech Ed program, meaning that I’m interested in finding ways to use the web and other tools to inspire and engage youth in technology, and to show that its use goes beyond taking sweet selfies.

I’m also sure that, since the main community is all about furthering this exact cause, they’ll be empathetic to me being a n00b trying to learn new things myself, so I shouldn’t feel too overwhelmed to help out or to ask for help.

I’ve signed up for bug #305 (https://github.com/mozillafordevelopment/webmaker-app/issues/305) which is to display to the user that something has gone wrong with their sign up process because they haven’t “accepted” the terms and conditions.

I’m excited to start foraging through code and get to resolving this.


by fpanico04 at October 13, 2014 06:48 AM

October 10, 2014


Jordan Theriault

Leaflet.js Woes

In continuation of my last post, I’ve been developing the Map Brick for Mobile Webmaker. In order to allow many different types of maps to be used, I have begun integrating Mobile Webmaker with Leaflet.js, a JavaScript library that allows for easy, interactive, mobile-friendly maps. There is a Node Package Manager installation, which I have used, but there exists little documentation on its usage.

Integrating the existing Mobile Webmaker with Leaflet has been challenging. The map failing to display properly is clearly due to a lack of proper implementation of the Leaflet CSS and JS, so I’m working on finding out how to implement it properly.

Currently I am using OpenStreetMaps and getting a very fun result (Firefox Nightly Build).

Screen Shot 2014-10-10 at 12.00.04 PM

I will update with progress once the maps are properly displaying to show you the amazing maps that Leaflet.js in combination with a map API produces. You can follow my progress on implementing Leaflet here.

by JordanTheriault at October 10, 2014 04:14 PM


David Humphrey

Minimum Viable Product

This week in class I was discussing the value of thinking in terms of a Minimum Viable Product (MVP), and how open source tends to favour the approach, because it allows one to ship, test things with real users, get feedback quickly, find and fix bugs, and repeat. Structuring your project, and the scope of your bugs, such that you can ship quickly really matters. I mentioned a fantastic image I'd seen on Twitter (source):

MVP

What you start with often feels completely lacking, almost embarrassing; and yet you have to do it, you have to make a start.

Tonight I'm reading about the RC for Redis Cluster, and I see this amazing paragraph toward the end:

Finally we have a minimum viable product to ship, which is stable enough for users to seriously start testing and in certain cases adopt it already. The more adoption, the more we improve it. I know this from Redis and Sentinel: now there is the incremental process that moves a software forward from usable to mature. Listening to users, fixing bugs, covering more code in tests, …

Everything I was trying to say is there. This is how you do it.

by David Humphrey at October 10, 2014 02:58 AM

October 09, 2014


Tai Nguyen

Class Notes – Git 101: Recursive Merge vs. Fast-Forward Merge

In Git, when you want to join two different branches together, you can use two different merge strategies: fast-forward and recursive.

The fast-forward merge essentially moves the current branch pointer forward to the latest commit (for example, after a git fetch when you have made no local commits); it is applied when the histories of the two branches have not diverged. If the histories of the two branches have diverged, then a recursive merge is necessary. A recursive merge creates a new commit and does a three-way association involving three other commits: the newly created merge commit is linked to two parents, the latest commit from each branch, and the third commit is the common ancestor where the branches departed from each other.
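The distinction can be sketched as a toy model of the commit graph (this is a conceptual illustration only, not git's actual implementation):

```javascript
// Conceptual sketch (not real git): commits form a DAG via parent
// links. A fast-forward is possible when the current branch tip is an
// ancestor of the other tip; otherwise a merge commit with two
// parents is created (the "recursive" case).
function isAncestor(older, newer) {
  if (newer === older) return true;
  return newer.parents.some(function (p) { return isAncestor(older, p); });
}

function merge(current, other) {
  if (isAncestor(current, other)) {
    // Fast-forward: just move the branch pointer.
    return other;
  }
  // Recursive: new commit whose two parents are the branch tips.
  return { id: "merge", parents: [current, other] };
}

// Example history: root -> a -> b (one branch), root -> c (another)
var root = { id: "root", parents: [] };
var a = { id: "a", parents: [root] };
var b = { id: "b", parents: [a] };
var c = { id: "c", parents: [root] };

console.log(merge(a, b).id); // "b"     (fast-forward: a is an ancestor of b)
console.log(merge(b, c).id); // "merge" (recursive: histories diverged at root)
```

In the second case, root plays the role of the common ancestor that the real recursive strategy uses as the merge base.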


by droxxes at October 09, 2014 03:33 AM


Gary Deng

Setup Appmaker Development Environment

After playing around with Mozilla Appmaker, I chose to work on issue #2253. To be honest, I had a hard time setting up the development environment. Thanks to Ali for pointing me in the right direction; now I am able to run Appmaker locally.

Webmaker-suite is very helpful for new developers, installing all the Webmaker components to get started. It is a terminal-menu based package manager and task runner for the Webmaker suite of tools. However, I was blocked by the Elasticsearch dependency. I tried different installation methods, and Elasticsearch was working properly, but webmaker-suite just didn’t pick that up. It kept telling me that the dependency was not installed, which almost drove me crazy! This morning, I went to the IRC #appmaker channel and posted my question. What a coincidence: another developer had the same problem. He or she told me to ignore the issue and run Elasticsearch manually.

But I still couldn’t run the command “node run”, which gave me an error message about Elasticsearch. Then I went to CDOT to seek some help from Ali. Finally, Ali helped me solve the problem, which was that my MongoDB was already running automatically before I did “node run”. I should kill that process before running “node run”. If anyone wants to use the webmaker-suite package manager to set up a dev environment for Appmaker, you only need “Login”, “MakeAPI”, “Make valet”, “Appmaker”, and “MongoDB”. Googling around for relevant instructions is not enough; asking questions in the project community is the most direct way to get help quickly.

 


by garybbb at October 09, 2014 02:14 AM

October 08, 2014


Kieran Sedgwick

[OSD600] Staying in touch with a project

I’m making a concerted effort to jump back into open-source development now, inspired by both my OSD600 class and my enjoyment of contributing, and I was struck by a thought I’d had before but had been unable to vocalize.

Open source development of popular projects moves quickly, and within just a few weeks projects I was intimately familiar with have, or are about to, change dramatically. MakeDrive is a great example. I was part of the core development team on this project this past summer, and within days David Humphrey will be landing an enormous patch that completely overhauls the guts of the project. This is normal, and excellent!

It’s also a good reminder that the only way to stay knowledgeable about any codebase, but open-source ones in particular, is to keep working on them! Learning a codebase is like carrying a bowling ball up a flight of stairs, and stepping away from it for even a pretty brief period is like dropping that bowling ball and watching it roll down to the bottom of the stairs.

On the one hand, this is why being a developer is so engaging! There’s always an opportunity to learn, to explore, to grow. On the other hand, it means that we mostly start from scratch when approaching a new project, or an old one we’ve been uninvolved with for a while. Experience translates into an easier time taking off with a project, but complacency creates work in the form of refamiliarizing oneself with it.

I happen to be enjoying this process, and have more thoughts on digging into an unfamiliar project, but those will likely appear in a follow up post.


by ksedgwick at October 08, 2014 04:15 PM


Adam Nicholas Sharpe

Basic Loops and Conditionals on x86-64

Last week in SPO600, we were given a lab that had us writing loops and conditionals in Assembly. I missed that class, so I had a heck of a time figuring out how to do this on my own haha...

Anyhow, the gist of the lab was: using only Assembly and Linux system calls (is this the same as POSIX?), write a loop that displays all the numbers from 0 through 30. Then, try to do it while suppressing the leading zero. As an optional challenge, we could display the 12 x 12 multiplication table.

The solutions I came up with were ugly, not robust, and easily breakable, but they got the job done. I definitely did not write the assembly in the proper way, but I wanted to play around with the memory stack and solve the problems in a kind of funny way...

My solution to the first problem is as follows:

.text
.global main

main:
push %rbp
movq %rsp, %rbp
subq $0x2, %rsp
movq $30, %r15
movq $0x0, %r14

condition:
cmp %r15, %r14
jg loop_done

movq $0x1, %rax
movq $0x1, %rdi
movq $loop_text, %rsi
movq $loop_text_len, %rdx
syscall

movq %r14, %rax
movq $10, %r13
xor %rdx, %rdx
divq %r13
addq $0x30, %rax
addq $0x30, %rdx
mov %al, -0x1(%rbp) /* quotient */
mov %dl, -0x2(%rbp) /* remainder */

movq $1, %rax
movq $1, %rdi
leaq -0x1(%rbp), %rsi
movq $char_len, %rdx
syscall

movq $1, %rax
movq $1, %rdi
leaq -0x2(%rbp), %rsi
movq $char_len, %rdx
syscall

movq $1, %rax
movq $1, %rdi
movq $new_line, %rsi
movq $char_len, %rdx
syscall

inc %r14
jmp condition

loop_done:
movq $0, %rdi
movq $60, %rax
syscall

.data
loop_text: .ascii "Loop: "
.set loop_text_len, . - loop_text
new_line: .ascii "\n"
.set char_len, . - new_line

I probably didn't need to use the memory stack, but I wanted to play around with it. I put two bytes on the stack: one to store the quotient, one to store the remainder. Basically, the idea is to divide by 10, convert the quotient to a character (by adding 48), display it, and do the same for the remainder. There are two things wrong with this solution:

1. It is limited to displaying numbers in decimal notation that have at most two digits.
2. It displays a leading zero.

A better solution would avoid these problems. Here is what I tried: this version can display numbers of up to eight digits and suppresses leading zeroes. However! An even better solution would keep dividing until the quotient is zero, so as to avoid the hard-coded 8-digit limit. Also, the way I suppressed leading zeroes also suppressed the only digit of the number 0, leaving nothing but spaces. The way I 'fixed' this was a simple conditional before entering the loop; quite frankly, though, this solution sucks...

Anyhow, here it is:

.text
.global main

main:
push %rbp
movq %rsp, %rbp
subq $0x8, %rsp
movq $30, %r15
movq $0, %r14

/* Test if my index variable is less than or equal to 30 */
condition:
cmp %r15, %r14
jg loop_done

/* Print the string "Loop: " */
movq $1, %rax
movq $1, %rdi
movq $loop_text, %rsi
movq $loop_text_len, %rdx
syscall

movq $1, %r12
movq %r14, %rax

/* Hacky solution to display the number if it's zero. My int_to_chars below
* suppresses leading zeroes, even if the value is zero haha... There are better
* ways to do this, (ie this solution sucks) but I'm getting fairly tired, and
* am running out of freakin' callee preserved registers... */

cmp $0, %rax /* If RAX is zero... */
jne int_to_chars
movb $48, 0x7(%rsp) /* Move a zero character to last character position
* to be printed, which in my int_to_chars section,
* is 7 bytes above the stack pointer... */

addq $1, %r12 /* And add one to our counter used in the int_to_char (because the
* first and only digit ('0') is already on the stack... */

/* Section to convert a number in decimal notation, with up to 8 digits to a
* sequence of characters in reverse order, replacing leading zeroes with a
* space character, and on store them in the memory stack */
int_to_chars:

cmp $9, %r12
je display_integer

cmp $0, %rax
je put_space

movq $10, %r13
xor %rdx, %rdx
divq %r13
addq $48, %rdx /* Converts remainder digit to its character */
movq %r12, %r10
movq %rbp, %r11
subq %r10, %r11 /* Where to put the remainder digit in memory */
movb %dl, (%r11) /* Move character to address in previous calculation */

addq $1, %r12
jmp int_to_chars

put_space:
movq %r12, %r10
movq %rbp, %r11
subq %r10, %r11 /* Where to put the remainder digit in memory */
movb $32, (%r11) /* Move character to address in previous calculation */
addq $1, %r12
jmp int_to_chars

display_integer:
movq $1, %rax
movq $1, %rdi
leaq -0x8(%rbp), %rsi
movq $8, %rdx
syscall

/* Print the new-line character... is there a better way than actually storing
* the literal newline in the data part of the code? Probably... */
movq $1, %rax
movq $1, %rdi
movq $newline, %rsi
movq $char_len, %rdx
syscall

addq $1, %r14
jmp condition

/* Exit with status 0 */
loop_done:
movq $0, %rdi
movq $60, %rax
syscall

.data
loop_text: .ascii "Loop: "
.set loop_text_len, . - loop_text
newline: .ascii "\n"
.set char_len, . - newline

The last challenge we were given was to display the 12 by 12 multiplication table. Again, the two factors being at most two digits and the products at most three digits is a hard-coded limit... My idea was to lay a formatted string out on the memory stack and patch in the number parts computed in the double loop. A more robust solution would dynamically construct the entire string, including the hard-coded characters, each time the loop iterates, keeping track of its size. Also, my solution does not suppress leading zeroes (if I were forced to, I would do it in a similar way to the previous problems):


.text
.global main

main:

push %rbp
movq %rsp, %rbp
subq $0x10, %rsp

/* The idea here is to layout on the memory stack, a string of the form:
*
* "__ * __ = ___\n"
*
* , where the blanks are values to be calculated below, for each iteration
* of the double loop. Start by setting up the constants in the string: */
movb $32, -0xe(%rbp) /* ASCII value for space */
movb $42, -0xd(%rbp) /* ASCII value for '*' character */
movb $32, -0xc(%rbp)
movb $32, -0x9(%rbp)
movb $61, -0x8(%rbp) /* ASCII value for '=' character */
movb $32, -0x7(%rbp)
movb $10, -0x3(%rbp) /* ASCII value for newline */

movq $0, %r15 /* Outer loop index */
movq $0, %r14 /* Inner loop index */

outer_loop:
cmp $12, %r15
jg end

movq $0, %r14

inner_loop:
cmp $12, %r14
jg inc_outer_counter

print_nums:

/* Divide the outer loop counter by ten... */
xor %rdx, %rdx
movq %r15, %rax
movq $10, %rbx
divq %rbx

/* and then put the characters that represent the quotient and remainder
* onto the memory stack in the correct position... */
addq $48, %rax
addq $48, %rdx
movb %al, -0x10(%rbp)
movb %dl, -0xf(%rbp)

/* and then do the same for the inner loop index. */
xor %rdx, %rdx
movq %r14, %rax
movq $10, %rbx
divq %rbx

addq $48, %rax
addq $48, %rdx
movb %al, -0xb(%rbp)
movb %dl, -0xa(%rbp)

/* Do the actual multiplication on outer loop and inner loop */
movq %r15, %rax
mulq %r14
movq %rax, %r13 /* Move our product out of the way */

/* Since the product may be three digits, we must divide and take the
* remainder twice: */
xor %rdx, %rdx
movq %r13, %rax
movq $10, %rbx
divq %rbx

addq $48, %rdx
movb %dl, -0x4(%rbp)

xor %rdx, %rdx
divq %rbx

addq $48, %rax
addq $48, %rdx
movb %al, -0x6(%rbp)
movb %dl, -0x5(%rbp)

/* Now print the thing... this is fun! :D */
movq $1, %rax
movq $1, %rdi
leaq -0x10(%rbp), %rsi
movq $14, %rdx
syscall

/* Add one to the inner loop counter, and do it all again! */
inc %r14
jmp inner_loop

/* Add one to the outer loop counter, display the dashy line, and then do it
* all again hohoho... */
inc_outer_counter:
inc %r15

movq $1, %rax
movq $1, %rdi
movq $line, %rsi
movq $line_len, %rdx
syscall

jmp outer_loop

end:
movq $60, %rax
movq $0, %rdi
syscall

.data
line: .ascii "----------------\n"
.set line_len, . - line

I actually had a lot of fun coding assembly, and would love to spend lots and lots of time figuring out the most robust (i.e., no hard-coded limits) solutions for displaying nicely formatted output. Alas, these problems have already been solved (it's called stdio.h :P). Over the upcoming reading week, I'd like to revisit this lab and write a 'proper' solution, or at least look at the source of some of the C standard library to see how this stuff is really done.

*Sigh* At times I feel as though I was born in the 'wrong' generation of programmers... I would love to do this kind of stuff for a living...

by Adam Sharpe (noreply@blogger.com) at October 08, 2014 01:57 AM

October 07, 2014


Jordan Theriault

Mozilla’s Webmaker App – Building the Map Brick

I have proposed and assigned myself the task of developing a map brick for Mozilla’s Webmaker App project. I proposed the idea in this issue on the Github page.

I intend to use the Google Maps API to serve maps to the website creator. The user will be able to select a location on the map, as well as include an address. The personas for this application are varied: a business giving the location of its store, a group giving a meeting location, an adventurer providing a geocaching coordinate, a host giving party attendees their house address, and more.

The published view will show a map with a marker on the location indicated by the editor. On the editor side, there will be a map that lets the editor select a location for the marker (the center of the map will be stored in the brick's long and lat attributes). Further, a string field will be available to set the human-readable address for the location.

As for potential barriers, an agreement may need to be made with Google in order to provide this API service at a larger scale. The map may also need to be cached to reduce the load on the mobile devices being used.

If you have any questions, comments, ideas, or want to get involved for the development of this issue, please comment on the issue page on Github.

by JordanTheriault at October 07, 2014 06:37 PM


Ava Dacayo

Release 0.2 – Still looking for bugs

There are different projects available and I still don’t know which one to pick!

Basically, I’m looking for bugs that seem doable before the release 0.2 due date on Oct 17. And I had better be fast, because I suspect the others are doing the same thing! :p That also involves talking to the contributors to check that a bug is available and, of course, assigning it to myself (or having someone assign it?) so that it is clear I will be working on it. I hope to have started on something by the end of this week, and I will post more details about the bug I pick soon!


by eyvadac at October 07, 2014 12:23 PM


Linpei Fan

OSD600: Project for Release 0.2

Kate Hudson introduced the Mobile Webmaker project last Wednesday. It is a mobile application that gives users a framework to create mobile apps; in other words, it is software used to create software. She also showed some examples and gave links on how to get started. This is a cool project, and it has clear information for getting started.

Moreover, some friends of mine are going to work on this project as well, so I can discuss it with them, and we may help each other even though we work on different issues.

I need to look at the issues in detail and then decide which one to take. I will do so no later than this Wednesday, and will post which issue I choose.


by Lily Fan (noreply@blogger.com) at October 07, 2014 04:17 AM

SPO600: Lab2

Brief description:
I wrote a simple C program to display “Hello World!” and compiled it with the command “gcc -g -O0 -fno-builtin”. I then ran “objdump” with the options -f, -d, -s, and --source to display information about the output file.
I then made the following changes to see the differences in the results.

5) Move the printf() call to a separate function named output(), and call that function from main().

Original output file: a.out
Output file after change: hello_all5.out

Before the change, running the objdump command on a.out shows only a main section for the source code. After the change, the output for hello_all5.out shows the output section as well. (Screenshots of the objdump output omitted.)

6) Remove -O0 and add -O3 to the gcc options. Note and explain the difference in the compiled code.
-O3 enables more aggressive optimization for execution time. It can reduce run time, but tends to increase memory usage, compile time, and code size.

Output file before the change: hello_all5.out
Output file after the change: hello_all6.out

I used the “time” command to check the execution time of the files above and got the following result.

hello_all6.out is compiled with the -O3 option, so it is supposed to have a shorter execution time. However, it takes much longer in real time than the previous one, though it does take less sys time.

I also compared the sizes of the output files compiled with -O0 and -O3. hello_all5.out, compiled with -O0, is smaller than hello_all6.out, compiled with -O3. Apparently, compiling with -O3 does not reduce the file size; instead, it increases it.


The following screenshots show the result of running “objdump --source” on both files.

Comparing the two results, I found:

1. The order of the <main> and <output> sections differs. In hello_all5.out, compiled with -O0, <main> appears after <frame-dummy>, and <output> appears after <main>. By contrast, in hello_all6.out, compiled with -O3, <main> appears right after the line “Disassembly of section .text”, while <output> still appears after <frame-dummy>.

2. The contents of the <main> and <output> sections differ as well. In hello_all6.out, both sections are shorter: hello_all5.out has 6 instructions in <main> and 9 in <output>, while hello_all6.out has only 3 in <main> and 4 in <output>.











When I ran “objdump -s” for both files, I found more differences.
In hello_all5.out, the contents of the .debug_line and .debug_str sections are shorter than those in hello_all6.out. Moreover, the output for hello_all6.out has one extra section: the contents of section .debug_ranges.
(Screenshot: contents of section .debug_str generated by hello_all5.out)

It is good to know that, given different compile options, the compiler compiles the program in different ways. Each option serves a different purpose, and accordingly the assembler contents of the object files differ as well.

The “objdump” command is a good way to see the assembler contents of an object file, and a good start for learning assembly language. However, I still don’t fully understand everything the assembler contents stand for. As I learn more assembly, I think that won’t be a problem anymore.

by Lily Fan (noreply@blogger.com) at October 07, 2014 03:19 AM

SPO600: Static linking vs. Dynamic linking

A linker is a system program that takes relocatable object files and command-line arguments and generates an executable object file. The linker is responsible for placing the individual parts of the object files in the executable image, ensuring that all the required code and data are available to the image, and resolving any required addresses correctly.




Static and dynamic linking are two processes of collecting and combining multiple object files in order to create a single executable. 

Static linking is the process of copying all library modules used in the program into the final executable image. This is performed by the linker and it is done as the last step of the compilation process.

During dynamic linking, only the name of the shared library is placed in the final executable file; the actual linking takes place at run time, when both the executable file and the library are loaded into memory.

Differences between static linking and dynamic linking:

Sharing external programs:
- Static linking: an externally called program cannot be shared; it requires duplicate copies of the program in memory.
- Dynamic linking: lets several programs use a single copy of an executable module.

File size:
- Static linking: statically linked files are significantly larger, because external programs are built into the executable files.
- Dynamic linking: significantly reduces the size of executable programs, because only one copy of the shared library is used.

Ease of updating:
- Static linking: if any of the external programs changes, everything has to be recompiled and re-linked, or the changes won't be reflected in the existing executable.
- Dynamic linking: individual shared modules and bug fixes can be updated and recompiled independently.

Speed:
- Static linking: programs that use statically linked libraries are usually faster than those that use shared libraries.
- Dynamic linking: programs that use shared libraries are usually slower than those that use statically linked libraries.

Compatibility:
- Static linking: all code is contained in a single executable module, so statically linked programs never run into compatibility issues.
- Dynamic linking: programs depend on having a compatible library; if the library changes, applications might have to be reworked to be compatible with the new version.

Advantages:

Static linking:
- Efficient at run time.
- Requires fewer system calls.
- Can make binaries easier to distribute to diverse user environments.
- Lets the code run in very limited environments.

Dynamic linking:
- More flexible.
- More efficient in resource utilization, using less memory, cache space, and disk space.
- Easier to update and to fix bugs.


by Lily Fan (noreply@blogger.com) at October 07, 2014 03:08 AM

SPO600: Lab3 - Loop in Assembly

In lab3, we were asked to write assembly programs for both x86_64 and aarch64 that display the numbers 0-29 in a loop.

This was my first assembly code, and it took me one night and two whole days to get it working. The loop now displays the numbers, but I still have a problem with the newlines: my output does not contain the newlines I expect, so everything prints on one line. I spent some time on this issue but have not found the solution.

One frustrating thing about assembly is that each platform requires a different syntax. Code that runs without problems on one platform cannot be carried over smoothly to the other; I still needed to spend a fair amount of time recoding it, even though both programs have the same logic and the same output.


Here is the code on X86_64 in GAS syntax.

 .text  
.globl _start
start = 0
max = 30
_start:
mov $start,%r15 /* starting value for the loop index*/
loop:
/*showing digit*/
mov $'0',%r14
mov $10,%r13
mov $0,%rdx
mov %r15,%rax
div %r13
cmp $0,%rax
je singledigit
mov %rax,%r13 /*store the second digit from right*/
add %r14,%r13
mov %r13,msg+5
singledigit:
mov %rdx,%r12
add %r14,%r12
mov %r12,msg+6
/*showing loop in front of digit*/
mov $len,%rdx
mov $msg,%rsi
mov $1,%rdi
mov $1,%rax
syscall
inc %r15 /* increment index*/
cmp $max,%r15 /* see if we're done */
jne loop /* loop if we're not */
movq $0,%rdi /* exit status */
movq $60,%rax /* syscall sys_exit */
syscall
.data
msg: .ascii "loop: '\n'"
.set len, . - msg

And ARM assembly in aarch64:
 .text  
.global _start
_start:
start = 0
max = 30
mov x15,start /*starting value for the loop index*/
loop:
mov x28,0
mov x27,10
adr x12,msg /*loading the message*/
udiv x10,x15,x27 /*getting the quotient*/
msub x9,x10,x27,x15 /*getting the remainder*/
cmp x10,0 /*if quotient equals 0, then go to execute single digit*/
beq singledigit
add x14,x10,0x30 /*display quotient - second digit from right*/
str x14,[x12,6]
singledigit:
add x11,x9,0x30 /*display remainder - first digit from right*/
str x11,[x12,7]
/*system call write*/
mov x1,x12
mov x2,len
mov x0,1
mov x8,64
svc 0
/*loop*/
add x15,x15,1
cmp x15,max
blt loop
/*system exit*/
mov x0,0
mov x8,93
svc 0
.data
msg: .ascii "loop: ##\n\r"
len= . - msg

Through the practice in this lab, I got a basic idea of how assembly works with memory and registers, how to make a system call, and how loops work.


In the end, I would like to ask: if anyone has a good assembly tutorial for beginners, or any suggestions, please leave a comment, because I could not find one that is good for beginners. I would highly appreciate it.

by Lily Fan (noreply@blogger.com) at October 07, 2014 02:27 AM