Planet CDOT

April 19, 2014


Petr Bouianov

Final Release

Alright, having been given an extension until the end of exam week (today) to finish up our contributions, it's time for the final release post.

 

Since the last post, I've finished up rsync, adding more functionality and fixing more bugs. The new pull request is fully functional and has support for the following flags:

recursive: true //default 'false'
size: 5 //default 750. File chunk size in Kb.
checksum: false //default 'false'. False will skip files if their size AND modified times are the same (regardless of content difference).
time: true //default 'false'. Preserves file modified time when syncing.
links: true //default 'false'. Copies symlinks as links instead of resolving.

While the recursive and size flags existed in the prior post, the rest of the flags have been added since then.

The checksum flag behaves identically to its Linux rsync counterpart. Without it (the default), rsync will skip files if their modified time (mtime) and size are identical, regardless of the actual contents of the files. With the flag set to true, rsync will still attempt to sync these files, in case the contents do differ.

The time flag likewise works like its Linux counterpart. With it set to true (the default is false), the file's modified time will be copied and the newly synced file will be set to use that mtime.

The links flag (default of false) will let you copy symbolic links as links instead of resolving them, which is the default behavior.
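To make the flags concrete, here is a rough usage sketch. Only the option names and defaults above come from the pull request; the rsync entry point and callback shape shown here are assumptions for illustration.

// Hypothetical invocation -- rsync(), its argument order and its callback
// are assumed for illustration; only the option names mirror the flags above.
var options = {
  recursive: true,   // descend into directories
  size: 5,           // file chunk size in KB (default 750)
  checksum: false,   // default: skip files whose size and mtime both match
  time: true,        // preserve the source file's mtime on the destination
  links: true        // copy symlinks as links instead of resolving them
};

rsync('/source/dir', '/dest/dir', options, function(err) {
  if (err) throw err;
  console.log('sync complete');
});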

By combining these three flags (filer has no concept of permissions, groups, ownership or special devices as of right now), one can approximate rsync -a (the archive flag), which is a combination of -rlptgoD (recursive, preserve links, preserve permissions, preserve times, preserve group, preserve ownership, preserve devices). That also means the copy method for filer, which would be based on rsync, can finally be implemented once this PR is reviewed and merged. Considering the time left after tackling the rsync changes and finishing the pull request, I decided to work through some other simple bugs to try and clear up the issues list for filer a bit (it's now up to 3 pages!).

 


Next up is issue #78. In the filer tests, every test that expects an error should check the error code / name to make sure the error is the one you are expecting to get. This not only ensures that our tests and functions work as intended, but also that, with thorough testing, we can catch future problems before they're merged in and cause new ones. This issue was fixed in this pull request.
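As a minimal sketch of what that looks like (assuming a Mocha/Chai-style test and Node-style error codes), an error-expecting test now asserts which error came back rather than just that one occurred:

// Sketch only -- the assertion style in the actual filer suite may differ.
it('should fail with ENOENT when reading a missing file', function(done) {
  fs.readFile('/no/such/file', function(err, data) {
    expect(err).to.exist;
    expect(err.code).to.equal('ENOENT'); // check *which* error, not just that there was one
    done();
  });
});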

 


Next was issue #61, reserving the first three file descriptors (0, 1, 2) for the POSIX standard streams (stdin, stdout, stderr). This was a simple change that involved increasing the initial value our file descriptors start at, and then adding constants and exposing them via the FileSystem object as properties. This is found in this pull request.
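Roughly, the idea looks like the sketch below; the constant names and exactly how they are exposed are assumptions on my part, not necessarily the names used in the pull request.

// Illustrative sketch -- the constant names here are hypothetical.
var fs = new Filer.FileSystem();

console.log(fs.STDIN, fs.STDOUT, fs.STDERR); // 0 1 2, reserved

// Descriptors handed out for regular files now start above the reserved range:
fs.open('/somefile', 'w', function(err, fd) {
  if (err) throw err;
  console.log(fd >= 3); // true -- 0, 1 and 2 are never reused for regular files
});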

 


Next was issue #96, standardizing our encryption adapter to take a provider as its first argument. Thankfully our encryption provider was not used anywhere in code other than several tests, so this was a simple change. I'm glad that something like this was caught and done early on though, as it's always good to have standardization between different components of your system. It's a lot worse to realize a few months later that every single adapter you have is designed completely differently and they all need to be reworked to a single standard. The changes for this pull request can be found here.
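The standardized shape is roughly the following; the provider and adapter component names exist in filer, but the argument list and option handling shown here are illustrative rather than the precise post-change API.

// Hypothetical sketch: every adapter wraps the provider passed as its first argument.
var provider = new Filer.FileSystem.providers.Memory();
var adapter  = new Filer.FileSystem.adapters.Encryption(provider, 'my passphrase');

var fs = new Filer.FileSystem({ provider: adapter });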

 


The last issue I tackled was issue #75, an issue that was originally opened a while ago and picked up but never resolved. I figured I'd clear it up, as it had been open for a while with no progress and I knew that the assignee was no longer working on the Filer project. As the array format for flags was already present in the documentation and examples found in the README, all that was required for this issue was the addition of a test. That test currently passes using Chrome's in-browser testing suite but fails in TravisCI's test process, which is something I still have to look into. The pull request for this can be found here.
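Since the array flag values themselves live in the README, here is only the shape of the added test; the flag array shown is hypothetical and stands in for whatever the documentation actually lists.

// Purely illustrative -- the real flag array comes from the README examples.
it('should accept an array of flags as well as a flag string', function(done) {
  fs.open('/myfile', ['w+'], function(err, fd) {
    expect(err).not.to.exist;
    fs.close(fd, done);
  });
});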

 

Lastly, I’ve found that I’ve learned a lot more than I thought throughout this course. Not only am I more comfortable with JavaScript (something I’ve had 0 experience with before these two DPS courses) and unit testing (having never had to really unit test my work), but I’ve noticed I’m able to track down and figure out issues I’m not familiar with a lot faster, and am starting to recognize certain problems right away when encountering them. I’ve also experienced firsthand just how a small observation (like Chrome’s optimization of readonly IndexedDB transactions before readwrite ones) can lead to huge changes worldwide (in the IndexedDB spec in this example). This course provided much needed real world programming experience we would have otherwise lacked during our course of study.

 


Links

 

Issue 72 – Pull Request

Issue 78 – Pull Request

Issue 62 – Pull Request

Issue 96 – Pull Request

Issue 75 – Pull Request


by pbouianov at April 19, 2014 12:39 AM

April 18, 2014


Kevin Kofler

Release 6, Release 7, and Reflections

Hello everyone. This will be my last post for my Open Source electives, and I can honestly say I’ve enjoyed my time working with the assortment of projects I was able to get involved with. Without further ado, my updates:

I had a few outstanding issues to resolve with earlier PRs. For my issue 122 fix I had to resolve a nit modeswitch had with my sh.mv documentation: I had explicitly included ‘./’ in most of my examples, and he said that since that would be implicit I could just omit every reference. I also had to fix an asynchronicity issue in my Unix timestamp fix (done() was called twice in one test, which I resolved by making timestamp conversion synchronous and catching parse errors instead).

My first new issue, issue 158, involved adding a new test to fs.watch()’s spec to ensure that when a file is changed, watches on hard links are updated as well. I added the test and verified it exercised the right scenario, but found out that fs.watch() is not working correctly: when original files are modified, watchers tracking hard links are not notified, so my test currently times out. When I receive more feedback on my PR, I’ll probably be filing a new bug to report this issue.
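The scenario the test covers looks roughly like this (assertion details and exact event handling are simplified here):

// Sketch of the scenario: watch a hard link, then modify the original path.
fs.writeFile('/file', 'data', function(err) {
  if (err) throw err;
  fs.link('/file', '/hardlink', function(err) {
    if (err) throw err;

    var watcher = fs.watch('/hardlink', function(event, filename) {
      // Expected: this fires because /file and /hardlink share the same node.
      // In practice it never fires, which is why the test times out.
      watcher.close();
    });

    fs.writeFile('/file', 'changed', function(err) {
      if (err) throw err;
    });
  });
});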

My next issue, issue 87, required an implementation of fs.fsync(), a command which typically writes open file buffers to disk when called. Because of the way filer’s architecture is laid out at the moment, fsync() doesn’t apply (we don’t have unwritten buffers that would wait for such a command). For the time being, fsync() has been implemented as a no-op. I did include some basic tests and documentation changes, as well.
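A minimal sketch of the idea (not the exact code from the pull request): validate the descriptor, then succeed immediately, since nothing is buffered at this layer.

// Illustrative no-op fsync; filer's real implementation uses its own
// error types and descriptor bookkeeping.
function fsync(fd, callback) {
  if (typeof fd !== 'number') {
    return callback(new Error('invalid file descriptor'));
  }
  callback(null); // nothing to flush
}

// Callers use it exactly like node.js: fs.fsync(fd, function(err) { ... });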

Issue 7 required project documentation to be altered so that it explicitly mentioned the accepted type for our ‘buffer’ parameters. The only valid argument is a subclass of ArrayBufferView; ArrayBuffers were invalid, but there was no mention of that in the docs, and for some it would be counter-intuitive. This was a quick and straight-forward change.
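In other words (fd and callback assumed to exist elsewhere), a typed-array view over a buffer is accepted, while the bare ArrayBuffer is not:

var ab = new ArrayBuffer(8);

fs.read(fd, ab, 0, 8, 0, callback);                  // invalid: a bare ArrayBuffer
fs.read(fd, new Uint8Array(ab), 0, 8, 0, callback);  // valid: Uint8Array is an ArrayBufferView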

Issue 97 was a similarly simple documentation change. In an earlier issue, Dave and modeswitch had determined that we will not be implementing fs.realpath() at this point in the software stack, because there is no awareness of a current working directory. For that reason, it has been relocated to filer.io, which should provide that context. I changed the documentation entry to explain this choice in case users wanted it for node.js parity.

My last issue, issue 42, required modifications to fs.read() to accommodate node.js’ legacy argument format. I managed to get this implemented (despite some headaches with remapping arguments), and included a test.
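For reference, the two call shapes look roughly like this; the exact legacy variant the pull request supports may differ in detail from node's old string-returning form shown here.

// Modern form: the caller supplies a buffer to read into.
fs.read(fd, new Uint8Array(16), 0, 16, 0, function(err, bytesRead, buffer) { /* ... */ });

// Legacy node.js form: no buffer, an encoding instead, data returned to the callback.
fs.read(fd, 16, 0, 'utf8', function(err, data, bytesRead) { /* ... */ });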

Finally, I wanted to summarize how these courses have shaped my opinions of open source development. Above all, I have realized the value of open cooperation and collaboration. These tenets result in an unequaled environment for learning best practices and production level programming. Having a networked community (via Github and IRC) presents opportunities that wouldn’t appear in typical bedroom coding scenarios. Through these courses, I learned that JS is actually a viable language for web application development (and that the open frameworks supporting that paradigm are much more refined than I would have thought). I was introduced to a variety of technologies and patterns that I would not have been exposed to in my regular courses; marketable skills that have helped me in my co-operative education pursuits. It also gave me the ability to orient myself in new code bases, experience I had been sorely lacking until now. I’ve learned that if I want to be successful moving forward, I need to make more of an effort to assert myself, communicate with my peers, and schedule ample time for professional development.

Thanks everyone. I just wanted to say that I’ve had fun, and I certainly intend to continue with my contributions.

GLHF

Kevin Kofler

PRs:
Issue 86
Issue 122
Issue 158
Issue 87
Issue 7
Issue 97
Issue 42


by kwkofler at April 18, 2014 10:44 PM


Alexander Snurnikov

OSD Final Release – CSP and more

Good day! Time goes fast, and I’m already at the end of my 5th semester, writing my final release for my Open Source class.
First of all, I would like to say thank you to Dave for this amazing experience and excellent class – Open Source Development. Such a great class with such an awesome professor; I learned much more than I expected. I would also like to thank the whole Webmaker team for their help and willingness to help in all kinds of situations. I was involved in real code, in real projects, with all sorts of bugs and improvements. This was my first time dealing with real world programming and Open Source, and I am more than motivated to continue contributing to Mozilla's Open Source world. Personal thanks to @jbuck (Jon Buckley), who mentored me during my CSP implementation. :)
I learned a lot of new tools and improved my skills in all sorts of Web Development. My personal thought is: when you are involved in Open Source, you open yourself to learning more and more every day, and this is super cool. You will always be up to date and interested in what you are doing.

Let's get back to my release. During the last week I was working on a requireJS bug, and I also finished and merged my CSP implementation for Thimble and Goggles.

This week's contributions:

  • Bug 995318 – update index.html in Goggles to use requireJS. Fixed and merged. The PR can be found here. Two more similar bugs are waiting to be fixed; that will be my next step.
  • Bug 959271 – CSP for Thimble. This finally landed and I’m so happy about it. The PR can be found here. All the Webmaker components, except Popcorn, now use CSP (a minimal sketch of what a CSP header does follows this list). It took me a semester to learn, create, fix and land this feature across the Webmaker components, and now I am really happy that it works well and that my work is included in Webmaker.
  • Bug 990786 – fixing CSP for Goggles. Basically, removing CSP from all pages except ‘publish’, where injection could be present. Fixed and merged. The PR can be found here.
  • Bug 995781 – correcting alignment and profile image shape for Goggles. Fixed and merged. The PR can be found here. I fixed the image shape and alignment on the publish page. After this little involvement with CSS, I found out that it is really fun and interesting as well.
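For anyone unfamiliar with CSP, here is a minimal sketch of the idea in an Express app; this is illustrative only, not the actual Webmaker configuration, and the allowed CDN origin is hypothetical.

var express = require('express');
var app = express();

// Send a Content-Security-Policy header restricting where scripts,
// styles and images may be loaded from.
app.use(function (req, res, next) {
  res.setHeader('Content-Security-Policy',
    "default-src 'self'; " +
    "script-src 'self' https://cdn.example.org; " +  // hypothetical allowed origin
    "img-src 'self' data:");
  next();
});

app.listen(3000);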

That is basically it for my final release.
I would like to say that my classmates were really involved in the course as well, and I liked the presentations they did during the semester, where I learned something new every time. This kind of approach gave all of us a chance to learn more than just our own area of bugs.
Great experience with a great team! :) Thank you!:)
PS: will be off next week – going to see Montreal :)


by admixdev at April 18, 2014 10:40 PM

April 17, 2014


Hesam Chobanlou

Final Thoughts on SPO600

SPO600 was an interesting yet challenging course. First off, I enjoyed the fact that the course did not follow a simple, linear structure; rather, it encouraged students to think on their own and find solutions to their own unique problems. What I mean is that what we were taught in this course was not something you can just pick up a book on and learn on your own. Rather, SPO600 was a course that enhanced my problem solving abilities and overall Linux knowledge. The most important aspect of SPO600, however, was its introduction to Open Source software. More specifically, it helped me to understand the overall structure of Open Source communities and their development processes. Although in the past I had wanted to join an Open Source community, I had never been able to find a place to start; as it stands now, I have the preliminary knowledge necessary to jump into most projects out there with some sense of purpose. With that said, I don’t think this course is over for me just yet, and it won’t be for some time. I think I will continue to build on top of what I’ve learned from it.

Of course, none of it would have been possible without Chris Tyler and his unique teaching style. I hope that the course will continue for many more semesters and have a similar impact on future students.

by Hesam Chobanlou (hesamc@hotmail.com) at April 17, 2014 09:29 PM


Armen Zambrano G. (armenzg)

Mozilla's pushes - March 2014

Here's March's monthly analysis of the pushes to our Mozilla development trees (read about Gaia merges at the end of the blog post).
You can load the data as an HTML page or as a json file.

TRENDS

March (as February did) has the highest number of pushes EVER.
We will soon have 8,000 pushes/month as our norm.
The only noticeable change in the distribution of pushes is that non-integration trees had a higher share of the cake (17.80% on Mar. vs 14.60% on Feb.).

HIGHLIGHTS

  • 7,939 pushes
    • NEW RECORD
  • 284 pushes/day (average)
    • NEW RECORD
  • Highest number of pushes/day: 435 pushes on March, 4th
    • NEW RECORD
  • 16.07 pushes/hour (average)

GENERAL REMARKS

Try keeps on having around 50% of all the pushes.
The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 30% of all the pushes.

RECORDS

  • March 2014 was the month with most pushes (7,939 pushes)
  • March 2014 has the highest pushes/day average with 284 pushes/day
  • February 2014 had the highest "pushes-per-hour" average, with 16.57 pushes/hour
  • March 4th, 2014 had the highest number of pushes in one day with 435 pushes



DISCLAIMERS

  • The data collected prior to 2014 could be slightly off since different data collection methods were used
  • Gaia pushes are more or less counted. I will write a blog post about this in the near term.

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

by Armen Zambrano G. (noreply@blogger.com) at April 17, 2014 09:18 PM


Moshe Tenenbaum

Problem Areas Isolated…. Now Attack!

In order to increase portability, I thought that eliminating inline assembler is the obvious way to go about it. In the root/include/libbb.h file, I replaced the following code (note Line 4):

[screenshot of the original inline-assembler code]

with

[screenshots of the pthread-based replacement code]

 

PThreads are a fairly complex method of threading in C. What they offer, though, is the ability to introduce portability into what would otherwise require memory barriers.

After creating this test file, I tried multiple ways to integrate this code into the original file. Due to the complexity of this, as well as the later byteswapping code, I decided to leave it as is and focus on the byteswapping code.

If anyone would like to use my work above, feel free to do so.


by mctenenbaum at April 17, 2014 08:51 PM


Michael Veis

Final Release

I am not sure when final marks are due for the professors, so I just wanted to get this release out there now. For this release I worked on two issues for makerstrap and one Thimble bug which had a makerstrap tie-in. The two issues I worked on for makerstrap were issue 59 and issue 13. The bug I worked on for Thimble was bug 971878.

Issue 59 for makerstrap involved going through the entire makerstrap documentation and making sure the new hosted version of makerstrap was being linked everywhere. This meant I had to check every place where the old link was published and make the change. I also had to go through all the makerstrap Thimble pages and update the link. I had created a lot of these pages myself, so I was able to just edit them directly. The ones Kate had made I had to remix to my Thimble, update the page, and then update the link in the documentation. They also wanted users to be able to see older versions of makerstrap through a hosted link. To do this I created a new section in the documentation called versions, where I made a table and provided the hosted link so users could swap the version in the URL and check different versions. The pull request for this issue can be found here.

Issue 13 for makerstrap involved explaining the difference between makerstrap.complete.min.css and makerstrap.min.css in the makerstrap documentation. When first starting this bug I really wasn’t sure why we had two versions or what the difference between them was. After investigating, they turn out to be very similar; the only difference is that makerstrap.complete.min.css has all the makerstrap CSS plus links to the Open Sans font and Font Awesome icons. You can therefore link to that one file and have everything; however, it should not be used in production, as the Open Sans font and Font Awesome icons can be linked separately, and only if needed, in production. To document this for users I added a row to the versions table I created in issue 59, noting that both are minified files, that makerstrap.complete.min.css shouldn’t be used in production, and that it has the Font Awesome icons and Open Sans links included in the file. For makerstrap.min.css I noted that it doesn’t include the Open Sans font and Font Awesome icons links, and that it is the one to use in production. The pull request for this issue can be found here.

Bug 971878 was a bug in Thimble to include makerstrap templates and code snippets from makerstrap. During work week it was reported by users that they would like some templates to work with in Thimble. Now, I had not done any work with Thimble before, so I didn’t have the repository forked. I was expecting the setup for getting Thimble running to be painful, but it was actually very easy: I forked the repo and was able to follow all the instructions without encountering any problems. This was quite an accomplishment for me, as I remember when I was first setting up Webmaker back in DPS909 and had all sorts of trouble setting everything up. After I had Thimble running locally, I created a new button called “New from template”. I had spoken to Kate and she suggested I use that name and then just link templates on that page. I set up a new Thimble page with the title “Makerstrap Templates”. I had already created different makerstrap components in Thimble pages, so I created links to all of those on the Thimble page. I also created a blank Thimble page that just had a link to the makerstrap.complete.min.css file, in case anyone had used makerstrap before and just wanted to start writing some code in Thimble right away. I also added a “Mix and Match” template that had a variety of different makerstrap components all linked on one page, in case anyone didn’t want to open a bunch of different pages; this way they could see a variety of components at once. Once I linked all the templates, I published the page and linked it to the “New from template” button. The pull request for this bug can be found here.

Overall I just want to wrap this blog post up by saying that I had a great semester 7 and 8 taking DPS909 and DPS911. I had never done any open source work before taking DPS909, and Dave did such a great job getting everyone involved with the Webmaker community. DPS911 gave us a lot more freedom to get even more active in the community. The presentations in class were very good too, as we got a chance to get up in front of an audience and try to explain what we had been doing so everyone could understand. I felt that I improved my communication skills a lot in this course, and it helped me in another class when I made a presentation about Firefox OS, because I wasn’t so nervous after having done so many presentations throughout the semester. I didn’t only improve my presentation skills from these presentations; I also learned about the other projects people in the class were working on, which were all very interesting and let me learn about things I had not heard about before, or had heard very little about.

I would like to end this post by thanking Dave for teaching these two courses, as they provide a great opportunity for students to work with a real team of people on real code that is constantly being updated and used by users worldwide. I would also like to thank the entire Webmaker team for welcoming me into the community and answering all my questions when I did have them, and Kate for allowing me to be an active contributor to makerstrap. As you can see from this post, I had an amazing two semesters taking both these courses and I will definitely be staying active in the community after the course is over. I learned so much and can’t wait to learn even more in the future as I continue to do more open source work.

Wrap Up

The pull request to issue 59 can be found here.

My initial commits for issue 59 can be found here.

The pull request for issue 13 can be found here.

My initial commits for issue 13 can be found here.

The pull request for bug 971878 can be found here.

My initial commits for bug 971878 can be found here.


by mlveis at April 17, 2014 06:11 AM


Eugen (Jevenijs) Sterehov

Project Updates (Final)

Well, with the little time that I had, I dabbled with the Eina package, trying to get it to build for AArch64, but I can't say I have had any luck with it. It doesn't look like there is any AArch64 support for it, and I did not have enough time to try and get to the bottom of why that is.

Seeing how this is my last post, for this course at least, I must leave it with a conclusion and my overall thoughts. Well, it's been a ride, for sure. It seemed like a surprisingly long 4 months, but I definitely feel like I've come quite a long way. Knowing nothing about assembly and having very minimal knowledge of architectures going in, I must say this course has definitely left me with a solid foundation in those topics that I can easily build on. And even though I haven't put together any patches for the open source projects I was looking at, I've gained a lot of knowledge about how open source communities and their contribution procedures work. Last but not least, as a previously very minimal Linux user, this course has forced me to quickly adapt and learn my way around the OS, and taught me to appreciate it a whole lot more. Looks like I'll be leaving it installed on my PC (as a dual boot).

If anyone at Seneca is looking into getting their feet wet in open source contribution and learning their way around the process itself, I must say, SPO600 is definitely the way to go. The best part about the course, of course, is the lack of an exam, ha-ha.

As I said, I will be taking a lot of useful knowledge away with me from this semester and definitely looking at expanding on that knowledge in the near future!

by Eugen S. (noreply@blogger.com) at April 17, 2014 12:53 AM

April 16, 2014


Matthew Grosvenor

The Final Countdown

So it's Wednesday the 16th of April, and for some reason we still have snow. Quite an odd day, but one on which I will be posting what is likely my last markable blog post for SPO600. As much as I'd like to say it's going to build, that is unlikely.

I've updated my previous rough draft code a bit, once I found my mistake with the random register values. For example, the atomic add now looks like:

static __inline__ void atomic_add(int i, atomic_t *v)
{
    __asm__ __volatile__(
            SMP_LOCK "add %1, %0, %0"
            : "=m" (v->counter)
            : "ir" (i), "m" (v->counter));   /* this still needs fixing/porting to proper aarch64 */
}
    
No need to provide actual registers since their own code is just using whichever ones are available to them.

I've also managed to complete the port of subtract, subtract-and-test, increment, increment-and-test, decrement and decrement-and-test, which I've (perhaps inconveniently) added below. It wasn't too much to change, really. I just hope "=m" and "=qm" are still valid in aarch64 assembler, as I was not able to find replacements for them. The C-style comments are for blog readability and are not actually in the code.

//subtract
static __inline__ void atomic_sub(int i, atomic_t *v)
{
    __asm__ __volatile__(
            SMP_LOCK "sub %2, %1"
            : "=m" (v->counter)
            : "ir" (i), "m" (v->counter));
}

//Subtract and test
static __inline__ int atomic_sub_and_test(int i, atomic_t *v)
{
    unsigned char c;

    __asm__ __volatile__(
            SMP_LOCK "sub %2, %0; beq %1"
            : "=m" (v->counter), "=qm" (c)
            : "ir" (i), "m" (v->counter) : "memory");
    return c;
}

//Increment
static __inline__ void atomic_inc(atomic_t *v)
{
    __asm__ __volatile__(
            SMP_LOCK "add %0"
            : "=m" (v->counter)
            : "m" (v->counter));
}

//decrement
static __inline__ void atomic_dec(atomic_t *v)
{
    __asm__ __volatile__(
            SMP_LOCK "sub %0"
            : "=m" (v->counter)
            : "m" (v->counter));
}

//Decrement and test
static __inline__ int atomic_dec_and_test(atomic_t *v)
{
    unsigned char c;

    __asm__ __volatile__(
            SMP_LOCK "sub %0; beq %1"
            : "=m" (v->counter), "=qm" (c)
            : "m" (v->counter) : "memory");
    return c != 0;
}

//increment and test
static __inline__ int atomic_inc_and_test(atomic_t *v)
{
    unsigned char c;

    __asm__ __volatile__(
            SMP_LOCK "add %0; beq %1"
            : "=m" (v->counter), "=qm" (c)
            : "m" (v->counter) : "memory");
    return c != 0;
}

//Check to see if addition results in negative
static __inline__ int atomic_add_negative(int i, atomic_t *v)
{
    unsigned char c;

    __asm__ __volatile__(
            SMP_LOCK "add %2,%0; bne %1"
            : "=m" (v->counter), "=qm" (c)
            : "ir" (i), "m" (v->counter) : "memory");
    return c;
}

What remains of atomic.h is the mask clear and set, for which I need to track down the proper aarch64 versions of the logical 'andl' and 'orl' instructions; thanks to the ARM PDF file we grabbed ages ago for class, these have made themselves evident. I am not sure if they even require a port, since the code comments say they are x86 specific. Better safe than sorry, I say (and say only for this particular instance).

It's also not immediately clear if exclusive OR rather than inclusive OR needs to be used, which is another issue, but I would think the comments would have mentioned it if it wasn't inclusive. Inclusive it is (and I'm firing off an email to the devs just to be sure).

//Mask code:
#define atomic_clear_mask(mask, addr) \
    __asm__ __volatile__( \
            SMP_LOCK "AND %0,%1" \
            : : "r" (~(mask)), "m" (*addr) : "memory")

#define atomic_set_mask(mask, addr) \
    __asm__ __volatile__( \
            SMP_LOCK "ORR %0,%1" \
            : : "r" (mask), "m" (*addr) : "memory")


So for the purposes of SPO600, I do believe that's about all she wrote.
I've ported the atomic.h code for aarch64; the other asm is in dependency files, which I obviously have no control over, but all of them have aarch64/noarch versions for arm64 either out in the wild or simply not in the yum repositories yet (I'm looking at you, fftw3). And then there's that odd "you're missing these tools" problem when I try to run ./autoregen.sh.

I will likely continue to work on this package over the summer with the community on my own time, fix any issues with my code, and perhaps even submit it. Sadly, that can't be taken into account for marking purposes, but moral victories have value too.

by Gabbo (noreply@blogger.com) at April 16, 2014 08:57 PM

For For The Win 3 or how I got fftw3 installed on qemu

So, as I mentioned in an earlier blog post, fftw3 is required for Rubberband to operate and install smoothly on Linux, but v3 does not exist in the yum repository. What is a person to do!?

Well, I went ahead and grabbed the aarch64 compatible rpm source from rpmfind (as well as the fftw3-libs long and single sources), and ftp'd the files into my directory on Ireland in both x86 and arm64.

So while I cannot definitively say "yes" to fftw3 working on arm64 at the moment (all of the files installed properly at the very least), all the issues with x86 are out of the way, and the one hanging thread of a dependency on arm64 is but a tar unpack/install away from also being resolved. Again, I went to work porting code, since that seemed of more pressing interest for the sake of this blog/course.

Of course, running ./autoregen.sh still gives me this:
----------------------------------------------------------------------
Checking basic compilation tools ...

    pkg-config: found.
    autoconf: found.
    aclocal: found.
    automake: found.
    libtool: found.
    gettext: found.
You do not have autopoint correctly installed. You cannot build SooperLooper without this tool.

This happens no matter how many times I've gone back and made sure those particular files/packages are installed on arm64.

The code is more important...

by Gabbo (noreply@blogger.com) at April 16, 2014 08:52 PM


Yoav Gurevich

The Last Ditch Effort, Pt. 2

To my great dismay, I'm finding myself riddled with more roadblocks to a successful build and benchmark on the aarch64 port of Gnash as the semester comes to a close:

Word has reached me from the community regarding some of the inline assembly in "jemalloc.c". Apparently, as I've been researching, they integrated an updated version of the memory allocation algorithm into Gnash that already includes aarch64 implementation logic. Their github repository can be found and cloned using this link. The "pause" instructions for the cpu_spinwait loop logic blocks remain the same, but were moved to the configure.ac file, while a new memory allocation value for aarch64 was added as a pre-processor directive block in "jemalloc_internal.h.in" located in the include/jemalloc/internal directory:

# ifdef __aarch64__
#   define LG_QUANTUM       4
# endif

From reading the API's documentation, this value declares (as a power of two) the minimum or most efficient number of bytes for memory allocations on a given architecture, and is used in conjunction with other tools such as Valgrind for memory leak detection, memory allocation debugging, and profiling. The aforementioned block of code is one of many for various architectures. Below is a quoted paragraph from the implementation notes of the API that seems most relevant to the logic behind this code block:

“Traditionally, allocators have used sbrk(2) to obtain memory, which is suboptimal for several reasons, including race conditions, increased fragmentation, and artificial limitations on maximum usable memory. If --enable-dss is specified during configuration, this allocator uses both mmap(2) and sbrk(2), in that order of preference; otherwise only mmap(2) is used.
This allocator uses multiple arenas in order to reduce lock contention for threaded programs on multi-processor systems. This works well with regard to threading scalability, but incurs some costs. There is a small fixed per-arena overhead, and additionally, arenas manage memory completely independently of each other, which means a small fixed increase in overall memory fragmentation. These overheads are not generally an issue, given the number of arenas normally used. Note that using substantially more arenas than the default is not likely to improve performance, mainly due to reduced cache performance. However, it may make sense to reduce the number of arenas if an application does not make much use of the allocation functions.
In addition to multiple arenas, unless --disable-tcache is specified during configuration, this allocator supports thread-specific caching for small and large objects, in order to make it possible to completely avoid synchronization for most allocation requests. Such caching allows very fast allocation in the common case, but it increases memory usage and fragmentation, since a bounded number of objects can remain allocated in each thread cache.
Memory is conceptually broken into equal-sized chunks, where the chunk size is a power of two that is greater than the page size. Chunks are always aligned to multiples of the chunk size. This alignment makes it possible to find metadata for user objects very quickly.
User objects are broken into three categories according to size: small, large, and huge. Small objects are smaller than one page. Large objects are smaller than the chunk size. Huge objects are a multiple of the chunk size. Small and large objects are managed by arenas; huge objects are managed separately in a single data structure that is shared by all threads. Huge objects are used by applications infrequently enough that this single data structure is not a scalability issue.
Each chunk that is managed by an arena tracks its contents as runs of contiguous pages (unused, backing a set of small objects, or backing one large object). The combination of chunk alignment and chunk page maps makes it possible to determine all metadata regarding small and large allocations in constant time.
Small objects are managed in groups by page runs. Each run maintains a frontier and free list to track which regions are in use. Allocation requests that are no more than half the quantum (8 or 16, depending on architecture) are rounded up to the nearest power of two that is at least sizeof(double). All other small object size classes are multiples of the quantum, spaced such that internal fragmentation is limited to approximately 25% for all but the smallest size classes. Allocation requests that are larger than the maximum small size class, but small enough to fit in an arena-managed chunk (see the "opt.lg_chunk" option), are rounded up to the nearest run size. Allocation requests that are too large to fit in an arena-managed chunk are rounded up to the nearest multiple of the chunk size.
Allocations are packed tightly together, which can be an issue for multi-threaded applications. If you need to assure that allocations do not suffer from cacheline sharing, round your allocation requests up to the nearest multiple of the cacheline size, or specify cacheline alignment when allocating.
Assuming 4 MiB chunks, 4 KiB pages, and a 16-byte quantum on a 64-bit system, the size classes in each category are as shown in Table 1."

Table 1. Size classes

Category   Spacing   Size
Small      lg        [8]
           16        [16, 32, 48, ..., 128]
           32        [160, 192, 224, 256]
           64        [320, 384, 448, 512]
           128       [640, 768, 896, 1024]
           256       [1280, 1536, 1792, 2048]
           512       [2560, 3072, 3584]
Large      4 KiB     [4 KiB, 8 KiB, 12 KiB, ..., 4072 KiB]
Huge       4 MiB     [4 MiB, 8 MiB, 12 MiB, ...]

========================================================================

I have yet to hear from the community in relation to my further inquiry about the minimalist gnu for windows asm block in "utility.h" and whether or not there is a way or a necessity to port this operation into aarch64. The code block can be seen below:

#ifndef __MINGW32__
#undef assert
#define assert(x)           if (!(x)) { __asm { int 3 } }
#endif

From my own research, there is a way to build MinGW cross-platform for whatever processor is required, via these how-to instructions that effectively tell you to download the required libraries and change the specific target variables to your particular processor/architecture. Either way, the ARM instruction equivalent that I would have tested would have been a "BRK 0" instruction replacing the "INT 3" Intel x64 call for a debugging breakpoint, as mentioned in my earlier March summary recap post. Unfortunately, I cannot test any of this proposed logic due to the dependency issues in the build, discussed below.

My biggest trouble has been trying to simply run the configure and make files in the qemu environment. The purported development version of Gnash is currently 0.8.11, yet in all of the channels I've tried to download the source code from, including the repository clone from GitHub, the repository from fedpkg, and a few Canadian FTP mirrors for all GNU-related packages, everything stops at 0.8.10, even though 0.8.11's changelog is already posted on Gnash's wiki page.

Regardless, after running the ./configure command with the added parameter of changing the build to aarch64 in the package's present state ("./configure --build=aarch64-unknown-linux"), I cannot install one crucial missing dependency in the qemu environment according to the resulting output - that being xulrunner-devel. This is a Mozilla-related package that, according to this Bugzilla discussion, had still not been upstreamed in an aarch64-compatible version as of the 10th of April. This is further proven by the lack of any matches in the relevant yum repositories when trying to yum install the package or run a yum search all query.

In conclusion, with the added scheduling and time constraints of the 14-week semester and the workload of other classes, I was not able to put in the ideal amount of work or make the optimal amount of progress on a package with as many obstacles as Gnash. My hope is that my research and work herein will serve as a potentially useful tool for the rest of the community going forward in the effort to successfully build, run, and optimize this package for 64-bit ARM implementations. I will upload a public GitHub repository of my environment in its current state and post a link to it on this blog at the community's behest.

by Yoav Gurevich (noreply@blogger.com) at April 16, 2014 08:39 PM


Armen Zambrano G. (armenzg)

Kiss our old Mac Mini test pool goodbye

Today we have stopped running test jobs on our old Revision 3 Mac Mini test pool (see previous announcement).

There's a very, very long list of people that have been involved in this project (see bug 864866).
I want to thank ahal, fgomes, jgriffin, jmaher, jrmuizel and rail for their help on the last mile.

We're very happy to have finally decommissioned this non-datacenter-friendly infrastructure.

A bit of history

These minis were purchased back in early 2010 and we bought more than 300 of them.
At first, we ran Fedora 12, Fedora 12 x64, Windows XP, Windows 7 and Mac 10.5 on them. Later on we also added 10.6 to the mix (if my memory doesn't fail me).

Somewhere in 2012, we moved the Mac 10.6 testing to the new revision 4 Mac server minis and deprecated the 10.5 rev3 testing pool. We then re-purposed those machines to increase the Windows and Fedora pools.

By May of 2013, we stopped running Windows on them.
During 2013, we moved a lot of the Fedora testing to EC2.
Now, we managed to move the B2G reftests and Firefox debug mochitest-browser-chrome to EC2.

NOTE: I hope my memory does not fail me

Delivery of the Mac minis (photo credit to joduinn)
Racked at the datacenter (photo credit to joduinn)



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

by Armen Zambrano G. (noreply@blogger.com) at April 16, 2014 05:54 PM


Andrew Smith

FSOSS 2013 Robots Competition

It took me half a year to finish this video. It's my first exercise in video editing. Enjoy!

It took me this long because:

  • The only program I had any idea how to use was Windows Movie Maker. But I didn’t want to use it for several reasons that should be obvious to you.
  • My wife edits video all the time but she uses a mac. iMovie works for her but I have no desire to become a mac user.
  • I didn’t really want to learn Adobe Premiere or the like because that kind of software costs too much money.
  • Linux is.. as interesting when it comes to video editing as it is in many other ways.

I’ve decided to invest the time into learning Cinelerra. It’s the most serious video editing tool on Linux, has been around for a while, and has a paid supported version which means it’s more likely to survive for a while still.

It was a large investment of time, and editing video is very different from any other kind of editing, but hopefully it will pay off long-term.

by Andrew Smith at April 16, 2014 05:46 AM


Matt Jang

SPO Project Part 7: What to do about FreqTweak? & Project Conclusion

SPO Project Part 6

Recap

So, in FreqTweak there is one piece of inline assembly that counts CPU cycles. In part 4 I showed the comparisons between the assembly and fallback versions. The fallback produced very different results, and comments hinted at there needing to be an aarch64 version coded for it.

gettimeofday vs rdtsc

I decided to look up the differences between the fallback and the x86_64 version. The x86_64 version uses an instruction called rdtsc; the fallback uses a function named gettimeofday. In a few discussion threads like this one, some people said that clock_gettime is better than gettimeofday as far as precision goes. However, another source said that gettimeofday can have the same precision depending on hardware. Regardless of how well either of these functions performs, it's clear that they are not as accurate as rdtsc and do not fulfill the same purpose.

FreqTweak Conclusion

If I had more time for this project, I would look into coding an aarch64 version of the cycle counter for this software. Strictly speaking, this code isn't even needed for the program, as it's only used for debugging. The actual function of the program would not change much even if this function always returned 0 (of course, it would then be harder to debug the particular piece of code it is used in).

Project Conclusion

So I have established that neither package has any part of their source code that actually needs to be changed in order to compile on aarch64. Why haven’t they been compiled for aarch64 yet then? Well there is a reason for each one.

  • MapServer: For MapServer, there is actually a dependency missing before it can be compiled. This dependency is gdal. If this gets fixed, I assume it will only be a few more steps to compile it on aarch64. This isn't too bad, though, as it is only one dependency out of around 40.
  • FreqTweak: I ran into a different problem compiling FreqTweak on aarch64, beyond dependencies. First, when I tried to compile with the make command, weird random strings of "unsupported architecture aarch64" or something similar were put into the compiler options, causing random errors. So I removed them, and then I got an error saying that it couldn't find some files from a wx library. This error showed up a whole lot, and I wasn't able to find the cause of it. I was able to install all of the dependencies for this package, so in my mind this is the last hurdle to getting it to compile. I have wxWidgets installed successfully (I assume that is where the files are from), but it doesn't seem to pick them up like it does on x86_64.

I will keep playing with this new architecture on my own after this course, because it really is interesting. This project is actually the first time I have really used Linux for anything. I felt like I had to get a bit deeper into the whole Linux toolchain to get some stuff done, and I used the command line more than I ever have in my life to date (and I enjoyed it).

 


by sinomai at April 16, 2014 01:18 AM

April 15, 2014


Marcus Saad

Firefox OS, the web is the platform!

The event

On April 10th and 11th of 2014, the V Workshop Tocantinense de Sistemas de Informação took place in Palmas, Tocantins. The event was held by Faculdade Católica do Tocantins (FACTO) in partnership with other institutions such as UNITINS. The goal of the event was to stimulate entrepreneurship in the academic environment and to encourage the production of apps for mobile platforms.

Day One

Open to the general public, the event was divided into two days. For the opening of the first, we had Alfredo Beckert talking about “How to build a startup and endeavor inside the academic environment”, followed by my talk “Firefox OS, the web is the platform!” representing Mozilla’s community.

Photos: from right to left, Alan Rincon, Galileu Guarenghi, Alfredo Beckert and Marcus Saad; “How to build a startup and endeavor inside the academic environment”, by Alfredo Beckert; “Firefox OS, the web is the platform!”; conversation during our lecture.

Day Two

On day two, we had several mini courses happening concurrently, all about development for mobile platforms. Professor Silvano Malfatti offered an Introduction to Objective-C (iOS), while an Introduction to Android Programming was being offered by Luiz Carvalho at the same event.

I was invited to demonstrate and talk more about our newly released mobile operating system, Firefox OS, to a crowd that was already familiar with mobile development. That being said, the event acted as an entry point to some of the biggest mobile platforms available, demonstrating their strengths and weaknesses, and pleasing both the crowd looking for a more mature and stable platform and those looking for an exciting, innovative one filled with opportunities.

In our mini course about Firefox OS development, the content presented included an introduction to what is new in HTML5, CSS3 and JavaScript, along with a demonstration of B2G capabilities, design and development best practices, tools for development and debugging, frameworks such as Building Blocks and l10n.js, the WebAPIs, and finally what to expect from the Marketplace’s submission and review process.
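For context, a Firefox OS app is described by a small manifest.webapp file; a minimal sketch with purely illustrative values (the real apps from the workshop are linked further down) looks like this:

{
  "name": "Palmas Beach",
  "description": "Example manifest with illustrative values",
  "launch_path": "/index.html",
  "icons": { "128": "/img/icon-128.png" },
  "developer": { "name": "Workshop attendee" },
  "default_locale": "pt"
}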

Attendees were then encouraged to group up and develop a quick yet useful application, so that they could become aware of common mistakes that happen during the development phase and learn how to validate and submit their apps to the Marketplace.

The ideas presented varied from a BMI calculator and a GPS data collector to a tourist app that showcases the most visited beaches in Palmas.

Photos: “Firefox OS, the web is the platform!”; waiting for people to arrive; helping attendees; one of the apps created – Palmas Beach.

Day Three

After the event officially finished, Professor Silvano Malfatti invited me to go even further into Firefox OS with his post-graduate class at Faculdade Católica do Tocantins (FACTO). The group was formed by already experienced app developers with apps published in both the Apple Store and the Google Play Store. Their biggest excitement about the platform was the ease of development and the simplified process of getting an app live on the Marketplace. However, they raised some weaknesses, such as commercializing their apps: they want to be able to sell their apps, which at the time of the talk wasn’t possible yet. Another highly emphasized topic is that the majority of the WebAPIs they were interested in are reserved for certified apps. (This is a view I personally share with them; offering some of the most interesting APIs only to OEM and Mozilla apps can be a shot in the foot in the future.)

 

Talking about the devices already available to the public, during a class for post-graduate students.

 

Local Media Coverage

Local media coverage was extremely exciting, the event had been extensively promoted by the involved institutions, hence the large number of attendees from different cities and educational institutions. Links collected on the internet are:

 Metrics and Feedback

Because this event was supported by Mozilla through the Reps budget program, we had to define some metrics to be able to analyse whether or not it was a successful activity. Some of the metrics involve the number of attendees and the return to the Mozilla Brazil community. Let's go ahead and talk about each of them, so that we can measure how successful the event was.

Metric 1 – 200 attendees on the opening event

Success! The event opening was a big blast, bringing more attendees than we expected. We had around 250 people at UNITINS's auditorium, mixing people from several different backgrounds, just like Mozilla. As can be seen in the pictures, the crowd was excited and energetic, replying to questions and laughing at jokes.

Who believes that Firefox has been better in the past?

 

After my talk ended, we opened up some time for questions from the public, and a few in particular caught my attention.

Marcus, how do you make a living with FLOSS? Is it possible to live mainly from free software/open source? How can Mozilla pay its bills if it “sells” a product?

This is a very common question, and one that should never be ignored, given its importance. Unfortunately, I cannot make a living only contributing to open source projects (even though that is my dream [Mozilla, I'm unemployed!]). However, it is indeed possible. It is still very hard, but in some countries where the open source crowd has a stronger voice and money is being put into open source solutions, it does happen. There is noticeable growth in the Brazilian scene, with the south and southeast being the main areas of FLOSS projects.

The Mozilla Foundation is a not-for-profit organization that survives through a few arrangements. As many of you know, 90% of our income comes from a partnership between Google and Mozilla from which both benefit: Google pays Mozilla under a multi-million-dollar contract so that Mozilla keeps offering Google’s search engine as the main service in Firefox. Therefore, we can easily see that if Firefox’s popularity increases, usage of Google’s search engine will follow, making Google billions of dollars by tracking ingenuous users, feeding them targeted ads and stealing their privacy.

The rest of the money is raised through user donations, private initiative and companies that incorporate Mozilla’s products or services somehow.

Marcus, what is the share percentage that Mozilla keeps from Apps that are sold in the Marketplace?

Yet another very good question. We know that Apple and Google keep 30% of the price you sell your app for on their stores. While I was giving the talk, I wasn’t 100% sure of this information, so I ended up passing along incorrect data, and I hereby apologize for that mistake. There is a quite complicated table available here.

The percentage that Mozilla keeps is also 30%, unlike the 0% that I said before. Of those 30%, only between 5% and 7.5% stays with Mozilla; the rest goes to pay taxes and the administrative fees of our payment service provider. We use Bango as a payment intermediary. For more information on how to charge for an app, take a look at this post.

Metric 2 – To have 5 apps published to the marketplace in the next 20 days that follows the event

Partial success! While the mini course was being held, for some otherworldly reason (probably the local network), the simulator wasn't able to load JavaScript files. Despite that problem, we had 3 apps that were semi-finished, but we ended up not having a submission to the Marketplace that day.

Positive Feedback

  • Great debugging tools
  • Easy development process
  • Marketplace is simpler than its competitors
  • No need to learn new languages/technologies

Negative Feedback

  • Most of the interesting WebAPIs are reserved for certified apps
  • The Simulator is unstable, with a few bugs and problems that bother developers (for example, basic HTML isn't displayed correctly in v1.2 but works in v1.3)
  • Lack of motivation for paid apps (Payment WebAPI documentation is hard to find for developers that aren't Mozillians or don't know how to search for information in our several tools)
  • Marketplace doesn't have paid apps
  • There is no way to filter paid / free apps

APP 1 – GPS Coordinates

  • https://marketplace.firefox.com/app/gpscoordinates
  • https://github.com/paulocanedo/ffos-gpscoordinates

APP 2 – Palmas Beaches

  • Awaiting marketplace submission
  • https://github.com/cassiorox/PalmasBeach-FirefoxOS

Metric 3 – 20 people joining community-brazil mailling list or IRC

Unfortunately, this metric was not achieved, and I'll leave my personal insight as to why not. I believe that using IRC and mailing lists as tools for mass communication isn't as effective in places where people don't know the power of these tools or cannot value them properly. We had only 2 participants join #mozilla-br and show some interest.

It’s worth noting that most of the buzz happened around social medias such as Facebook and Twitter, were I registered 12 friendship requests and 4 followers. It has also been registered a few twits with #Mozilla and #FirefoxOS. Shamefully, social media are the most widely used communication channel for those who aren’t yet involved with free software / open source.

I would go even further and say that there is a lack of knowledge about what free software is and how to get involved. Moreover, attendees are extremely fond of the idea of commercializing their apps. All of that together is my judgment of why we failed this metric.

 

Scores and Misses as a Mozillian

 

We know that it’s impossible to have a flawless event, and this one was no different. The budget process started at full speed, but the uncertainty took some precious time that we did not have. (Although I deeply appreciate all the effort put into it.)

The lack of confirmation on the budget made me hurry with the presentation, giving me about 10 working days to produce everything I needed for the event. Unfortunately, I do have to work to make ends meet. Thankfully, I was able to complete everything and make good use of the money from our contributors!

As a lecturer, there is always room for improvement. I believe that I could have brought more content and better knowledge of how the simulator's guts work. Apparently, gnomes were on duty that night; as soon as I arrived at the hotel, everything worked perfectly.

Our swag request didn’t make it in time, which was a little sad, because everyone asked for stickers.

I’ve also learned to be a little more conservative with metrics. It’s always better to be surprised and exceed them than to fail badly; otherwise there is that feeling of failure, that you have not done everything that was possible.

The seed of open source has been spread over the north of Brazil. I hope that everyone who attended the event liked our “conversation”. In the name of Mozilla, I thank everyone who took part in this event, and I hope that everyone learned more about our mission. We are working for all of you, for an open web!

Special thanks to

 

  • Ricardo Panaggio and Bruno Villar, for recommending me for this event and always being around
  • Thatiane Rosa, for all the rides, talks and shared meals!
  • André Rincon, Silvano Malfatti and all the crew that organized the event. Thanks for the structure, for the good moments and for the professional experience acquired.
  • Everyone who attended, and those who made it happen directly or indirectly.
  • Konstantina Papadea, Ricardo Pontes, Ioana Chiorean, William Quiviger and everyone at Mozilla involved with this budget request. I have no words to thank everyone.

 

Final Considerations

 

Not everything was hard work. I’ll include some pictures for the sake of showing local culture.

Photos: Tucunaré; Graciosa Beach.

This post treats free software and open source as the same thing for simplicity. Learn more about the difference between Free Software and Open Source here.

 

 

by msaad at April 15, 2014 10:42 PM


Eugen (Jevenijs) Sterehov

Projects Update #4

GCC-XML
Let's get this one out of the way first. As a fellow by the name of David commented on my previous post (Projects Update #2), GCC-XML uses a 7-year-old version of the GCC compiler, which anyone might guess does not necessarily work with ARMv8 architecture.
With that being said, I will leave this project as is, since in order to get it to work with AArch64, I would have to backport the aforementioned 7-year-old GCC compiler. I could only wish for the ability to learn enough about compilers to port it over in just a few days.

Qlandkarte GT
Well, there is some good news. After being stumped trying to build qtwebkit on AArch64, I am still stuck trying to build qtwebkit on AArch64, but with some progress. After trying to get the source rpm by cloning from the Fedora repository, switching into different Fedora branches and using the
fedpkg srpm
command, then attempting to build the source rpm by using
rpmbuild --rebuild pkgname.src.rpm
I would get multiple errors, usually involving the build instructions, that would cascade deeper and deeper, stating that aarch64 is not supported and that instructions to build on aarch64 do not exist.
Then I tried getting the source rpm from the Fedora ARM repository and rebuilding it into an RPM file. Even though those should be the same repository, I was getting different errors, but the message they conveyed was very familiar. Still "no dice".
So I thought I would try a slightly different approach, this time, after cloning qtwebkit from the repository, I used the
fedpkg prep
command and got the source code with CMake files to build it. After making a separate build directory and running the CMake command on the source directory, I received another plethora of errors telling me about the lack of aarch64 support. So I went out looking to see whether anyone had made changes to the CMake instructions to include aarch64, and after some time I came across a patch. With nothing to lose, I went ahead and applied it.
Before running CMake again, I had to change the following line in CMakeLists.txt:
-- set(PORT "NOPORT" CACHE STRING "choose which WebKit port to build (one of ${ALL_PORTS})")
++ set(PORT Efl)
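As an aside, the same override can usually be passed to CMake on the command line as a cache variable instead of editing the file. A rough sketch (the build-directory name and source path are only illustrative):

mkdir ~/qtwebkit-build && cd ~/qtwebkit-build
# -DPORT=Efl sets the same PORT cache variable that the set() line above defines
cmake -DPORT=Efl ~/path/to/qtwebkit-source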
Now that I had those changes saved, it was time to try and run CMake again, and voilà (sort of)... a different error message, but this one looking more promising:
CMake Error at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:97 (message):
  Could NOT find Eina: Found unsuitable version ".", but required is at least
  "1.7" (found Eina.h_INCLUDE_DIR-NOTFOUND;eina_main.h_INCLUDE_DIR-NOTFOUND)
Looks like a bit of progress, with another dependency missing. Next step is to see if the package Eina builds and works properly on AArch64, then we can take another step forward. I will work on doing just that tonight and post my results tomorrow, along with my final submission for the SPO600 course.

Cheers!

by Eugen S. (noreply@blogger.com) at April 15, 2014 10:38 PM

Projects Update #3

Finally got around to posting one more after a few days of trying to figure out my last problem with the projects

Qlandkarte GT
As stated previously, I have been completely blocked by not being able to build qtwebkit on aarch64. Unfortunately I can't say there has been very much progress in this area; it has almost turned into a project of its own. I'm afraid I am still battling the same wall of errors I was getting last time, specifically with the MacroAssembler code that is generated at build time of the package. At the moment I cannot post much more, as I am still looking at the MacroAssembler.h file that was produced in the /root/rpmbuild/BUILD/webkit-qtwebkit-23/Source/JavaScriptCore/assembler/ directory. I hope I can take a step in the right direction very soon.

GCC-XML
As I explored deeper into my problem while building the project on aarch64 using cmake, I discovered a plethora of files specific to each individual architecture, one of which covered ARM. ARM being the closest they have to aarch64, I had a glance there and found that there is specific code for various ARM cores, as well as code for floating point arithmetic and much more... Oh boy. If I get a break from qtwebkit, I will try to just blatantly change all arm references to aarch64 and see if that will fly (a rough way to enumerate those spots is sketched below). Otherwise, I might be neck deep in trying to figure out how to cater this to aarch64.
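A quick, admittedly blunt way to enumerate the spots that would need attention is to grep the tree (run from the top of the gcc-xml source directory; the pattern will also catch false positives such as "alarm", so treat it only as a starting point):

# list files that mention ARM so they can be reviewed for aarch64
grep -ril "arm" . | sort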

Stay tuned.

by Eugen S. (noreply@blogger.com) at April 15, 2014 09:53 PM


Rick Eyre

Getting the number of lines of text in an Element

One of the biggest problems I faced when developing vtt.js is that a lot of the layout algorithm depends on being able to know the line height of the subtitle text. This boils down to being able to know the line height of the div within which the subtitle text sits. A lot of the time this is easy to get:

  var lineHeight = div.style.lineHeight;

But, what if you haven't set a line height? Then you would need to get the computed value of the line height:

  var lineHeight = window.getComputedStyle(div, null).getPropertyValue("line-height");

This works... some of the time. On some browsers, if you try to get the computed value of the line height and you haven't explicitly set a line height, the computed property will come back as the value "normal". That's helpful...
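One partial workaround (just a sketch of the idea, not what vtt.js actually does) is to detect the "normal" value and estimate from the computed font size, since "normal" typically works out to roughly 1.2 times the font size:

  var style = window.getComputedStyle(div, null),
      lineHeight = style.getPropertyValue("line-height");

  if (lineHeight === "normal") {
    // "normal" is usually about 1.2x the font size; this is only an estimate.
    lineHeight = parseFloat(style.getPropertyValue("font-size")) * 1.2;
  }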

After much searching I found out that if you use getClientRects on an inline element it will return a TextRectangle box for each line of text in the inline element. At that point you can either assume that each line has the same height and just use the height property of the first TextRectangle, or, to get a somewhat more accurate number, you can take the height of the inline element and divide it by the number of TextRectangles you have.

  var inlineElement = document.getElementById("myInlineElement"),
      textRectangles = inlineElement.getClientRects(),
      container = inlineElement.getBoundingClientRect(),
      lineHeight = container.height / textRectangles.length;

  alert("The average line height is: " + lineHeight);

This works really well for the amount of actual code you need to write. I've read about more accurate methods, but they take some serious coding, like walking through each character in the text and tracking when overflow happens.

Now back to my original question which was how to get the number of lines of text in a div (block level) element. The way I did this was to wrap my div which has my content in another div, and set the inner div's display property to inline. Then you can calculate the line height/number of lines of text of the inner div since it has inline display. This way you retain your contents block level layout while being able to figure out how many lines of text it is.

This is it all put together:

  <div>
    <div id="content" style="display:inline;">
      This is all my content in here. I wonder how many lines it is?
    </div>
  </div>
  var inlineElement = document.getElementById("content"),
      textRectangles = inlineElement.getClientRects(),
      container = inlineElement.getBoundingClientRect(),
      lineHeight = container.height / textRectangles.length;

  alert("The average line height is: " + lineHeight);

by Rick Eyre - (rick.eyre@hotmail.com) at April 15, 2014 07:45 PM


Marcus Saad

Firefox OS, the web is the platform!

 

The Event

 

On April 10 and 11, 2014, the V Workshop Tocantinense de Sistemas de Informação was held in the city of Palmas, Tocantins, organized by the Faculdade Católica do Tocantins (FACTO) in partnership with other institutions such as UNITINS. The event focused on encouraging development for mobile platforms and entrepreneurship in the university environment.

First day

 

The event was open to the general public and was split over two days. On the first day we had the opening talk, “Como construir uma Startup e empreender dentro do ambiente universitário” (“How to build a startup and be an entrepreneur inside the university environment”), given by Alfredo Beckert, and afterwards our community’s talk, “Firefox OS, a web é a plataforma!” (“Firefox OS, the web is the platform!”), given by me.

Photo captions: from right to left, Alan Rincon, Galileu Guarenghi, Alfredo Beckert and Marcus Saad; the talk “Como construir uma Startup e empreender dentro do ambiente universitário”, given by Alfredo Beckert; the “Firefox OS, the web is the platform” talk; conversation during the “Firefox OS, the web is the platform” talk.

 

Second day

 

On the second day we had multiple workshops happening simultaneously on a wide range of mobile platforms. Professor Silvano Malfatti offered an Introduction to Objective-C (iOS), we also had Introduction to Android Programming with Luiz Carvalho, and the Firefox OS development workshop, which was also given by me. With that, the event provided an entry point to the major mobile platforms and made it possible to show the strengths and weaknesses of each alternative, pleasing even the part of the audience that was looking for a more mature and stable platform such as iOS or Android.

In the Firefox OS development workshop, the content presented provided an introduction to the new features brought by HTML5, CSS3 and JavaScript, together with a demonstration of the platform’s capabilities, development best practices, development and debugging tools, building blocks, l10n.js, WebAPIs and the process of submitting apps to the Marketplace. Afterwards, the participants were encouraged to form groups and create a quick app so that the most common questions and possible problems with the Marketplace could be addressed.

The ideas ranged from apps to calculate BMI to an app for tourists to get to know the local beaches of Palmas.

Photo captions: Firefox OS, the web is the platform! Waiting for participants to arrive! Helping workshop participants; the Palmas Beach app helps tourists get to know the local beaches (still in development).

 

Third day

 

On the third day, at the invitation of Professor Silvano Malfatti, I was asked to talk a little more about Firefox OS to the students of the postgraduate program offered by the Faculdade Católica do Tocantins. The students, all already experienced developers with apps published on both the Apple Store and the Play Store, were pleased with how easy development is and with the simple process of submitting an app to the Marketplace. However, some points displeased the developers who are looking for profitable solutions and who would like to commercialize their apps.

 

Photo caption: conversation about the available devices during the class for postgraduate students.


 

Coverage in the local media

 

The local media coverage was great; the event was very well publicized and attracted people from several neighbouring cities and different institutions. The links published on the internet are:

 

Metrics and feedback

 

Some expectations of participation and return for the community were set for the event, since it was sponsored by Mozilla’s Reps Budget program. I will comment a bit on the metrics I established, how they were addressed and what the final result was after the event.

Metric 1 – 200 people at the opening of the event

Success! The opening of the event was excellent, bringing more than the expected number of attendees. We had around 250 people in the UNITINS auditorium, gathering people from the most varied courses and professional backgrounds, which is very common in our community. As you can see in the photos from the first day, the auditorium was packed and the audience was very engaged, responding to questions and jokes.

Photo caption: Who thinks Firefox used to be better?


 

After our talk ended, there was room for questions from the audience, and a few in particular caught my attention.

Marcus, how do you make a living with free software? Is it possible to live off free software alone? How does Mozilla sustain itself if it doesn’t “sell” a product?

This is a very frequent question, and one that is certainly extremely important. Unfortunately, I cannot support myself only through my contributions to free software projects (although that is my dream [Mozilla, I’m unemployed!]). Yes, it is possible to live off free software alone. It is still very hard, but it is easier in some countries where the culture of supporting this movement is stronger and the investment in open source solutions is higher. A steady growth is already noticeable in the Brazilian scene, with the South and Southeast regions concentrating most of the projects.

The Mozilla Foundation is a non-profit organization that survives through a few means. As many know, 90% of the income comes from a partnership between Mozilla and Google, where the benefit is mutual. Google pays Mozilla a multi-million dollar contract so that Mozilla keeps offering Google’s service as the default search engine in Firefox. So we can see that if Firefox grows in popularity, Google gains more users for its search engine, thereby making billions of dollars by tracking naive users and serving them targeted advertising.

The rest of the money is raised through user donations, the private sector and companies that incorporate the services provided by Mozilla.

Marcus, what percentage does Mozilla keep from app sales on the Marketplace?

Another great question. We know that Apple and Google keep 30% of the amount charged for an app in their respective stores. At the time of the talk I didn’t know the answer, and I even ended up giving out wrong information, for which I apologize here. There is a somewhat complicated table available here. The percentage charged is also 30%, unlike the 0% I had stated. Of those 30%, between 5% and 7.5% stay with Mozilla, and the rest goes to paying taxes and the administration fee of the payment-intermediation services. Mozilla uses Bango as its payment intermediary. For more information on how charging for an app works, take a look at this post.

Metric 2 – 5 apps published on the Marketplace within 20 days after the workshop

Partial success! On the day of the workshop, for some macabre reason (I suspect something related to the local network), the simulator was not able to find and serve the JavaScript files, which hindered development. We had 3 half-finished apps by the end of the allotted time, but no submissions to the Marketplace.

Positive feedback

  • Great debugging tools
  • Easy development
  • The Marketplace is simple compared to its competitors
  • The technologies involved are universal

Negative feedback

  • Most of the WebAPIs considered interesting are restricted to certified apps
  • The simulator is still very unstable, with many bugs and problems that get in the way of development (for example, basic HTML is not displayed correctly on v1.2 but works on v1.3)
  • Lack of incentive for paid apps (the payment WebAPI is not implemented, and documentation is hard to find for developers who are not Mozillians or who don’t know where to look for information in Mozilla’s tools).
  • The Marketplace doesn’t have paid apps yet
  • It’s not possible to filter between free / paid apps.

As the apps are finished, I will include the Marketplace links!

 

Metric 3 – 20 people joining the Brazilian community mailing list or IRC

Unfortunately this metric was not achieved, and I will leave my personal insight here. I believe that using IRC and mailing lists as our means of mass communication is not very effective in regions where people don’t yet know these tools or don’t know how to value them. We had only 2 participants who joined the #mozilla-br channel and showed some interest.

It’s worth noting that most of the buzz happened on the social networks Facebook and Twitter, where I registered 12 friend requests and 4 people started following me on Twitter. A few tweets with the Mozilla and Firefox OS hashtags were also registered. Unfortunately, social networks are the main form of interaction for the general public that doesn’t yet know about free software.

I also consider that the lack of knowledge about what free software is and how to take part, along with the strong interest in commercializing apps, was a decisive factor in failing to meet this metric.

 

Hits and misses as a Mozillian

 

Unfortunately not everything is perfect, and this event could be no different. The budget approval process was slow and went many days without updates (even though it was faster than usual, the process is still far from perfect).

The lack of confirmation of the budget meant that the development of the presentation was postponed, since unfortunately I have to work to pay the bills at the end of the month. Only 10 days before the event, it was confirmed that the budget request had been accepted and that the money would be available. Fortunately, it was possible to finish developing the content for the three presentations and make good use of this money from our contributors!

As a speaker, I believe I could have prepared better, bringing more content and knowledge about the simulator to solve the problems that came up during the workshop. Apparently, gnomes were on duty that day, and as soon as I got to the hotel, everything worked perfectly.

Our swag request for the event also didn’t arrive in time, which was a little sad because everyone wanted a Mozilla sticker!

I also learned that we should be a little more conservative with metrics; it’s better to estimate a low number and exceed it than to estimate a high one and fall short of what was imagined. Otherwise there is a feeling that the duty was not fulfilled, both on my part and for whoever reads this report during the budget analysis.

The seed of open source has been spread across the North region of Brazil. I hope that everyone who took part in the event enjoyed my “conversation”. On behalf of Mozilla, I thank everyone who was present, and I hope you learned a little more about our mission. We are working for all of you, for a free and open internet!

Acknowledgements

 

  • Ricardo Panaggio and Bruno Villar, for recommending me for this event and for always being great partners.
  • Thatiane Rosa, for all the rides and the conversations during lunches and outings.
  • André Rincon, Silvano Malfatti and everyone on the team that organized the event. Thank you for the event’s infrastructure, for the good moments and for the professional experience gained.
  • To the participants of the event and everyone who contributed directly or indirectly to making the 5th Workshop Tocantinense de Sistemas de Informação happen.
  • Konstantina Papadea, Ricardo Pontes, Ioana Chiorean, William Quiviger and everyone at Mozilla involved with this budget request. I have no words to thank you all.

 

Final considerations

 

Not everything was work. I’ll leave here a bit of the local culture through photos of places I visited and experiences gained between one event and the next!

Photo captions: Tucunaré! Praia da Graciosa

 

This post treats free software and open source as equivalent for the sake of simplicity. Learn more about the difference between free software and open source here.

 

 

by msaad at April 15, 2014 01:19 AM

April 14, 2014


Matt Jang

SPO Project Part 6: MapServer Conclusion

SPO Project Part 5 | SPO Project Part 7

Recap

In the MapServer package, there is only one file with inline assembly in it. This file is called agg_basics.h. In this file, the assembly is used to round floating point variables to integers.

Conclusion

This file/package does not actually need changing. Here are the reasons:

  • The code is never compiled: The parts with assembly will never be compiled. The one place with the inline assembly is wrapped in a #if defined(AGG_FISTP), and AGG_FISTP is never defined. Why is this though? My reasoning is that this code comes from a library, and this library, when placed into another project, provides multiple methods of rounding. This does not need to change because it is clearly by design that the OPTION of the fistp rounding method may be used (a simplified sketch of this guard pattern follows this list). Additionally, there would be no point in making such a meaningless change to the AGG files in MapServer just to remove code that isn’t compiled anyway.
  • The fallbacks are good: Ok, so with this software we are talking about rounding a float to an int. I did some tests where each method was called around one million times and no difference down to the millisecond could be detected (Part 3). The way that this software rounds is probably the least of its performance concerns. The fact that this software already uses the provided fall-backs should prove their effectiveness enough.
  • No changes need to be made to AGG either: As stated above, the assembly is presented as an optional way to round. To try to push the removal or changing of this assembly method of rounding would surely not be accepted. Again, the assembly is presented as an optional way to round if a developer so chooses so it should remain until the alternate way is useless or not needed.
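For reference, the shape of the guard described in the first point is roughly this (a simplified sketch, not the verbatim AGG source; the fallback shown is just a representative portable rounding, not necessarily AGG's exact one):

// Simplified sketch of the guard pattern in agg_basics.h
#if defined(AGG_FISTP)
// the fld/fistp inline-assembly rounding lives here and is only built
// when AGG_FISTP is explicitly defined at compile time
#else
inline int iround(double v)
{
    // portable fallback: round half away from zero
    return int(v < 0.0 ? v - 0.5 : v + 0.5);
}
#endif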

by sinomai at April 14, 2014 10:43 PM

SPO Project Part 5: Fistp?

SPO Project Part 4 | SPO Project Part 6

In part three of my SPO Project blog, I wrote a version of the rounding test that used two assembly instructions: fld and fistp. I ran the tests and these instructions ran in about the same time as the other versions, and the internet says that it rounds, but I don’t know how it rounds or why it is supposedly so much faster than the other types of casting. So, to solve these problems I decided to run the casting test program with output. This is the program I used:

#include <cstdio>  // header restored; the angle brackets were stripped by the blog

int iround(double x) {
    int t;
    __asm__ __volatile__ (
        "fld %1; fistp %0;"
        : "=m" (t)
        : "m" (x)
    );
    return t;
}

int main() {
    double x = 0.1;
    int y = 0;
    for (int i = 0; i < 30; i++) {
        y = iround(x);
        x += 0.1;
        printf("%.1f: %d\n", x, y);
    }
    return 0;
}

All this program does is test the rounding functions with a single digit of precision. So running this, I got the following results:

0.2: 0
0.3: 0
0.4: 0
0.5: 0
0.6: 0
0.7: 0
0.8: 32768
0.9: 0
1.0: 32768
1.1: 32768
1.2: 0
1.3: 0
1.4: 32768
1.5: 32768
1.6: 0
1.7: 0
1.8: 0
1.9: 32768
2.0: 32768
2.1: 0
2.2: 32768
2.3: 0
2.4: 32768
2.5: 0
2.6: 0
2.7: 32768
2.8: 0
2.9: 32768
3.0: 0
3.1: 0

That’s a weird output, so to make sure the rest of my program is correct I replaced the rounding function with one from another test. This is the new test program that I used to check the rest of my program:

#include <cstdio>  // header restored; the angle brackets were stripped by the blog

int iround(double x) {
    return int(x);
}

int main() {
    double x = 0.1;
    int y = 0;
    for (int i = 0; i < 30; i++) {
        y = iround(x);
        x += 0.1;
        printf("%.1f: %d\n", x, y);
    }
    return 0;
}

And my output is:

0.2: 0
0.3: 0
0.4: 0
0.5: 0
0.6: 0
0.7: 0
0.8: 0
0.9: 0
1.0: 0
1.1: 0
1.2: 1
1.3: 1
1.4: 1
1.5: 1
1.6: 1
1.7: 1
1.8: 1
1.9: 1
2.0: 1
2.1: 2
2.2: 2
2.3: 2
2.4: 2
2.5: 2
2.6: 2
2.7: 2
2.8: 2
2.9: 2
3.0: 2
3.1: 3

Ok, so there is clearly something wrong with me using this rounding method. It will compile, but the results I get are NOT what I want. I read something about fistp working with Microsoft compilers, so I decided to plug my code into Visual Studio on my desktop (with Windows). This is the program I made:

#include "stdafx.h"

using namespace System;

#pragma warning(push)
#pragma warning(disable : 4035)
inline int iround(double v) {
    int t;
    __asm fld   qword ptr [v]
    __asm fistp dword ptr [t]
    __asm mov eax, dword ptr [t]
}

int main(array<System::String ^> ^args)  // template argument restored (stripped by the blog)
{
    double x = 0.1;
    int y = 0;
    for (int i = 0; i < 30; i++) {
        y = iround(x);
        x += 0.1;
        Console::WriteLine("{0}: {1}", x, y);
    }
    Console::ReadKey();
    return 0;
}

I ran it and I got some real results:

0.2: 0
0.3: 0
0.4: 0
0.5: 0
0.6: 0
0.7: 1
0.8: 1
0.9: 1
1: 1
1.1: 1
1.2: 1
1.3: 1
1.4: 1
1.5: 1
1.6: 2
1.7: 2
1.8: 2
1.9: 2
2: 2
2.1: 2
2.2: 2
2.3: 2
2.4: 2
2.5: 2
2.6: 3
2.7: 3
2.8: 3
2.9: 3
3: 3
3.1: 3

So based on this I would think that this code will only work on Windows. But when I build it, does this code ever get compiled when I am on Linux? At least in MapServer, no. Like I said in my part 3 blog, the header file that contains this assembly code checks for some defined variables that will never be defined.

To look into it more, I investigated how fistp rounds. Looking at the QIfist compiler option documentation, it turns out it can actually round four ways. Depending on bits 10 and 11 of the x87 control word, it can round toward nearest, toward negative infinity, toward positive infinity, or toward 0.
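Out of curiosity, on the MSVC side those control-word bits can be inspected through _controlfp from float.h (a small sketch using the documented constants; this is a side note, not part of the MapServer analysis):

#include <float.h>
#include <stdio.h>

int main() {
    // _controlfp(0, 0) returns the current control word without changing it
    unsigned int cw = _controlfp(0, 0);
    unsigned int rc = cw & _MCW_RC;   // rounding-control field (bits 10 and 11)

    if (rc == _RC_NEAR) printf("round toward nearest\n");
    if (rc == _RC_DOWN) printf("round toward negative infinity\n");
    if (rc == _RC_UP)   printf("round toward positive infinity\n");
    if (rc == _RC_CHOP) printf("round toward zero\n");
    return 0;
}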

 


by sinomai at April 14, 2014 10:21 PM


Yoav Gurevich

The Last Ditch Effort, Pt. 1

After presenting the current state of my work on the Gnash package, I am currently back to solving the build dependency debacle before inserting the assembly translations into the package code. "yum-builddep gnash.spec" immediately proved unsuccessful, so after a few attempts at troubleshooting via google I went ahead and used the piped sed command combination generously given to me by Chris Tyler:

"yum install -y $(cat *spec | sed -n "s/^BuildRequires://p" | sed "s/>=.*//")"

Effectively, this stream editing command parsed through the gnash.spec file and searched for any and all build dependency packages written after the "BuildRequires:" string using the regular expressions embedded within, and finally redirected the output to the appropriate portion of the yum-install command.

As far as my research into the assembly code found so far goes, the affected files - "jemalloc.c" and "utility.h" - wrap their pre-processor blocks in the guards "#ifdef __i386__", "#ifdef __amd64__", and "#ifndef __MINGW32__" respectively. Unfortunately, with Google being of very little help in such niche specializations, I am currently awaiting word from the community on whether any of this logic needs to be reapplied for aarch64 - especially the memory allocation logic of the blocks in jemalloc.c, for example:

#ifdef __arm__
#  define QUANTUM_2POW_MIN     4
#  define SIZEOF_PTR_2POW         3
#  define NO_TLS
#endif
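If the community does confirm that an equivalent block is wanted, my first guess (pure speculation on my part, reusing the constants the allocator already defines; the exact values and whether NO_TLS still applies would need to be confirmed upstream) would be something along these lines:

#ifdef __aarch64__                 /* hypothetical block, not confirmed by upstream */
#  define QUANTUM_2POW_MIN     4   /* 16-byte minimum quantum, as on other 64-bit targets */
#  define SIZEOF_PTR_2POW         3   /* pointers are 8 bytes (2^3) on AArch64 */
#  define NO_TLS                   /* whether this is still needed is an open question */
#endif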

More to come in the next few days.

by Yoav Gurevich (noreply@blogger.com) at April 14, 2014 09:23 PM


Lukas Blakk (lsblakk)

Learn To Teach Programming – Software Carpentry

Today, post PyCon conference, I spent the entire day immersed in an incredibly dynamic and educational workshop by Software Carpentry, "Learn to Teach Programming".  I'm going to do a mix of dumping my notes in a play-by-play fashion with possible sidebars commenting on what I experienced personally, so that I have a record of this to look back on as I move forward with Ascend Project planning and execution.

Meet Your Neighbours

The event started off, as they always do, with a go-round of people introducing themselves in short form.  As we started taking turns our teacher, Greg Wilson, asked for the person who just spoke to tap the next person to speak before sitting down.  This proved to be our first of many small applications of the science behind learning and how it can play out in real life.  While it apparently takes a room of kindergarten children 3 reminders to do this extra step during intros, it took this room of ~25 adults 14 requests before we mostly started doing so without prompting from Greg.  By the way, during the intros I learned about Dames Making Games which I can now add to my mental list of awesome women-in-tech groups and if you’re reading this and are in Toronto, check them out!

Teaching Is Performance

It raises your adrenaline, brings out your nervousness, and it's something you need to work at. A few quick tips from Greg on preparing for your 'performance' as teacher: always bring cough drops, and figure out what your 'tell' is.  Like with poker, everyone has at least one thing they do when they are nervous.  I suspect for me it's likely that my 'tell' is talking fast and/or having trouble not smiling too much (at least in poker, it is).  This was our first introduction to how we should be reflective about our teaching – even go so far as to record yourself if you can't get honest feedback from people around you – so that you can spot these things about your manner and work on adjusting them to 'perform' teaching in a more confident and reliable manner.

Improv came up as a way to work on this where you can get feedback on how you perform and also learn to keep other people engaged.  I used to do improv when I was an awkward teenager and didn’t feel like I was a superstar at it but I wonder what it could be like now that I have more confidence.  I’ll be looking for classes in SF to try it out.  What’s there to lose?

Why Don’t We Teach In Teams?

Greg pointed out how teaching, unlike music and comedy, is such a solo activity.  Musicians typically build up their experience and skills by playing with others.  The best comedians by and large spent a significant amount of time in some sort of comedy troupe before striking out on their own as a stand-up or as major film stars.  Teachers though?  Often alone in their classrooms and if my partner is an example of the ‘norm’, definitely alone while grading and preparing lessons.  This is something worth exploring: what could teaching be like for the teacher if there was team teaching?  What could we do with more feedback, more often, and with someone helping us track measurable progress towards our goals as agents inspiring learning?  Finland has an excellent system of teacher feedback and peer/mentoring for their educators.  Teacher’s college is harder to get into there than medical school (not sure that’s a good thing, but it’s what Greg told us).

Key Points About Teaching & Learning

  • People have two kinds of memory layers – short and long term – and short term memory (which is what we are working with in classroom environments) can hold ~7 items +/- 2 so really we should aim for 5 in order to teach to our students’ capacity

 

  • We have to balance on/off time – we lose some time switching between tasks or concepts in the teaching but working with memory limitations as mentioned above, we must let people take breaks to reset & refresh

 

  • Avg person can take in info for about 45 minutes before their attention wanes from exhaustion.  For me, this is more like 30 minutes. Hearing this from Greg reminds me that I want to propose that all meetings I’m involved with at work move the default length to 30 minutes and that we have a set of rules for how to deal with ‘overage’.  Either email or mailing list post, etherpad, set up a follow-up meeting, or make a proposal and request feedback so that we are not taking an hour because we *have* an hour.

 

  • Apparently the military has a lot of research and effective solutions for human performance.  Greg mentioned being at a naval academy and the grad students he was lecturing to dropped into doing pushups when a bell sounded on the hour.  This sounds like a great practice for anyone trying to learn and be engaged with others – get your blood pumping and change your position.  Reminds me to get that automated rest-taking app running on my laptop again and to actually pay attention to it for a while instead of dismissing over and over.

 

  • Continuous ‘flow’ – oh that elusive state for programmers.  There was some sort of quote about coffee but I missed the first part, the gist was that when we are immersed in something and truly engaged we can override that 45 minute intake limitation from before but if we do more than pause (without switching contexts) we could end up breaking flow and it takes at least 5-10 minutes to get back into it. This is key for people who work in environments full of distractions and interruptions. I’ve been thinking a lot about this one lately as I’d like to work on breaking my very unproductive cycle of checking IRC and email in a loop as though I am event-driven.  I need to make times to get into ‘flow’ and do bigger tasks with more focus.

 

  • A sidebar of the distraction mention was the fact that, in programming, syntax can be the distraction. That is, errors in syntax.  When you get stuck trying to figure out where your semi-colon or indentation is off, you break out of 'flow'. In a language/framework like Scratch this is not possible, as the blocks cannot be dragged and dropped into any order that creates errors except in ways that are related to logic and program flow – worth stopping to think about (and it keeps you in your engagement 'flow')

 

  • There are roughly three types of minds out there to work with in teaching: a) Novice b) Competent c) Expert.  The Novice doesn't know what they don't know, so the most important thing to do when trying to teach a Novice is to make sure their mental model of the concept you are teaching is correct.  This became a lot of the focus in the rest of the day – methods of determining if our concept is getting across correctly.  The Expert is such because they have more connections between all the facts they know about the concept/skill, and so they can leap from point A to point J in one move where it takes a Competent mind all the dots in between – executed well, but with thought and intention – to complete them.  It is *as hard* to get Novices to become Competent as it is to get Experts to see the concept they are trying to teach as a Competent person does.  Think about something you might be an Expert at and see if you can tell what steps you assume other people will know.

 

  • Another key point about the Expert is the idea of reflection. Being able to reflect on your skill is huge for honing it.  An example would be the hockey skating workshop I went to, where they videotaped us skating our fastest. When I saw that video, I saw how knock-kneed I was and how my internal map of taking wide leg strokes did not actually look like that on the tape. I was a) horrified, but also b) reminded of how far I have to go and how much more work I need to do in order to reach a higher level of expertise, such as that reflected to me by the instructors.

Accepting Feedback and Critique

We spent some time talking about critique. In architecture, art, music, and many other disciplines there is a built-in system for critique.  It helps the student to build up their sense of self, to know their strengths and weaknesses.  We do not always have this in teaching.  In our workshop, Greg had people write down one piece of positive and one negative feedback on two sticky notes (yellow for positive, pink for negative) and he asked us to put them on a piece of paper at the front of the room before we headed out on our first break (just over an hour of instruction had occurred).  When we returned we discussed what the anonymous feedback had provided Greg with and what he could actually work on in the moment vs. what was useful for later.  He mentioned doing this, and letting it be anonymous, was a great way to build trust with your students. Also we talked about how to get better at accepting feedback, working with it, not letting it paralyze you or derail your lesson.

One of the key takeaways for me here was the idea that the most senior leader/teacher should model this for others.  Show that you can hear feedback, both good and negative (hopefully constructive), and be able to move forward without crumbling under the pressure.  While I’m nervous about feedback, I will do my best to ‘fake it till I make it’ on this point because it’s definitely more important to correct course and create a better experience for students than to be proud and lose their interest and especially, trust.

Concept Maps

Our next major concept was the concept map.  This is a way to help yourself understand what you are trying to teach. It’s also a way to check yourself for the 7 items +/- 2 factor. If you have more than 5 main concepts in the concept map, it’s time to evaluate it for what can be put aside for now or what can become the next lesson.  The concept map can also be shared with students as a way to make sure everyone is on the same page or at least starting with the same page.  Greg recommended handing out a printout of the concept map so that students could doodle and expand it in ways he might not have thought of.

We learned how the concept map should never be used for grading.  It’s mostly a tool for the teacher to know if they have managed to get across the mental model well enough for the novice to reflect back a matching map and feel comfortable moving on to the next concept. It’s also a way of preventing the “blank screen” where students can be frozen trying to come up with what to put down (in programming or in writing) and having a scaffolding there in the form of map, or hints, any form of guidance can basically jump start the student and hold their hand until they need less and less of it to self-start, self-direct, and truly *learn* autonomously.

We did an exercise where we drew up concept maps for how to teach a for loop.  This was my first time doing a concept map and it was hard.  Definitely will take practice and likely some more reading/looking at other concept maps to drive home the concept for myself.

Concept map explaining a for loop. This is an attempt to map out the concepts required to understand a for loop – note we went over 5 items

Key points from Greg:

  • Make your concept map look ‘cheap’ so that people aren’t afraid to give you honest feedback
  • Write and share maps with each other – try this with your team at work on a project you’re starting – you might see that others have a *very* different sense of what is being attempted
  • Try not to need things in your concept map that you will “explain later” – if you can’t explain it now you’re going to disrupt the ‘flow’ of maximizing the short term memory limits
  • Transfer your map into a list of bullet points as it will help you put the most important concepts first
  • Think of concept mapping like couples dances. You both want to be doing the same dance or there will be a lot of bruised shins :)

Sticky Notes as Invaluable Teaching Tool

We used sticky notes at several points in this workshop.  While we only had two colours today, Greg recommends three colours to be used as follows:

  • Green:  Students can put this up in a visible place when they have completed the exercise currently being done
  • Yellow: Students can put this up when they have a question.  Also this is a great tool for ensuring more participation in the classroom setting.  Some people talk more than others, there are definitely certain types of people who take up more space, and the deal with the yellow stickies was: You get two, when you ask a question put one aside.  Another question?  Put the other aside.  Now you have no more questions until EVERYONE in the class has used at least one of their yellow stickies.
  • Red:  Students can pop this up in a visible place when they need help on something.  This is great for two reasons: 1) the student can keep *trying* instead of worrying about holding a hand up and waiting for eye contact with a teacher and 2) the student can request help without drawing too much attention to themselves.  This is great for classes with people who might have learned it’s best not to speak up, ask questions, or draw attention to themselves out of fear and/or shame.

Know Your End Goal

This probably shouldn’t have *blown my mind* but it did.  It’s so obvious yet I’ve never once designed curriculum with this approach. You can bet that’s all changed now.  Here’s the key point:

DESIGN YOUR LESSON BY WRITING THE ‘EXAM’ FIRST

Ya.  It's maybe obvious.  You want to make sure the students leave knowing what you intended to teach them?  Well, figure out how you're going to measure that success *first*, then build your lesson up to that.  "They understand the for loop" is not enough.  Be specific.  Have a multiple choice question that tests the output of a for loop and gives 3 plausible answers and one right answer.  Use this to check if you are teaching well – their failure to choose the right answer is your failure to teach the concept correctly.  This doesn't have to be for actual grading (unless you want to grade yourself). Think of this like Test Driven Development for curriculum.  Teach to the goal.  You will develop lessons faster and more efficiently.  Your learners will appreciate it.  They can tell when they are learning vs. having a lecturer do a brain dump on them that goes nowhere in particular.  Backwards design works.  Greg's book plug related to this section: "Seeing Like a State".

Another tip?  Create one or more user profiles for your lesson.  In our workshop we created Dawn: 15 year old girl who is good at science and math, learning programming in a one-day workshop. Then we did an exercise in crafting a question that would confirm if we had successfully taught how functions work to her.

We learned about Allison Elliott Tew‘s work and about “Concept Inventory” which is a way to use common mistakes in mental modeling to create multiple choice questions where the incorrect answers can help you understand *how* someone has misunderstood the concept you are trying to teach.  Multiple choice is great because it’s quick to get you an assessment (teacher grading time).

Peer Instruction

Related to multiple-choice as test of understanding is Peer Instruction.  This is a method that uses a multiple choice question in a really interesting, and engaging fashion.

Developed by Eric Mazur in the 1990s, this method expects students to have done some pre-work on the material before coming to class so that the entirety of the lesson can be used to compare and correct conceptual maps and understanding of the material.  It goes like this (at least Greg's interpretation – it differs in Wikipedia as to how Eric designed it):

  1. Provide a multiple choice question based on the pre-work content.  Ensure 3 plausible answers and one correct
  2. Students select and *commit* to an answer (there is not yet software for this, though there are clickers) – you can also ask people to hold up the number of fingers for their choice and have classroom helpers count
  3. If everyone picks the right answer you can move on but otherwise you ask people to talk in groups with their neighbours to examine each other’s choices and what the correct answer might be and why.  This is great for having people explain their mental model/map
  4. Vote again and have students commit to the answer
  5. The instructor reveals the answer, as well as perhaps a single sentence explaining why
  6. Groups discuss again, this time they can explore their understanding with the correct answer alongside people who, likely, had the correct model

This teaching technique was proven in 1989 but is still widely unused (esp. in MOOCs). Greg told us that he can usually do about 10 of these types of questions in a 1 hour class.  We did an example of one in the workshop to test out the method and it was a lively exercise.  This was also an opportunity for Greg to help us notice how noise in the room helps a teacher determine when a good time is to check in, continue the lesson, or make sure people aren’t stuck.  Active, engaged learning is boisterous and noticeably relaxed.  Quiet can mean focus, and then as people complete the exercise you can hear some discussions start up as those who are done talk with each other about the exercise.  I look forward to getting a bit of expertise at this level of listening and was impressed by Greg’s skills in classroom energy level reading.

F*ck It, I’m Outta Here

I have several more pages of notes but it’s getting late and this is a long post. There’s one more part of the workshop that I’d like to write about:  The moment when you decided you didn’t want to learn something anymore.

This is a really great piece of advice for teachers.  Greg started by saying that he used to ask students what motivated them to learn, what great experience in learning they had so he could tap into that motivation as a teacher.  Now?  He asks people what DE-motivated them.  You get a lot out of people this way.  Ask someone (or think of your own experiences): “What was something you were curious about, working on, getting into, and what happened that made you say ‘f*ck it’ and drop it? If you could go back in time what would you change?”.

For my example I spoke about returning to gym class at 12 years of age after recovering for many months from a very physically traumatic incident where I was hit by a car while on my bike (15 bones broken, 6 months in a wheelchair).  Being immobilized *and* being a pre-teen caused me to put on a fair amount of weight and I was no longer very physically active or able.  I also had yet-to-be-diagnosed asthma.  Not only did I have to endure a gym class where those with natural talents were held up while the rest of us were discarded, but I also continued to fail tremendously at getting more than a "Participation" certificate (every other result got a very nice badge!) for the Canada Fitness Test.

My “F*ck it” moment was when I got so frustrated with never getting a badge that I stole someone’s gold badge when no one was watching.  I also ended up eschewing all sports and athletic pursuits for many years if there was any hint of tryouts or actual talent needed.  Years later, at 29, I taught myself how to run by using a couch-to-10K program that did repetitions of running and walking in order to build up endurance.  Not only did I succeed at that but I learned to *love* running and feeling healthier in my body.  If I could go back in time I would become a Physical Education teacher and make sure every kid in my class knew that it’s not about natural talent at anything. It’s about setting achievable goals for yourself and comparing your results against your OWN RESULTS.  Never mind some test, and other kids. We’re all very different but no one should be denied a sense of accomplishment.  It’s what keeps you coming back to learn & build on what you’ve learned.

Badges awarded to Canada Fitness Test participants. The coveted badges.

 

Now Go Read More: Keep Learning How to Teach

It was an amazing day.  I have more notes to transcribe for myself but I think I’ve managed to capture the major concepts I learned today that will all be invaluable in my work on Ascend and beyond. Greg is an experienced, passionate, driven teacher and his enthusiasm for *knowing* what works in education is contagious.  I want to be a better scientist and educator too. The Software Carpentry movement is picking up momentum.  Look for workshops, blog posts, and opportunities to participate in a town near you.   See their site for up to date information and also check out their materials page for additional resources.  I’ve got a few new books to read on the plane home tomorrow.

by Lukas at April 14, 2014 06:00 PM


Hesam Chobanlou

Porting analysis pt.4

This is a follow up to the previous post on my porting analysis project.

There is really not much to tell at this point. I’ve still not had the time to test gridengine to verify its functionality on aarch64. However, I have posted a patch on gridengine’s mailing list. I am sure that if anyone from the community looks at this patch, they will exercise a great amount of care to ensure it does not do more harm than good. I also think that if my approach is incorrect then surely someone will let me know, and perhaps guide me further along the right path.

In any case, the patch can be obtained here.

by Hesam Chobanlou (hesamc@hotmail.com) at April 14, 2014 09:57 AM


Alexander Snurnikov

Release #7 – requirejs

Good Sunday evening! I am coming to the end of the semester and to my final release in OSD. I would say that it was, and is, a great class that opens new horizons for students and possible opportunities for their future. I am so happy that I was, and still am, involved with such great organizations as CDOT and Mozilla.

The previous week was the hardest one for me in terms of time; all the assignment presentations and assignment deadlines were there. I also finished reading a great sci-fi book, “Calculating God”, last Sunday.

Let’s get back to my work over the last week.
I was working on implementing GA events for goggles – bug968291, continuing this bug from the week before last. It became a little bit harder than I expected, but that’s even better. It turned out that from this bug we started 3 more new bugs, which I have already taken:

Basically, all 3 bugs relate to the same problem/feature: updating goggles to use RequireJS wherever possible. Why 3 bugs? Because this way it’s easier to stay on track.
I started to work on bug995318 (my commit with progress) and have already made progress on it. The main idea of RequireJS is to manage scripts as modules and optimize their loading in the browser, which makes the app more efficient and faster.
I will provide a short guide on how it works.
The directory tree:

  • project-directory/
    • index.html
    • js/
      • main.js
      • require.js
      • browser-screen.js
      • sso-override.js
      • sub/
        • util.js
        • jquery.js

Sample index.html file

<!DOCTYPE html>
<html>
    <head>
        <title>My Project</title>
        <!-- data-main attribute tells require.js to load
             js/main.js after require.js loads. -->
        <script data-main="js/main" src="js/require.js"></script>
    </head>
    <body>
        <h1>My index.html file</h1>
        <p class="test">Some weird text here</p>
    </body>
</html>

main.js – the RequireJS configuration file

requirejs.config({
  baseUrl: '/js', // this dir will be used as the base for our js files
  paths: {       // do not specify file extension, it assumes that it is .js
    'jquery':           'sub/jquery',  
    'text':             '/bower/text/text',
    'localized':        '/bower/webmaker-i18n/localized',
    'languages':        '/bower/webmaker-language-picker/js/languages',
    'list':             '/bower/listjs/dist/list.min',
    'fuzzySearch':      '/bower/list.fuzzysearch.js/dist/list.fuzzysearch.min',
    'browser-screen':   'browser-screen',
    'sso-override':     'sso-override',
    'utils':            'sub/utils'
  }
});

// Now you specify which scripts will be loaded after require.js is fired in index.html
require(['browser-screen', 'sso-override', 'jquery', 'utils'], function() {
  // you can give more instructions here
});

Now, inside one of your js files, you do the following.
utils.js

require(['jquery'], function($) {
  $('.test').text("your new text");
});

With this example, you don’t need to include the jQuery script in the html file. This approach is really efficient in terms of coding, application speed and code management. Also, it is much easier to add more functionality in the future with such an application structure.
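For completeness, a module that other files depend on is usually written with define() rather than require(), so it can return a value; a minimal sketch (the module name and function here are made up for illustration):

// utils.js written as an AMD module (illustrative only)
define(['jquery'], function($) {
  return {
    setTestText: function(text) {
      $('.test').text(text);
    }
  };
});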

Also, during this week, I pushed a PR and merged a small css fix where I had to correct the alignment in the navigation area when the user is logged in.

I am looking forward to continuing my contributions to Mozilla and CDOT :) Cheers!


by admixdev at April 14, 2014 02:33 AM

April 13, 2014


Michael Veis

Progress on Final Release

This was a very busy week for me, with final assignments due in every class and exams starting on Friday. I only got a chance to work on one issue this week, and I also had some issues that had been reviewed and needed to be rebased.

The issue I was working on this week was issue 59, which involved going through all the makerstrap documentation and thimble example pages and changing the makerstrap links to the new hosted link. All the examples I had made in thimble could just be updated, but the ones made by other people had to be changed and updated. I also added a new section in the documentation called versions. This table shows the links a user can paste in to look at the different hosted versions of makerstrap.

I also had some bugs I had to rebase. The reason I needed to rebase was that there was quite a big change in makerstrap where it was decided that all the variables we had spread across the different files would be placed in a variables.less file. This change landed just a few days ago, so my few pull requests that hadn’t been reviewed yet got reviewed with an R+ but couldn’t be merged due to the changes. Therefore I had to pull in the new changes and rebase my branches on top of them so they could be merged into makerstrap (the usual sequence is sketched below).
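For anyone curious, the rebase itself is just the standard git dance, roughly like this (a sketch, assuming the shared repo is a remote called upstream and the target branch is master; the branch name is illustrative):

git fetch upstream
git checkout my-feature-branch
git rebase upstream/master      # replay my commits on top of the new variables.less change
# fix any conflicts, git add the files, then: git rebase --continue
git push -f origin my-feature-branch   # force-push, since rebase rewrites history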

That’s all for this week, but I hope to get a few more bugs done before next Friday.

 


by mlveis at April 13, 2014 08:56 PM

April 12, 2014


Matthew Grosvenor

Code Snippet

Since I won't have much time between today and Wednesday to get a lot of coding in (exams come first), I thought I'd at least post the bits of aarch64 assembler code that I've managed to template out on my machine.

It's not too much, and I think I might need to edit a line or two of it so far, but it's coming along. I guess this is like my rough draft:

#if defined(__aarch64__) || defined(__arm64__)

#ifndef __ARCH_AARCH64_ATOMIC__
#define __ARCH_AARCH64_ATOMIC__

#ifdef CONFIG_SMP
#define SMP_LOCK "lock ; "
#else
#define SMP_LOCK ""
#endif

typedef struct {volatile int counter;} atomic_t;

#define ATOMIC_INIT(i) { (i) }

#define atomic_read(v)  ((v)->counter)

#define atomic_set(v,i)  (((v)->counter) = (i))

static __inline__ void atomic_add(int i, atomic_t *v)
{
    __asm__ __volatile__(
           SMP_LOCK "add  x1, x0, x0"
            : "=m" (v->counter)
            : "ir" (i), "m" (v->counter));   /* this needs fixing/porting to proper aarch64 */
}   

static __inline__ void atomic_sub(int i, atomic_t *v)
{
    __asm__ __volatile__(
            SMP_LOCK "sub x2, x1"
            : "=m" (v->counter)
            : "ir" (i), "m" (v->counter));
}
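For what it's worth, the x86 lock prefix has no AArch64 equivalent; atomic read-modify-write on AArch64 is normally written as a load-exclusive/store-exclusive retry loop. A sketch of what atomic_add could look like, modelled on the style of the Linux kernel's arm64 atomics (untested here, so treat it as a starting point rather than a finished port):

static __inline__ void atomic_add(int i, atomic_t *v)
{
    unsigned long tmp;
    int result;

    __asm__ __volatile__(
        "1: ldxr   %w0, %2\n"        /* load-exclusive the counter           */
        "   add    %w0, %w0, %w3\n"  /* add the increment                    */
        "   stxr   %w1, %w0, %2\n"   /* store-exclusive; %w1 != 0 on failure */
        "   cbnz   %w1, 1b"          /* retry if another CPU raced us        */
        : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
        : "Ir" (i));
}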

by Gabbo (noreply@blogger.com) at April 12, 2014 04:59 PM

April 11, 2014


Rick Eyre

Hosting your JavaScript library builds for Bower

A while ago I blogged about the troubles of hosting a pre-built distribution of vtt.js for Bower. The issue was that there is a build step we have to do to get a distributable file that Bower can use. So we couldn't just point Bower at our repo and be done with it, as we weren't currently checking in the builds. I decided on hosting these builds in a separate repo instead of checking the builds into the main repo. However, this got troublesome after a while (as you might be able to imagine) since I was building and committing the Bower updates manually instead of making a script like I should have. It might be a good thing that I didn't end up automating it with a script, since we decided to switch to hosting the builds in the same repo as the source code.

The way I ended up solving this was to build a grunt task that utilizes a number of other tasks to build and commit the files while bumping our library version. This way we're not checking in new dist files with every little change to the code; dist files which won't even be available through Bower or node because they're not attached to a particular version. We only need to build and check in the dist files when we're ready to make a new release.

I called this grunt task release and it utilizes the grunt-contrib-concat, grunt-contrib-uglify, and grunt-bump modules.

  grunt.registerTask( "build", [ "uglify:dist", "concat:dist" ] );

  grunt.registerTask( "stage-dist", "Stage dist files.", function() {
    exec( "git add dist/*", this.async() );
  });

  grunt.registerTask("release", "Build the distributables and bump the version.", function(arg) {
    grunt.task.run( "build", "stage-dist", "bump:" + arg );
  });
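For those tasks to be available, the Gruntfile also has to load the plugins; roughly like this (a sketch, not the exact vtt.js Gruntfile):

  // load the plugins that provide uglify, concat, and bump
  grunt.loadNpmTasks( "grunt-contrib-uglify" );
  grunt.loadNpmTasks( "grunt-contrib-concat" );
  grunt.loadNpmTasks( "grunt-bump" );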

I've also separated out builds into dev builds and dist builds. This way, in the normal course of development we don't build dist files, which are tracked by git, and then have to worry about not committing those changes, which would otherwise be the case because our test suite needs to build the library in order to test it.

  grunt.registerTask( "build", [ "uglify:dist", "concat:dist" ] );
  grunt.registerTask( "dev-build", [ "uglify:dev", "concat:dev" ])
  grunt.registerTask( "default", [ "jshint", "dev-build" ]);

Then when we're ready to make a new release with a new dist we would just run:

  grunt release:patch // Or major or minor if we want to.

by Rick Eyre - (rick.eyre@hotmail.com) at April 11, 2014 09:35 PM


Matthew Grosvenor

Looking before I leap

In the interest of getting anything done, as communication has slowed between myself and the devs, I'm going to attempt something outrageous, which is this.
I will at least begin transcribing the x86/i386 assembly code into an aarch64 version, without implementing it directly into the program itself, as this is still likely a bit far off in terms of viability.

First the atomic.h, and then down the rabbit hole I go for any other assembly relating to it (of which there is some).

More on this, with samples of the code / the entirety of what I write, to come.

by Gabbo (noreply@blogger.com) at April 11, 2014 03:42 PM

April 09, 2014


Nick Kemp

Groonga Testing Phase

Introduction

Last week I blogged about the issues that I was having while trying to run Groonga's test script. My issues pretty much boiled down to getting the right version of ruby installed and not having cutter installed.

Updated Script

I also posted a script that would install everything for me because I had to do it so many times. I have since updated the script to log its progress and to test whether commands ran successfully. I also added some dependencies that I needed but hadn't included before. Here's the new version:

#!/bin/bash
top_dir=$(pwd)
logfile=$top_dir/logfile.txt

#testing if the logfile exists
if [ -e $logfile ]; then
  echo Deleting old log file
  rm -f $logfile
fi

touch $logfile

echo Installing dependencies for ruby
yum groupinstall "Development Tools" -y
yum install autoconf gdbm-devel ncurses-devel libdb-devel libffi-devel openssl-devel libyaml-devel readline-devel tk-devel procps dtrace git cmake cutter* -y
echo Changing directory to top directory
cd $top_dir

#testing for older version of the tarball
if [ -e ruby-1.9.3-p545.tar.gz ]; then
  echo deleting any old instances of ruby tarball
  rm -f ruby-1.9.3-p545.tar.gz
fi

#testing for older version of the ruby
if [ -e ruby-1.9.3-p545 ]; then
  cd ruby-1.9.3-p545
  echo Removing older version of ruby
  make clean
  echo deleting old repo
  cd $top_dir
  rm -rf ruby-1.9.3-p545
fi

if [ -e /usr/local/bin/ruby ]; then
  echo Deleting /usr/local/bin/ruby
  rm -f /usr/local/bin/ruby
fi

if [ -e /usr/local/bin/gem ]; then
  echo Deleting /usr/local/bin/gem
  rm -f /usr/local/bin/gem
fi

if [ -e /usr/local/bin/erb ]; then
  echo Deleting /usr/local/bin/erb
  rm -f /usr/local/bin/erb
fi

if [ -e /usr/local/bin/rake ]; then
  echo Deleting /usr/local/bin/rake
  rm -f /usr/local/bin/rake
fi

if [ -e /usr/local/bin/rdoc ]; then
  echo Deleting /usr/local/bin/rdoc
  rm -f /usr/local/bin/rdoc
fi

if [ -e /usr/local/bin/testrb ]; then
  echo Deleting /usr/local/bin/testrb
  rm -f /usr/local/bin/testrb
fi

if [ -e /usr/local/bin/ri ]; then
  echo Deleting /usr/local/bin/ri
  rm -f /usr/local/bin/ri
fi

if [ -e /usr/local/bin/irb ]; then
  echo Deleting /usr/local/bin/irb
  rm -f /usr/local/bin/irb
fi

#getting ruby's tarball
echo "Getting ruby's tarball"
wget http://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-p545.tar.gz

#extracting ruby tarball
echo "Extracting ruby's tarball"
tar xvzf ruby-1.9.3-p545.tar.gz

echo Entering to source directory
cd ruby-1.9.3-p545

echo Configuring ruby
if ./configure --build=aarch64-unknown-linux; then
  echo "successfully configured ruby" >> $logfile
else
  echo "Couldn't configure ruby. Exiting…" >> $logfile
  exit 1
fi

echo Building ruby
if make; then
  echo "Successfully built ruby" >> $logfile
else
  echo "Couldn't build ruby. Exiting…" >> $logfile
  exit 1
fi

echo installing ruby
if make install; then
  echo "Successfully installed ruby" >> $logfile
else
  echo "Couldn't install ruby. Exiting…" >> $logfile
  exit 1
fi

if [ -e /usr/local/bin/ruby ]; then
  echo "***Successful install of ruby***"
else
  echo "***Ruby not installed. Exiting***"
  exit 1
fi

echo "Installing required gems"
/usr/local/bin/gem install yajl-ruby msgpack test-unit test-unit-rr test-unit-notify

echo "Installing groonga's dependencies"
yum install mecab-devel zlib-devel lzo-devel msgpack-devel zeromq-devel libevent-devel python2-devel php-devel libedit-devel pcre-devel systemd -y

echo changing directories
cd $top_dir

if [ -e groonga ]; then
  echo deleting any old instances of groonga
  cd groonga
  make clean
  cd $top_dir
  rm -rf groonga
fi

if [ -e /usr/local/bin/groonga ]; then
  echo Deleting /usr/local/bin/groonga
  rm -f /usr/local/bin/groonga
fi

if [ -e /usr/local/bin/groonga-benchmark ]; then
  echo Deleting /usr/local/bin/groonga-benchmark
  rm -f /usr/local/bin/groonga-benchmark
fi

if [ -e /usr/local/bin/groonga-suggest-create-dataset ]; then
  echo Deleting /usr/local/bin/groonga-suggest-create-dataset
  rm -f /usr/local/bin/groonga-suggest-create-dataset
fi

if [ -e /usr/local/bin/groonga-suggest-httpd ]; then
  echo Deleting /usr/local/bin/groonga-suggest-httpd
  rm -f /usr/local/bin/groonga-suggest-httpd
fi

if [ -e /usr/local/bin/groonga-suggest-learner ]; then
  echo Deleting /usr/local/bin/groonga-suggest-learner
  rm -f /usr/local/bin/groonga-suggest-learner
fi

#cloning groonga
echo cloning groonga repository
if git clone https://github.com/nrkemper/groonga; then
  echo "Successfully cloned groonga" >> $logfile
else
  echo "Couldn't clone groonga. Exiting…" >> $logfile
  exit 1
fi

echo "changing directories into groonga's top directory"
cd groonga

#running groonga/autogen.sh
echo running autogen.sh
if ./autogen.sh; then
  echo "autogen.sh ran successfully" >> $logfile
else
  echo "autogen.sh failed. Exiting…" >> $logfile
  exit 1
fi

#configuring groonga
echo configuring groonga
if ./configure --with-ruby19=/usr/local/bin/ruby --build=aarch64-unknown-linux; then
  echo "configured groonga successfully" >> $logfile
else
  echo "Failed to configure groonga. Exiting…" >> $logfile
  exit 1
fi
 
#building groonga
echo building groonga
if make; then
  echo "Built groonga successfully" >> $logfile
else
  echo "Failed to build groonga. Exiting…" >> $logfile
  exit 1
fi

#installing groonga
echo installing groonga
if make install; then
  echo "Successfully installed groonga" >> $logfile
else
  echo "Failed to install groonga. Exiting…" >> $logfile
  exit 1
fi

if [ -e /usr/local/bin/groonga ]; then
  echo ***Successfully installed groonga***
else
  echo **Groonga did not install successfully**
fi

Progress

I have made a lot of progress on testing. I FINALLY got Groonga correctly configured with Cutter and Ruby and I was able to run the test suite. I figured it would be better to run the test suite on x86 first to see if my code broke Groonga in any way. Somehow the code failed every test. That didn’t seem possible, so I cloned their repository again, this time without adding my code. I rebuilt Groonga and ran the test suite again and it STILL failed every test. I figured that I was just using an unstable release, so I used fedpkg to get the version in the Fedora repo, the one they release for production. I built it and STILL it failed ALL the tests. Either I did something wrong when building, or they released code that didn’t pass their own test suite. I feel as though both are a possibility. Either way, I emailed the community to see what they think of my issue. Hopefully they have a solution or a direction that I should be going in, because I am at a loss. If it doesn’t pass the tests on x86, how can I be sure that it passes on ARM?

Conclusion

Things are going well, but I am at the end of my rope lol. I have had my code written for a month or so now. My only issue has been this test suite. Once I get my code to pass the tests I can submit a patch file to the community and be done with this. In the meantime I think I’m going to mess around with gprof to see how much my code is actually being used.

April 09, 2014 03:15 AM

April 08, 2014


Michael Stiver-Balla

Month in review 3 - CLisp

Last minute update. I had hoped to get this up much earlier, but sadly I didn't get much farther with CLisp. What I have done is built it on x86-64, successfully run the tests, and played around with the environment so I knew how the process worked. I then tried (and failed) to build on aarch64, so I put many hours into getting CLisp to build in this environment. This meant updating config.guess/config.sub so it would recognize aarch64, and updating many CLisp header files to recognize aarch64. This work managed to allow me to build on aarch64; however, when I try to run it, it throws a segmentation fault. I hit this wall a little while ago and was hoping to get past it, or at least make some progress. I figure it is an issue with aarch64 and the data type sizes, which means I've been looking at having aarch64 target the proper memory sizes for the system. Sadly I haven't succeeded yet, and while I have reached out to others more knowledgeable in the CLisp community, I can't claim much success with this project at the moment.

April 08, 2014 12:59 PM


Hua Zhong

Learning Porting to Aarch64: Fossil(2)

Learning Porting to Aarch64: fossil

These days I am learning about porting software to Aarch64, and Fossil is one package that cannot be built in an aarch64 environment.

Fossil is a distributed version control system like Git and Mercurial. Fossil also supports distributed bug tracking, a distributed wiki, and a distributed blog mechanism, all in a single integrated package.

I use the Foundation Model as the virtual aarch64 environment and the rpmbuild tools to build the software. The OS is Fedora 19.

1. Install all the needed tools for rpmbuild:
  • "Fedora Packager"
  • rpmdevtools
  • rpmlint
  • yum-utils
2. Download source

    fedpkg clone -a fossil
    cd fossil
    fedpkg srpm

3. check dependencies

    yum-builddep *.rpm (under the fossil directory)

4. preparation for rpmbuild

    rpm -i *.rpm (same directory as above)

5. build it!

    cd ~/rpmbuild/SPECS/
    rpmbuild -ba fossil.spec

Issue: build error, because autosetup's config.guess file cannot recognize the aarch64 machine.

Then I checked config.guess and found that the file's last modified date is 2010-09-24. I went to the internet and found that the latest version was made on 2014-03-23; I checked the script and found that it supports aarch64.
Here is the link:
http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD
Then I replaced the config.guess file and built again.

The build succeeds:

Wrote: /root/rpmbuild/SRPMS/fossil-1.28-1.20140127173344.fc19.src.rpm
Wrote: /root/rpmbuild/RPMS/aarch64/fossil-1.28-1.20140127173344.fc19.aarch64.rpm
Wrote: /root/rpmbuild/RPMS/aarch64/fossil-doc-1.28-1.20140127173344.fc19.aarch64.rpm
Wrote: /root/rpmbuild/RPMS/aarch64/fossil-debuginfo-1.28-1.20140127173344.fc19.aarch64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.NTvRE9
+ umask 022
+ cd /root/rpmbuild/BUILD
+ cd fossil-src-20140127173344
+ /usr/bin/rm -fr /root/rpmbuild/BUILDROOT/fossil-1.28-1.20140127173344.fc19.aarch64
+ exit 0

Then it is time to take a look at the assembly code in the file to see if I can do something.

There is only one spot with assembly:

#define SHA_ROT(op, x, k) \
        ({ unsigned int y; asm(op " %1,%0" : "=r" (y) : "I" (k), "0" (x)); y; })
#define rol(x,k) SHA_ROT("roll", x, k)
#define ror(x,k) SHA_ROT("rorl", x, k)

#else
/* Generic C equivalent */
#define SHA_ROT(x,l,r) ((x) << (l) | (x) >> (r))
#define rol(x,k) SHA_ROT(x,k,32-(k))
#define ror(x,k) SHA_ROT(x,32-(k),k)
#endif


#define blk0le(i) (block[i] = (ror(block[i],8)&0xFF00FF00) \
    |(rol(block[i],8)&0x00FF00FF))
#define blk0be(i) block[i]
#define blk(i) (block[i&15] = rol(block[(i+13)&15]^block[(i+8)&15] \
    ^block[(i+2)&15]^block[i&15],1))

Obviously, it is not optimized for aarch64, so on aarch64 it will fall back to the generic C code "((x) << (l) | (x) >> (r))" for this part.

aarch64 only supports ror (rotate right), not rol. We can try writing rotate asm code for aarch64, run it, and compare it against the C version to see which is faster.

Hua


by hua zhong (noreply@blogger.com) at April 08, 2014 12:21 PM


Michael Stiver-Balla

Month in review 2 - Chicken

My venture into the CHICKEN scheme compiler has been mildly rocky. As mentioned in my previous post, I started a little late and I had problems building in an Arm64 emulated environment. For some background, some architectures have 'hacks', that is, assembly files targeting a specific architecture. For example, when I built on my x86-64 machine, the file apply-hack.x86-64.S was included, and with it, the function _C_apply_hack was used instead of a C version of the same function. At first I was trying a build, and I was getting a build error telling me it couldn't find the file apply-hack.aarch64.S. The idea was that the C fallbacks would be used, but the build system was looking for aarch64 assembly files.

My problem came about because I did a build first on x86, cleaned the files, then transferred the sources over to build on aarch64. There must be something up with the 'make clean' command, because it didn't clean the sources perfectly. So when I built in the emulator, I was getting the above build errors. Initially, I assumed this meant that I needed to do some porting, so I spent time learning Chicken's build process and the sources. Since then, it was politely pointed out to me that getting a fresh copy of the project sources should fix this problem. And it did: I was able to build in the aarch64 environment with no problems.

However, a new problem came from this. I ran the provided tests/benchmarks on the built Arm64 binaries, and the tests do indeed fail for the Arm64 version. Okay, my efforts haven't been entirely wasted (hopefully). Specifically, the tests are complaining about a rounding error in one of the asserts. I've contacted the developers about the issue (asking whether it is a problem with running on aarch64 specifically, or a common problem also present on other supported platforms). Who knows, but it does mean there is more work/investigation to be done.

For context:

===================================== library tests ...

Error: assertion failed: (inexact= 43.0 (fpround 42.5))

        Call history:

        <syntax>          (assert (inexact= 43.0 (fpround 42.5)))
        <syntax>          (##core#if (##core#check (inexact= 43.0 (fpround 42.5))) (##core#undefined) (##sys#error "assertion ...
        <syntax>          (##core#check (inexact= 43.0 (fpround 42.5)))
        <syntax>          (inexact= 43.0 (fpround 42.5))
        <syntax>          (fpround 42.5)
        <syntax>          (##core#undefined)
        <syntax>          (##sys#error "assertion failed" (##core#quote (inexact = 43.0 (fpround 42.5))))
        <syntax>          (##core#quote (inexact= 43.0 (fpround 42.5)))
        <eval>    (inexact= 43.0 (fpround 42.5))
        <eval>    (fpround 42.5)
        <eval>    [inexact=] (< (abs (- 1 (abs (/ a b)))) 1e-10)
        <eval>    [inexact=] (abs (- 1 (abs (/ a b))))
        <eval>    [inexact=] (- 1 (abs (/ a b)))
        <eval>    [inexact=] (abs (/ a b))
        <eval>    [inexact=] (/ a b)
        <eval>    (##sys#error "assertion failed" (##core#quote (inexact= 43.0 (fpround 42.5))))
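
One speculative illustration of how a halfway case like 42.5 can legitimately round either way (my own example, not taken from the CHICKEN sources): if fpround ends up behaving like C's rint(), the default round-to-nearest-even mode gives 42.0, whereas round() gives 43.0, which is what the assert expects. Compile with -lm.

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* rint() honours the current rounding mode (round-to-nearest-even by
       default), so the halfway value 42.5 rounds to the even neighbour 42.0. */
    printf("rint(42.5)  = %.1f\n", rint(42.5));

    /* round() rounds halfway cases away from zero, giving 43.0. */
    printf("round(42.5) = %.1f\n", round(42.5));

    return 0;
}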

April 08, 2014 03:51 AM

April 07, 2014


Hesam Chobanlou

Porting analysis pt.3

This is a follow-up post to part 2 of my porting analysis project. At this time I have placed a hold on porting kde-workspace and amtueal.

 

For amtueal, I had contacted the community to find out if the project was still being maintained. I received a reply from a member of the Red Hat community who had knowledge of the project and who informed me that it has long since been replaced. You can read the post here.

 

I've since picked a new project, gridengine. Below is a bit of info on the project and my progress so far.

 

Gridengine is a resource management system used to enable high-availability and scalable application deployments. One of Gridengine's prominent features is its ability to divide tasks efficiently amongst its cluster nodes, as well as its ability to replicate nodes to enable fault tolerance. Gridengine was originally developed by Sun Microsystems but is now maintained by an open-source community.

 

My goal with this project has been to compile it for the ARMv8 architecture on Fedora Linux. So far I've had some success. I owe this success mostly to the existing RPMs that have been packaged for Fedora 20. The .spec file seems to take care of the many errors I encountered trying to compile this package manually; however, a bit of intervention was required to complete the build.

 

In this post I will describe the few steps that were necessary to build Gridengine on Fedora 20 for aarch64.

 

Building on aarch64

The first step is to grab the source RPM from an available repository:

yumdownloader --source gridengine

 

When the command completes, you'll find in your current working directory a file with a name similar to the one below:

gridengine-2011.11p1-15.fc19.src.rpm

 

The next step is to inflate the source files into the RPM build tree:

rpm -i gridengine-2011.11p1-15.fc19.src.rpm

 

Then, switch over to the RPM build tree's SPEC directory:

cd ~/rpmbuild/SPECS

 

Then try to build gridengine using the gridengine.spec file recipe. However, you may be presented with a problem:

rpmbuild -ba gridengine.spec

 

Gridengine provides a script called aimk, which stands for Architecture Independent Make. This script assists with identifying the system's architecture so that gridengine can be compiled for the host system. If you receive an error like the one below, continue reading:

+ ./aimk -only-depend 
Unsupported architecture UNSUPPORTED-linux-aarch64

 

Digging deeper, it turns out that aimk calls another script located under source/dist/util/arch. Examining this file, I found that gridengine may already have support for armv8; however, the string returned from my system was identified as aarch64. To resolve the issue, add the following lines into the switch statement under Linux) in the source/dist/util/arch file:

Linux)
...
aarch64)
lxmachine=arm64
;;
...

Note: ignore the ellipses and the Linux) portion; you should find the Linux) portion already in the file, and the ellipses indicate that there is other code there.

 

After the additions, attempt to run the build process again:

rpmbuild -ba gridengine.spec

 

Things should hopefully go further this time. You might be presented with an error that the linker ld is unable to find -ljvm. To fix the issue, run the following command to create a symbolic link to libjvm.so in one of the folders where ld expects to find the jvm library.

ln -s /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.25.aarch64/jre/lib/aarch64/server/libjvm.so /usr/lib64/libjvm.so

Note: to find out where ld expects to find a library, you can execute ld -ljvm and you will be presented with a list of folders that ld will traverse.

 

The final error I encountered was when gridengine attempted to compile qmake. It turned out that the config.guess and config.sub files were outdated for this package. To get around the issue, I added the following lines before the %build section in the gridengine.spec file.

wget -O ~/rpmbuild/BUILD/GE2011.11p1/source/3rdparty/qmake/config/config.guess 'http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD' 
wget -O ~/rpmbuild/BUILD/GE2011.11p1/source/3rdparty/qmake/config/config.sub 'http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub;hb=HEAD'

 

I would advise that you make sure the output of these two commands goes to the correct directory. Running the build process once again, all the steps completed successfully.

 

Installation

Switch over to ~/rpmbuild/RPMS/aarch64. There you'll find several files which you can use to install gridengine.

 

To install an RPM run:

rpm -i < nameofpackage.rpm >

 

You'll find that the RPMs will also require dependencies to install; most are easy to acquire. For me, gridengine-2011.11p1-15.fc19.aarch64.rpm required perl(Env). I acquired this dependency by installing the following package:

yum install perl-core.aarch64

 

Next few steps

We are now left with testing Gridengine on an aarch64 machine to ensure it works correctly. Unfortunately, I have not had the time to perform this test, but I will as soon as possible. It may very well be that core code modifications will be necessary to get Gridengine fully ported to aarch64. Finding armv8 in the arch file is not enough to conclude that it has been fully ported; of course, testing the application will tell.

 

I will follow-up with another post on testing gridengine soon.

by Hesam Chobanlou (hesamc@hotmail.com) at April 07, 2014 08:10 PM


Michael Veis

DPS911 Presentation 3

We had a lot of presentations in class this week and there was no time for me to present so I am going to make a summary of what I was going to say in my presentation.

Slide 2:
I have continued my work with makerstrap, and as you all know, makerstrap is a Bootstrap theme based on the Webmaker style guide.

Slide 3:
The three issues I was going to present on were issue 37, which was adding a flexbox mixin to makerstrap; issue 38, which involved fixing the form focus states; and issue 3, which was adding gzip compression to the grunt build task.

Slide 4:
Issue 37 allowed me to work with an existing flexbox module and integrate it into makerstrap. To do this I had to set up the makerstrap.less file to recognize the flexbox module and then create a new section in the documentation called “LESS Mixins”, where I explained the different flexbox classes that could be used.

Slide 5:
Flexbox is a layout mode for arranging elements. Basically it allows elements on the webpage to react appropriately to different screen sizes.

Slide 6:
You can really see how useful flexbox is from the picture in this slide. The image on the left shows the layout on a desktop site. Then, as a user resizes the browser's width and height, the elements become smaller to reflect the new dimensions. If the user makes it really small, it will optimize itself for a tablet view. This can be seen more clearly from the brief code samples on slides 7 and 8.

Slide 7:
From this code sample, which highlights the main area and the article section, we can see that the display is set to flex. Then flex-flow is set to row, which handles the different rows in the layout. In the main article section we have flex, which tells that section to be 60% by default, and the order is 2 because it is in the middle. Based on that we can now resize the browser and see the change in the layout.

Slide 8:
In this slide we can see that a media query was set up. The flex-flow and flex-direction were set to column, and the main purpose of this media query is to get a better resize when the browser is made really small. This was shown on the right side (the picture) on slide 6.

Slide 9:
This slide just shows all the classes that have been implemented into makerstrap that can be used.

Slide 10:
This is an image that shows how this flexbox module was implemented into makerstrap. After the flexbox.less file was implemented, I needed to go into makerstrap.less and add "@import 'custom-components/flexbox';". Now it can be used when using makerstrap.

Slide 11:
Issue 38 involved fixing the form focus states because the current one was too blurry. To fix this issue I had to make a modification to makerstrap.less: I needed to remove the box-shadow from the form focus states.

Slide 12:
As you can see from the image in the slide there is a blur around the forms when you click on them. This was what needed to be removed.

Slide 13:
As you can see from this image, the intended look was for the outline in the specific color to stay, but for the blur(box-shadow) to be removed.

Slide 14:
From the image in the slide you can see that I had to add .form-control:focus { box-shadow: none }. Doing this removed the box-shadow from the default form input. However, there were also some error inputs that needed the same treatment, and to simplify the code I was able to nest this and remove them all at once. This was only a small modification, so I was able to make the change in makerstrap.less instead of a separate file.

Slide 15:
For issue 3 I needed to make some changes to the gruntfile.js. Right now all of makerstrap's hosted files are being uploaded directly to S3, and because of this we wanted to compress the CSS files as much as possible.

Slide 16:
So you might be wondering what grunt.js is. Well, it is a task-based command line tool written in JavaScript on top of Node.js. The nice thing about it is that it allows you to write complicated tasks and use them in your project. The basics of a grunt file are a “wrapper function”, then project and task configuration, then loading of grunt plugins and tasks, and finally any custom tasks if there are any.

Slide 17:
From the image in the slide you can see that I needed to add compress and the mode “gzip”. Then I needed to specify to take the CSS files in the dist folder and put them back in the same folder, but zipped. After that I needed to add ‘compress’ to the array in grunt.registerTask().

Slide 18:
Now, this was the first time I had ever worked with the gruntfile.js. I had always needed to run grunt to run makerstrap, but I had never needed to look at the code to see exactly how it worked, so I was not too sure how to add gzip compression at first. I went to the documentation and it was useful but still a little confusing, so after I got a little direction I was able to figure out how to accomplish this task.

Link To Presentation
Makerstrap3.

 


by mlveis at April 07, 2014 05:32 AM


Nick Kemp

Groonga Progress So Far

It has been a nightmare trying to get this tester running for groonga. I have built and rebuilt groonga several times. The issue was that it was not configuring correctly with ruby. It turns out I needed to install ruby 1.9.3 and not the one in the Fedora package. This means building from source…on qemu…:(. I had already built ruby and groonga from source on my local machine, but when I went to run the tester it didn’t work again because I forgot to add the --with-ruby19=PATH_TO_RUBY option when configuring groonga, so groonga couldn’t find ruby for the tests. And when I settled that issue, I forgot to install a couple of dependencies for ruby and got some errors when I tried to run it. And after that my qemu stopped working on my local machine and I couldn’t install the dependencies I needed for ruby. So now I finally got smart and wrote a script that can handle all these tasks for me, and I can just calmly walk away and work on other stuff while it runs on ireland. It’s nothing too fancy; I just wanted to learn how to write a Bash script. Here’s the script:

#!/bin/bash
echo Installing dependencies for ruby
yum groupinstall "Development Tools" -y
yum install autoconf gdbm-devel ncurses-devel libdb-devel libffi-devel openssl-devel libyaml-devel readline-devel tk-devel procps -y

echo Changing directory to root
cd /root/

#testing for older version of the tarball
if [ -e ruby-1.9.3-p545.tar.gz ]; then
  echo deleting any old instances of ruby tarball
  rm -f ruby-1.9.3-p545.tar.gz
fi

#testing for older version of the ruby
if [ -e ruby-1.9.3-p545 ]; then
  cd ruby-1.9.3-p545
  echo Removing older version of ruby
  make clean
  echo deleting old repo
  cd /root/
  rm -rf ruby-1.9.3-p545
fi

echo "Getting ruby's tarball"
wget http://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-p545.tar.gz

echo "Extracting ruby's tarball"
tar xvzf ruby-1.9.3-p545.tar.gz

echo Entering to source directory
cd ruby-1.9.3-p545

echo Configuring ruby
./configure --build=aarch64-unknown-linux

echo Building ruby
make

echo installing ruby
make install

echo "Installing required gems"
/usr/local/bin/gem install yajl-ruby msgpack test-unit test-unit-rr test-unit-notify

echo "Installing groonga's dependencies"
yum install mecab-devel zlib-devel lzo-devel msgpack-devel zeromq-devel libevent-devel python2-devel php-devel libedit-devel pcre-devel systemd -y

echo changing directories
cd /root/

if [ -e groonga ]; then
  echo deleting any old instances of groonga
  cd groonga
  make clean
  cd /root/
  rm -rf groonga
fi

echo cloning groonga repository
git clone https://github.com/nrkemper/groonga
echo "changing directories into groonga's root"
cd groonga
echo running autogen.sh
./autogen.sh
echo configuring groonga
./configure --with-ruby19=/usr/local/bin/ruby --build=aarch64-unknown-linux
echo building groonga
make
echo installing groonga
make install

April 07, 2014 03:55 AM

April 06, 2014


Eugen (Jevenijs) Sterehov

Projects Update #2

Qlandkarte GT
After getting past the minor issue of having no access to the internet on QEMU, I have managed to get most of the dependencies, including manually building GDAL for AArch64. However, there is one that keeps giving me issues, and it's QtWebKit. The only binary package of QtWebKit for AArch64 available at the moment is for Fedora 21 (Rawhide), which would probably have been fine if there wasn't a conflict of libpng versions between the new QtWebKit and the slightly older QT4-devel. QT4 uses libpng15, whereas QtWebKit (F21) uses libpng16, and they can't seem to agree. I have been offered a good solution for that issue, which was to try and build QtWebKit for AArch64 from source, using a fedpkg branch of an older Fedora release, since the lib dependencies shouldn't make a difference and this way they would be compatible. Unfortunately I haven't been successful while rebuilding from the *.src.rpm to a binary rpm (getting this wall of errors), and this is once again where I am stuck trying to get it to build on AArch64.

I have also contacted upstream about testing and benchmarking the program on x86_64. I was told by project administrator Oliver Eichler that "as QLandkarte is an event driven application with no permanent need to compute data there is no general benchmark system to supervise performance", but he also told me that the area that always needs optimization is the rendering of the map, especially with a very large amount of waypoints or track data. I have yet to take a look into that, and hopefully I'm not in over my head on this part. So even if I can't build it on AArch64, I can at least try and make some adjustments that would benefit the performance of the application's map rendering.

GCC-XML
After looking at Traverso, I got a feeling that I might have bitten off more than I could chew and decided to change direction for my second project. I chose a command line tool, GCC-XML, which produces an XML description of a C++ program from the GCC compiler's internal representation. This eases the task of other development tools that work with C++ programs by letting them avoid writing a C++ parser.

So far attempts at building it for AArch64 haven't been successful, but I am in the process of figuring out the details. Figuring out which area to focus my efforts on for optimization is to follow shortly after.

by Eugen S. (noreply@blogger.com) at April 06, 2014 09:57 PM


Matthew Grosvenor

March Roundup

So it seems that a dependency issue with sooperlooper may prevent it, at the moment, from being buildable on aarch64. Dammit fftw3, why do you and Rubberband have to have such a close relationship? It doesn't appear to exist in the yum repository, so I'll have to scrounge around for it somewhere else.

Anyways, soldiering on in spite of that issue. Still working out bugs in the x86 build, which the upstream team has been adding back into their own code repository. I guess even if I end up doing rather poorly in a course designed to produce working aarch64 code, I may end up helping get sooperlooper updated from its previous state to one that is slightly more up to date. Still hoping to jump into atomic.h (which, oddly enough, is where mediatomb led me for a month+); this time there is no arm64 code and, from the look of it, no fallbacks either. Should be fun.

With any luck, Chris will allow me to have access to australia/ireland after the class ends so I can continue to work on the package, as I certainly wish to. Maybe even work on others if possible in my spare time (Who am I, and what have I done with the other me?)

by Gabbo (noreply@blogger.com) at April 06, 2014 07:52 PM

Sooper [dooper] looper! draft

So I've picked Sooper looper to work with, mainly because I a) got a response from a member of the community (the main developer, I believe), and b) said developer (Jesse) has been quite helpful in solving issues with building the x86 version. In this case, I had to rewrite a couple of lines of code, both relating to the wxWidgets dependency package. Code examples below.

This particular error:
gui_app.cpp:308:18: error: invalid conversion from ‘const char*’ to ‘wxChar {aka wchar_t}’ [-fpermissive]
if (_host == "127.0.0.1" && _never_spawn) {

Needed to be altered ever so slightly to:

if (_host == wxT("127.0.0.1") && _never_spawn)

Then:
gui_app.cpp: In member function ‘virtual bool SooperLooperGui::GuiApp::OnInit()’:
gui_app.cpp:250:38: error: ‘SetAppDisplayName’ was not declared in this scope
SetAppDisplayName(wxT("SooperLooper"));

Needed to be changed to:

#if wxCHECK_VERSION(2,9,0)
SetAppDisplayName(wxT("SooperLooper"));
#endif

simply because wxWidgets has new features that old versions do not support.

I went straight from contacting Jesse to trying to build, without a thorough look at the code, so it may be a disaster (again), but I'm going to find out quicker whether it builds on arm64 now, and I'll jump to libmad if that is the case.

by Gabbo (noreply@blogger.com) at April 06, 2014 07:45 PM


Alexander Snurnikov

Release #6, tough week

Good Sunday.
This week was extremely busy and full of deadlines for assignments and labs. The next one will be the same or even more loaded. Anyways, I am super motivated with my Open Source involvement, specifically with the Mozilla team. I am strongly determined that even after my OSD class comes to an end I will continue my contribution to Mozilla…my experience and willingness to code have grown enormously over the last couple of months.

github

OpenSource contribution matters

During this week I updated my previous bug with the Google Analytics implementation for Goggles; my PR can be found here -> GA for goggles. Right now I am waiting for review and hopefully it will land soon. While working on GA for Goggles, Jon helped me find out that we actually do not need CSP for Goggles' index page: there are no input fields there, so no potential vulnerability exists. After that, I filed a new bug (to remove CSP from the index page and leave it only for the publish page) and sent a PR, which is under review.

Also, I updated CSP for Thimble, where I had missed some sources when together.js was activated. I updated my PR here.

The last thing I did during this week was a little involvement with CSS, where a small fix had to be done. In Goggles, the username, language picker and ‘sign in’ button were not aligned in one line. I took that bug, fixed it and pushed a PR. I would say that CSS is also a lot of fun, especially now, with all its power and functionality.
I was working mostly on small bugs this week, due to my lack of time. One major bug still has to be done: recoding the popcorn instance. I will work on that next week and the week after, so that I finish it before my next release.

And I started my running today, which is absolutely great:) Summer has almost come!
Bye, bye!:)


by admixdev at April 06, 2014 05:44 PM


Michael Stiver-Balla

Month in review

Due to various unforseen and unavoidable complications, this post (and the many posts I've missed) has been massively delayed. Therefore, I'm going to spend the next few hours writing a multi-part update on all my work.

Some Context:

For the project class SPO600, I selected two OSS projects and have been going through the process of analysing and testing the need for porting to the aarch64 architecture. The first project was Chicken, an optimized scheme-to-C-to-machine-code compiler, and the second was CLisp, an implementation of the Common Lisp language. Both projects are in the 'Lisp family', and my interest in functional languages motivated me to take a stab at them. While Chicken is straightforward, with little architecture-specific code, I ended up underestimating the amount of assembly in the CLisp source. Oh well, that's something I'll end up covering a little bit later.

To start, both projects were easy to get and straightforward to build (my machine: x86-64/Fedora 20). CLisp was obviously larger, but ended up having slightly better instructions. My biggest challenge throughout the last few months has been getting settled back into a Linux environment, which I hadn't touched since second semester. Following that, setting up emulator environments was a nightmare, and between technical difficulties and personal things, I managed to fall behind. Regardless, a large majority of my time was spent in CLisp. It was a massive pain to get the libraries together to try and compile under the emulator. Most of my effort went towards CLisp, so I'll split that into a specific post.

I had problems getting Chicken to build on aarch64. There wasn't anything inherently architecture-specific about the assembly present in the files; in fact, there appears to be no reason why it wouldn't build. Certain processors had assembly files dedicated to a 'hack' that was loaded for that architecture (ppc.darwin, ppc.sysv, x86, x86-64). It perplexed me, until a couple of days ago when I fiddled with some configuration and managed to get it to build in the emulator. It looks like a simple configuration update to use the C fallback on an unknown architecture. More in-depth details to come.

More posts coming in the next hour, each focused on a project. Now, though, time for a nap.

April 06, 2014 12:22 PM

April 05, 2014


Matthew Grosvenor

A humorous aside

So we've seen that systemd spits out errors a lot, in our case with the Foundation Model.
Well, it seems the spat that Chris mentioned today between kernel devs and systemd devs has taken its next obvious step.

systemd developer suspended by tux's daddy himself, Linus Torvalds.

Looks like it's put up or shut up time.

Just thought some might find this funny.

by Gabbo (noreply@blogger.com) at April 05, 2014 01:06 AM

April 04, 2014


Michael Veis

Release 1.0

As I mentioned last week in my previous blog post, I was working on Issue 44 and Issue 3. Issue 44 was making some small tweaks to the navbar, and I had submitted a pull request for this issue. My pull request got reviewed and it turned out I had one small typo in my code. I fixed that up and now it is ready to be merged. The updated pull request can be found here.

I was also working on issue 3. Issue 3 was adding gzip compression to the grunt build task. This was my first time working with the gruntfile.js. I always use grunt to build and run makerstrap, but I had never gone into the code to see exactly how it worked. This gave me a chance to look at the code, go to the grunt documentation, and learn how it is used. After I read the documentation I was still a little bit confused about how I would get the compression to work. I spoke to Kate and she gave me some tips on how to do this. I submitted a pull request, but I had a few things to fix up. I needed to add "grunt-contrib-compress" to package.json so that it could be installed when someone does an npm install. I also had to change the folder from dist/ to s3/ and include that in the .gitignore file. I have made all the required changes for this issue and am just waiting for it to be reviewed again. The pull request can be found here.

I also started working on issue 48. I started investigating makerstrap's current breakpoints, like the issue suggested. After looking through the bootstrap.css code I saw how everything was set up with different @media queries for the different cols, for example col-xs, etc. In my case I needed to create a grid.less file. Then I needed to import it in the makerstrap.less file. The next step was setting up the 320px media query and, inside it, defining classes for col-micro-1 to col-micro-12. I also had to include col-micro-offset-1 to col-micro-offset-12. Everything is set up and I have submitted a pull request, which can be found here.

I also worked on issue 54, which was an issue submitted by cassiemc suggesting we include the dark green from the style guide. She also suggested that we remove mid grey and just go with light grey because they were so similar. I added dark green and showed it in our documentation page. I also commented out the code that was showing the mid grey in the documentation and just put a comment in our color.less file that the color is deprecated. It was suggested that we not remove the actual variable because it would break projects if other people were using it. The pull request for this can be found here.

Wrap-up

Updated pull request for issue 44 can be found here.

Updated pull request for issue 3 can be found here.

Commits for Issue 48 can be found here.

The pull request for issue 48 can be found here.

The pull request for issue 54 can be found here.


by mlveis at April 04, 2014 11:57 PM


Ali Al Dallal

Webmaker Workshop with young students

Yesterday, Rick and I went to a middle school in the Toronto District School Board to do a Webmaker workshop for students in grade 6, 7 and 8.

We had three goals for this workshop:

  1. Get the students interested in considering a career as a software developer or a programmer.
  2. Educate them about the web.
  3. Teach them about Webmaker.org

There were many challenges in holding this workshop:

The students were very young.

Although grade 6–8 is the best age to educate students about something that could make them interested in their future, it might have been a boring workshop for some of them.

Photo from telegraph

Working at the school computer lab

Some of you might not know that working in a school computer lab is a nightmare when you have to access the internet. Why is that? Simply because some computers don't even have Firefox or Chrome and you have to work with Internet Explorer 8. I'm sure you wouldn't have a good time working on Internet Explorer even with version 9 or 10.

Working with many students at once

This is not really hard if you have patience. But let me tell you that working with 30 students in each workshop (we had 3 workshops) and trying to teach has never been an easy job for many of us... When I was that age, I didn't really have much interest in learning something new when I was on my computer. I just wanted to play Pokemon or something to kill some time.

Photo from BBC

How did we deal with that many students and did we succeed in our goals?

Holding three workshops with 30 students each was not easy for me since it was my first time doing this, but I did handle it very well. I also have to thank Rick for coming to the workshop to help me. It wouldn't have been as easy to do everything myself.

So how did we do it? Our first workshop was a bit of a mess because we didn't know we would be working in the computer lab. We also didn't know what the students were interested in or their knowledge of the web. However, we did learn from our first workshop. We had 15 minutes before the second workshop to prepare.

Let me list the things we didn't do well in our first workshop:

  1. We didn't know how many students were coming. We expected there would be around 20 students, but there were definitely more. If I recall, the first workshop had around 30+.

  2. The computers were very slow and really hard to work with. Thankfully Webmaker.org works really well on a slow computer as long as we have access to Firefox or Chrome (we only had access to Chrome which is fine too).

  3. We were short of time. We didn't know students would need us to answer their passport survey (a survey about the job). Since we wanted to cover our three main goals, the timeframe we had (40 minutes) wasn't enough at all.

  4. We couldn't get most of the students' attention. It's hard to do that, but we had a plan – to give away some t-shirts. Unfortunately, it failed because we didn't have enough time.

Like I said, we did learn from our first workshop, so in the 15 minutes we had before the second workshop we made a quick plan of what we wanted them to do and what we wanted to tell them.

The second and third workshop we did went very well!

  1. We got 90% of the students' attention. We made sure we covered the things they needed to get done first (their passport survey) and asked them to pay attention to our workshop because we had a prize. When we said that, everyone just went super quiet and paid full attention to what we were showing them.

  2. They did learn something. I told them if they wanted to win the prize they would have to make sure they knew what was going on. At least 70% of them could answer our questions!

  3. They made something on Webmaker.org using Thimble. Well this is because we asked them to make something so they could win the prize...

  4. We were on time! Hooray.

Just to conclude, we were really happy to see the smiles on the students' faces when they published a make on Webmaker.org and when they won a prize. I felt super happy and I'm sure what we did will influence this young new generation of kids to have a better future.

by Ali Al Dallal at April 04, 2014 01:37 PM


Hua Zhong

Learning Porting to Aarch64: Fossil(3)

I did some research on a potential optimization: adding a rotate function for the ARM machine.

#define SHA_ROT(op, x, k) \
        ({ unsigned int y; asm(op " %1,%0" : "=r" (y) : "I" (k), "0" (x)); y; })
#define rol(x,k) SHA_ROT("roll", x, k)
#define ror(x,k) SHA_ROT("rorl", x, k)

In the aarch64 instruction set, the rotate instruction is:

ROR Wd, Wm, #uimm
Rotate Right (immediate): alias for EXTR Wd,Wm,Wm,#uimm.
ROR Xd, Xm, #uimm
Rotate Right (extended immediate): alias for EXTR Xd,Xm,Xm,#uimm.
 
I tried a test on x86_64 first.


#include<stdio.h>
#include<stdlib.h>
#include <sys/types.h>


#define INT_BITS 32
#define TESTNUM 16

//under aarch64 original code,using C
//#define SHA_ROT(x,l,r) ((x) << (l) | (x) >> (r))
//#define rol(x,k) SHA_ROT(x,k,32-(k))
//#define ror(x,k) SHA_ROT(x,32-(k),k)


//under X86_64 original code
#define SHA_ROT(op, x, k) \
        ({ unsigned int y; asm(op " %1,%0" : "=r" (y) : "I" (k), "0" (x)); y; })
#define rol(x,k) SHA_ROT("roll", x, k)
#define ror(x,k) SHA_ROT("rorl", x, k)

char * bit_representation(unsigned int num) {
  char * bit_string = (char *)malloc(sizeof(char)*sizeof(unsigned int)*8+1);
  unsigned int i=1, j;
  for(i=i<<(sizeof(unsigned int)*8-1), j=0; i>0; i=i>>1, j++) {
    if(num&i) {
      *(bit_string+j)='1';
    } else {
      *(bit_string+j)='0';
    }
  }
  *(bit_string+j)='\0';
  return bit_string;
}

/* Driver program to test above functions */
int main()
{
  unsigned int display;

  display = rol(TESTNUM, 2);
  display = rol(TESTNUM, 2);
  display = rol(TESTNUM, 2);
  display = rol(TESTNUM, 2);
  display = rol(TESTNUM, 2);
  display = rol(TESTNUM, 2);
  display = rol(TESTNUM, 2);
  display = rol(TESTNUM, 2);
  display = rol(TESTNUM, 2);
  display = rol(TESTNUM, 2);
  display = rol(TESTNUM, 2);

  display = ror(TESTNUM, 2);
  display = ror(TESTNUM, 2);
  display = ror(TESTNUM, 2);
  display = ror(TESTNUM, 2);
  display = ror(TESTNUM, 2);
  display = ror(TESTNUM, 2);
  display = ror(TESTNUM, 2);
  display = ror(TESTNUM, 2);
  display = ror(TESTNUM, 2);
  display = ror(TESTNUM, 2);
  display = ror(TESTNUM, 2);

  return 0;
}
I ran this test program 20000 times and recorded the time.

Five runs; take the three middle times and calculate the average.
under x86_64:

real    0m7.057s
user    0m0.546s
sys    0m0.918s

real    0m7.065s
user    0m0.499s
sys    0m0.959s

real    0m7.049s
user    0m0.534s
sys    0m0.906s

real    0m7.101s
user    0m0.486s
sys    0m0.986s

real    0m7.069s
user    0m0.486s
sys    0m0.983s
under C code:

real    0m7.073s
user    0m0.569s
sys    0m0.857s

real    0m7.008s
user    0m0.549s
sys    0m0.856s

real    0m7.065s
user    0m0.528s
sys    0m0.897s

real    0m7.001s
user    0m0.568s
sys    0m0.833s

real    0m7.044s
user    0m0.549s
sys    0m0.862s

result:
X86_64 asm:
user: (0.499+0.534+0.486)/3=0.50633 S

C :
user: (0.549+0.528+0.568)/3=0.54833 S

Under x86_64, the assembly rotation is roughly 7.66% faster than the C rotation.

I will try to create the assembly code for rotating under aarch64.
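
As a starting point, here is a minimal sketch of what that aarch64 version might look like (my own guess, assuming GCC extended asm, and not yet benchmarked). Since aarch64 only has ror, rol(x,k) is expressed as ror(x, 32-k):

//under aarch64, possible asm version (sketch, untested)
static inline unsigned int ror32(unsigned int x, unsigned int k)
{
  unsigned int y;
  /* ROR (register): rotate x right by the amount held in a register */
  __asm__("ror %w0, %w1, %w2" : "=r" (y) : "r" (x), "r" (k));
  return y;
}

#define ror(x,k) ror32((x), (k))
#define rol(x,k) ror32((x), 32 - (k))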

by hua zhong (noreply@blogger.com) at April 04, 2014 12:37 PM

Learning Porting to Aarch64: cxxtools(2)


 

Today I take a look at cxxtools, which is a set of libraries providing a lot of functionality.

First of all, download and prep: get everything into the rpmbuild directory.

Taking a look at the source code, we can find that it has a separate file with assembly code for ARM.

Looks like everything is there?

When I build the project, it throws a lot of warnings:

/bin/sh ../libtool --tag=CXX   --mode=compile g++ -DHAVE_CONFIG_H -I.  -I../src -I../include -I../include -Wno-long-long -Wall -pedantic  -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches  -fno-stack-protector  -c -o csvformatter.lo csvformatter.cpp
libtool: compile:  g++ -DHAVE_CONFIG_H -I. -I../src -I../include -I../include -Wno-long-long -Wall -pedantic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -fno-stack-protector -c csvformatter.cpp  -fPIC -DPIC -o .libs/csvformatter.o
In file included from ../include/cxxtools/string.h:34:0,
                 from ../include/cxxtools/formatter.h:32,
                 from ../include/cxxtools/csvformatter.h:32,
                 from csvformatter.cpp:29:
../include/cxxtools/char.h: In function 'bool cxxtools::operator==(const cxxtools::Char&, wchar_t)':
../include/cxxtools/char.h:143:35: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
             { return a.value() == b; }
                                   ^
../include/cxxtools/char.h: In function 'bool cxxtools::operator==(wchar_t, const cxxtools::Char&)':
../include/cxxtools/char.h:145:35: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
             { return a == b.value(); }
                                   ^
../include/cxxtools/char.h: In function 'bool cxxtools::operator!=(const cxxtools::Char&, wchar_t)':
../include/cxxtools/char.h:156:35: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
             { return a.value() != b; }
                                   ^
../include/cxxtools/char.h: In function 'bool cxxtools::operator!=(wchar_t, const cxxtools::Char&)':
../include/cxxtools/char.h:158:35: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
             { return a != b.value(); }
                                   ^
../include/cxxtools/char.h: In function 'bool cxxtools::operator<(const cxxtools::Char&, wchar_t)':
../include/cxxtools/char.h:169:34: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
             { return a.value() < b; }
                                  ^
../include/cxxtools/char.h: In function 'bool cxxtools::operator<(wchar_t, const cxxtools::Char&)':
../include/cxxtools/char.h:171:34: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
             { return a < b.value(); }
                                  ^
......
 
And then the build gets stuck.

I was thinking about how to fix this, until I took a look at their latest patch:
....
 -            else if (ch == L'\n' || ch == L'\r')
+            else if ( (int) ch == (int) L'\n' || (int) ch == (int) L'\r')
             {
                 log_debug("title=\"" << _titles.back() << '"');
                 _noColumns = 1;
-                _state = (ch == L'\r' ? state_cr : state_rowstart);
+                _state = ( (int) ch == (int) L'\r' ? state_cr : state_rowstart);
             }
-            else if (ch == L'\'' || ch == L'"')
+            else if ( (int) ch == (int) L'\'' || (int) ch == (int) L'"')
.....

They fixed a similar issue by force-casting the variables to int to make the comparison safe.

I will do the same thing for char.h and try building again.
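
To make that concrete, here is a tiny standalone C illustration of the warning and the cast-based fix in the style of that patch; this is my own example, not the cxxtools source, and the function name is made up.

#include <stddef.h>   /* wchar_t in C */

/* Comparing an unsigned value against a plain wchar_t (signed on many Linux
   targets) triggers -Wsign-compare, as in the char.h warnings above.        */
static int chars_equal(unsigned int value, wchar_t b)
{
    /* return value == b;              <- warns: signed vs unsigned          */
    return (int) value == (int) b;  /* explicit casts, as in the patch       */
}

int main(void)
{
    return chars_equal(0x41, L'A') ? 0 : 1;
}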
 

by hua zhong (noreply@blogger.com) at April 04, 2014 09:55 AM

April 02, 2014


Armen Zambrano G. (armenzg)

Mozilla's recent CI improvements save roughly 60-70% on our AWS bill

bhearsum, catlee, glandium, taras and rail have been working hard for the last few months at cutting our AWS bills by improving Mozilla RelEng's CI.


From looking at it, I can say that with the changes they have made we're saving roughly 60-70% on our AWS bill.

If you see them, give them a big pat on the back, this is huge for Mozilla.

Here are some of the projects that helped with this:


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

by Armen Zambrano G. (noreply@blogger.com) at April 02, 2014 03:53 PM


Ali Al Dallal

Why you should use "dir" attribute more for right to left localization

So, in the past week I have been working on the implementation of right-to-left localization for one of the Mozilla Webmaker tools called Popcorn Maker, and I have struggled a lot and had a really hard time making the tool right-to-left compatible because...

Popcorn Maker is a video editor. I've done a lot of research, got some user input and discussed how right to left is done with video players or video editors, and most right-to-left users preferred that it be the same as in a left-to-right language. Therefore I have to keep some parts of the site the same as LTR and the rest RTL.

If you have read my previous blog post about right-to-left localization, it covered using CSS selectors and a library such as CSSJanus to convert properties and values in CSS from one direction to the other so they can be used for right-to-left languages. Now, that is not really going to work in this case, because we can't automate everything since we want some parts of the site to have the same style as LTR.

Also, when you use the dir attribute on the <html> tag and the direction is rtl, the browser will try to render most elements from one side to the other, which is really useful. Most of the time we just use this life-saver attribute on the <html> tag and that's it, but if you don't already know, the dir attribute can be used on other elements as well, and this is why I think we should use it more.

Now, let's see why and how this can be a life-saver for many of you who are working on right to left and want to save some time battling with CSS.

I have this snippet:

<!doctype html>  
<html dir="rtl">  
  <head>
      <meta charset="utf-8">
    <title>Your Awesome Webpage</title>
  </head>
  <body>
    <p dir="ltr">This text will be on the left</p>
    <p>This text will be on the right</p>
  </body>
</html>

So:

This text will be on the left

This text will be on the right


As you can see from the example above, we have two <p> tags: one is being overridden by dir="ltr" and the other one is being controlled by <html dir="rtl">.

That is one of the useful things about the attribute: we can have mixed directions in the same page without having to write a lot of CSS to control the direction ourselves, because the browser has this functionality by default.

Another example:

<p dir="ltr">The bulk of the content is in English and flows left to right,  
until this phrase in Arabic makes an appearance,  
<span lang="ar" dir="rtl">مرحبا</span> (meaning hello), which  
needs to be set to read right-to-left.</p>  

The bulk of the content is in English and flows left to right, until this phrase in Arabic makes an appearance, مرحبا (meaning hello), which needs to be set to read right-to-left.

Also, note that the dir attribute cannot be applied to the following elements:

  • applet
  • base
  • basefont
  • bdo
  • br
  • frame
  • frameset
  • iframe
  • param
  • script

So, generally the dir attribute is really useful and will save you a lot of time when working on localization, especially when you have to deal with bidirectional text.

You can read more from MDN about dir attribute here.

by Ali Al Dallal at April 02, 2014 03:22 AM

April 01, 2014


Yoav Gurevich

Comprehensive March Recap

As expected, the latter part of the current semester has been nothing short of chaotic with the crush of project work and test preparation, so an apology is due on behalf of yours truly for not consistently updating on the progress of the software package port. The progress below will be chronologically (to the best of my ability) subdivided into relevant sections in order to best reference any and all steps, obstacles, and solutions that have been encountered over the past month:

Acquisition of the package - 

Shortly after the codebase analysis lab (found here) covering Unix commands, file extensions, and directories that were and are useful in importing software packages and searching for assembly code and its implications, a list of software packages and libraries carried over from Linaro's Linux on ARM64 porting project was posted on the course wiki, in order to give students a reasonable amount of time to decide on an appropriate package that satisfies the prescribed requirements (http://zenit.senecac.on.ca/wiki/index.php/Winter_2014_SPO600_Software_List). After careful consideration, which included downloading the package and analyzing the scope of the work involved in either porting or eliminating any assembly-related dependencies for the new architecture within the time constraints of the semester, my sights were set on the GNU foundation's open source flash decoder package - Gnash.

About Gnash - 

The GNU Flash player. An open source, web-based video and audio decoder dealing mostly with .swf files, up to version 9. The software is based on a library called gameswf. Any other details regarding the software's overview can be found on its homepage. The development community is rather scarce at this point, with one individual on their IRC channel pointing me to the mailing list as the last bastion of community discussion in relation to bugs or patches. I have been subscribed to it for at least a month and, unfortunately, there has been no activity whatsoever.

Local Environment Setup - 

Given the options provided by Chris Tyler for analysis, implementation, and benchmarking, as well as the constraint of having run Windows 8 natively on my local machine before embarking on this course, my initial task was to install Fedora 20 in a virtual machine. Using Oracle's VirtualBox software, a quick Google search ended at a very useful WikiHow article on an efficient way to install Fedora on Windows using the Oracle VM.

Installing on the local machine the 64-bit ARM emulator available to us on the Ireland remote server seemed like the next ideal task to accomplish, and it was achieved with relative ease using the SPO600 Wiki's instructions at the bottom of the Qemu overview page. After a slight misunderstanding of where to transfer the "qemu-arm64.conf" file on the local machine (the /etc/binfmt.d directory and not the newly created arm64 directory used for the emulator) and some fumbling around with the tar command options for unpacking the compressed version of the environment, it was mostly straightforward.

Lastly, downloading and installing the Fedora AArch64 Foundation Model was a slightly odd experience, with the biggest stumble occurring when trying to download the files from the arm64 website and not seeing the actual "download" link appear at the bottom of the page. This was promptly solved by switching to the Windows machine to download the needed file and e-mailing it to myself back on the virtual machine.

The x86_64 Build and First Bugfix -

After peering around, perusing, and finding the assembly code necessary to begin the research and work involved in directly translating and compiling the package on Qemu, the first attempt at building Gnash on x86 proved unsuccessful after a 30-minute delay. The culprit was determined to be a nullptr exception variant in two cases in the npruntime.h file. Simply opening the file and editing the code manually proved sufficient for the build to succeed on the second run-through.

Ongoing aarch64 Porting Implementation -

Searching recursively with grep for patterns containing "asm", "__asm", or "(asm" produced the following results:

gnash-0.8.10/libbase/jemalloc.c:#  define CPU_SPINWAIT __asm__ volatile("pause")
gnash-0.8.10/libbase/jemalloc.c:#  define CPU_SPINWAIT __asm__ volatile("pause")

gnash-0.8.10/libbase/utility.h:#define assert(x) if (!(x)) { __asm { int 3 } }

With Professor Tyler's advice, as well as deductive research in the ARM ISA documentation, I have narrowed the aarch64 alternatives down to a BRK 0 instruction as a replacement for the "int 3" debugging breakpoint macro, and a YIELD instruction as a substitute for the "pause" instruction in the spin/wait loops on the Qemu environment.
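
To make that concrete, here is a minimal sketch of what the replacement macros might look like as GCC-style inline assembly (an assumption on my part; the exact form still needs to be verified against the Gnash sources and tested on the emulator):

/* x86 original in libbase/jemalloc.c:
 *   #define CPU_SPINWAIT __asm__ volatile("pause")
 * Possible AArch64 equivalent: the YIELD hint for spin/wait loops. */
#define CPU_SPINWAIT __asm__ volatile("yield")

/* x86 (MSVC-syntax) original in libbase/utility.h:
 *   #define assert(x) if (!(x)) { __asm { int 3 } }
 * Possible AArch64 equivalent: the BRK software breakpoint. */
#define assert(x) if (!(x)) { __asm__ volatile("brk #0"); }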

I am using a few references to look further into these operations and their purpose:

Stack Overflow Query
INT (x86_instruction) - Wikipedia
ARMv8 Instruction Set Overview and Developer Manual

This is currently in the process of being implemented and tested.

Miscellaneous Calamities -

A mistake of seemingly seismic proportions occurred on my part on March 27th. In the middle of a lecture, a notice to update the operating system popped up at the bottom of my screen. During an update, the new versions of packages are downloaded and installed before the old ones are deleted. Whether or not this behaviour is a symptom of the virtual machine's default settings needs to be looked into further, but the update process was likely interrupted by the virtual machine powering off right in the middle of installing the new package versions and deleting the old ones. The result was a GUI-related package called "gdm" with duplicate dependencies, and other packages left at obsolete versions, which meant the operating system could no longer load any user interface layer other than the terminal environment. The subsequent four days consisted of complete and utter mental stress, chaos, and anguish, resolved only by (yet again) the technical expertise and prowess of Chris Tyler. The abiding lesson here is never to update your virtual machine's operating system without unassailable confidence that you know and have complete control over the processes that will be executing, as well as their timing.

by Yoav Gurevich (noreply@blogger.com) at April 01, 2014 10:03 PM


Moshe Tenenbaum

Taking my first steps with the Porting of Busybox

Having played around with Busybox a bit, I was ready to go in and change around some settings, break some stuff, and hopefully put it back together in a more portable and optimized package, ready for anything the near future could throw at it!

My first order of business was to search the code for
  1. Inline assembler calls that can be replaced with high-level C/C++ code for portability and platform independence
  2. Inline assembler calls that can be replaced with atomic constructs in C/C++ for portability and platform independence
  3. malloc and free calls that can be replaced with RESERVE_CONFIG_BUFFER/RELEASE_CONFIG_BUFFER by auditing the calls and using the CONFIG_BUFFER mechanism – this issue is still pending, so any comments or help will be appreciated.

I identified the following files (with associated code) for the first two items detailed above:

 

  • root/include/libbb.h
  • root/shell/ash.c
  • root/procps/powertop.c
  • root/e2fsprogs/blkid/probe.h

My plan is to use some pthreads library calls to replace the

#define barrier() __asm__ __volatile__("":::"memory")

inline assembler with C/C++ code in the root/include/libbb.h and root/shell/ash.c files. I'm also looking at replacing the byte-swap routine in the root/e2fsprogs/blkid/probe.h file with some atomic code; a rough sketch of one option follows.
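
As a minimal sketch, assuming the toolchain supports C11 <stdatomic.h> (an alternative to the pthreads route, and only an assumption on my part about what busybox would accept), the barrier could become a compiler-only fence:

/* Original inline-assembler compiler barrier:
 *   #define barrier() __asm__ __volatile__("":::"memory")
 * Possible portable replacement: a compiler-only fence. It prevents the
 * compiler from reordering memory accesses across the call, without
 * emitting any hardware fence instruction. */
#include <stdatomic.h>
#define barrier() atomic_signal_fence(memory_order_seq_cst)

For the byte-swap routines, GCC's __builtin_bswap32()/__builtin_bswap64() builtins are another architecture-neutral option worth checking, provided the minimum supported compiler version offers them.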

Any thoughts on the matter are welcome!

 


by mctenenbaum at April 01, 2014 09:29 PM


Matt Jang

SPO Project Part 4: Comparing Cycle Counting Fallbacks on x86_64

SPO Project Part 3 | SPO Project Part 5

Just like in the previous post, I made a small program to test the cycles file from FreqTweak/Ardour. The latest date found in this file is 2004, so just like before this might just be a case of nobody bothering to update the file, and the fallback might work. However, one thing that makes me doubt that is the fact that there is a warning on the fallback. Clearly there was a problem with the fallback at some point where it did not do exactly what the assembly version did. On x86_64, here are the two test programs I ran.

Method 1:

#include <sys/time.h>
#include <iostream>

typedef long cycles_t;

extern cycles_t cacheflush_time;

static inline cycles_t get_cycles(void)
{
       struct timeval tv;
       gettimeofday (&tv, NULL);

       return tv.tv_usec;
}
int main() {
	for (int i = 0; i < 10; i++) {
		std::cout << get_cycles() << std::endl;
	}
	return 0;
}

Method 2:

#include <sys/time.h>
#include <iostream>

typedef unsigned long long cycles_t;

extern cycles_t cacheflush_time;

#define rdtscll(val) \
     __asm__ __volatile__("rdtsc" : "=A" (val))

static inline cycles_t get_cycles (void)
{
	unsigned long long ret;

	rdtscll(ret);
	return ret;
}

int main() {
	for (int i = 0; i < 10; i++) {
		std::cout << get_cycles() << std::endl;
	}
	return 0;
}

If they ran at the same speed and had similar outputs, that would be great; however, this was not the case. Again compiling with g++, here are the outputs and times for the first method:

FIRST RUN
1476161480
1476258702
1476267522
1476310370
1476317544
1476324272
1476337836
1476352486
1476364048
1476371816

real 0m0.001s
user 0m0.000s
sys  0m0.000s

SECOND RUN
1144974940
1145247376
1145283316
1145316652
1145345632
1145477464
1145535268
1145589868
1145651848
1145709868

real 0m0.004s
user 0m0.002s
sys 0m0.001s

THIRD RUN
315236292
315515196
315550140
315581460
315610512
315639540
315760548
315791664
315819156
315876768

real 0m0.004s
user 0m0.001s
sys  0m0.002s

FOURTH RUN
3243562044
3243856572
3243905100
3244057164
3244101036
3244143468
3244211148
3244293096
3244359552
3244418004

real 0m0.004s
user 0m0.001s
sys  0m0.003s

FIFTH RUN
2832403976
2832696716
2832744488
2832882092
2832925088
2832966200
2833029104
2833116272
2833196636
2833256456

real 0m0.004s
user 0m0.001s
sys  0m0.003s

AVERAGE
real 0m0.003s
user 0m0.001s
sys  0m0.001s

Now, I don't know what this output means in the context of CPU cycles, but I do see that although the numbers don't increase persistently across consecutive runs, within each run the number increases continually. The program runs fairly quickly and there are 10 digits per line of output every single time.

Now here is the second method:

FIRST RUN
146081
146177
146183
146186
146190
146194
146197
146201
146205
146208

real 0m0.004s
user 0m0.002s
sys  0m0.002s

SECOND RUN
84273
84394
84409
84466
84479
84491
84517
84537
84557
84579

real 0m0.004s
user 0m0.002s
sys  0m0.002s

THIRD RUN
927403
927521
927536
927549
927562
927573
927629
927663
927694
927725

real 0m0.004s
user 0m0.003s
sys  0m0.001s

FOURTH RUN
695796
695837
695841
695844
695847
695850
695868
695873
695878
695883

real 0m0.004s
user 0m0.001s
sys  0m0.002s

FIFTH RUN
414180
414299
414314
414327
414339
414350
414404
414428
414449
414469

real 0m0.004s
user 0m0.001s
sys  0m0.003s

AVERAGE
real 0m0.004s
user 0m0.002s
sys  0m0.002s

OK, so it takes almost the same time to run. The output is sequential within each program run, but the numbers are really different from those of the first (correct) method. The first method had output that was always 10 digits long. Clearly the second method doesn't calculate values the same way the first method does.

From this, I think the best idea is to look into how to count clock cycles on the aarch64 platform. I don't actually know how to do this yet, so I will need to research a bit and find the best way to go about it; one possibility is sketched below. I am also wondering if this difference in output actually matters. Depending on how this function is used, maybe the second method actually gets the job done well enough.
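
One approach I will look into (not something the FreqTweak/Ardour sources do, just an assumption to verify) is reading the AArch64 virtual counter register CNTVCT_EL0, which ticks at a fixed frequency rather than once per CPU cycle. A minimal sketch:

#include <stdint.h>
#include <stdio.h>

/* Sketch only: read the AArch64 virtual counter (CNTVCT_EL0).
 * This is a constant-frequency timer, not a true cycle counter,
 * so its values are not directly comparable to rdtsc output. */
static inline uint64_t get_cycles(void)
{
	uint64_t val;
	__asm__ __volatile__("mrs %0, cntvct_el0" : "=r" (val));
	return val;
}

int main(void)
{
	for (int i = 0; i < 10; i++)
		printf("%llu\n", (unsigned long long) get_cycles());
	return 0;
}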


by sinomai at April 01, 2014 06:26 PM

SPO Project Part 3: Comparing Casting Fallbacks on x86_64

SPO Project Part 2 | SPO Project Part 4

I did some tests on the assembly in MapServer and the results are as follows.

MapServer Casting Assembly

In the AGG file imported into MapServer, there are three different ways that doubles are cast to integers. The first is with assembly and, more specifically, the "fistp" instruction. The second way is with the syntax "int(x)" and the third way is with "int((x < 0.0) ? x - 0.5 : x + 0.5)". Just from looking at these I can see that they do different things. The second and third methods cast the same way, but it would appear that the third method rounds to the nearest integer where the second simply truncates.

The preprocessor directives were not that helpful in the file. The first method of casting is used if AGG_FISTP is defined, the second if AGG_QIFIST is defined, and the third is just the else branch. I searched through the entire source that I downloaded and found no instances of these terms other than in this file.

Looking up QIFIST, I found it was an option for the Visual Studio compiler. This option, "/QIfist", suppresses the helper call normally used for float-to-integer conversion, so the conversion uses the floating-point unit's current rounding mode (round-to-nearest by default) rather than the standard truncation toward zero. That round-to-nearest behaviour is essentially what the third rounding method emulates.

QIFist Reference

I also looked up "fistp". After looking around a bit I found that people use fistp for super fast casting to integers. The latest date mentioned in the source file is 2005, so things might have changed and this rounding might no longer be faster than what a compiler can generate.

To test the speed of the three different methods of rounding, I set up three small programs that each use a certain rounding method 1,000,000 times. The source code of each is as follows:

Method 1:

int iround(double x) {
	int t;
	__asm__ __volatile__ (
		"fld %1; fistp %0;"
		: "=m" (t)
		: "m" (x)
	);
	return t;
}

int main() {
	double x = 5.55;
	int y = 0;
	for (int i = 0; i < 1000000; i++) {
		y = iround(x);
	}
	return 0;
}

Method 2:

int iround(double x) {
	return int(x);
}

int main() {
	double x = 5.55;
	int y = 0;
	for (int i = 0; i < 1000000; i++) {
		y = iround(x);
	}
	return 0;
}

Method 3:

int iround(double x) {
	return int((x < 0.0) ? x - 0.5 : x + 0.5);
}

int main() {
	double x = 5.55;
	int y = 0;
	for (int i = 0; i < 1000000; i++) {
		y = iround(x);
	}
	return 0;
}

Compiling each one with g++, I got the following assembly for each rounding method:

Method 1:

00000000004005b0 <_Z6iroundd>:
int iround(double x) {
  4005b0:	55                   	push   %rbp
  4005b1:	48 89 e5             	mov    %rsp,%rbp
  4005b4:	f2 0f 11 45 e8       	movsd  %xmm0,-0x18(%rbp)
	int t;
	__asm__ __volatile__ (
		"fld %1; fistp %0;"
		: "=m" (t)
		: "m" (x)
	);
  4005b9:	d9 45 e8             	flds   -0x18(%rbp)
  4005bc:	df 5d fc             	fistp  -0x4(%rbp)
	return t;
  4005bf:	8b 45 fc             	mov    -0x4(%rbp),%eax
}
  4005c2:	5d                   	pop    %rbp
  4005c3:	c3                   	retq

Method 2:

00000000004005b0 <_Z6iroundd>:
int iround(double x) {
  4005b0:	55                   	push   %rbp
  4005b1:	48 89 e5             	mov    %rsp,%rbp
  4005b4:	f2 0f 11 45 f8       	movsd  %xmm0,-0x8(%rbp)
	return int(x);
  4005b9:	f2 0f 10 45 f8       	movsd  -0x8(%rbp),%xmm0
  4005be:	f2 0f 2c c0          	cvttsd2si %xmm0,%eax
}
  4005c2:	5d                   	pop    %rbp
  4005c3:	c3                   	retq

Method 3:

00000000004005b0 <_Z6iroundd>:
int iround(double x) {
  4005b0:	55                   	push   %rbp
  4005b1:	48 89 e5             	mov    %rsp,%rbp
  4005b4:	f2 0f 11 45 f8       	movsd  %xmm0,-0x8(%rbp)
	return int((x < 0.0) ? x - 0.5 : x + 0.5);
  4005b9:	66 0f 57 c0          	xorpd  %xmm0,%xmm0
  4005bd:	66 0f 2e 45 f8       	ucomisd -0x8(%rbp),%xmm0
  4005c2:	76 17                	jbe    4005db <_Z6iroundd+0x2b>
  4005c4:	f2 0f 10 45 f8       	movsd  -0x8(%rbp),%xmm0
  4005c9:	f2 0f 10 0d 0f 01 00 	movsd  0x10f(%rip),%xmm1        # 4006e0 <__dso_handle+0x8>
  4005d0:	00 
  4005d1:	f2 0f 5c c1          	subsd  %xmm1,%xmm0
  4005d5:	f2 0f 2c c0          	cvttsd2si %xmm0,%eax
  4005d9:	eb 15                	jmp    4005f0 <_Z6iroundd+0x40>
  4005db:	f2 0f 10 4d f8       	movsd  -0x8(%rbp),%xmm1
  4005e0:	f2 0f 10 05 f8 00 00 	movsd  0xf8(%rip),%xmm0        # 4006e0 <__dso_handle+0x8>
  4005e7:	00 
  4005e8:	f2 0f 58 c1          	addsd  %xmm1,%xmm0
  4005ec:	f2 0f 2c c0          	cvttsd2si %xmm0,%eax
}
  4005f0:	5d                   	pop    %rbp
  4005f1:	c3                   	retq

I couldn't find a QIfist option for g++ on Linux, but I included the compiled version anyway just to see what it compiles to.

Unsurprisingly, method one looks a lot shorter (and presumably faster) than method three. Just two instructions compared to upwards of ten makes it seem like the first method could be around five times faster, if not at least somewhat faster. I ran each method five times and here are the results:

Method 1:

FIRST RUN
real 0m0.011s
user 0m0.010s
sys  0m0.001s

SECOND RUN
real 0m0.014s
user 0m0.013s
sys  0m0.001s

THIRD RUN
real 0m0.008s
useR 0m0.007s
sys  0m0.000s

FOURTH RUN
real 0m0.016s
user 0m0.014s
sys  0m0.001s

FIFTH RUN
real 0m0.014s
user 0m0.012s
sys  0m0.002s

AVERAGE
real 0m0.013s
user 0m0.011s
sys  0m0.001s

Method 2:

FIRST RUN
real 0m0.007s
user 0m0.006s
sys  0m0.000s

SECOND RUN
real 0m0.015s
user 0m0.013s
sys  0m0.002s

THIRD RUN
real 0m0.008s
user 0m0.008s
sys  0m0.000s

FOURTH RUN
real 0m0.008s
user 0m0.006s
sys  0m0.001s

FIFTH RUN
real 0m0.009s
user 0m0.006s
sys  0m0.002s

AVERAGE
real 0m0.009s
user 0m0.007s
sys  0m0.001s

Method 3:

FIRST RUN
real 0m0.017s
user 0m0.015s
sys  0m0.002s

SECOND RUN
real 0m0.008s
user 0m0.006s
sys  0m0.001s

THIRD RUN
real 0m0.009s
user 0m0.008s
sys  0m0.000s

FOURTH RUN
real 0m0.008s
user 0m0.006s
sys  0m0.002s

FIFTH RUN
real 0m0.013s
user 0m0.010s
sys  0m0.002s

AVERAGE
real 0m0.011s
user 0m0.009s
sys  0m0.001s

Based on these tests, in which each rounding method was run one million times, there was almost no difference in execution time. In fact, the difference was so small it could be attributed to random chance. The rounding method hardly affects performance at all, if at all. So why is the assembly there? Maybe nobody has looked at it since this program was written. I would conclude that the fallback works and performs well enough to replace the assembly versions.


by sinomai at April 01, 2014 05:59 PM

SPO Project Part 2: Assembly in Freqtweak

SPO Project Part 1 | SPO Project Part 3

The second package that I am going to be looking at is FreqTweak.

Assembly in FreqTweak

All of the inline assembly in FreqTweak is in a file called cycles.h. This file provides functions to count CPU cycles. There are many different functions for different platforms in this file, and there is a fallback written in C that displays a warning but will still work. This file, based on the comments, is actually part of a program called Ardour. Ardour is an open source digital audio workstation, and this file was probably just copied from there.

Project Plan

I am going to look into whether the C fallback compares to the assembly version in both speed and results. If so, I will look into whether it's reasonable to remove all of the assembly and keep just the C fallback that is provided. If not, writing a version for aarch64 would probably be the best idea.


by sinomai at April 01, 2014 05:16 PM


Dmitry Yastremskiy

Approaching the end of the semester

The end of the semester is coming and my fixes have not landed yet. Wouldn't it be great to have them land as the logical end of the course? Hopefully I can get it done this week, but I've encountered some issues that prevent landing. Last week I was working on three bugs:

https://bugzilla.mozilla.org/show_bug.cgi?id=982188

https://bugzilla.mozilla.org/show_bug.cgi?id=982229

https://bugzilla.mozilla.org/show_bug.cgi?id=982195

and those are about LESS linting as well. To be more specific, after I fixed LESS linting for the Thimble project (https://bugzilla.mozilla.org/show_bug.cgi?id=916944), I discovered that other projects need LESS linting as well.

The first thing I had to realize was that the actual linter, RECESS, doesn't understand string interpolation.


This issue/feature was requested on the RECESS GitHub repo (https://github.com/twitter/recess), but it seems nobody has contributed there for a long time, so I decided to find another solution.

I found another tool that supports string interpolation, grunt-lesslint (https://github.com/jgable/grunt-lesslint); its author is a GitHub staff member, so at least the source should be reliable, and it actually worked perfectly.

The price of this job is that I've encountered some LESS, CSS and JS issues that have to be solved before the patches can land. And the guy who won't let me land them is named Travis.

Travis is a great automation tool that integrates with GitHub and is available for free for public repositories (https://travis-ci.org/).

It allows you to set up a test environment that lives on their servers and tests your GitHub code. It supports many languages, and the one we use, NodeJS, is fully supported! To set up your testing environment, you log in on the Travis website (https://travis-ci.org/) with your GitHub credentials and simply flip the toggles for the repos you are interested in testing.


Then a .travis.yml file goes in the root folder of your GitHub project, where you have to specify what language you are going to use.


By default Travis executes "npm install" and "npm test", so it is your responsibility to specify what the "test" script does in your package.json file.

Basically, that is it… Now every time you open a PR on GitHub, Travis will automatically run your test environment and report whether you passed the tests or not. The report is available via a link created against each commit in the PR, or it can email the results to you. A very simple, great, free-of-charge tool!


In order to move my PRs forward, I have to fix those LESS, CSS and JS issues. Most of them, I believe, can be avoided by setting up rules on how to lint. I will look at the similarities of adjacent projects and try to use common rules, as well as fix the real errors where they exist.

The end of the course is approaching and I'm sad that it has gone by so fast; however, I'll continue to contribute and learn. I'm happy to have discovered what open source is and how we all can learn from it.

 

by admin at April 01, 2014 02:09 AM

March 30, 2014


Alexander Snurnikov

Release #6 part 1

Good Sunday!
This week I was working on random bugs as well as fixing errors in my CSP implementation for Thimble. I also reviewed Jon's CSP implementation for Popcorn.webmaker.org.
To be more specific, here is my progress so far:

  • Google Analytics events for Goggles, bug 968291:
    I picked up this bug while I was searching for some interesting things to implement. The basic idea of this bug is to add GA events to Goggles, so that they fire when the user clicks different buttons ('Activate X-Ray', 'Undo', 'Redo', 'Publish', etc.). The path to implement it was:

    1. Add webmaker-analytics to bower
    2. Require 'analytics' inside the JS file where the 'click' events are implemented
    3. Add analytics.event("Activate X-Ray", { label: "Activated" });
    4. Do the same for every click event that needs it
    5. Test…fix
  • The problem I faced in this bug was that when I implemented analytics.event for 'Undo', 'Redo', 'Publish', 'Help' and 'Quit', the name of the event was passed as a text variable, so when analytics fires, that 'text' is what gets reported. The problem here is localization: if the language is different, the 'text' variable will be in that language (thanks @aali for pointing it out). That is not what we need, which is why I added separate events for all the buttons. My pull request. I would like to thank @thecount for the help he gave me on this bug. Also, I learned a little about the requirejs tool for JavaScript.

  • Refactoring 'HOSTNAME' vars in webmaker components to 'HOSTNAME_APP', bug 951709 – all components reviewed and merged.
  • Fixed minor bugs in the Thimble CSP – PR here
  • Removed 'Add to Map' link from webmaker-events – pull request 63 (merged)

My progress can also be seen on my GitHub page (admix).
Next week I will be working on the final CSP for Thimble and recoding the popcorn instance for popcorn.webmaker.org.


by admixdev at March 30, 2014 03:17 PM

March 29, 2014


Michael Veis

Progress on 1.0

This week I worked on two bugs for makerstrap: issue 44, which involved making some tweaks to the navbar and the inverse navbar, and issue 3, which involved adding gzip compression for makerstrap.

For issue 44 I needed to make some small tweaks to the navbar and the inverse navbar. The whole idea was to make it look similar to the webmaker.org navbar. The first thing I needed to do was check whether any Less variables could help me accomplish this. In this case I was able to use a Less variable for the horizontal padding and the default state of the inverse navbar. For the rest I needed to use CSS. I have submitted a pull request for this issue and it can be found here.

Issue 3 involved adding gzip compression to the grunt build task. For this issue I needed to make changes to Gruntfile.js. This was something I had never done before, so I wasn't exactly sure how to start. The first thing I did was look at the Grunt documentation, which can be found here. After reading it I had a better idea of what I needed to do, but still wasn't sure how I was going to do it. I tried making some tweaks to the file based on what I had read in the documentation, but I still couldn't get it to work. After that I had a brief conversation with Kate and she pointed me in the right direction on how to fix this issue. I was then able to implement the gzip compression. I have submitted a pull request, which can be found here.

I have also been assigned issue 48, which involves investigating the current breakpoints in makerstrap. Currently there is a thought that we might need a breakpoint smaller than the current "xs", but we're not sure. Right now I'm just looking into it and seeing whether this is something worth doing or not.


by mlveis at March 29, 2014 09:42 PM


Kevin Kofler

A Belated Release 5

Better late than never, I suppose. For this release I have a few more Filer bugs, all of which add new functionality to the project.

First up, issue #122. David Humphrey recently got a Filer shell implemented, and a couple of shell commands were requested. Issue #122 involves the implementation of sh.mv with tests and documentation, which I’ve recently completed. The relevant pull request can be found here. sh.mv takes two arguments (source and destination) and a callback, and will attempt to “move” the node at source to the path specified in destination. The majority of my problems with implementing mv involved misunderstanding the use of callbacks; a couple of times some dependent code was accidentally left out of the callback (statting a directory before I was sure that the async creation had finished, for instance -_-). Dealing with the many callback nests in mv has given me a much better understanding of the pattern, however.

Next, we have issue #136. This one was very simple. A couple of our storage providers (IndexedDB and WebSQL) will fail when the user tries to run an instance of Filer in a Private Browsing window. This issue involved adding a more verbose error message for that particular failure, to minimize the headaches involved in debugging it.
The relevant pull request is here.

Finally, we have issue #86, in which I added support for Unix-style timestamps. The relevant pull request is here.


by kwkofler at March 29, 2014 07:24 PM


Seneca Health Projects Blog

Thinking about the Next Steps for the Seneca Health Research (CDOT / NexJ MMDI) Project

With the MMDI project showing remarkable progress in building the wireless communication adapter and in processing and analyzing personal health data, it is time to consider and plan the possible next steps for the project.

On one hand, extending or enriching the functionality of the MMDI wireless communication adapter is an important task for the near future. The MMDI project was intended to build a universal adapter to connect multiple medical devices to cross-platform mobile devices using different wireless technologies. Medical devices can be classified into different categories, and each category can have different products provided by different vendors. Similarly, mobile devices can be divided into various platforms, such as Android, iOS and BlackBerry, as well as mobile frameworks like Cordova/PhoneGap. Meanwhile, the wireless technologies used in device communication also vary, including Bluetooth, NFC and Wi-Fi. Therefore, the different combinations of these factors give the MMDI project a large number of aspects or areas that need to be researched or probed. Currently, the MMDI project has completed implementations that use the Android platform, and Cordova on the Android platform, to connect to a certain number of medical devices based on Bluetooth technology. This means we have merely finished the first step or two. The possible new research areas of the MMDI project could include the following:

  • Extending the MMDI project to support other mobile platforms, or Cordova on those platforms, such as iOS, Windows Mobile, webOS and BlackBerry.
  • Adding NFC or Wi-Fi connectivity to the MMDI to support alternative wireless communication technologies.
  • Supporting more Bluetooth-enabled medical devices in the MMDI project to extend the current Bluetooth communication library for Android and Cordova/PhoneGap on the Android platform.

On the other hand, building up a PHR data-process-adapter and a common PHR data model would be a significant task and goal, one that would make the CDOT (NexJ) Health Research team a leader in the research of people-centered PHR systems in the healthcare industry. In fact, the Seneca Health Research team's research and implementation of applications based on the APIs of FitBit, Withings and MyOSCAR have already put the team on the way toward this goal.

What is the PHR data-process-adapter?
The PHR data-process-adapter is a new concept here, referring to a universal mobile/Web adapter which can retrieve data from, and update/upload data to (where possible), different personal health record (PHR) servers. For end-user applications on the Android platform, the PHR data-process-adapter could be an Android (Java) library project. For web (mobile Cordova/PhoneGap or desktop) front-end applications, the PHR data-process-adapter should be a JavaScript library.

What is the Common PHR Data Model?
Nowadays, there is no standard data model for building personal health record applications. The personal health data retrieved from different PHR servers may have different naming formats and data structures. The Common PHR (personal health record) Data Model is a standard data model for retrieving, updating and analyzing personal health records (PHRs) in mobile native or front-end web applications. Within these applications, all PHR data from different PHR servers will be converted into the common PHR data model by the PHR data-process-adapter. Thus, the end-user (mobile native or front-end web) applications can process personal health records from different PHR servers seamlessly, supporting a people-centered PHR analysis process.
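
To make the idea more concrete, here is a purely hypothetical sketch of what one record in such a common data model might look like (shown as a C struct only for brevity; the field names are invented for illustration, and a real implementation would likely live in the Java or JavaScript libraries described above):

#include <time.h>

/* Hypothetical common PHR record: whichever server the data came from
 * (FitBit, Withings, MyOSCAR, ...), the data-process-adapter would
 * normalize it into one shape so end-user applications can treat all
 * sources alike. */
typedef struct {
    char   source[32];   /* originating PHR server, e.g. "fitbit"      */
    char   metric[32];   /* measurement type, e.g. "weight" or "steps" */
    double value;        /* measurement value in a canonical unit      */
    char   unit[16];     /* canonical unit, e.g. "kg" or "steps"       */
    time_t recorded_at;  /* when the measurement was taken             */
} phr_record;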

Why do we need the PHR data-process-adapter and the Common PHR Data Model?
The PHR data-process-adapter and the Common PHR Data Model will be used to deal with the problems in today's PHR software market and to realize the breakthrough of building people-centered personal health record analysis applications.
Today, there are a number of different PHR software systems on the market. Interoperability among different PHRs is an issue that no PHR architecture wants to face. As a result, people/patients may be forced to use certain PHR systems, yet none of these systems can provide comprehensive PHR data drawn from the others; doctors and health coaches face the same problems when accessing a patient's PHR.
Consider the following situations.

  • People who use FitBit and Withings medical devices have to use vendor-provided mobile apps and servers for the collection and storage of personal health data.
  • A patient suffering from a chronic illness may be asked by his/her family doctor to use MyOSCAR for his/her personal health records.
  • A patient may be asked to use TELUS PHR if he/she visits a specialist for a chronic illness.
  • Health coaches for chronic illness may ask their patients to use the NexJ PHR systems created by the Connected Health and Wellness Project.

The above situations are especially likely for patients who suffer from multiple chronic diseases.

Building up the PHR data-process-adapter and the common PHR data model means creating a framework for building front-end (mobile and Web) PHR applications that support people-centered PHR analysis, that is, comprehensive PHR data analysis from multiple data sources.


by Wei Song at March 29, 2014 02:16 PM

March 28, 2014


Ali Al Dallal

Seneca CDOT Faculty and Student Open House V2

On November 21, 2013 we had an Open House at CDOT for faculty and students about various open source projects, and yesterday was our 2nd Open House here at CDOT, where I had a chance to present the work I have done for Mozilla Webmaker to students and faculty here at Seneca College @York.

Last time when I presented my work, it was mainly about the tools that we have on Webmaker, trying to explain what our tools do and what the goal for Mozilla is.

So, this time I had more to present, since it has been four months since the last Open House, and I showed a lot of the work that has been completed, such as our new localization across our tools, the newly improved Thimble app, Popcorn Maker and X-Ray Goggles. I also introduced our workflow, specifically how our team works in the open, such as using IRC to communicate or GitHub to collaborate on code.

Again... I was super busy explaining and talking with students and faculty and didn't have a chance to take any photos, but this time I must say it was better than last time. Hopefully we will have another one again soon, with more people :)

by Ali Al Dallal at March 28, 2014 10:45 PM


Andrew Smith

Ridiculous PayPass/PayWave

Both of my credit cards have been replaced (without any request from me) with new versions which have a wireless, authentication-less, confirmation-less, and protection-less system called either Mastercard PayPass or Visa PayWave.

I've never understood the old American system where your card number alone can be used to take money from you. Yes – it is your money and not the bank's, since the burden of noticing and proving that you weren't at fault was ultimately your responsibility.

Finally, in Canada we got a better system (catching up with the Europeans) where (shock!) entering a PIN is required to allow someone to take money from your account.

And then we went back an era in security to a system where your card doesn't even need to be visible: information is wirelessly read from it and used... however the reader wants to use it, with some limits like 100$ per transaction. I will dare presume this was done because a typical moron is too lazy to insert a card, type in a PIN, and wait for verification.

Not only that, but it turns out that your name, credit card number, and expiry date can apparently be read from your card using a 10$ device. Shockingly stupid.

More shocking? Read through this or this thread. It's incredible how many people will claim (clearly without thinking it through) that this system is more secure! Trying to understand how they arrive at that conclusion, and doing some research, I figured it out:

  1. They don't understand that chip&pin and PayPass/PayWave are unrelated technologies, and they assume that you must have both or else go back to the magnetic stripe. Clearly false, and I know that for a fact because for at least 2 years I had credit cards from both companies that had chip&pin but no radio functionality at all.
  2. They take the bank's word for "you will not be held responsible for fraudulent transactions". Really? Have you read a credit card statement recently? How many of the transactions on there can you tell with certainty where they came from? I recall once my card number was used fraudulently (without the PIN of course, why would you require a PIN) at York University. I happened to work at Seneca, at the campus shared with York University. It took me a long time to figure out that I really didn't pay 75$ at the admissions office there, partially because the bank insisted it could have been for something not admissions-related such as parking.
  3. They also parrot the MasterCard and Visa statements that "this technology is extremely secure and the information such as your name and credit card number is useless to thieves". Aha? Another time when my credit card was misused (again, without a PIN, because who needs that) someone bought over 1000$ worth of furniture and Caribbean trips from Sears. The bank noticed and I wasn't held responsible, but my card had to be destroyed, I spent about an hour on the phone with them, and it took a lot of arithmetic over a couple of statements to confirm that I didn't get charged for this misuse. Stress on top of stress.

Credit cards generally are a retarded idea. They allow you to spend money you don't have. Extremely convenient – pay online and anywhere else, interest-free for a month, with no transaction fees – but do you know why it is so convenient? It's because of the incredible number of poor schmucks who end up buying too much stuff with money that's not their own and end up paying nearly-illegally-large interest fees on it.

In principle I don’t necessarily mind that some dumbass is paying for my convenience, but I do mind when the card makers force an incredibly insecure payment system down my throat.

What can I do about it? Cancel all my cards? You know perfectly well that would mean I would not be able to rent a car or a pair of skis or do a number of other things that really have nothing to do with credit. I have to accept that some otherwise-perfectly-reasonable companies were sold the idea that a credit card should be a requirement even when no credit is needed.

So what I'll probably do is try to find a good RFID-blocking wallet and use the credit card even less than I'm using it now (i.e. almost never). It will be hard because I'm quite picky about my wallets – I've carried what looks like the same leather wallet for the last 20 years – but there are a number of options available, and the credit cards aren't the only RFID concern, so I'll deal with it.

I guess that won't teach the companies a lesson; it's exactly what they want (fewer savvy users and more sloppy spenders), but so be it.

by Andrew Smith at March 28, 2014 09:44 PM