Planet CDOT

August 25, 2014


David Humphrey

Introducing MakeDrive

I've been lax in my blogging for the past number of months (apologies). I've had my head down in a project that's required all of my attention. On Friday we reached a major milestone, and I gave a demo of the work on the weekly Webmaker call. Afterward David Ascher asked me to blog about it. I've wanted to do so for a while, so I put together a proper post with screencasts.

I've written previously about our idea of a web filesystem, and the initial work to make it possible. Since then we've greatly expanded the idea and implementation into MakeDrive, which I'll describe and show you now.

MakeDrive is a JavaScript library and server (node.js) that provides an offline-first, always available, syncing filesystem for the web. If you've used services like Dropbox or Google Drive, you already know what it does. MakeDrive allows users to work with files and folders locally, then sync that data to the cloud and other browsers or devices. However, unlike Dropbox or other similar services, MakeDrive is based purely on JavaScript and HTML5, and runs on the web. You don't install it; rather, a web application includes it as a script, and the filesystem gets created or loaded as part of the web page or app.

Because MakeDrive is a lower-level service, the best way to demonstrate it is by integrating it into a web app that relies on a full filesystem. To that end, I've made a series of short videos demonstrating aspects of MakeDrive integrated into a modified version of the Brackets code editor. I actually started this work because I want to make Brackets work in the browser, and one of the biggest pieces it's missing in the browser is a full-featured filesystem (side-note: Brackets can run in a browser just fine :). This post isn't specifically about Brackets, but I'll return to it in future posts to discuss how we plan to use it in Webmaker. MakeDrive started as a shim for Brackets-in-a-browser, but Simon Wex encouraged me to see that it could and should be a separate service, usable by many applications.

In the first video I demonstrate how MakeDrive provides a full "local," offline-first filesystem in the browser to a web app:

The code to provide a filesystem to the web page is as simple as var fs = MakeDrive.fs();. Applications can then use the same API as node.js' fs module. MakeDrive uses another of our projects, Filer, to provide the low-level filesystem API in the browser. Filer is a full POSIX filesystem (or wants to be; file bugs if you find them!), so you can read and write utf8 or binary data, work with files, directories, links, watches, and other fun things. Want to write a text file? It's done like so:

  var data = '<html>...';
  fs.writeFile('/path/to/index.html', data, function(err) {
    if(err) return handleError();
    // data is now written to disk
  });

The docs for Filer are lovingly maintained, and will show you the rest, so I won't repeat it here.

MakeDrive is offline-first, so you can read/write data, close your browser or reload the page, and it will still be there. Obviously, having access to your filesystem outside the current web page is also desirable. Our solution was to rework Filer so it could be used in both the browser and node.js, allowing us to mirror filesystems over the network using Web Sockets. We use a rolling-checksum and differential algorithm (i.e., only sending the bits of a file that have changed) inspired by rsync; Dropbox does the same.
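
To give a feel for the rsync-style idea (this is an illustrative sketch, not MakeDrive's actual code), here is the classic weak rolling checksum that lets you scan a file for changed blocks without re-hashing the whole file at every offset:

  // Illustrative only -- a weak rolling checksum in the spirit of rsync.
  // Two 16-bit sums are computed over a block of bytes; roll() slides the
  // window forward one byte in O(1), which is what makes scanning cheap.
  function weakChecksum(bytes, offset, length) {
    var a = 0, b = 0;
    for (var i = 0; i < length; i++) {
      a = (a + bytes[offset + i]) & 0xffff;
      b = (b + (length - i) * bytes[offset + i]) & 0xffff;
    }
    return { a: a, b: b, value: (b << 16) | a };
  }

  // Slide the window one byte to the right: drop `outgoing`, add `incoming`.
  function roll(prev, outgoing, incoming, length) {
    var a = (prev.a - outgoing + incoming) & 0xffff;
    var b = (prev.b - length * outgoing + a) & 0xffff;
    return { a: a, b: b, value: (b << 16) | a };
  }

Blocks whose checksums match on both ends never need to be sent; only the changed pieces do.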

In this video I demonstrate syncing the browser filesystem to the server:

Applications and users work with the local browser filesystem (i.e., you read and write data locally, always), and syncing happens in the background. That means you can always work with your data locally, and MakeDrive tries to sync it to/from the server automatically. MakeDrive also makes a user's mirrored filesystem available remotely via a number of authenticated HTTP end points on the server:

  • GET /p/path/into/filesystem - serve the path from the filesystem provided like a regular web server would
  • GET /j/path/into/filesystem - serve the path as JSON (for APIs to consume)
  • GET /z/path/into/filesystem - export the path as export.zip (e.g., zip and send user data)

This means that a user can work on files in one app, sync them, and then consume them in another app that requires URLs. For example: edit a web component in one app and include and use it in another. When I started web development in the 1990s, you worked on files locally, FTP'ed them to a server, then loaded them via your web server and browser. Today we use services like gh-pages and github.io. Both require manual steps. MakeDrive automates the same sort of process, and targets new developers and those learning web development, making it a seamless experience to work on web content: your files are always "on the web."
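
As an illustration, consuming one of those endpoints from another page is just an authenticated HTTP request. The host below is a placeholder, and the request has to carry whatever credentials the server expects:

  // Hypothetical host; /j/ is the JSON endpoint described above.
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'https://makedrive.example.org/j/projects/site/index.html');
  xhr.withCredentials = true; // the endpoints are authenticated (e.g., session cookie)
  xhr.onload = function() {
    console.log(JSON.parse(xhr.responseText));
  };
  xhr.send();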

MakeDrive supports multiple, simultaneous connections for a user. I might have a laptop, desktop, and tablet all sharing the same filesystem via a web app. This app can be running in any HTML5 compatible browser, app, or device. In this video I demonstrate syncing changes between different HTML5 browsers (Chrome, Firefox, and Opera):

Like Dropbox, each client will have its own "local" version of the filesystem, with one authoritative copy on the server. The server manages syncing to/from this filesystem so that multiple clients don't try to sync different changes to the same data at once. After one client syncs new changes, the server informs other clients that they can sync as well, which eventually propagates the changes across all connected clients. Changes can include updates to a file's data blocks, but also any change to the filesystem nodes themselves: renames, deleting a file, making a new directory, etc.

The code to make this syncing happen is very simple. As long as there is network, a MakeDrive filesystem can be connected to the server and synced. This can be a one-time thing, or the connection can be left open and incremental syncs can take place over the lifetime of the app: offline first, always syncing, always available.
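
I won't reproduce the real call here, so take the following as a purely hypothetical sketch: the method and event names are placeholders standing in for MakeDrive's actual connection API, just to show the "connect once, keep syncing" shape:

  // Placeholder names for illustration only -- see the MakeDrive docs for the real API.
  var fs = MakeDrive.fs();
  fs.sync.connect('wss://makedrive.example.org'); // hypothetical server URL
  fs.sync.on('completed', function() {
    // the local filesystem and the server copy now match; later changes
    // keep syncing in the background for the lifetime of the connection
  });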

Because MakeDrive allows the same user to connect multiple apps/devices at once, we have to be careful not to corrupt data or accidentally overwrite data when syncing. MakeDrive implements something similar to Dropbox's Conflicted Copy mechanism: if two clients change the same data in different ways, MakeDrive syncs the server's authoritative version, but also creates a new file with the local changes, and lets the user decide how to proceed.

This video demonstrates the circumstances by which a conflicted copy would be created, and how to deal with it:

Internally, MakeDrive uses extended attributes on filesystem nodes to determine automatically what has and hasn't been synced, and what is in a conflicted state. Conflicted copies are not synced back to the server, but remain in the local filesystem. The user decides how to resolve conflicts by deleting or renaming the conflicted file (i.e., renaming clears the conflict attribute).
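
For example, resolving a conflict by renaming is just a normal filesystem call on the fs returned from MakeDrive.fs(). The conflicted filename below is made up, since MakeDrive chooses its own naming for conflicted copies:

  // The conflicted filename here is hypothetical; renaming the file clears
  // its conflict attribute, so it becomes a normal, syncable file again.
  fs.rename('/project/index.html (conflicted copy)', '/project/index-local.html', function(err) {
    if(err) return handleError();
    // the renamed file will now sync to the server like any other
  });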

MakeDrive works today, but isn't ready for production quite yet. On Friday we reached the end of our summer work, and there is still plenty left to explore. If you have a web-first filesystem, you can do some interesting things that might not make sense in a traditional filesystem (i.e., when the scope of your files is limited to web content).

  • Having a filesystem in a web page naturally got me wanting to host web pages from web pages. I wrote nohost to experiment with this idea: an in-browser httpd that uses Blob URLs. It's really easy to load DOM elements from a web filesystem:

    var img = document.createElement('img');
    fs.readFile('/path/into/filesystem/image.png', function(err, data) {
      if(err) return handleError();
    
      // Create a Blob and wrap in URL Object.
      var blob = new Blob([data], {type: 'image/png'})
      var url = URL.createObjectURL(blob);
      img.src = url;
    });
    
    • Using this technique, we could create a small bootloader and store entire web apps in the filesystem. For example, all of Brackets loading from disk, with a tiny bootloader web page in appcache to get to the filesystem. This idea has been discussed elsewhere, and adding the filesystem makes it much more natural.
    • The current work on the W3C stream spec is really exciting, since we need a way to implement streaming data in and out of a filesystem, and therefore IndexedDB.
    • Having the ability to move IndexedDB to worker threads for background syncs (bug 701634), and into third-party iframes with postMessage to share a single filesystem instance across origins (bug 912202) would be amazing
    • Mobile! Being able to sync filesystems in and out of mobile web apps is really exciting. We're going to help get MakeDrive working in Mobile Appmaker this fall.

    If any of this interests you, please get in touch (@humphd) and help us. The next 6 months should be a lot of fun. I'll try to blog again before that, though ;)

    by David Humphrey at August 25, 2014 04:11 PM

    August 22, 2014


    Marcus Saad

    17th Semana da Computação USP – São Carlos

    The 17th Semana da Computação (Computing Week) took place at the University of São Paulo – Campus São Carlos. Known for its education quality, the event had every component needed to be a blast: the venue has been the same since the beginning, becoming a tradition for students in the city; sponsors of all types promoting the event; a highly competent organization that has been doing this for 17 years. Let's agree, it ain't easy to organize such a successful event for 17 years.

    Having said that, let's move on to my general impressions of how the event itself went and how the metrics were not achieved.

    At the same time this event was happening, BrazilJS was taking place in Porto Alegre/RS. Mozilla's all-star Chris Mills was there as a keynote presenter, which dragged most of the Brazilian community to that event (not to mention that it also dragged Ricardo Panaggio, who was meant to present about Webmaker with me at the event I attended).

    With my team down a member (can't compete with Chris), I went ahead and gave the attendees a quick brush-up on what Webmaker, Appmaker and our efforts on the open web are. I'm not as skilled on the matter as Panaggio is, so I decided to move on and start talking about what I'm proficient in: Firefox OS.

    Scheduled to start at 8AM on a Thursday, most of the attendees arrived around 8:20 to 8:30, making us lose 30 precious minutes. However, I won't blame the attendees for being late; I'll blame the organization for allocating my session to Campus 2, far, far away from the location that has been known for 17 years. I attribute some of the failure of this event to the fact that it was spread across both campuses.

    For my presentation, I created a Firefox OS app that works as a presentation, which can be found at my github/appresentation-firefox-os. The idea behind the app was to introduce the basic structure of apps in general, how to begin coding, and how Mozilla's Building Blocks framework works. Overall, feedback on the app was good. I'll take the risk and say that if it weren't for the lack of knowledge of basics such as JavaScript event manipulation and listening, HTML input creation and things like that, we could have done much more. It's important not to forget that only half of the expected 40 attendees were actually there.

    Metrics (let the shame begin)

    I’ll quickly go through metrics. And when I say quickly I mean that I want to get done with this failure as fast as possible.

    • Metric 1
      • Number of Firefox Marketplace app demos – 20
        • Huge failure. Because prerequisites.
    • Metric 2
      • Number of new web literates at the end of event – 40
        • Fail. Because attending events you paid for is for poor people. (Please, stop occupying slots on presentations if you’re not going)
    • Metric 3
      • Number of new webmaker.org accounts created – 40
        • Fail with a ZERO (Read zed-e-r-o for my Canadian friends). Because BrazilJS, Mozilla’s all-star, Panaggio.
    • Metric 4
      • Number of participants interested in hosting their own Maker Parties – 5
        • Word.
    • Metric 5
      • Number of people who will contribute to Mozilla after the event- 5
        • Fail. Not a single attendee joined mailing list.
    • Metric 6
      • Number of press/blog articles generated – 2
        • Fail. Mine will be the only one.

     

    Pics and Gifts

    I would like to thank Christian Carrizo for the support given throughout the event. He also took some pictures for us and gave me a few gifts! Thank you!

    Crowd hacking on FFOS apps!

    T-shirt and Certificate!

    My criticism.

    Mozilla needs to stop putting its community against itself.

    Mozilla needs to trust and empower local communities. We need local leaders. PLURAL. Not a single face you all run to when an event organized by Mozilla itself happens in Brazil.

    We had to do magic with our launch party budget at the last FISL. Now I'm looking at the extravaganza happening at BrazilJS. Is that the way it is meant to be? When an event is organized by Mozilla, is it fine to dump money on things, but when it's organized by the local community we automatically cut their budget in half because they are probably overspending? Come on, EQUALITY. There are a few hard-working community members who work hard to keep Brazil's community alive and united. Meanwhile, there are a few corporate climbers who won't attend a SINGLE EVENT THE WHOLE YEAR, but when people from outside the country come, they will be the first to confirm their attendance. For you people, you have my deepest scorn.

     

    “I generally don’t like to say out loud I’ve done something. In Brazil we
    have this problem that people do little but say too much about it. Most
    people here do almost nothing, and keep marketing themselves using that
    almost nothing as it was a big thing”
    quoting a friend from moz-br.

     

    Thanks

    by msaad at August 22, 2014 03:01 PM

    August 21, 2014


    Ali Al Dallal

    Hide bar item in Weechat for a specific buffer

    So, I have been using an IRC client called Weechat. It runs on many platforms, like Linux, Unix, BSD, GNU Hurd, Mac OS X and Windows (cygwin). Weechat is similar to Irssi; both are terminal-based IRC clients for UNIX systems. Some people wonder why anyone would use IRC in a terminal, but I must say this is a really powerful IRC client where you can do so much with plugins/scripts. I won't go into detail about why I chose this client over a nice native IRC client like Textual, LimeChat or any other native app for Mac or Windows.

    In this post I just wanted to share how I'm able to hide specific bar items in a Weechat buffer.

    What I have right now is something like this

    As you can see, I have two windows split vertically (top 20%, bottom 80%).

    I just want to use that top window to read my Twitter timeline or for highmon (to see who mentions or highlights your name). But the status bar, title bar and nicklist take up so much space, and I want to get rid of them.

    To do that search your configs for:

    weechat.bar.input.conditions            string   ""  
    weechat.bar.isetbar.conditions          string   ""  
    weechat.bar.nicklist.conditions         string   ""  
    weechat.bar.status.conditions           string   ""  
    weechat.bar.title.conditions            string   ""  
    

    Now, let's go to the buffer where you want to hide the bars and get the buffer name using /eval:

    /eval ${name}
    

    I did it in the highmon buffer and got highmon in return.

    Now all I need to do is write a condition that will evaluate this:

    weechat.bar.nicklist.conditions         string   "${name} != highmon && ${name} != &bitlbee.#twitter_alicoding"  
    weechat.bar.status.conditions           string   "${name} != highmon && ${name} != &bitlbee.#twitter_alicoding"  
    weechat.bar.title.conditions            string   "${name} != highmon && ${name} != &bitlbee.#twitter_alicoding"  
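
    If you would rather not edit the configuration file by hand, the same conditions can also be set from inside Weechat with /set, for example:

    /set weechat.bar.nicklist.conditions "${name} != highmon && ${name} != &bitlbee.#twitter_alicoding"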
    

    At the end I have something like this:

    I hope that this post is useful for people who started to learn or use Weechat like me :)

    by Ali Al Dallal at August 21, 2014 12:49 PM

    August 08, 2014


    Edward Hanna

    Recent Updates

    In recent weeks I have worked on many sub-projects related to the EDX platform research project here at Seneca CDOT. These include installing EDX on a commercial server that will be accessed for EDX research and lesson planning at Seneca. I have also continued working on the theme, including the colors and layout. Time was also spent working on the Java Assignment Auto-Grader that will accept assignments submitted by students in the form of Jar files, including validation, grading, and attempts. The Java Auto-Grader is written in Django and is currently being deployed with Tornado. While I settled on using Tornado, I spent many hours exploring other options and ways of deploying the Auto-Grader. I found out about Heroku, OpenShift, Apache mod_wsgi, and PythonAnywhere, to name just a few. I am not saying that these do not work; they work for some, but not all. Stack Overflow is a great resource to ask questions, and to find answers to the questions you're ready to ask. The Java Auto-Grader will need a way to insert grades for students, so I am currently exploring that option with MySQL Workbench and Robomongo for the respective database types. Yes, it is true that EDX uses models for many of these things. Aside from this, I spent time writing documentation to affirm the steps that I have been taking to operate the EDX platform. It is important to be aware of how these change and to be up-to-date with them.

    I learned how to do something interesting with the EDX platform. I used a Vagrant Fullstack image and Webmin, with Postfix relaying to Gmail. That way I didn't have to edit code manually or change any of the EDX code's default settings. By configuring Postfix to relay to Gmail, the Fullstack mail function works right out of the box. It is true, though, that you will have to customize the server-var file with your own unique settings to really make EDX your own unique EDX platform. I only expect you to understand this if you have read the EDX documentation.
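
    For reference, the relay setup boils down to a handful of Postfix settings. This is a rough sketch of the relevant main.cf lines (the credentials file path is just an example), not my exact configuration:

    relayhost = [smtp.gmail.com]:587
    smtp_use_tls = yes
    smtp_sasl_auth_enable = yes
    smtp_sasl_security_options = noanonymous
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd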

    I found many solutions in the Google Groups for the EDX platform. There are two groups, "Open edX operations" and "General Open edX discussion". Both groups have a member base from which I have learned a lot, including from the EDX contributors.

    My current goals are to continue working on the theme. This is easier to do once you understand the Firefox Inspector and Chrome Inspector. Currently I am trying to install the Production Stack on a VMWare image of the Ubuntu 12.04 Server 64-bit specification, and I am looking forward to the end of the installation. I will continue working on the Auto-Grader, repository maintenance and syncing, blogging, and EDX project goals.


    by Edward Hanna at August 08, 2014 10:50 PM

    August 07, 2014


    Zakeria Hassan

    Release 0.7



    This week I am working on hawtio.

    https://github.com/hawtio/hawtio

    I am currently working on implementing a feature that functions similarly to how Google's drag and drop works.
    The user will be able to drag a file from their desktop and drop it on the page to upload files to a JMS messaging system called Apache ActiveMQ.

    Plans for my next release:


    To implement a drag-and-drop feature which will leverage the open-standards File API to upload files to an Apache ActiveMQ JMS Queue.
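
    As a rough sketch of the browser side (the element id and upload URL are placeholders, not hawtio's actual markup or endpoints), standard HTML5 drag-and-drop plus the File API is enough:

    var dropZone = document.getElementById('queue-drop-zone'); // placeholder id

    dropZone.addEventListener('dragover', function(e) {
      e.preventDefault(); // required so the browser allows the drop
    });

    dropZone.addEventListener('drop', function(e) {
      e.preventDefault();
      var file = e.dataTransfer.files[0];
      var reader = new FileReader();
      reader.onload = function() {
        // POST the file contents to the server, which forwards them to the ActiveMQ queue
        var xhr = new XMLHttpRequest();
        xhr.open('POST', '/upload/to/queue'); // placeholder endpoint
        xhr.send(reader.result);
      };
      reader.readAsArrayBuffer(file);
    });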

    I've completed the UI portion and I will release another update soon.


    by Zak Hassan (noreply@blogger.com) at August 07, 2014 04:23 AM

    August 05, 2014


    Aaron Train

    Profiling Gecko in Firefox for Android

    At last week's Mozilla QA Meetup in Mountain View, California, I demonstrated how to effectively profile Gecko in Firefox for Android on a device. By targeting a device running Firefox for Android, one can measure the responsiveness and performance costs of executed code in Gecko. By measuring sample set intervals, we can focus on a snapshot of a stack and pinpoint the functions and application resources involved in a sampling run. With this knowledge, one can provide valuable sampling information back to developers in better-filed bug reports. Typically, as we have seen, these are helpful in bug reports for measuring page-load, scrolling and panning performance problems, frame performance and other GPU issues.

    How to Help

    • Install this (available here) Gecko Profiler add-on in Firefox on your desktop (it is the Gecko profiler used for platform execution), and follow the instructions outlined here for setting up an environment
    • If you encounter odd slowdown in Firefox for Android, profile it! You can save the profile locally or share it via URL
    • Add it to a (or your) bug report on Bugzilla
    • Talk to us on IRC about your experience or problems

    Here is an example bug report, bug 1047127 (panning stutters on a page with overflow-x) where a profile may prove helpful for further investigation.

    Detailed information on profiling in general is available on MDN here.

    August 05, 2014 12:00 AM

    August 04, 2014


    Yoav Gurevich

    Does anybody remember unit tests? But also a wicked kickin' readme!

    The week that ended July (and, judging from the weather patterns, ended summer as we know it too) was a little off-balance in terms of rhythm. With project lead David Humphrey away on vacation, his presence was clearly missed when it comes to uniform progress as a whole with the Mozilla Webmaker team.

    Nevertheless, my fellow researchers and I met this week's issues with fierce tenacity and the ambition to further reduce the remaining bugs in MakeDrive's current state. The first half of the week kept me occupied with more unit test patches to land, with most of my time spent on a patch dealing with redesigning some of the callback function signatures in the tests' infrastructure to cater to Node.js callback parameter conventions. Debugging galore ensued in order to correctly trace and follow the data passing inside the callback hierarchy, but it ended up being an invaluable learning experience.

    Finishing the week, I took on the task of implementing the first comprehensive readme document for the upcoming first users of MakeDrive. While initially daunting, this was accomplished with the help and insight of every member of the team pitching in on the section covering their expertise, and I believe the final result speaks for itself.

    This week will primarily concentrate on catching up with stress-testing Nimble with MakeDrive on the deployed page fellow team member Ali Al Dallal has up on the web. This will also be a wonderful opportunity to familiarize myself with emerging JavaScript and HTML5 technologies that Mozilla is already beginning to use in its products and services, such as Angular.js.

    by Yoav Gurevich (noreply@blogger.com) at August 04, 2014 05:45 PM

    July 28, 2014


    Yoav Gurevich

    Query Strings and more Unit Tests

    While front-end work is usually more fun and demonstration-friendly than functional coding, all those pretty icons, animations, and colour schemes wouldn't be very useful without any backend code to give their presence purpose.

    My work was focused on implementing pre-production logic in the session/authentication data handler functions to accept query string data (which comes in on the address bar) as well as standard cookie data. This was added to increase MakeDrive's flexibility, so it can cater to things such as Firefox extensions sending in verification data.

    The rest of last week revolved around adding more unit tests to increase the comprehensiveness of the existing test suite, in line with the complete overhaul of the client and server communications, which are now entirely reliant on Websockets. I was particularly focused on adding more test cases for the sync messages being passed back and forth.

    In the usual cadence of things, the week was overall very taxing but ultimately very productive. MakeDrive is just about ready to be deployed to the public, and after a design overhaul by the Mozilla UI/UX team, Nimble will follow soon thereafter.

    by Yoav Gurevich (noreply@blogger.com) at July 28, 2014 02:59 PM

    July 23, 2014


    Rick Eyre

    WebVTT Released in Firefox 31

    If you haven't seen the release notes, WebVTT has finally been released in Firefox 31. I'm super excited about this, as it's the culmination of a lot of my own and countless others' work over the last two years, especially since it had been delayed for releases 29 and 30.

    That being said, there are still a few known major bugs with WebVTT in Firefox 31:

    • TextTrackCue enter, exit, and change events do not work yet. I'm working on getting them done now.
    • WebVTT subtitles do not show on audio only elements yet. This will probably be what is tackled after the TextTrackCue events (EDIT: To clarify, I meant audio only video elements).
    • There is no support for any in-band TextTrack WebVTT data yet. If you're a video or audio codec developer that wants in-band WebVTT to work in Firefox, please help out :-).
    • Oh, and there is no UI on the HTML5 video element to control subtitles... not the most convenient, but it's currently being worked on as well.

    I do expect the bugs to start rolling in as well, and I'm actually kind of looking forward to that, as it will help improve WebVTT in Firefox.

    by Rick Eyre - (rick.eyre@hotmail.com) at July 23, 2014 12:00 AM

    July 22, 2014


    Andrew Smith

    Android programming: connect to an HTTPS server with self-signed certificate

    In a previous post I described my frustration with the fact that it’s so difficult to find documentation about how to connect to a server using HTTPS if the certificate for that server is self-signed (not from a paid-for certificate authority).

    After a while I found that someone at Google noticed that, because of their lack of documentation, the common solution is to disable certificate checking altogether, which of course nullifies any possible advantage of using HTTPS. So they posted some documentation, and after struggling with it for a while I finally got it to work.

    You don’t need to use BouncyCastle, just the stock Android APIs, you should be able to easily customise this code for your own uses:

        /**
         * Set up a connection to littlesvr.ca using HTTPS. An entire function
         * is needed to do this because littlesvr.ca has a self-signed certificate.
         * 
         * The caller of the function would do something like:
         * HttpsURLConnection urlConnection = setUpHttpsConnection("https://littlesvr.ca");
         * InputStream in = urlConnection.getInputStream();
         * And read from that "in" as usual in Java
         * 
         * Based on code from:
         * https://developer.android.com/training/articles/security-ssl.html#SelfSigned
         */
        @SuppressLint("SdCardPath")
        public static HttpsURLConnection setUpHttpsConnection(String urlString)
        {
            try
            {
                // Load CAs from an InputStream
                // (could be from a resource or ByteArrayInputStream or ...)
                CertificateFactory cf = CertificateFactory.getInstance("X.509");
                
                // My CRT file that I put in the assets folder
                // I got this file by following these steps:
                // * Go to https://littlesvr.ca using Firefox
                // * Click the padlock/More/Security/View Certificate/Details/Export
                // * Saved the file as littlesvr.crt (type X.509 Certificate (PEM))
                // The MainActivity.context is declared as:
                // public static Context context;
                // And initialized in MainActivity.onCreate() as:
                // MainActivity.context = getApplicationContext();
                InputStream caInput = new BufferedInputStream(MainActivity.context.getAssets().open("littlesvr.crt"));
                Certificate ca = cf.generateCertificate(caInput);
                System.out.println("ca=" + ((X509Certificate) ca).getSubjectDN());
                
                // Create a KeyStore containing our trusted CAs
                String keyStoreType = KeyStore.getDefaultType();
                KeyStore keyStore = KeyStore.getInstance(keyStoreType);
                keyStore.load(null, null);
                keyStore.setCertificateEntry("ca", ca);
                
                // Create a TrustManager that trusts the CAs in our KeyStore
                String tmfAlgorithm = TrustManagerFactory.getDefaultAlgorithm();
                TrustManagerFactory tmf = TrustManagerFactory.getInstance(tmfAlgorithm);
                tmf.init(keyStore);
                
                // Create an SSLContext that uses our TrustManager
                SSLContext context = SSLContext.getInstance("TLS");
                context.init(null, tmf.getTrustManagers(), null);
                
                // Tell the URLConnection to use a SocketFactory from our SSLContext
                URL url = new URL(urlString);
                HttpsURLConnection urlConnection = (HttpsURLConnection)url.openConnection();
                urlConnection.setSSLSocketFactory(context.getSocketFactory());
                
                return urlConnection;
            }
            catch (Exception ex)
            {
                Log.e(TAG, "Failed to establish SSL connection to server: " + ex.toString());
                return null;
            }
        }
    

    Good luck! And you’re welcome.

    by Andrew Smith at July 22, 2014 04:17 AM

    July 21, 2014


    Yoav Gurevich

    Bug Squashing, Issue Triaging, and Nimble UI Enhancements

    Communal elation in the group is still very apparent after the functional demo of Nimble and Makedrive working together, and we are all focusing that positive energy to keep a rigorous pace in order to arrive at the upcoming milestone this Friday revolving around getting MakeDrive to be stable enough to deploy to the public and be used in non-controlled environments such as the other Webmaker tools.

    Last week, I created some fun and practical extensions to the front-end UI in order to test the Brackets appshell's potential in its current form in the context of being able to manipulate or change the end-user interface without having to change any of the code already implemented. I went ahead and recorded a video demo of my results on YouTube:


    I can't help but feel proud of what little front-end programming prowess I've managed to cobble up, haha.

    My focus this week veers back to backend functionality with more bug squashing on the MakeDrive end of things, particularly in the scope of webmaker authentication. I will be tackling some code removal/refactoring to eliminate unnecessary or arbitrary module imports and process executions as well as attempt to plug in support for query string session data as an alternative to cookies in order to extend webmaker-auth's login methods to be able to use firefox extensions and the like. Much learning will likely be had.

    As always, stay tuned for more updates!

    by Yoav Gurevich (noreply@blogger.com) at July 21, 2014 07:01 PM

    July 19, 2014


    Andrew Smith

    Why hate self-signed public key certificates?

    There are times when the most of the world goes into a frenzied argument for something without thinking it through. This happens with many kinds of issues from (recent news) geopolitical to (since the beginning of time) religious to (what this post is about) technical issues.

    Effective means of reason and deduction are forgotten, research is ignored as unnecessary, the only thing that matters is that everybody (except for a couple of loonies) says so – then it must be true.

    This doesn’t happen all the time. Often even large, unrelated groups of people can accomplish amazing feats. But sometimes I’m flabbergasted by the stupidity of the masses.

    Public key encryption. Everybody uses it, few people understand it. I do. Many many years ago I read a book called Crypto – an excellent introduction to the science, technology, and politics of cryptography. In particular I learned something very important there that applies to this post: how public key cryptography works. I won’t bore you with the details, fundamentally it’s this:

    • There is a key pair – public key and private key.
    • The holder of the public key can encrypt a message. The key is public – that means anyone can encrypt a message.
    • Only the holder of the paired private key (private – that’s one person) can decrypt a message encrypted with that public key.

    It’s an amazing system made possible by mathematical one-way functions, and it works. Because of it, more than anything else, the current internet was made possible. We would not have had online shopping, banking, services, or anything at all that requires privacy and security on the internet without public key cryptography.

    Public key cryptography has one really unfortunate complication – key exchange. You can be sure the holder of the private key is the only one who can read your message that you encrypted with their public key, but how can you be sure you have that person’s public key? What if an impostor slipped you his public key instead?

    There are various solutions to this problem, none of them ideal. The most popular one is to have a central authority that certifies the certificates. So if you want to be sure https://www.fsf.org is really the Free Software Foundation and not an impostor – you'll have to trust their current certificate authority Gandi Standard SSL CA. Who the hell is that? Why should you trust them? Yeah, that's the problem. The trust is enforced partially by financial incentives and partially by crowd-sourced trust: Gandi would lose their business if they were caught issuing fake certificates, that's all. But it's the best we've got today.

    There is one case when a third-party certificate authority is unnecessary, in fact undesired: when I control both ends of the communication. When would that happen? Well it just so happens that I’m currently working on an Android app (my code) which connects to a web service (my code on my server).

    I would like to have secure communication between the two, meaning I need to be sure that the messages between the two either arrive unmodified and unread by third parties or do not arrive at all. A perfect use case for public key encryption, and I can of course put my own public key in my own Android app to match my own private key on my own server.. right?

    No. Or at least not without a great amount of difficulty. Try to find a solution for using a self-signed certificate with the Android (i.e. Apache) DefaultHttpClient or HttpClient. You'll find a lot of people who will say (with foam at their mouths) NEVER DO THIS THIS IS INSECURE HORRIBLE TERRIBLE STUPID WHY WOULD YOU EVEN ASK!!!

    And it would be ok, if they explained why they think this is the case, but they don’t. Of course not, why bother figuring it out? Everyone else is saying (no, shouting) the same. Must be true.

    This is when I start to lose faith in humanity. From “weapons of mass destruction” in Iraq (willingly swallowed by hundreds of millions in the west) to “god has a problem with condoms” (in certain popular religions) to bullshit like this technical problem I’m trying to solve it’s amazing we haven’t blown ourselves up to bits a long time ago.

    And it’s not like things like this are rare or are a relic of the unenlightened past, this is happening now and on a global scale. I am dumbfounded, and only mildly optimistic because there are still very many people on this planet who are clearly doing well enough. I just don’t understand how we made it this far.

    When (or, I should say “if”) I figure out how to make a connection with HttpClient and a self-signed certificate I’ll post a followup.

    by Andrew Smith at July 19, 2014 03:08 PM

    July 15, 2014


    Khosro Taraghi

    Linux as an IPv6 Router

    Hello all,

    First of all, I apologize that I haven't updated my blog page in the past 3 months; I have had a very difficult situation in my life recently, but everything went well. Special thanks to all my supporters who helped me through this sticky situation.

    Today, I would like to talk about IPv6 routing and how we can configure/monitor Linux (RedHat, Fedora, CentOS, SELinux) to work as an IPv6 router. We can easily use the radvd daemon
    (Router ADVertisement Daemon) for this purpose. To install the radvd daemon, run the following command after you switch to su:

    su -
    yum install radvd


    Now you need to turn on IPv6 forwarding. Run the below command (Figure 1):

    sysctl net.ipv6.conf.all.forwarding=1


    Figure 1


    The configuration file is located at /etc/radvd.conf. Figure 2 shows the content of radvd.conf:

    Figure 2

    As you can see in Figure 2, all lines are commented out. Based on our network requirements, we can start to uncomment those lines.

    Let's go through these lines and their definitions:

    interface eth0
    You need to decide which NIC or interface you want to use as a router. In this example, it is assumed there is one interface: ens33


    Figure 3



    AdvSendAdvert on;
    A flag indicating whether or not the router sends periodic router advertisements and responds to router solicitations. Router solicitations occur when the radvd daemon detects requests from hosts for the router's network address.

    MinRtrAdvInterval 30;
    The minimum time allowed between sending unsolicited multicast router advertisements from the interface, in seconds.

    MaxRtrAdvInterval 100;
    The maximum time allowed between sending unsolicited multicast router advertisements from the interface, in seconds.

    prefix 2001:db8:1:0::/64
     {
        AdvOnLink on;
        AdvAutonomous on;
        AdvRouterAddr off;
     };

    The prefix definition specifies your IPv6 network address. To specify prefix options for a specific prefix, add them within braces following the prefix definition. Here we have 3 prefix options.

    AdvOnLink on
    According to the manpage, when set, this indicates that the prefix can be used for on-link determination.
    When not set, the advertisement makes no statement about on-link or off-link properties of the prefix. It simply means that host requests can be received on the specified network address.

     AdvAutonomous on
    When set, indicates that this prefix can be used for autonomous address configuration as specified in RFC 4862. It provides automatic address configuration.

    AdvRouterAddr off
    When set, indicates that the address of the interface is sent instead of the network prefix, as is required by Mobile IPv6. When set, minimum limits specified by Mobile IPv6 are used for MinRtrAdvInterval and MaxRtrAdvInterval.

    Now, I am going to change this configuration file to meet my private network requirements, for example.


    Figure 4

    In this example, for the private network 192.168.74.0 (Figure 3), we use the unique-local IPv6 prefix, which operates like an IPv4 private network address: fc00:0:0:0::/64
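
    Putting the pieces together, the modified /etc/radvd.conf for this example looks roughly like this (interface name, intervals and prefix taken from the values discussed above):

    interface ens33
    {
        AdvSendAdvert on;
        MinRtrAdvInterval 30;
        MaxRtrAdvInterval 100;
        prefix fc00:0:0:0::/64
        {
            AdvOnLink on;
            AdvAutonomous on;
            AdvRouterAddr off;
        };
    };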

    Just a reminder from my comments on IPv6 and network auto-configuration (http://ktaraghi.blogspot.ca/2014/02/ipv6-and-network-auto-configuration.html): a host in an IPv6 stateless address autoconfiguration network uses its own MAC address to create a temporary link-local address (FE80:: prefix) so it can connect to the router. Then, the router sends its network prefix to replace the link-local prefix and create a full Internet address.

    Now, test your configuration file by the following command:

    radvd -c

    If everything, such as the syntax, is fine, we can start the radvd daemon. Run the following command:

    /bin/systemctl start radvd.service

    to check if the service is running, run the following command:

    /bin/systemctl status radvd.service

    Figure 5

    Now, it's time to check and see the advertisement packets that this machine is sending out to other machines/routers. On a different machine, run the following command:

    radvdump -d 4


    Figure 6

    The -d switch means debug mode and 4 means verbose mode (log everything).
    Figure 6 shows that our Linux router is advertising the correct network address (fc00::/64), the same IPv6 network address that we configured. And that's it.

    Hope you enjoyed.
    Regards,
    Khosro Taraghi

    by Khosro Taraghi (noreply@blogger.com) at July 15, 2014 02:18 AM

    July 14, 2014


    Kieran Sedgwick

    How to plan a screencast

    The Webmaker team at Seneca College’s Centre for the Development of Open Technology has just completed a major milestone in the development of Nimble, a browser-based code editor and IDE based on Adobe Brackets.

    We did it! We diiidd iiiit!!!

    Our team wanted to demo this accomplishment! We decided a pre-recorded screencast would be the easiest to share, so we set out to out-do our previous efforts. We'd done screencasts to demo Nimble in the past, but the reaction was always strangely tepid, even from a non-technical audience we would have expected to be somewhat excited by our work.

    So I set out to plan an effective, informative, digestible screencast. You can see it here!

    Be your audience

    Our audience didn’t care about code, zeros, ones, clouds or the various challenges we went through. My goal was to answer two questions in under two minutes: “How does someone like me use this?” and “Why does someone like me use this?”. This is an important step, because it focuses the demo/video/presentation on only the pieces that are relevant to your audience. It also helps inform other considerations, like format and length.

    “How does someone like me use this?”

    How… DO I do that thing you just mentioned?

    The answer to this question is generally quite illuminating. “Why,” you might begin, “It’s easy! You simply press CTRL-F5-SHIFT-DEL to load the base tray, followed by the conventional ALT-CMD-3 to connect to the opening prompt!”

    Hmmmm…

    In any case, our application is quite user friendly for its target audience. Stepping through the user’s actual actions still provided value, because it helped us understand how to communicate that usage. People don’t use applications for no reason! There’s always a need, or a story, that drives our use of software.

    “I wanted to make my first web page, but on four different browsers on three different devices!”

    Well, okay then! This was our base assumption, and informed what the user would be doing with our application. We were able to plot out, roughly, each step the user would go through as they pursued this goal. This would help us only record the actions we needed to, cutting down on the length of the video.

    “Why does someone like me use this?”

    The next part was closer to, "Why am I using this software in particular?". This is where the story started coming into focus. What would get a user to use this software in the manner we wanted to demonstrate? I had to come up with a narrative that would cause a person to actually go through these steps if they were in this kind of situation.

    So I made one up, with a film technique called storyboarding.

    I broke down, scene by scene, exactly what we would show, and what we would talk about. Even down to estimates of how long each section would take:

    PROFIT!

    PROFIT!

    Cobbling it together

    Recording the clips was an interesting process. Luckily, our senior developer had experience doing this, and with me directing he was able to capture exactly what we needed. A short voice over later, and we had ourselves a demo!

    I could go into the finer details, like text overlays for confusing or important points, or staying concise with the commentary, but an example is probably better for those. Once again, I proudly present our demo of Webmaker Nimble in its proof-of-concept stage.


    by ksedgwick at July 14, 2014 07:59 PM


    Yoav Gurevich

    Realizing the Vision and Beyond

    I couldn't find a better way to present this than to show it and recommend that anyone reading this check out this YouTube Link. Everything this team has been working on for the past 2+ months is now functionally amalgamated and in a state where the world can start seeing it: Nimble (Brackets) in the browser, using the MakeDrive filesystem to sync files between active sessions of the same client. Extremely exciting!

    The rest of the summer is about polishing and perfecting the operation of the project and adding features, in order to really turn it into a bona fide Mozilla product that becomes a welcome addition to the rest of the Webmaker toolkit.

    by Yoav Gurevich (noreply@blogger.com) at July 14, 2014 02:52 PM


    Aaron Train

    Proxy Server Testing in Firefox for Android

    Recent work on standing up a proxy server for web browsing in Firefox for Android is now ready for real-world testing. Eugen, Sylvain, and James from the mobile platform team have been working towards the goal of building a proxy server to ultimately increase privacy (via a secure connection), reduce bandwidth usage, and improve latency. Reduced page load times are also a high-level goal. A detailed wiki page is available at: https://wiki.mozilla.org/Mobile/Janus

    The time for testing is now.

    How to Help

    • Install this (available here) proxy configuration (development server) add-on in Firefox for Android
    • Browse as you normally would (try your network connection and or WiFi connections)
    • File bugs in GitHub (make sure to compare with the proxy enabled and disabled)
    • Talk to us on IRC

    July 14, 2014 12:00 AM

    July 10, 2014


    Anatoly Spektor

    How to mock file upload with RSpec 3.0 ? [SOLVED]

    One more note on RSpec 3.0, this time a useful function that mocks a file upload and saves the file/archive content to memory, so afterwards you can do whatever your soul wants with it. The test file/archive should exist in the file system.

    # Wraps a fixture file in an ActionDispatch::Http::UploadedFile, the same way Rails does for real uploads.
    def mock_archive_upload(archive_path, type)
      return ActionDispatch::Http::UploadedFile.new(:tempfile => File.new(Rails.root + archive_path), :type => type, :filename => File.basename(archive_path))
    end
    
    

     

    e.g of use:

    #saves archive into memory so it can be manipulated in tests
    mock_archive_upload("../fixtures/my_archive.zip", "application/zip")
    
    

    Tagged: file_upload, mock archive upload, mock file upload, Rails, Rspec, rspec 3.0, Ruby

    by Anatoly Spektor at July 10, 2014 04:16 PM

    July 07, 2014


    Kieran Sedgwick

    Honing the review workflow

    Last week I wrote a few patches, but I was reviewing more than I wrote. We’re very close to having a working example of our Nimble/MakeDrive combination, so I was helping push through some of the last pieces we needed to get to our new milestone.

    We also started tracking issues with Github, which is fantastic! I spent a good amount of time late last week writing out issues, removing TODOs from our code and generally organizing the work we have to do. This perspective on development is nice to have. I do love hacking away, but it’s great to see the bigger picture without being bogged down in implementation details.

    Onwards to Friday demos!


    by ksedgwick at July 07, 2014 03:43 PM

    July 02, 2014


    Yoav Gurevich

    Nimble and MakeDrive's Future

    As expected, with the help of the Webmaker team, I managed to finish a functional proof-of-concept implementation of the Websocket authentication module that Alan Glickman, Kieran Sedgwick and I planned out the week prior, just in time to quickly demonstrate it on Tuesday.

    On Thursday, our team also presented the current state of MakeDrive and what has been accomplished with the project so far. While our lack of practice at structuring roles and remembering who begins which part of which area of focus left a bit to be desired, the presentation was received with a healthy amount of praise nonetheless.

    After planning the rest of this week's and next week's tasks, Gideon Thomas and I began pair programming the conversion of MakeDrive's client-to-server communications from SSEs to Websockets. A very productive week on the whole for everyone on the team, and we're all hoping to end next week with a functioning instance of MakeDrive running inside Brackets in the browser for project lead David Humphrey's last week of working at CDOT for the summer. Fingers crossed.

    by Yoav Gurevich (noreply@blogger.com) at July 02, 2014 01:49 AM

    July 01, 2014


    Armen Zambrano G. (armenzg)

    Down Memory Lane

    It was cool to find an article from "The Senecan" which talks about how, through Seneca, Lukas and I got involved with and were hired by Mozilla. Here's the article.



    Here's an excerpt:
    From Mozilla volunteers to software developers 
    It pays to volunteer for Mozilla, at least it did for a pair of Seneca Software Development students. 
    Armen Zambrano and Lukas Sebastian Blakk are still months away from graduating, but that hasn't stopped the creators behind the popular web browser Firefox from hiring them. 
    When they are not in class learning, the Senecans will be doing a wide range of software work on the company’s browser including quality testing and writing code. “Being able to work on real code, with real developers has been invaluable,” says Lukas. “I came here to start a new career as soon as school is done, and thanks to the College’s partnership with Mozilla I've actually started it while still in school. I feel like I have a head start on the path I've chosen.”  
    Firefox is a free open source web browser that can...



    Creative Commons License
    This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

    by Armen Zambrano G. (noreply@blogger.com) at July 01, 2014 05:58 PM

    June 30, 2014


    Jasdeep Singh

    Installing Ruby 1.9.3 on Mac OS X Yosemite using RVM

    Apple announced their Mac OS X Yosemite Beta Developer Program recently. Like a usual Apple fanboy, I went ahead and installed the beta version of Mac OS X Yosemite and, as expected, was stuck with a non-working copy of Ruby 1.9.3 on my system. So, I just tried a reinstall of Ruby 1.9.3 on my computer:

    rvm reinstall 1.9.3 --disable-binary --with-gcc=clang

    Hope that helps.

    by Jasdeep Singh at June 30, 2014 10:46 PM

    June 28, 2014


    Zakeria Hassan

    What's a better way of encoding data if you don't want to use JSON or XML?


    What is Protocol Buffers? 

     Short answer: It's a way to serialize structured data.

    Benefits:

    - Smaller
    - 20-100 x faster
    - Easier
    - 3-10 x smaller than XML
    - Generates code for you
    - You don't have to handwrite your own parsing code

    You define the structure of your data in a .proto file, and code is then generated to read and write that data to and from data streams. Protocol Buffers was developed at Google to help with their index server. Some developers even use protobuf for storing data persistently, for example in BigTable. Some use it for RPC (Remote Procedure Call) systems. See the overview for more on this:

    https://developers.google.com/protocol-buffers/docs/overview


    Here is an example of a .proto file:

    package tutorial;
    option java_package = "com.learning.protobuf";
    option java_outer_classname = "SomeOuterClassIMadeUp";
    message MyTopSecretMsg {
         required string info = 1;
         . . .
    }

    Note: The 'required' keyword could be replaced by 'optional'.
    See the protobuf language guide for more details on how to define your .proto files.

    Once you have defined your .proto file, you can compile it.

    What is it used for?

    Used as a communication protocol etc. 

    How do I install it?

                                                 
     $ tar -zxvf protoc**.tar.gz
     $ ./configure
     $ make
     $ make install


    How do I compile the .proto file ?


    protoc -I=$PWD --java_out=$PLACE_TO_PUT_THE_GENERATED_CODE addressbook.proto

    NOTE: $PWD means the present working directory






    by Zak Hassan (noreply@blogger.com) at June 28, 2014 02:57 AM

    June 25, 2014


    Kieran Sedgwick

    Unit Test Revisions for MakeDrive

    I spent the majority of the week of the 16th paying back one of our biggest pieces of technical debt – the lack of unit tests. Since MakeDrive isn’t a user-facing product, it only works when supporting something that is user-facing. With nothing like that available (outside of a semi-realistic demo) it’s been like engineering an engine for a car without knowing what the car is going to look like, and how it’s going to house the engine.

    The solution is unit testing, which would fake having a car just enough that the engine is tricked into believing it exists. With enough test coverage, we can get a reasonable view of how our code and feature changes will affect the actual functionality of the code that we’ve written, and we can confirm that the code works properly in the most important ways.

    The great refactor

    Once my tests were written, and all the bugs they exposed were fixed, I decided that I didn’t like the sheer amount of code required for each test. In many cases, code was even being duplicated for steps common to multiple tests. To make the codebase more easily maintainable, I set out to refactor the test code to make it easier to reuse and read.

    Here’s a pseudo-coded example of a test before the refactor:

    it('should complete two steps in the sync process', function(done) {
      var socketPackage = util.openSocket(id);
    
      socketPackage.socket.removeListener("message", socketPackage.onMessage);
      socketPackage.socket.once("message", function(message) {
        message = resolveToJSON(message);
    
        expect(message).to.exist;
        expect(message.type).to.equal(SyncMessage.RESPONSE);
        expect(message.name, "[SyncMessage Type error. SyncMessage.content was: " + message.content + "]").to.equal(SyncMessage.SOURCE_LIST);
        expect(message.content).to.exist;
        expect(message.content.srcList).to.exist;
        expect(message.content.path).to.exist;
    
        socketPackage.socket.removeListener("message", socketPackage.onMessage);
        socketPackage.socket.once("message", function(message) {
        // Reattach original listener
          socketPackage.socket.once("message", socketPackage.onMessage);
    
          var path = data.path;
          message = resolveToJSON(message);
    
          expect(message).to.exist;
          expect(message.type).to.equal(SyncMessage.RESPONSE);
          expect(message.name, "[SyncMessage Type error. SyncMessage.content was: " + message.content + "]").to.equal(SyncMessage.ACK);
          expect(message.content).to.exist;
          expect(message.content.path).to.exist;
    
          done();
        });
    
        var checksumResponse = new SyncMessage(SyncMessage.RESPONSE, SyncMessage.CHECKSUM);
        socketPackage.socket.send(resolveFromJSON(checksumResponse));
      });
    
      var srcListMessage = new SyncMessage(SyncMessage.REQUEST, SyncMessage.SOURCE_LIST);
      socketPackage.socket.send(resolveFromJSON(srcListMessage));
    });
    

    This is what it might look like afterwards:

    it('should complete two steps in the sync process', function(done) {
      var username = util.username();
      util.authenticatedConnection({username: username, done: done}, function(err, result) {
        var socketData = {
          syncId: result.syncId,
          token: result.token
        };
        var socketPackage = util.openSocket(socketData, {
          onMessage: function(message) {
            util.prepareSync('checksum', username, socketPackage, function(syncData, fs) {
              expect(syncData.srcList).to.exist;
              expect(syncData.path).to.exist;
              expect(fs instanceof Filer.FileSystem).to.be.true;

              util.cleanupSockets(result.done, socketPackage);
            });
          }
        });
      });
    });
    

    Quite a difference! I also excluded what would have amounted to another 5-10 lines of code in the unrefactored version, which are summed up in 2-3 lines of the refactored one. The key was understanding which parts were being repeated, and the challenge was keeping the helper utilities flexible enough to allow the unit test programmer to test whatever they wanted.

    To clarify, lines 4-13 and 36-37 of the first code example might be repeated in multiple tests, since they perform a predictable step that is required to test other functionality. On the one hand, the entire process could be put into a function:

    function srcListStep(options, callback) {
      // Assume the socket wrapper used by the test is passed in via options
      var socketPackage = options.socketPackage;

      socketPackage.socket.removeListener("message", socketPackage.onMessage);
      socketPackage.socket.once("message", function(message) {
        message = resolveToJSON(message);
    
        expect(message).to.exist;
        expect(message.type).to.equal(SyncMessage.RESPONSE);
        expect(message.name, "[SyncMessage Type error. SyncMessage.content was: " + message.content + "]").to.equal(SyncMessage.SOURCE_LIST);
        expect(message.content).to.exist;
        expect(message.content.srcList).to.exist;
        expect(message.content.path).to.exist;
    
        callback(message.content);
      });
    
      var srcListMessage = new SyncMessage(SyncMessage.REQUEST, SyncMessage.SOURCE_LIST);
      socketPackage.socket.send(resolveFromJSON(srcListMessage));
    }
    

    But wait! What if the assertions on lines 6-11 aren’t the things the writer of the test wants to check? Those assertions assume the functionality executed correctly, and fail the test if it didn’t. What if the test writer wants it to fail? Testing failure cases is a common practice.

    So instead, perhaps something like this:

    function srcListStep(options, customAssertions, callback) {
      // Parameter handling to allow
      // using the default assertions
      if (!callback) {
        callback = customAssertions;
        customAssertions = null;
      }
    
      // As above, assume the socket wrapper arrives via options
      var socketPackage = options.socketPackage;

      socketPackage.socket.removeListener("message", socketPackage.onMessage);
      socketPackage.socket.once("message", function(message) {
        message = resolveToJSON(message);
    
        if (!customAssertions) {
          expect(message).to.exist;
          expect(message.type).to.equal(SyncMessage.RESPONSE);
          expect(message.name, "[SyncMessage Type error. SyncMessage.content was: " + message.content + "]").to.equal(SyncMessage.SOURCE_LIST);
          expect(message.content).to.exist;
          expect(message.content.srcList).to.exist;
          expect(message.content.path).to.exist;
    
          return callback(message.content);
        }
        customAssertions(message, callback);
      });
    
      var srcListMessage = new SyncMessage(SyncMessage.REQUEST, SyncMessage.SOURCE_LIST);
      socketPackage.socket.send(resolveFromJSON(srcListMessage));
    }
    

    Now the person writing the tests can leverage this function to perform the common functionality, but check for a specific result instead of success.
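
    For example, a failure-path test might reuse the helper roughly like this. This is only a sketch: it sits inside an it() block like the ones above, assumes options carries the socket wrapper, and reuses the resolveToJSON, expect, SyncMessage and done already in scope:

    srcListStep(options, function(message, callback) {
      message = resolveToJSON(message);

      // Custom assertions: this test deliberately expects something other
      // than the usual successful SOURCE_LIST response.
      expect(message.type).to.equal(SyncMessage.RESPONSE);
      expect(message.name).to.not.equal(SyncMessage.SOURCE_LIST);

      callback();
    }, function() {
      done();
    });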

    This is exactly the process that I went through for as much of the test code as I could refactor. In the end, the code was cleaner and more reusable.


    by ksedgwick at June 25, 2014 05:15 PM

    June 23, 2014


    Yoav Gurevich

    The Wonders of Mozilla Proper

    Last week, the team managed to finish the functionality for bi-directional syncing with MakeDrive in the browser, a huge milestone in the Nimble project. Helping design the front-end UI, working on unit tests, and pair programming through bugs was as rewarding and productive as it could ideally have been.

    The cherry on top was being able to visit the Mozilla office downtown to present the demo on Friday. The working space in and of itself is worthy of song and film, with ping pong, a music corner, couches abounding, a snack and fruit bar, and an espresso machine that cannot get enough praise for its impeccable quality. More importantly, being able to gain insight from and work with some of the most talented minds in the industry was invaluable, to say the least.

    With project lead David Humphrey expected to return from vacation this week, there's lots of catching up to do with Websockets authentication and changing the client codebase to function completely off of Websockets instead of server-sent events. Another daunting week ahead, with newfound energy and inspiration to tackle the tasks ahead. 

    by Yoav Gurevich (noreply@blogger.com) at June 23, 2014 03:33 PM

    June 20, 2014


    Armen Zambrano G. (armenzg)

    My first A-team project: install all the tests!


    As a welcoming bug to the A-team, I had to deal with changing what tests get packaged.
    The goal was to include all tests in tests.zip, regardless of whether they are marked as disabled in the test manifests.

    Changing the packaging was not too difficult, as I already had pointers from jgriffin; the problem came with the runners.
    The B2G emulator and desktop mochitest runners did not read the manifests; what they did was run all tests that came inside of the tests.zip (even disabled ones).

    Unfortunately for me, the mochitest runners' code is very, very old and it was hard to figure out how to make it work as cleanly as possible. I made a lot of mistakes and landed it incorrectly twice (an improper try landing, and I lost my good patch somewhere) - sorry Ryan!

    After a lot of tweaking, reviews from jmaher, and help from ted & ahal, it landed last week.

    For more details you can read bug 989583.

    P.S. Using trigger_arbitrary_builds.py to speed up my development was priceless.


    Creative Commons License
    This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

    by Armen Zambrano G. (noreply@blogger.com) at June 20, 2014 08:06 PM


    Lukas Blakk (lsblakk)

    Take on the harder problem, Google

    This just in:

    Girls love to make bracelets, right?

    Google, who recently announced their very disappointing diversity statistics, are trying to remedy that with a $50 million initiative targeting the usual suspects: Girls.

    This is not just me pointing fingers at Google.  I am actively working to create a program that targets adults and supports them getting deeply involved in tech without blinders to the realities of that environment as it stands now.

    They have $50M to put into this? Great.  They should, however, have enough brains in their organization to KNOW that ‘fixing’ the issues of lack of women in tech is demonstrably not done by just getting to more girls. Loss of women in tech happens with drop offs during CS courses & majors in college and then also out in the tech workforce because it’s a toxic and imbalanced place for them to spend their time and energy.

    All this money thrown at adorable girls, creating projects for them, will not help if they are being set up just to go into that existing environment. While we should do outreach and attempt to build educational parity for girls (but more importantly kids of color, kids living in poverty) so that there is exposure to and understanding of the technology, the REAL problem to solve is how to get adult women (and other underrepresented people) re-trained, supported and encouraged to take on roles in technology NOW.

    While we’re at it, stop acting like only a CS degree is what makes someone a valuable asset in tech (pro-tip: many people working in tech came to it via liberal arts degrees). Make the current adult tech world a welcoming place for everyone – then you can send in the next generation and so on without losing them in the leaky pipeline a few years in.

    by Lukas at June 20, 2014 08:00 PM

    June 19, 2014


    Ali Al Dallal

    Webmaker's Nimble project in action

    Last week, I was in Vancouver for the Mozilla Appmaker workweek. One of my tasks at the workweek was to create a user story for the Nimble project, but before going any further I just want to give a very brief explanation of the Nimble project.

    Nimble is an upcoming project from Mozilla Webmaker. It uses an Adobe open-source project called Brackets http://brackets.io/. Brackets is an open source code editor for web designers and front-end developers; we bring the editor to the browser and make it available to anyone who wants to learn coding or create an app.

    So, to simulate the user experience, my task was to hack the current Brackets version that already runs in the browser and have it load any component from Appmaker, then save it back to Appmaker after it has been edited in Nimble.

    Before Nimble, to do that a user must know and follow these steps:

    1. Know how to use git (How to push, pull, commit)
    2. Know some command line (Using shell)
    3. Know how to setup github page
    4. Know about HTML, CSS and JavaScript

    But if they use Nimble they don't need to know about:

    1. How to use git
    2. How to use command line
    3. Setup github page

    So basically, if they know some JavaScript, HTML and CSS, they're good to go!

    I have created a screencast to demonstrate what the workflow would be if a user:

    1. Visits the Appmaker designer page
    2. Clicks to remix the selected component in Nimble
    3. Edits some code in Nimble and clicks Publish to Appmaker
    4. Sees their component added to the list

    The above version is very hacky, but it's a proof of concept of how a user can create a component in just a matter of minutes.

    by Ali Al Dallal at June 19, 2014 04:17 PM


    Kieran Sedgwick

    MakeDrive: Bi-directional Syncing in Action

    Our CDOT team has been hard at work developing MakeDrive’s ability to sync filesystems between browser clients. Previously, we’d demo’d the ability to sync changes in one browser code editor up to a central MakeDrive server, called uni-directional syncing.

    Now, we’d like to proudly present a screencast showing MakeDrive performing bi-directional syncs between two browser client sessions, in two different browsers.

    A special thanks to:

    • Gideon Thomas for his persistent work on the rsync code, allowing syncing filesystems over the web
    • Yoav Gurevich for a reliable test suite on the HTTP routes involved in the syncing process
    • Ali Al Dallal & David Humphrey for guidance, coaching and code wizardry throughout the process

    by ksedgwick at June 19, 2014 03:13 PM

    June 16, 2014


    Yoav Gurevich

    To Websockets or not to Websockets

    With project lead David Humphrey currently on vacation and senior team member Ali Al Dallal called away to Vancouver for other Mozilla work, there was no shortage of work to be done or tasks to be undertaken. Kieran Sedgwick went ahead and took over MakeDrive unit testing for the time being, in order to eventually solve the nightmarish bugs that blocked the test infrastructure from being fully implemented and able to support comprehensive codebase testing.

    Concurrently, I switched gears and focused on researching and piecing together a proof-of-concept of a WebSocket-based API that handles user and session authentication before automatically upgrading the connection protocol from HTTP to WS. This is still a work in progress, but much learning is being had thus far. Some decisions might need to be made around the potential limitations of the core library, which may not handle upgrade events as comprehensively as necessary to properly and securely validate this connection switch.

    Lastly, I was helping fellow team member Gideon Thomas with the planned upcoming demo of MakeDrive's bi-directional syncing functionality, mainly by designing the front-end UI and pair programming through the client-to-server communication and invocation of our libraries. That demo unfortunately couldn't materialize due to the discovery of a bug in the server-side diff route validation, still being solved to this day, which affected not only bi-directional syncing but also the unidirectional syncing that was demoed weeks earlier. Slightly heartbreaking, but on we fight with the knowledge that eventual victory will taste that much sweeter.

    by Yoav Gurevich (noreply@blogger.com) at June 16, 2014 08:00 PM


    Kieran Sedgwick

    What was I debugging again?

    I went on a bit of a journey last week while trying to get reliable unit tests working for our MakeDrive server. The main point of the software is to replicate the way Dropbox stores and shares data between multiple locations, but running exclusively through a browser. To clarify, this means that files created and modified with a browser-based application (like a browser-based code editor) can be retrieved in any browser, on any computer.

    From a technical perspective, the magic is accomplished with a browser-based filesystem (Filer) which syncs data between a client’s browser filesystem and the MakeDrive server’s master filesystem for that client. We use a RESTful API for pushing data from the browser to the server, and have been toying with an HTTP-less alternative using Websockets.
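
    To make the contrast concrete, here is a rough browser-side sketch of the two transport styles. Everything in it (the route, URLs and message shape) is made up for illustration and is not MakeDrive's actual API:

    var serverURL = 'https://example.com';              // placeholder server
    var diffs = [];                                     // rsync diffs for the changed files would go here

    // RESTful style: each step of the sync is its own HTTP request.
    var xhr = new XMLHttpRequest();
    xhr.open('POST', serverURL + '/api/sync/diffs');    // hypothetical route
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify({ path: '/index.html', diffs: diffs }));

    // WebSocket style: one persistent connection carries every step,
    // in both directions, with no per-step HTTP overhead.
    var socket = new WebSocket('wss://example.com/');
    socket.onopen = function() {
      socket.send(JSON.stringify({ type: 'REQUEST', name: 'DIFFS', content: { path: '/index.html', diffs: diffs } }));
    };
    socket.onmessage = function(event) {
      var message = JSON.parse(event.data);             // the server's reply for this step
    };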

    Our proof of concept is the other half of the syncing process, namely when a client’s browser is out-of-date and needs to sync itself with the newer data stored on MakeDrive. All good development includes some sort of testing workflow, and we desperately needed some in our work.

    But this was a challenge.

    All the timeouts

    In software development, there is nothing more frustrating than an intermittent problem. Except, perhaps, a silent one. If something breaks, we rely on feedback from the program in order to pin down what caused the issue. When software just quietly fails, it can leave us scratching our heads. Combine this with an intermittent error that has no clear cause and you have a potent recipe for this:

    BANG!

    A timeout error is when something takes longer than it is allowed to. Our tests were timing out in the most bizarre circumstances, and even our supervisor was having trouble finding the cause of the problem. When the same untraceable timeout error popped up again in tests that were testing an entirely separate component (Websockets) I decided I had to get to the bottom of it.

    Enter node-inspector

    One of the great powers of JavaScript is how easy browser debugging makes it to watch the code execute. This is not quite the case with NodeJS applications, since they run outside of a browser. Luckily, someone wrote the fantastic node-inspector package, which allows developers to use Chrome’s developer tools to walk through the code as if it was running in a browser.

    This proved very helpful.

    Using node-inspector, I tried to isolate the traffic being sent from our test client to the server. Where was it going? Why wasn’t it opening a websocket connection? Was the traffic even reaching the server?

    I learned a great deal about how to delve into a codebase I wasn’t familiar with, and it led me to observe that the server was timing out on a completely separate request from the one I was testing. In other words, this had nothing to do with Websockets.

    Fuzzy facepalm

    Instead, it appeared that our websocket-like server-sent events (SSE) connection was hanging. The route a client would hit to establish this connection wasn't returning a response code on the tests that were timing out.

    Homing in

    Now, after a day of testing, I had a lead. First, I set out to make the SSE connections (which operate in a similar way to Websockets) as transparent as possible. I wanted to see when they opened, when they closed, and when they errored. This confirmed my suspicions, since the SSE connections were definitely being opened, but never closed.

    Armed with certainty, I looked into the utility library we’d built for the tests to see where the connections were being managed. What I discovered was the following snippet of code using the NPM request module, with line 11 being the most important:

    var stream = request({
      url: serverURL + '/api/sync/updates',
      jar: options.jar,
      headers: headers
    });
    
    // Callback was passed a function to close
    // the SSE connection (theoretically)
    callback(null, {
      close: function() {
        stream = null;
      }
      ...,
    });
    

    This entire piece was intended to replicate the code in a browser that would establish and manage an SSE connection. Comparatively, it’s much simpler:

    var SSE = new EventSource( serverURL + "/api/sync/updates" );
    
    // To close
    SSE.close();
    

    The problem was that setting stream to null wasn't enough to close the connection, and there wasn't an obvious means of doing so. The request module being used to open the connection was built to handle HTTP requests, and though it can handle streams of data over TCP, that capability is fairly well concealed. Using my best friend node-inspector, I ripped the stream variable apart until I found a likely candidate, which passed testing and solved all of my problems:

    callback(null, {
      close: function() {
        // HTML5 provides an easy way to
        // close SSE connections, but this doesn't
        // exist in NodeJS, so force it.
        stream.req.abort();
      }
      ...,
    });
    

    And so the tests passed. Diligence, persistence and a blinding hate for being beaten by a dumb computer were what got me through this.


    by ksedgwick at June 16, 2014 05:03 PM

    June 11, 2014


    Armen Zambrano G. (armenzg)

    Who doesn't like cheating on the Try server?

    Have you ever forgotten about adding a platform to your Try push and had to push again?
    Have you ever wished to *just* make changes to a tests.zip file without having to build it first?
    Well, this is your lucky day!

    In this wiki page, I describe how to trigger arbitrary jobs on your try push.
    As always, be gentle with how you use it, as we all share the resources.

    Go crazy!

    Creative Commons License
    This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

    by Armen Zambrano G. (noreply@blogger.com) at June 11, 2014 04:23 PM

    June 09, 2014


    Yoav Gurevich

    Makedrive Unit Testing

    In most projects of similar breadth, understanding the big picture is paramount in the quest to perfect the design and function of the system itself. This week was crucial to my comprehension of how all the pieces need to integrate into the final product in order to satisfy all the desired end-user applications of the APIs that we're creating.

    My assigned work was primarily focused on unit testing the sync route hierarchy that is exercised when the end user initiates a file save from one of their active sessions. This is the perfect task for mastering one's familiarity with a complex codebase. A firm grasp of what each variable, function, and object contains and passes is paramount to testing against the environment successfully and efficiently. On top of polishing up framework-agnostic unit testing syntax and conventions, I inevitably have to traverse all of the code in a way that isn't necessary for most other work. This is turning out to be an extremely fascinating and challenging endeavour: valid user credentials from the client session, as well as valid session objects including file and directory data, must be present and persistent in order to initiate the sync sequences, and were therefore mocked in order to get through each route successfully. That infrastructure was developed by project lead David Humphrey due to the time-constrained nature of the current sprint, but the helper functions that I wrote and am presently using ended up emulating the same behaviour, with the similar goal of persisting valid session data throughout the test process.
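
    As a rough illustration of the pattern (the routes, helpers and module paths below are made up, not the actual MakeDrive code), a route test with a mocked, persistent session might look something like this with mocha, chai and supertest:

    var request = require('supertest');
    var expect = require('chai').expect;
    var app = require('../server');            // hypothetical server module

    describe('sync routes', function() {
      var agent = request.agent(app);          // the agent persists session cookies across requests

      it('starts a sync for an authenticated user', function(done) {
        // A hypothetical test-only login route that stamps a valid session,
        // standing in for real authentication.
        agent.post('/mock/login').send({ username: 'testuser' }).end(function(err) {
          expect(err).to.not.exist;

          agent.post('/api/sync/start')        // hypothetical sync route
            .send({ path: '/projects/index.html' })
            .expect(200, done);
        });
      });
    });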

    Backtracking to more fundamental concepts that I must still catch up on, I've taken the weekend to fill in some knowledge gaps around object context and scope in JavaScript. A great article I found on this, which gives insight from the simplest to the more realistic use cases, can be found here.

    With the first of the 4 routes seemingly properly implemented, and with the help of fellow teammate Kieran Sedgwick, I intend to finish the test foundation for all of the primary sync routes by the end of today and to start integrating this logic with rsync and websocket applications for the rest of the week.

    by Yoav Gurevich (noreply@blogger.com) at June 09, 2014 03:00 PM

    June 02, 2014


    Yoav Gurevich

    The Acclivity that Separates Programming from Development

    Concluding this week was the presentation of the Webmaker team's first completed two-week sprint, which showcased the initial integration of MakeDrive with Filer and its initial connection with client-side DOM sessions. As I alluded to in last week's post, my work on this heartbeat was primarily focused on implementing Server-Sent Events.

    The design was structured like so:


    (This was also seen in my original proof of concept's front-end page.) The initiating client session would "save" a file it just created, effectively sending a request to push the data to the server. Once validation and syncing completed, I (along with great help from Mr. Sedgwick, yet again) used a server-sent event to notify all the other active client sessions (for the same user) that the sync and push had successfully completed and that it was time for them to update their version from the server. Completing the cycle and syncing all the other sessions is one of the goals of the upcoming sprint we are about to undertake.
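
    To make that flow concrete, here is a minimal sketch of the server side of the notification step in node.js with Express. The connection bookkeeping, payload and port are illustrative only, not the actual MakeDrive implementation:

    var express = require('express');
    var app = express();

    // One list of open SSE responses per user, so the server can notify
    // that user's other active sessions.
    var connections = {};

    app.get('/api/sync/updates', function(req, res) {
      var username = req.query.username;       // the real code would use the session
      res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive'
      });
      (connections[username] = connections[username] || []).push(res);
      req.on('close', function() {
        connections[username].splice(connections[username].indexOf(res), 1);
      });
    });

    // Called once a client's pushed changes have been validated and synced:
    // all of that user's open sessions are told to pull the latest version
    // (the real code would skip the session that initiated the sync).
    function notifyOtherSessions(username) {
      (connections[username] || []).forEach(function(res) {
        res.write('data: ' + JSON.stringify({ type: 'update' }) + '\n\n');
      });
    }

    app.listen(9090);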

    The demo, although executed wonderfully as a concerted team effort, ended up being received with a rather tepid response. This was likely due to the majority of the Mozilla developers being away at conventions, leaving the larger part of the crowd to be more community- and marketing-oriented folks.

    This week I will be working on writing server-side and restful-API unit and functional tests. More to come next week, as usual.

    by Yoav Gurevich (noreply@blogger.com) at June 02, 2014 03:41 PM

    May 31, 2014


    Gideon Thomas

    RSync FTW

    Firstly, sorry for the long gap between my posts. I promise to be more committed :)

    These past two weeks, we have been working on MakeDrive: a file system module that can sync to every copy of that file system. It is important to understand the power of what we are trying to do here. As a user, if I have a file system on my desktop and I add/change/remove files or directories in there, I can now just check my tablet on my way to work and see the changes I made on my desktop. And the point is to make it work in a browser. Kinda like Dropbox in a browser. Sounds neat, eh? Well, as cool as it sounds, it was really, really hard to implement.

    We used Filer, an awesome filesystem library conceptualized by the genius Alan K., as our base and worked from there. I worked primarily on the syncing component, building off code developed by a fellow classmate of mine, Petr B. It took a while to understand the code, especially since it was quite complex, but within two days I had a decent understanding of how rsync (which is the syncing algorithm we were using) worked. But there were some issues that needed to be fixed, like syncing empty directories. That took forever! I had to figure out where stuff went wrong, which is hard when you are working with MD5 hashes and non-human-readable data. But I was able to get it done in the end.
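
    For anyone unfamiliar with the idea behind rsync, here is a tiny, self-contained sketch of the core trick: hash fixed-size blocks, compare hashes, and only transfer the blocks that changed. It is deliberately simplified (same-length buffers, no rolling checksums, so it can't handle insertions the way real rsync does), and none of these names come from our actual code:

    var crypto = require('crypto');

    var BLOCK_SIZE = 4;

    // Hash each fixed-size block of a buffer (stands in for rsync's
    // rolling + strong checksums).
    function blockChecksums(buf) {
      var sums = [];
      for (var i = 0; i < buf.length; i += BLOCK_SIZE) {
        sums.push(crypto.createHash('md5').update(buf.slice(i, i + BLOCK_SIZE)).digest('hex'));
      }
      return sums;
    }

    // The source compares its blocks to the destination's checksums and
    // returns only the blocks that differ (the "diffs").
    function diff(srcBuf, destChecksums) {
      var changed = [];
      blockChecksums(srcBuf).forEach(function(sum, i) {
        if (sum !== destChecksums[i]) {
          changed.push({ index: i, data: srcBuf.slice(i * BLOCK_SIZE, (i + 1) * BLOCK_SIZE) });
        }
      });
      return changed;
    }

    // The destination patches its copy with just those blocks.
    function patch(destBuf, diffs) {
      diffs.forEach(function(d) {
        d.data.copy(destBuf, d.index * BLOCK_SIZE);
      });
      return destBuf;
    }

    // Example: only the one block containing the changed byte is transferred.
    var dest = new Buffer('hello world!');
    var src = new Buffer('hello World!');
    var diffs = diff(src, blockChecksums(dest));
    console.log(diffs.length);                     // 1
    console.log(patch(dest, diffs).toString());    // 'hello World!'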

    Then came the hard part. We were able to sync from one file system to another. But what about over a network? There was nothing that could really help us with this design and no real resources we could look to. Well, it took a while, but we were able to come up with a design (courtesy of David H. and Alan K.) that involved syncing through an API. In a few days, I was able to configure routes for an API (since I have good experience designing APIs thanks to the web services course I took) based on what I thought would be good end-points for each step of the sync process.

    It took a while, but after integrating with my colleagues’ pieces of code, we were able to show a small demo of this to Mozilla themselves \o/

    And there you have it…another successful project by the CDOT Mozilla Webmaker team!


    by Gideon Thomas at May 31, 2014 10:21 PM

    May 28, 2014


    Armen Zambrano G. (armenzg)

    How to create local buildbot slaves


    For the longest time I have wished for *some* documentation on how to set up a buildbot slave outside of the Release Engineering setup, without needing to go through the Puppet manifests.

    On a previous post, I've documented how to setup a production buildbot master.
    In this post, I'm only covering the slaves side of the setup.

    Install buildslave

    virtualenv ~/venvs/buildbot-slave
    source ~/venvs/buildbot-slave/bin/activate
    pip install zope.interface==3.6.1
    pip install buildbot-slave==0.8.4-pre-moz2 --find-links http://pypi.pub.build.mozilla.org/pub
    pip install Twisted==10.2.0
    pip install simplejson==2.1.3
    NOTE: You can figure out what to install by looking in here: http://hg.mozilla.org/build/puppet/file/ad32888ce123/modules/buildslave/manifests/install/version.pp#l19

    Create the slaves

    NOTE: I already have a build and a test master on my localhost, on ports 9000 and 9001 respectively.
    buildslave create-slave /builds/build_slave localhost:9000 bld-linux64-ix-060 pass
    buildslave create-slave /builds/test_slave localhost:9001 tst-linux64-ec2-001 pass

    Start the slaves

    On a normal day, you can do this to start your slaves up:
     source ~/venvs/buildbot-slave/bin/activate
     buildslave start /builds/build_slave
     buildslave start /builds/test_slave


    Creative Commons License
    This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

    by Armen Zambrano G. (noreply@blogger.com) at May 28, 2014 07:05 PM

    May 27, 2014


    Shayan Zafar Ahmad

    What little habits made you a better software engineer?

    I remember when I used to do this to a lesser extent. Need to start it up again!

    Answer by Jeff Nelson:

    One habit I've clung to is writing small prototypes when I'm trying to learn new concepts.

    For example, I'll sit down with a book or a web page, and over the course of a few hours, write 30 or 40 programs all of them only a few dozen lines long.  Each program intended to demonstrate some simple concept.

    This prototyping makes it very easy to try out many concepts in a short period of time.

    View Answer on Quora


    Filed under: Open Source

    by Shayan Zafar at May 27, 2014 12:53 PM

    May 26, 2014


    Yoav Gurevich

    Server-Sent Events and Bootstrap Tinkering

    Last week was definitely substantial in terms of activity. In the pursuit of surprise, however, a summary and description of my work on the current sprint will wait for this upcoming week's blog post, when the initial implementation in the MakeDrive logic will occur (hopefully by week's end) and be visually demonstrated first.

    I am delighted to report on my initial fiddling with Bootstrap CSS, which is coming along rather swimmingly. This front-end resource is being used to present my first proof-of-concept of the relatively new SSE interface. Documentation is readily google-able and as in-depth as you need it. When used in conjunction with jQuery, it provides for extremely efficient element generation and manipulation. You can even grab pre-made templates from the main website and customize them from there for web design on the fly. It's front-end beauty that's ideal for the timeframe of back-end coders.

    Friday also welcomed a new workshop to host at Silver Springs public school. It was very similar in structure to the workshop held one week prior, with the addition of me personally spearheading the last session of the day. The students were nearly as attentive and focused as at the first school, and showcasing Webmaker proved to be a success yet again, with excited praise coming from faculty and participants alike. One trend fellow teammate Kieran Sedgwick and I noticed was the slight difficulty the younger audience had in immediately finding the tools they were looking for from the main page. Kieran has already filed a new issue on Github for the front-end team to take a look at.

    Lots more news to come after the shipping of the first sprint by the end of this week!

    by Yoav Gurevich (noreply@blogger.com) at May 26, 2014 07:00 PM

    May 23, 2014


    Armen Zambrano G. (armenzg)

    Technical debt and getting rid of the elephants

    Recently, I had to deal with code where I knew there were elephants I did not want to see. Namely, adding a new build platform (mulet) and running a b2g desktop job through mozharness on my local machine.

    As I passed by, I decided to spend some time to go and get some peanuts, to get at least a few of those elephants out of there:

    I know I can't use "the elephant in the room" metaphor like that but I just did and you just know what I meant :)

    Well, how do you deal with technical debt?
    Do you take a chunk every time you pass by that code?
    Do you wait for the storm to pass by (you've shipped your awesome release) before throwing the elephants off the ship?
    Or else?

    Let me know; I'm eager to hear about your own de-elephantization stories.





    Creative Commons License
    This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

    by Armen Zambrano G. (noreply@blogger.com) at May 23, 2014 03:35 AM

    May 21, 2014


    Ljubomir Gorscak

    Interesting article about business tactics becoming obsolete. I don’t know how much of this will turn out to be true, but it got me thinking. http://www.ragan.com/Main/Articles/48350.aspx# 

    "...the era of companies having separate departments for brand strategy, digital, social media, or content production will be over. It's all just marketing, which is totally focused on one thing: revenue generation."

    "It is unlikely that companies will continue to supply workers with mobile devices and computers. Everyone will bring their own, and the IT department will have to adjust and adapt. Similarly, the CIO will now work more closely with the CMO, not the CFO."

    "The 'Mad Men' style demographic targeting will be a thing of the past—meaning no more "our target is 60 percent female, ages 24 to 45, with a HHI of $75,000-plus." Who thinks of themselves in that way?… Customers and prospects are a community—or an audience—and they are reached and engaged with by their common interests, passions, or needs. The best way to accomplish this is with high-quality content delivered on an opt-in basis that builds a true two-way consumer-business relationship. B2B and B2C become P2P—person to person."

    by Ljubomir Gorscak (noreply@blogger.com) at May 21, 2014 06:16 PM

    May 20, 2014


    Aaron Train

    Language Switching in Fennec

    As Jeff Beatty describes, language switching in Nightly is now available. Ultimately, this linguistic enhancement allows one to select from all officially shipped localizations from within the browser independent from available language selection provided by Android. As Jeff calls out, "Languages no longer have to be among the list of Android supported languages to become official localizations of the browser."

    Firefox for Android QA is looking for help from others for discovering issues found when trying this feature out.

    The developer of the feature, Richard Newman, calls out the following to look for in Bug 917480 when testing this feature on Nightly:

    Note: the option to control this is in Menu > Settings > Language > Browser language

    • Nightly should obey one's selection as their preferred Android system-provided language. Firefox has obeyed this in previous and current releases

    • Nightly should use one of the languages we ship, regardless of system language

    • Testing this feature involves verifying that a change is immediately applied, and that all entry points into the application reflect the selected language

    • Entry points to check:
      • Data reporting notification. This launches Settings in the "Mozilla" section. Titles should be correct: on tablet, for example, you should see "Settings" in the top left, and "Mozilla" as a heading. You only get this on first run, so you'll need to Clear Data to get back to a clean slate and test this out
      • Launching the browser. Top sites should be in the correct language, as well as other UI elements
      • Clicking a link from another application
      • Installable Firefox Market Web applications

    Other areas affected by language change:

    • Sync settings when accessed via Nightly and via Android Settings > Accounts & Sync > Firefox

    • Sent tabs from other devices: the launched notification should be in the last language you picked

    Notes of interest:

    • Language selection changes should be persistent across browser sessions and restarts

    • All chrome content, such as error pages should be in the correct selected language

    • Setting the browser language has the side effect of changing your Accept-Language header. You should get pages in non-phone languages sometimes; depends on the site

    • Verify that switching to Android Settings and changing the system language does the right thing if "System default" is selected, and does the different right thing if a specific language is selected

    If you discover any issues, please file a bug on Bugzilla

    References:

    Bug 917480

    Try it out on Nightly today

    May 20, 2014 12:00 AM

    May 18, 2014


    Yoav Gurevich

    The Plugin, the Sprint, the Speech, and the Workshop

    To use a word such as 'eventful' for this week would likely be the understatement of the century. In the short span of 5 days, the Webmaker team at CDOT managed to implement and demo a barebones functional version of a Brackets extension called Wavelength, and crack down on the remaining issues in the Filer codebase in order to ready it for porting into MakeDrive. In between all of this, fellow team member Kieran Sedgwick and I were sent out on our first workshop for the Toronto District School Board, and earlier in the week I was able to harness my communication skills and impromptu charm by welcoming a visit from high profile individuals.

    In the spirit of honesty and accuracy, the work on the brackets extension did start on Friday, but that shouldn't take anything away from the tenacity and talent of the team being able to dive straight into a brand new API and push up a functional extension within less than 10 hours of work per person. Credit and thanks should be given to Adobe for not only creating sufficiently thorough documentation to peruse through in times of need, but for uploading templates and specific examples for starting to build extensions that proved to be paramount in our ability to create ours in such a demanding timeframe. My particular contribution was implementing the toolbar icon and the events necessary to emulate the standard behaviour relating to mouse movement and action - changing background colour when hovering over the icon and changing the icon's colour when clicked on or activated. The biggest logical hump for me to overcome was having to wrap my head around the fact that Brackets elements are all effectively DOM elements; I was looking for an API-specific function or parameter that would invoke or manipulate the toolbars when in actuality these are all DOM elements controlled by standard javascript calls and CSS classes. Extremely neat.

    Filer work made up the bulk of the development work for the week, and the experience was the first true test of what is likely to come in the near future in regards to the independent workflow process. With the rest of the team focused on their own respective tasks, and project lead David Humphrey juggling 50 other issues at any given time, I was largely left to my own devices and ken in order to solve any blockers that obstructed my progress. IRC is always there, but honestly it could never amount to the quality of a peer's physical presence. Quite overwhelming at first. Filer's codebase is relatively vast in comparison to what I am used to working with up until now, and file system logic is brand new territory for me. Combine those factors with rather outdated documentation and the questions started piling up quickly out of the woodwork. Once I gathered enough context about the variables and functions involved and more insight into assertion-agnostic unit testing, everything else eventually fell into place with the exception of a few kinks. Testing the logic proved to be a challenge as well, since I was asked to run the unit tests on a local server instance to emulate an environment that accommodates CORS mechanisms. Mac OS X builds have apache2 built in to serve webpages locally, but being able to properly implement that also ended up needing the seasoned and extremely capable hands of Chris Tyler, the OSTEP team's project lead and veteran Linux wizard. Apache2 is overly restrictive in its document path hierarchy and file permission structure. I initially thought that placing a symbolic link to my index.html entry point in the default path given in the httpd.conf file would be enough. That proved to be unsuccessful, so Chris needed to change the default path to start at my Documents file tree, set the chown group of all the inner files and folders to the "staff" moniker in my case, and lastly grant read and execute permissions to the "other" octet (chmod 755, or similar). Allowing symlinks is apparently a dangerous course of action that opens your files to attacks. All in all, I managed to send pull requests for two issues related to Filer by the end of the week, ultimately surpassing my own goals.

    Thursday was reserved for the workshop activities at Kennedy Public School for a grades 6-8 career day. Kieran and I agreed on engaging the children in a relatively simple task of creating a webpage with their own background colour and URL-sourced image anchor using Webmaker's Thimble editor. Initially, we believed there would be enough time for the students to look for and find the syntax required to achieve the task on their own, but for many of them the learning curve was a bit much and we quickly adapted ourselves to nudge and help them along in the right direction. Nearly everyone was attentive and listening, and we were pleased to see some cases of genuine interest and comprehension of what they were looking at and doing. It was a smooth, productive day and I'd like to think that we've helped nurture the future software development giants of tomorrow some way, shape, or form.

    I conclude with a pleasant surprise this week when one of the ICT professors dropped by the office with a pair of executive academic representatives. I had the opportunity to give them a quick overview of our project, and tried my best to add as much genuine personality into the conversation as I could while keeping a professional manner. It's a wonderful experience for anyone who wants to refine or nurture their interpersonal skills in a more improvisational dynamic. These kinds of meetings often lead to invaluable networking channels that will reward you in waves later on. Wonderful stuff.

    by Yoav Gurevich (noreply@blogger.com) at May 18, 2014 08:19 PM

    May 15, 2014


    Dmitry Yastremskiy

    MongoDB ElasticSearch tutorial

    While working on my own web app I decided to learn and implement some new web technologies, and one of those is the very popular database MongoDB. It is a document-oriented NoSQL database with great scalability, performance and reliability, but unfortunately it has rather poor full-text search capabilities. Text search features were added in version 2.6, but they still leave something to be desired. To solve this problem, the search engine ElasticSearch was developed; it can work with many databases, gives you a rich and flexible way to search your data, and outputs real-time statistics for tuning the database. I realized that there are not many tutorials on the Internet on how to make MongoDB and ElasticSearch work together, and most of them are confusing, misleading, or omit important parts. For that reason I decided to gather all the necessary information on how to install these two and make them work together on Ubuntu 12.04 LTS. I will give a step-by-step tutorial on how to install everything and fix all the problems that I encountered.

    First of all you need to install the latest MongoDB; it is pretty straightforward:

    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
    echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list
    sudo apt-get -y update
    sudo apt-get install -y mongodb-org

    The second step is to install ElasticSearch and all necessary components:

    sudo apt-get install python-software-properties -y
    sudo add-apt-repository ppa:webupd8team/java
    sudo apt-get update
    echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections
    echo debconf shared/accepted-oracle-license-v1-1 seen true | sudo debconf-set-selections
    sudo apt-get install oracle-java7-installer -y
    sudo wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.1.deb
    sudo dpkg -i elasticsearch-1.1.1.deb

    These commands will install MongoDB and ElasticSearch for you. Now you have to configure them, and this is the part where I found the most difficulties. For ElasticSearch to function with MongoDB you have to set up a replica set. A replica set is a great feature of MongoDB that allows you to have multiple copies of a database which are backed up automatically and in real time. For example, if you create a replica set consisting of 2 instances, one will be used as PRIMARY (read/write operations) and the second will be used as SECONDARY, just for read operations. Of course, data will be written to the second instance simultaneously to keep the backup updated, but it allows users read operations only. This gives us a reliable set of failover databases. You can create as many instances as you want and have the management system decide on the fly where to write and read, but more about that maybe next time. For us, a replica set is required for ElasticSearch to be able to communicate with MongoDB: ES uses the operations log (which MongoDB creates for a replica set and calls the oplog) to copy all the data from MongoDB into the ES index in order to perform searches.

    First of all we have to create a directory for the second instance of MongoDb.

    sudo mkdir -p /mongo/data2

    In the next step we have to run two instances with some parameters:

    First instance aka PRIMARY:

    sudo mongod --dbpath /var/lib/mongodb --replSet rs0 --port 27017

    Note: for the mongod service, the default database location is /data/db/; however, mongo as installed here initially stores data in /var/lib/mongodb. For this reason we specify --dbpath /var/lib/mongodb to point out where our existing database is located. You might already have some data in the database and want to use it, so we have to point to where that existing data is. Otherwise you can specify any other existing directory.

    The --replSet option specifies the name of the replica set; it can be anything.

    And of course we need to specify a port, because each instance uses its own port.

    After you execute the command, the console will show various messages from the process running in the foreground. For now we need to run it in the foreground, but in order to send it to the background you can also specify the --fork option.

    Now the second instance aka SECONDARY. Open a new terminal window and type in:

    sudo mongod --dbpath /mongo/data2 --replSet rs0 --port 27018

    Notice that here we specify the directory we created earlier to store the second database, or replica. We also need to specify another port for this instance.

    Once we have executed the above commands, you will see the instances running, and they will give you these messages:

    [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
    [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
    [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)

    It says that it cannot find a config and attempts to create an empty one. Let’s fix it.
    We need to open another terminal window and connect to the PRIMARY instance by issuing:

    mongo localhost:27017

    Once it is successfully connected, we need to pass a configuration for our replica set; for that we create a config object. Since it uses JSON notation, the format should be familiar to you:

    config = { _id: 'rs0', members: [
     {_id: 0, host: 'localhost:27017'},
     {_id: 1, host: 'localhost:27018'}
    ]}

    Our config object is ready to be passed and the following command will do it for us:

    rs.initiate(config)

    After successful execution you will see the message:

    {
    	"info" : "Config now saved locally.  Should come online in about a minute.",
    	"ok" : 1
    }

    and those annoying messages

    [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)

    will be gone.

    In the next step we can check if our configuration is ok by issuing command:

    rs.status()

    which will output all the information about your replica set; notice how the prompt line has changed to PRIMARY>

    > rs.status()
    {
    	"set" : "rs0",
    	"date" : ISODate("2014-05-14T22:35:10Z"),
    	"myState" : 1,
    	"members" : [
    		{
    			"_id" : 0,
    			"name" : "localhost:27017",
    			"health" : 1,
    			"state" : 1,
    			"stateStr" : "PRIMARY",
    			"uptime" : 2329,
    			"optime" : Timestamp(1400106573, 1),
    			"optimeDate" : ISODate("2014-05-14T22:29:33Z"),
    			"electionTime" : Timestamp(1400106582, 1),
    			"electionDate" : ISODate("2014-05-14T22:29:42Z"),
    			"self" : true
    		},
    		{
    			"_id" : 1,
    			"name" : "localhost:27018",
    			"health" : 1,
    			"state" : 2,
    			"stateStr" : "SECONDARY",
    			"uptime" : 336,
    			"optime" : Timestamp(1400106573, 1),
    			"optimeDate" : ISODate("2014-05-14T22:29:33Z"),
    			"lastHeartbeat" : ISODate("2014-05-14T22:35:08Z"),
    			"lastHeartbeatRecv" : ISODate("2014-05-14T22:35:09Z"),
    			"pingMs" : 0,
    			"syncingTo" : "localhost:27017"
    		}
    	],
    	"ok" : 1
    }
    rs0:PRIMARY>
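
    Before checking the SECONDARY, put some test data on the PRIMARY. A minimal example in the mongo shell (the mydb database and warehouse collection match the names used below; the document itself is just an illustration, with an input field that also matches the ElasticSearch query at the end of this post):

    // connected to the PRIMARY (mongo localhost:27017)
    use mydb
    db.warehouse.insert({ input: "bebe", qty: 10 })
    db.warehouse.find()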

    Let’s test it! I’ve inserted some data into the database (something like the example above), and I would like to check whether our replicated DB gets the same data. To connect to the second instance, aka SECONDARY, you issue:

    mongo localhost:27018

    where the port is the port of our second instance.
    Notice the prompt line now says SECONDARY, which means it is our second instance.

    Let’s see what our second DB contains by issuing

    use mydb
    db.warehouse.find()

    and we get this message:

    error: { "$err" : "not master and slaveOk=false", "code" : 13435 }

    Again? Now what??? We have to specify that this instance is a slave and confirm at the same time that we know what we are doing.
    Execute this:

    rs.slaveOk()

    It didn’t output any message for me (though it’s supposed to), but now you can actually see a copy of the data from the master replica.

    Here is also a nice FAQ about replica sets to clarify the functionality: http://docs.mongodb.org/manual/faq/replica-sets/

    Now we can move on to ElasticSearch. First of all we need to make sure that ES service is running and responding.

    sudo service elasticsearch start

    and then in the browser we can go to localhost:9200, where 9200 is the default port for ES. We should get a message like:

    {
      "status" : 200,
      "name" : "Michael Twoyoungmen",
      "version" : {
        "number" : "1.1.1",
        "build_hash" : "f1585f096d3f3985e73456debdc1a0745f512bbc",
        "build_timestamp" : "2014-04-16T14:27:12Z",
        "build_snapshot" : false,
        "lucene_version" : "4.7"
      },
      "tagline" : "You Know, for Search"
    }

    which confirms that ES is running.

    The last step in my tutorial is to hook up ES to MongoDB.

    We need to install the elasticsearch-river-mongodb plugin and its dependencies, which allow you to hook the two up (https://github.com/richardwilly98/elasticsearch-river-mongodb).

    sudo /usr/share/elasticsearch/bin/plugin -install elasticsearch/elasticsearch-mapper-attachments/2.0.0
    sudo /usr/share/elasticsearch/bin/plugin --install com.github.richardwilly98.elasticsearch/elasticsearch-river-mongodb/2.0.0

    After that you need to restart ES and MongoDB and pass a config to ES in order to hook it up to MongoDB.

    curl -XPUT 'http://localhost:9200/_river/mongodb/_meta' -d '{ 
        "type": "mongodb", 
        "mongodb": { 
            "db": "mydb", 
            "collection": "warehouse"
        }, 
        "index": {
            "name": "mongoindex", 
            "type": "person" 
        }
    }'

    and to test whether it actually works, you can insert data into MongoDB and then query it in ES.

    curl -XGET 'http://localhost:9200/mongoindex/_search?q=input:/bebe.*/'

    An important note: where it says ‘mongoindex’, that is the name of the index you specified previously in the config. If this query returns data, then you have successfully set this crazy mix of software up! Congrats!

    This tutorial was created first of all for myself, to reinforce the knowledge and to have it around for the future. Cases differ and this tutorial might not work for everybody; I don’t claim it is perfect and it may contain mistakes. But I will be happy if even one person finds this tutorial useful, so that I didn’t waste my time. Any comments are welcome!

    by admin at May 15, 2014 12:51 AM

    May 14, 2014


    Marcus Saad

    Firefox Launch Party in Brazil

    Firefox’s latest release launch party took place at FISL 15 as part of its official schedule, and gathered more than a thousand attendees throughout the 4 days of the event.

    During the party, snacks, candies and soda were offered to everyone who engaged with the party somehow, whether by participating in the swag quizzes or the swag draw, or by answering questions regarding the latest release, Mozilla and our projects. Moreover, we provided a stand where people could install Firefox on their devices, whether Android, PC, Mac or Linux.

    Showing up at the event, Mozilla’s Fox cheered people up, gathered the crowd for pictures with its furry tail and called people over to our party. While it busted some moves to the rhythm of “What Does The Fox Say”, the crowd watched in awe.

    As a side attraction, a Photobooth was created so that people could frame their picture using the new Firefox UI and complete the phrase “My Firefox has never been so ….” with their preferred adjective before sharing it on Twitter. We counted more than 50 shares using the @MozillaBrasil profile, tracking the #FirefoxBrasil and #Firefox hashtags.

    Data Collected, in numbers:

    • More than a 1000 people passing by the party
    • More than 400 people enrolled for swag draw
    • More than 200 people joined a swag quiz including questions about Firefox, Webmaker, Mozilla, Bugzilla and several other projects.
    • Install fest during the event:
      • ~20 Firefox for Android
      • ~10 Firefox for Desktop (Most attendees already had it installed, we just checked if they were up to date and prized them with stickers).
    • Swag and Handouts
      • Dozens of Lanyards
      • Hundreds of Firefox, Firefox OS, and Foxy stickers
    • Around 30 Mozillians helped on the event set up, including arrangement, organization, greeting attendees, talking and showcasing the latest release of Firefox.
    • Photos taken at the photobooth can be found here

    What Happened during the event:

    • Engagement talks about the latest Firefox, Firefox for Android, Firefox OS, Webmaker, Support Mozilla, Firefox Student Ambassadors.
    • Overall talks about our newest projects, such as Servo, Appmaker, Intellego and others.
    • Handing out of flyers about Firefox OS and Firefox Student Ambassadors.
    • Videos talking about the process of creation involved on the latest release of Firefox, Web We Want, and “What Does The Fox Say”!
    • Photobooth for people to spread their love for Firefox and share it on Twitter
    • Fox cheering up the crowd and taking pictures with attendees.
    • Soda, candies and snacks were being given for those who attended any kind of activity.

    What have we learned:

    • Using FISL to host our Launch Party was a big hit. We had the perfect audience: people fascinated by open source and free software, and people who care about their privacy. They were eager to learn about the new features that the latest release bundled, but not only that, they wanted to talk about other Mozilla projects as well.
    • The location allowed even people who were not attending the event to join us, since the event was open to the community

    Actions we should take after event:

    • Since we gathered around 300 emails from the people who joined our quizzes and draws, it might be a good idea to send them an email inviting them to join our community, explaining how they can be helpful and what areas we have for newcomers.
    • Make Photobooth available so that anyone who wants to share their love for Firefox can do so on their social networks.
      • It can be found here. The code can be found on my GitHub; please feel free to send pull requests or to use it at your local Launch Party or future events. Thanks to Chris Heilmann for the original code. If you want, this app can be easily localized using Gaia’s L10n.js library.
    • Publishing event pictures on our social media profiles.

    Here you will find some useful links to materials we’ve created and used (or not); feel free to grab and modify them as you wish (as long as you follow Mozilla’s guidelines).

    Card inviting people to the event:

    It's party day!
    Cards spreading awareness

    Your privacy, our commitment

    by msaad at May 14, 2014 09:17 PM

    May 13, 2014


    Armen Zambrano G. (armenzg)

    Do you need a used Mac Mini for your Mozilla team, or your not-for-profit project?

    If so, visit this form and fill it up by May 22nd (9 days from today).
    There are a lot of disclaimers in the form. Please read them carefully.

    These minis have been deprecated after 4 years of usage. Read more about it in here.
    From http://en.wikipedia.org/wiki/Mac_Mini




    Creative Commons License
    This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

    by Armen Zambrano G. (noreply@blogger.com) at May 13, 2014 05:38 PM


    Gideon Thomas

    You can do anything in Javascript!

    This is my first blog post! :)

    I have always been deluded by the misconception that Javascript can only be used to make a webpage interactive. As a matter of fact, that was what I was taught a few years ago when I took an HTML/CSS/Javascript course. But things have changed a lot since then. With technologies like Node.js, Require.js, Travis CI… the possibilities are endless.

    I am still a beginner in Node.js. This week, we were required to create an app that would extract links from a markdown file. Seems pretty straightforward! But here’s the catch – the logic for the extraction should be a separate module so that it can be used to create two apps: one that will run in a browser and another that will work on the command line.

    Here’s where Node.js came in. Node allowed me to create these parts of the app using a modular approach. First, I created a module that would simply get the links. I pretty much matched a regular expression pattern to get the links. But after chatting with my colleagues, it turns out there are several ways to do this. In fact, I was told that Array.map() was a better approach…it was :)
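
    To give a sense of what that looks like, here is only a rough sketch (the extractLinks() name is a placeholder, not my actual module): a regex match combined with Array.map() boils the whole thing down to a few lines:

    // A rough sketch, not the real module: pull the URL out of every
    // [text](url) pair in a markdown string.
    function extractLinks(markdown) {
      var matches = markdown.match(/\[([^\]]*)\]\(([^)]+)\)/g) || [];
      // Array.map() turns each full "[text](url)" match into just its url.
      return matches.map(function (link) {
        return link.replace(/\[([^\]]*)\]\(([^)]+)\)/, '$2');
      });
    }

    console.log(extractLinks('A [link](http://example.com) and [another](http://mozilla.org).'));
    // -> [ 'http://example.com', 'http://mozilla.org' ]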

    Require.js made my work even simpler. It allowed me to include my node module as a javascript module in my html file. So, I was able to run it on both the command line and on a browser.

    I am excited to see what else you can do with Node and javascript in general and I am sure with the help of my team, we will be able to create innovative apps using javascript…can’t wait to see what our next challenge will be :)


    by Gideon Thomas at May 13, 2014 01:41 PM

    May 12, 2014


    Seneca Health Projects Blog

    A Flexible Personal Health Record Library

    Originally posted on Dylan Segna:

    After spending the last few months working on accessing four different health record apis from Android devices, I thought it would be an interesting idea to create a library that obfuscates the individual differences found in each api.

    I imagined it like this:

    You read a measurement from one record; let’s say it was weight, for example.

    You use the returned weight measurement and immediately upload it to a different record.

    No formatting changes, new object creation or individualized method calls. Just a simple, flexible way for developers to access multiple different health record apis using the exact same interface.

    How will this work? Let’s continue using the weight measurement example.

    By taking the values that every weight measurement has in common throughout the web api, an object can be created that can be used by every record. It could look something like:

    public class WeightMeasurement{
    private double weight;
    private…

    View original 278 more words


    by Dylan Segna at May 12, 2014 04:04 PM


    Dylan Segna

    A Flexible Personal Health Record Library

    After spending the last few months working on accessing four different health record apis from Android devices, I thought it would be an interesting idea to create a library that obfuscates the individual differences found in each api.

    I imagined it like this:

    You read a measurement from one record; let’s say it was weight, for example.

    You use the returned weight measurement and immediately upload it to a different record.

    No formatting changes, new object creation or individualized method calls. Just a simple, flexible way for developers to access multiple different health record apis using the exact same interface.

    How will this work? Let’s continue using the weight measurement example.

    By taking the values that every weight measurement has in common throughout the web api, an object can be created that can be used by every record. It could look something like:

    public class WeightMeasurement{
    private double weight;
    private String unit;
    private Calendar date;
    }

    This is where the more difficult part arises. Each web api expects its requests to be formatted in different ways. This object needs to be transformed by each individual api implementation when it is being pushed or pulled to/from the respective health record.
    This of course would all be done behind the scenes, hidden from the interface that a developer would be using, so they may continue to do things like:

    WeightMeasurement weight = fitbit.pull(MeasurementType.WEIGHT);
    withings.push(weight);

    This also serves another purpose: allowing extensibility in the library.
    If a new api needs to be implemented, only the formatting changes for the Measurement objects would need to be written. The interface that is used by a developer would remain exactly the same.

    There is, however, one problem I have yet to find an elegant solution for: application authorization.

    Each api that I have worked with has employed different methods for authorizing an application to access a user’s personal information.

    Some require the user to sign in through a webview, others only require a request be sent with the necessary credentials.

    Creating a common interface for this process may prove difficult; however, I believe it to be possible.

    For now let me wrap this up until I have more concrete ideas to share.

    I believe a flexible library such as this, with the potential to provide a common interface for developers to access multiple health record apis, would not only prove to be a valuable development tool for anyone trying to work with multiple apis, but would also prove extensible enough to stay relevant with future health record api development and creation.


    by Dylan Segna at May 12, 2014 04:00 PM


    Yoav Gurevich

    A Trial by Fire - Initial Exposure to Bower, NPM, and require.js Modules

    Starting the first week in any place of work tends to be chaotic, to say the least: settling into the expected workflow and output, setting up your environment, and getting to know your team members, if any. The first week's project was designed to introduce the popular and emerging web technologies used to test, build, and publish web apps on both the front end and back end.

    Node.js functions and objects were used to import and export a simple module that parses through incoming data in markdown format and returns the URL addresses of the links found in the strings. Initially, the data was placed in a static variable in order to focus on the pure logic of the function that parses the data. Unit tests were written with mochaTest and automated with a Grunt task, alongside JSHint linting for any syntactic concerns. Travis was used in conjunction with the GitHub repository being worked on to invoke all of the testing tools any time a change is pushed, with the results emailed to the developer shortly thereafter. After those dependencies were properly implemented and tied together, the module was published to the npm and bower registries in order to be usable by the public. Bonus tasks were to implement logic to read data from incoming file arguments on the command line, as well as to parse data from a front-end web app using require.js to import the module and its dependencies.
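
    As a rough illustration of how those pieces tie together, here is a generic sketch only (not the project's actual Gruntfile), assuming the grunt-contrib-jshint and grunt-mocha-test plugins:

    // Gruntfile.js - a generic sketch (not the project's actual configuration)
    // showing JSHint linting and mochaTest unit tests wired into one task.
    module.exports = function (grunt) {
      grunt.initConfig({
        jshint: {
          all: ['Gruntfile.js', 'lib/**/*.js', 'test/**/*.js']
        },
        mochaTest: {
          test: {
            src: ['test/**/*.js']
          }
        }
      });

      grunt.loadNpmTasks('grunt-contrib-jshint');
      grunt.loadNpmTasks('grunt-mocha-test');

      // Travis can then simply run `grunt` on every push.
      grunt.registerTask('default', ['jshint', 'mochaTest']);
    };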

    Managing to accomplish all the aforementioned goals, save for the front-end medium, concluded a work-intensive and very intriguing week. Almost all of the technologies used in this project were brand new to me, and my exposure to them will hopefully prove invaluable when tackling the coming work with the Mozilla Webmaker team to release a functional version of nimble to the Webmaker toolkit by the end of the summer. The blockers I came across were efficiently quelled by the readily available help from the rest of the team and the documentation of the APIs. I enter this week ready to tackle whatever challenges cross my path next with cautious optimism and palpable excitement.

    by Yoav Gurevich (noreply@blogger.com) at May 12, 2014 02:55 PM


    Ali Al Dallal

    Write JavaScript modules that work both in NodeJS and the browser (with requirejs)

    Last week I was trying to write a simple JavaScript module that parses Markdown links from a given markdown string and outputs them as an array. I wanted to challenge myself a bit and make sure that my one single library file would work not only in a NodeJS app, but also in the browser. Now, in the browser it can be done multiple ways, and there are really two ways of writing this module that I care about most. One is writing it to work in a requirejs environment, and the other with a very basic <script> tag include in the HTML file.

    The challenge was that if I write it the node module way, it will not work in requirejs, since exports is not something requirejs understands, and if I use define, it won't work in nodejs either.

    This is how you write for node only (COMMONJS)...

    function foo() {  
    // do something
    }
    
    module.exports = foo;  
    

    The above will work in other environments if you add some extra code to check whether module exists in the browser and, if not, do something else so it won't break, but I prefer not to; I like to keep the code clean and find an approach that will work for sure.

    And this is how you do it in the browser...

    function foo() {  
    // do something
    }
    
    window.foo = foo;  
    

    Obviously there are many ways to write a simple module in the browser, but this is just one example.

    Also, requirejs uses AMD...

    define(['jquery'] , function ($) {  
        return function () {};
    });
    

    Now, one thing to make sure we understand: to make your module work in all environments, you can only use things that are guaranteed to work everywhere. For example, you can't use require(), which is available in the nodejs environment, in the browser, as it is not part of the browser (if you include some extra library it will work, I guess).

    So, how to make it work in all environments?

    Here...

    (function(global) {
      var foo = function () {
        //do something
      };
    
      global.foo = foo;
    
    }(this));
    

    The above code will work in nodejs, the browser and requirejs. I prefer to do it this way as I don't have to include any extra library in node or add any conditions to make sure nothing might break the library.

    There are many ways to write code that will work in both requirejs and node, but you would have to add a library like amd-loader, which converts your AMD module and makes it work with nodejs; again, that requires adding an extra library.
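
    For reference, the more common answer to this problem is the UMD (Universal Module Definition) wrapper, which checks each module system explicitly. Here is only a generic sketch of it, not the code I ended up using:

    // Generic UMD sketch: detect AMD, then CommonJS, then fall back to a global.
    (function (root, factory) {
      if (typeof define === 'function' && define.amd) {
        // requirejs / AMD
        define([], factory);
      } else if (typeof module === 'object' && module.exports) {
        // nodejs / CommonJS
        module.exports = factory();
      } else {
        // plain <script> tag in the browser
        root.foo = factory();
      }
    }(this, function () {
      return function foo() {
        // do something
      };
    }));

    It works everywhere, but as you can see it carries more boilerplate than the single global-attaching closure above.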

    If you have a better way or suggestion please leave a comment down below :)

    by Ali Al Dallal at May 12, 2014 02:41 PM


    Kieran Sedgwick

    The Markdown Parser

    And he’s back!

    I’m back at CDOT for the summer! A massive thanks to David Humphrey, Dawn Mercer and the Mozilla Webmaker team for giving me another opportunity to contribute in such a focused manner.

    I’m going to keep this post as brief as I can. My re-introduction to web development came in the form of a challenge. I was to:

    • Build a node module that exposes a method for parsing the links in Markdown syntax
    • Make this module bower-compatible
    • Make this module cross-platform (browser & nodejs)
    • Demonstrate its use on the server-side by incorporating a command-line tool that reads Markdown from a file
    • Demonstrate its use on the client-side by building a github-pages website that uses it with requirejs and bower
    • Demonstrate some grasp of LESS by making that website all pretty ‘n stuff

    Right.

    The Nodejs module

    This was a good exercise, because it forced me to become reacquainted with how Nodejs logically organizes code that uses it as a framework. Using the CommonJS module system meant putting my parsing logic in a file with this structure:

    module.exports = {
      // Logic here
    };
    

    It also meant using the package.json syntax necessary for publishing an NPM module. This was nice review.

    Cross-compatibility

    Once my parsing logic was complete, I had to figure out a way to make the same file compatible with the RequireJS and CommonJS module systems simultaneously. This was accomplished by wrapping the module.exports code from earlier in a call to define(), so as to keep RequireJS happy:

    if (typeof define !== 'function') { 
      var define = require('amdefine')(module);
    }
    
    define(function( require ) {
      return {
        // Logic here
      };
    });
    

    Command line

    This was a fairly simple task, involving reading data from a file (specified as a command-line argument) as utf8 text:

    fs.readFile( path, { encoding: "utf8" }, function( err, data ) {
      // Call my parser and print output here
    });
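
    Putting it together, a minimal version of the command-line tool might look roughly like this (the ./mdlinkparser require path and its parse() method are placeholders here, not the module's real API):

    // cli.js - a rough sketch; the module path and parse() are placeholders.
    var fs = require('fs');
    var parser = require('./mdlinkparser');

    var path = process.argv[2];

    fs.readFile( path, { encoding: "utf8" }, function( err, data ) {
      if ( err ) {
        console.error( err );
        process.exit( 1 );
      }
      // Call the parser and print the extracted links
      console.log( parser.parse( data ) );
    });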
    

    Client-side demo

    This can be viewed at http://sedge.github.io/mdlinkparser/. The challenge was approaching this as if it were a completely isolated web application that just happened to need a Markdown link parser. From this perspective, I would need:

    • A front end package manager (bower)
    • A front end module loading system (RequireJS), and
    • A build system to connect the two as part of my workflow (grunt)

    Configuring bower

    Bower is powerful. As a package manager, it shares many similarities with Nodejs’ package manager NPM. For instance, bower has a bower.json configuration file that operates in a similar way to NPM’s package.json. Mine ended up looking like this:

    {
      "name": "mdlinkparser",
      "dependencies": {
        "sedge_mdlinkparser": "1.0.2",
        "jquery": "2.1.1"
      }
    }
    

    I leveraged bower further by adding an automatic build step: a postinstall script that called grunt. I’ll get to this in a moment.

    Configuring RequireJS

    RequireJS is awesome because it ensures that the modules you need are fully loaded, with all of their dependencies included, before running your program logic. It has a simple structure for specifying which modules to load:

    require( [ "dependencyName" ], function( dependencyName ) {
        // Logic using `dependencyName` goes here
      });
    });
    

    However! The javascript files that are used on a public-facing website can have complex folder hierarchies, meaning that some work has to be done before a dependency can just be specified by name like in my previous example. Manually, it involves running require.config() before any RequireJS logic in order to map dependency names to the actual resources:

    require.config({
      paths: {
        dependencyName: "path/to/src/dependencyName.js"
      }
    });
    

    Being on a grunt trip, I decided I would automate the process. I found a grunt plugin, called grunt-bower-requirejs, that just needed to be pointed to the file that would run require.config() and would automatically configure the paths for me. This meant that I now had a Nodejs-based build system using grunt for a front-end bower-based system using Requirejs.

    Running it was as simple as bower install, since bower would then call grunt because of the script I specified in the file called .bowerrc:

    {
      "directory": "js",
      "scripts": {
        "postinstall": "grunt build"
      }
    }
    

    Conclusion

    I didn’t get to dive into LESS scripting for front end styling, but I hope to soon. I also spent a lot of time making sure my fellow CDOT developers were managing to keep up, and everyone seemed to learn the essentials as a result.


    by ksedgwick at May 12, 2014 02:38 PM

    May 10, 2014


    Andrew Smith

    Who’s screwing up GTK?

    For many years I’ve been a fan of GTK. I started using Linux when GTK1 was dominant; as I became a developer GTK2 took over, with beautiful themes and very usable widgets. I used GTK software, feeling that the philosophy of the people who write GTK apps matched my own: less fluff and more stability.

    Then Gnome went off the rails. Ridiculous decisions like the one-window-per-folder file manager were made with utter disregard for real users’ opinions. It wasn’t very good for developers either; it got so bad that Gnome was entirely removed from Slackware – something I thought might be temporary but turned out to be a permanent decision.

    Then GTK3 and Gnome3 came – both with apparently clear intentions but again inexplicable disregard for anyone not sharing their “vision”:

    • Bugs were introduced (probably not on purpose) into GTK2 after GTK3 was released, and those bugs will never be fixed. For example, I periodically get bug reports for one of my applications which I’ve traced down to GtkFileChooserButton; it’s a known issue no one will fix in GTK2.
    • Huge parts of GTK2 have been deprecated, for example:
      • The horizontal/vertical Box layout scheme, which is how you were supposed to do all layouts in GTK2, and despite the deprecation warnings from the compiler there has been no alternative layout mechanism identified in the documentation.
      • The entire thread API, which is at the centre of any multi-threaded application. I don’t know if this was replaced with something else or dropped completely.
    • The new library is clearly unfinished. For example the GtkAboutDialog is simply broken in the current version of GTK3.
    • Serious bugs in GTK3 are ignored. For example I spent a day researching why they broke the scrollbars in GTK3, found that it was probably done accidentally (the new functionality doesn’t fit even their own designs), filed a bug, and five months later – still not so much as an acknowledgement that this is a problem.

    To be honest I think the Gnome people were always a little too fond of making major experimental changes, but I always felt that GTK itself was a bastion of stability, like the Linux kernel, GCC, Apache, etc. With GTK3 that changed. Who’s running GTK now? I’ve no idea. I don’t know who was running it before either. I don’t know if it’s a leadership problem, a management problem, a financial problem, or even just a lack of technical knowhow (maybe some tech guru[s] left the project).

    I spent some time porting one of my programs (Asunder) from GTK2 to GTK3, and the problems I ran into disgusted me so much that I rolled back the “upgrade”. I wasn’t the only one either.

    If you have time (45 minutes) I recommend you watch this presentation by Dirk Hohndel at linux.conf.au, who with Linus Torvalds as a partner tried really hard to use GTK for their scuba diving application and eventually gave up in frustration. For me the highlight of the presentation was the comment on the GTK community: essentially no one in the project cares about anything other than their own goals, and their goals are not (as you might expect) to create an awesome toolkit, but rather to enable them to create Gnome3. That’s the only explanation I’ve heard or read that makes sense.

    They’re not the only ones either. I ran a little unofficial survey of existing software to check how many projects moved to GTK3; that was done relatively easily using these commands:

    cd /usr/bin && for F in *; do ldd $F 2> /dev/null | grep gtk-x11-2.0 > /dev/null; if [ $? -eq 0 ]; then echo "$F"; fi; done

    for F in *; do ldd $F 2> /dev/null | grep gtk-3 > /dev/null; if [ $? -eq 0 ]; then echo "$F"; fi; done

    The result? 83 binaries using GTK2, and 68 using GTK3. You can’t read too much into those numbers – at least half of them are parts of XFCE (GTK2) or Gnome/Cinnamon (GTK3) – but it’s telling to look at the list rather than the numbers. Essentially no one has moved to GTK3 except the Gnome projects and a couple of others. Hm, I wonder if they wonder why, or care…

    Dirk and Linus went on and migrated their application to Qt, and they had a lot of praise for that community. I trust them on the community part, so I decided to consider Qt as a toolkit for my next project. I have, and I wasn’t entirely happy with what I found:

    • Writing C++ in and of itself isn’t a major issue for me, but I dislike overdesigned frameworks and that’s what Qt is.
    • Qt doesn’t use native widgets, and that explains why Qt apps never look native on any platform.
    • In Qt5 (the latest) you have to use JavaScript and QML for the UI, which is a bit too big a jump for me.
    • But it’s supposed to work much better on other platforms (like Windows and OSX), which I believe.

    So with GTK3 in limbo for years and the future of Qt unclear – I don’t know what to do for my next app. The only other realistic option I found was wxWidgets, but I fear that’s built on top of GTK on Linux and it will simply inherit all the GTK3 problems. I’ve started creating a project in wxWidgets, but I am wary of investing a lot more time into it until I know how this relationship will work out.

    The point of this blog post though was to bash the people currently running GTK, because they deserve it. Shame on you for breaking something that worked.

    by Andrew Smith at May 10, 2014 05:33 AM

    Slackware penguin sticker

    I decided to decorate my office a little, and since I’ve always been a Slackware user I wanted to get a large Slackware sticker to put on my glass. I couldn’t find one, so I made it myself, here’s the result:

    Office slackware penguin sticker

    I had to construct the SVG file myself. I started from the Linux penguin SVG (available somewhere online) and a pipe from open clipart. Thanks to the guys at LinuxQuestions for helping me get started.

    To combine the two I had to learn a little bit of Inkscape (the vector graphics editor for Linux), which was neat.

    When I was done I took the file to the printer (who, unbelievably, started with "SVG? what is that?"), who finally printed it for me, but with the wrong font. I should have expected that (I teach my students about how the same fonts are not available on different platforms), but I had to ask him to fix it. To make sure it doesn’t happen again I had to convert the text in the SVG file to a path, which I did by selecting the text in Inkscape and then using the Path/ObjectToPath feature.

    Unfortunately somehow I managed to lose the original file, so the one below has the path, not the text, so if you want to change the text you’ll have to start by deleting what’s there:

    Slackware Linux SVG

    Because this was printed on a white background (the printer either couldn’t or didn’t want to print on a transparent background) I had to chop off the smoke and the shadow underneath, it didn’t look good over glass.

    Also the line between the slack and ware turned out much skinnier than what was in the SVG. I wasn’t sure if that was a bug in the Inkscape I used or the Corel Draw the printer used or something else entirely.

    It cost me $60 or something to print a meter-tall sticker; a pretty good deal, I thought.

    by Andrew Smith at May 10, 2014 02:20 AM

    May 07, 2014


    David Humphrey

    blog.humphd.org

    Last week I finally made some time to rebuild my blog. The server it ran on died in the fall, and I haven't had the time to properly get it back up. From 2006 until the end of 2013, vocamus.net/dave ran on a WordPress instance hosted faithfully by Mike Shaver, with admin help from Mike Hoye, Vladimir Vukićević, and likely others I don't know about. I remember Shaver encouraging me to create a blog despite my hesitation (what did I have to say that was worth reading?). "Just send me your ssh key and I'll do the rest." And he did. For years and years.

    I'm extremely grateful for his encouragement to get started, and my blog has been an important part of my identity ever since. In a web increasingly built of Instagram-pics, 140 character punchlines, and various other status updates, I continue to feel most at home in the 1,000 word blog post.

    Part of why it took me so long to get things back online is that I not only wanted to start something new, but also to avoid breaking all my old links. I experimented with various static blogging options on Github, but decided in the end to use Ghost hosted on DigitalOcean with nginx for redirects. So far so good.

    In addition to dealing with redirects from vocamus.net/dave I've also created blog.humphd.org, which I'll also use going forward. I've also decided not to bother with comments. If you want to reach me you can do so via Twitter (I'm @humphd) or via email.

    Thanks to so many of you who contacted me privately to say, "Did you know your blog's down?" It's nice that so many people missed it. I know I did.

    by David Humphrey at May 07, 2014 07:45 PM


    Edward Hanna

    Finally the theme works

    About a month ago Mr. Anastasiade emailed me the task of changing the theme in the edX-Platform, specifically citing that the theme must follow the look and feel of the Seneca environment. Part of this is done at the moment: getting the theme working with the existing production platform. It sounds simple said in one sentence, and yes, there are instructions that talk about how to do it, but it takes a bit of intuition. So this is how I did it. There are two sets of instructions which become vital:

    (The Open-edX Developer Stack management instructions)
    https://github.com/edx/edx-platform/wiki/Developing-on-the-edX-Developer-Stack

    (The Open-edx Production Stack management instructions)
    https://github.com/edx/configuration/wiki/edX-Managing-the-Production-Stack

    It's important to note how themes are managed in the Developer stack. But first, read a thread at the Google edx-code discussion news group. There you will find a discussion I had with edX developers who were kind enough to share their time with me and others. From this thread I took the following advice when configuring my Fullstack Production Box image:

    (This was done using the 20140418-injera-fullstack.box)
    To install a production stack see: https://github.com/edx/configuration/wiki/edx-Production-stack--installation-using-Vagrant-Virtualbox

    To configure your production stack for the theme and the latest update, you will need to enter the following command:

    vagrant@precise64:/$ sudo nano /edx/var/edx_ansible/server-vars.yml

    my server-vars.yml file had the following (I did a simple cat to show you):

    vagrant@precise64:/$ cat /edx/var/edx_ansible/server-vars.yml

    ---
    edx_platform_repo: https://github.com/edx/edx-platform.git
    edx_platform_version: master
    edxapp_use_custom_theme: true
    edxapp_theme_name: stanford
    edxapp_theme_source_repo: https://github.com/Stanford-Online/edx-theme.git
    edxapp_theme_version: master

    Take note of the three dashes at the beginning. They are a must! The next step I took was:

    vagrant@precise64:/$ sudo /edx/bin/update edx-platform master > updateLog.txt

    This step allows me to do the update and store the update to a log file. That way if I need to show someone, right away I can pull it up and give them a briefing. This helped me in my Google posts with the Edx Contributors and Developers because they could see what was really happening when I asked them a question.

    After the update you should be able to see whether your theme is working by reloading the LMS page. You can also reload the CMS. Things may or may not have worked for you at this point; I still had some problems with mine, so I took some extra steps to make sure. In the DevStack there are different instructions, but they apply in a similar way. At the page for developing on the Devstack:

    https://github.com/edx/edx-platform/wiki/Developing-on-the-edX-Developer-Stack

    read the section “Configuring Themes in Devstack”. You will need to:

    vagrant@precise64:/edx/app/edxapp$ sudo -u edxapp bash
    edxapp@precise64:/edx/app/edxapp$ nano lms.env.json

        "FEATURES": {
            "AUTH_USE_OPENID_PROVIDER": true,
            "AUTOMATIC_AUTH_FOR_TESTING": false,
            "CERTIFICATES_ENABLED": true,
            "ENABLE_DISCUSSION_SERVICE": true,
            "ENABLE_INSTRUCTOR_ANALYTICS": true,
            "ENABLE_S3_GRADE_DOWNLOADS": true,
            "PREVIEW_LMS_BASE": "",
            "SUBDOMAIN_BRANDING": false,
            "SUBDOMAIN_COURSE_LISTINGS": false,
            "USE_CUSTOM_THEME": true
        },

    Set "USE_CUSTOM_THEME" to true

    and "THEME_NAME": "stanford",

    If your theme is not stanford, call it whatever you want!

    The next part involves GitHub. We are going to check out our GitHub theme manually:

    edxapp@precise64:/edx/app/edxapp$ ll
    total 68
    drwxr-xr-x 10 edxapp www-data 4096 May  7 17:07 ./
    drwxr-xr-x 13 root   root     4096 Apr 18 16:19 ../
    -rw-r--r--  1 edxapp edxapp   3751 Apr 18 15:47 cms.auth.json
    -rw-r--r--  1 edxapp edxapp   3621 Apr 18 15:47 cms.env.json
    drwxr-xr-x  3 edxapp edxapp   4096 Apr 18 15:37 .distlib/
    -rw-r--r--  1 edxapp www-data  715 Apr 29 17:42 edxapp_env
    drwxr-xr-x 22 edxapp edxapp   4096 May  7 17:07 edx-platform/
    drwxr-xr-x  8 edxapp edxapp   4096 Apr 18 15:36 .gem/
    -rw-r--r--  1 edxapp edxapp   3942 Apr 18 15:47 lms.auth.json
    -rw-r--r--  1 edxapp edxapp   3762 May  7 17:23 lms.env.json
    drwxr-xr-x  3 edxapp edxapp   4096 Apr 18 15:37 .npm/
    -rw-------  1 edxapp edxapp     37 Apr 18 15:37 .npmrc
    drwxr-xr-x  9 edxapp edxapp   4096 Apr 18 15:30 .rbenv/
    -rw-r--r--  1 edxapp edxapp    572 Apr 18 15:27 ruby_env
    drwxr-xr-x  2 edxapp www-data 4096 Apr 18 15:31 .ssh/
    drwxr-xr-x  5 edxapp edxapp   4096 May  7 16:57 themes/
    drwxr-xr-x  3 edxapp www-data 4096 Apr 18 15:37 venvs/
    edxapp@precise64:/edx/app/edxapp$ cd themes
    edxapp@precise64:/edx/app/edxapp/themes$ git clone https://github.com/Stanford-Online/edx-theme.git stanford

    Note how the stanford directory name matches the THEME_NAME

    You should now have a folder called stanford. To apply the theme, take the following steps:

    edxapp@precise64:/$ source /edx/app/edxapp/edxapp_env
    edxapp@precise64:/$ cd /edx/app/edxapp/edx-platform
    edxapp@precise64:/edx/app/edxapp/edx-platform$ paver update_assets lms --settings=aws
    edxapp@precise64:/edx/app/edxapp/edx-platform$ paver update_assets cms --settings=aws


    by Edward Hanna at May 07, 2014 07:44 PM


    Hua Zhong

    conclusion for cxxtools


    After a few days' research, I found out the reason why I was getting errors when building cxxtools on aarch64, but even after changing the source code I can't make it build successfully. Maybe I need to change the compile command parameters to force it to ignore the warnings.

    Conclusion for cxxtools:

    Possible Optimization:
    Not much. The cxxtools source already includes code for the ARM architecture, and we don't need to modify it.

    Building:

    Even though I found the problem (see my last post), I can't solve it by modifying the source code; maybe I need to modify the build command parameters. For now I can't build it successfully.

    by hua va (noreply@blogger.com) at May 07, 2014 03:31 AM

    May 06, 2014


    Hua Zhong

    Porting & Optimization (4) and conclusion for fossil

    Porting & Optimization (4) and conclusion for fossil


    In my last post, I tested the C code and assembly code on x86_64, and now I will test it on an aarch64 machine.

    I will use the same test program as in my last post, but change the assembly code to run on aarch64.

    //Rotation using C
    //#define SHA_ROT(x,l,r) ((x) << (l) | (x) >> (r))
    //#define rol(x,k) SHA_ROT(x,k,32-(k))
    //#define ror(x,k) SHA_ROT(x,32-(k),k)

    //Rotation using assembly under x86_64
    //#define SHA_ROT(op, x, k) \
            ({ unsigned int y; asm(op " %1,%0" : "=r" (y) : "I" (k), "0" (x)); y; })
    //#define rol(x,k) SHA_ROT("roll", x, k)
    //#define ror(x,k) SHA_ROT("rorl", x, k)

    //Rotation using assembly under aarch64
    #define SHA_ROT(op, x, k) \
            ({ unsigned int y; asm(op " %0,%2,%1" : "=&r" (y) : "r" (k), "r" (x)); y; })
    #define rol(x,k) SHA_ROT("ror", x, 64-(k))
    #define ror(x,k) SHA_ROT("ror", x, k)


    Testing procedure:

    7 test runs; in each run, execute the program 2000 times and record the time for the run.

    Remove the first run (which preloads the cache), the longest run, and the shortest run, then calculate the average of the remaining 4 runs.

    test result:

    [root@localhost test]# time ./test_c.sh

    real    0m16.349s
    user    0m1.070s
    sys    0m2.660s
    [root@localhost test]# vi ./test_c.sh
    [root@localhost test]# time ./test_c.sh

    real    0m16.379s
    user    0m0.960s
    sys    0m2.760s
    [root@localhost test]# time ./test_c.sh

    real    0m16.479s
    user    0m0.940s
    sys    0m2.850s
    [root@localhost test]# time ./test_c.sh

    real    0m16.408s
    user    0m0.980s
    sys    0m2.760s
    [root@localhost test]# time ./test_c.sh

    real    0m16.506s
    user    0m1.080s
    sys    0m2.670s
    [root@localhost test]# time ./test_c.sh

    real    0m16.414s
    user    0m1.110s
    sys    0m2.620s
    [root@localhost test]# time ./test_c.sh

    real    0m16.410s
    user    0m1.030s
    sys    0m2.720s
    [root@localhost test]#

    arm64:

    [root@localhost test]# time ./test_arm.sh

    real    0m16.440s
    user    0m1.180s
    sys    0m2.570s

    [root@localhost test]# time ./test_arm.sh

    real    0m16.438s
    user    0m1.030s
    sys    0m2.720s
    [root@localhost test]# time ./test_arm.sh

    real    0m16.451s
    user    0m1.010s
    sys    0m2.750s
    [root@localhost test]# time ./test_arm.sh

    real    0m16.473s
    user    0m1.190s
    sys    0m2.580s
    [root@localhost test]# time ./test_arm.sh

    real    0m16.519s
    user    0m1.030s
    sys    0m2.800s
    [root@localhost test]# time ./test_arm.sh

    real    0m16.432s
    user    0m1.000s
    sys    0m2.780s
    [root@localhost test]# time ./test_arm.sh1

    real    0m16.441s
    user    0m1.010s
    sys    0m2.760s

    C:
    (.98+.96+1.08+1.03)/4 = 1.0125 s

    assembly:
    (1.03+1.01+1.03+1.01)/4 = 1.02 s


    We can see the performance of the two types of rotation is almost the same. The C code doesn't have to be converted to assembly.

    Conclusion:
    Build:
    To build fossil on aarch64 you need to replace config.guess with the latest version; the link is

    http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD

    Optimization:
    According to the testing results, the performance doesn't improve significantly after changing the code to assembly, so the modification is not necessary.

    by hua va (noreply@blogger.com) at May 06, 2014 03:47 PM

    May 05, 2014


    Armen Zambrano G. (armenzg)

    Releng goodies from Portlandia!

    Last week, Mozilla's Release Engineering met at the Portland office for a team week.
    The week was packed with talks and several breakout sessions.
    We recorded a lot of our sessions and put all of them here for your enjoyment (with associated slide decks where applicable)!

    Here's a brief list of the talks you can find:
    Follow us at @MozReleng and Planet Releng.

    Many thanks to jlund for helping me record it all.

    UPDATE: added thanks to jlund.

    The Releng dreams are alive in Portland
    Creative Commons License
    This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

    by Armen Zambrano G. (noreply@blogger.com) at May 05, 2014 08:03 PM

    May 02, 2014


    Andrew Smith

    How long before you can’t access your files on an Android phone?

    A couple of months ago I got a Nexus 5 to play with. I was generally impressed with the device, but a couple of things gave me pause, one of them I’ll talk about now: you can no longer set up your phone to act as a USB mass storage device when connected to a computer.

    That really bothered me at the time but then I discovered that the “Media device (MTP)” mode allows you access to all the files without the need for special software or dealing with DRM.

    And today I remembered my worries. What happened is I installed and ran TitaniumBackup, which normally creates a directory in the SD card named “TitaniumBackup“, which I copy (as a backup) to my computer over USB. But not this time; this time when I connected the phone to the computer, all I could find in the root was storage/emulated/legacy/TitaniumBackup – a file and not a directory.

    After an hour of trying to figure it out – I did (I just had to reboot the phone). Google does indeed allow you unrestricted access to everything in the SD card over MTP, except there is a bug in their code that won’t let you see some files sometimes.

    Reported in October 2012 – that’s a year and a half ago. A large number of developers are complaining in that bug that Google didn’t so much as acknowledge it, and it’s been present for more than one major release.

    My theory is the same one I was thinking of when I first plugged my Nexus 5 into the computer and didn’t see a block storage device: Google will at some point soon no longer allow you access to the files on your phone or tablet. They’re already making it complicated by hiding your sdcard by default, and now this bug will train the remaining users of the feature not to rely on it. Everyone wants everything on the cloud, that’s what everyone’s going to get, and nothing else unless you’re a techie.

    Probably as a coincidence (do you believe in those?), as I was struggling with this problem a message came up in my status bar offering to set up Google Drive. If that had happened a bit later, after I had time to digest what happened, I would have said "yeah, I saw that one coming" :)

    by Andrew Smith at May 02, 2014 03:57 AM

    May 01, 2014


    Andrew Smith

    Snowshoe to work

    After going to one of our campuses (Newnham) on cross-country skis, I decided to give snowshoeing a try for my everyday campus (Seneca@York). I enjoyed it so much I did it all winter long.

    The video was edited with Cinelerra, though not much editing was needed. I’ll make some notes about the process here because I had already forgotten what I did the last time, a mere two weeks ago:

    My camera (a Sony HDR-PJ200) creates MTS files (which apparently are AVCHD – a piece of information that’s useful to know when looking through endless lists of format capabilities). These are 1080i, which is not quite as good as 1080p and this is important to know not only because of the resolution and aspect ratio but also because “i” is for interlaced, and not all programs deinterlace automatically (e.g. VLC).

    I cannot use MTS files in Cinelerra. In fact very little software supports them in any fashion. So the first step was to find a format that Cinelerra could use as a source and wouldn’t lose too much of my high quality. Originally I used DV, but I was later annoyed to discover that DV is 4:3, which squeezes my image way too much. I tried a high quality MP4 with h264 compression but Cinelerra can’t handle playing that back. Finally I settled on MPEG2 with AC3 audio in an MPG container.

    I converted my MTS files to MPG using a very handy graphical tool called WinFF. That is little more than a GUI that will create an ffmpeg command, but if you’ve ever tried to write an ffmpeg command you’ll know how valuable it is. The preset I used in WinFF is called “DVD/NTSC DVD HQ Widescreen”. The resulting files were about 25-30% the size of the originals but the quality was quite good enough.

    In Cinelerra I imported the MPG files and edited the video. Then I remembered to set my project settings for 16:9 video (I chose 720×480), thankfully that part didn’t require me to reedit anything.

    Finally I rendered the video into:

    • One OGV file straight from Cinelerra
    • One Quicktime for Linux file with two’s complement audio compression and DV video compression, to be used as an intermediary to create:
      • One MP4 file (h264 with AAC)

    The final ogv/mp4 files are still rather large (~10MB/min) but I figure that’s ok since only about 3 people a month read my blog :)

    by Andrew Smith at May 01, 2014 08:12 PM

    April 29, 2014


    Zakeria Hassan

    Remedy to Java.out.Memory when browsing ActiveMQ JMS Queue that exceed 200,000 messages




    Diagnosis:


    After attempting to debug this issue I quickly found out that there was too much data getting sent to the client (browser). I looked at alternatives like changing the design, and then I looked at reusing existing capabilities of the web console. I found out that the web console has an RSS feed that we could leverage if we can request a section (start position and end position) to paginate through the JMS messages in the queue. Access is sequential, so it may get slower as you go deeper.


    http://localhost:8161/admin/queueBrowse/TEST?view=rss&feedType=atom_1.0&start=0&end=2
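
    As a rough illustration only (not part of the patch itself), a client could walk the queue in fixed-size windows by bumping the start/end parameters on each request:

    // A rough illustration only - walks the queue feed in windows of 100
    // messages by incrementing the start/end parameters shown above.
    var http = require('http');

    function fetchPage(start, end, done) {
      var url = 'http://localhost:8161/admin/queueBrowse/TEST' +
                '?view=rss&feedType=atom_1.0&start=' + start + '&end=' + end;
      http.get(url, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () { done(null, body); });
      }).on('error', done);
    }

    // Fetch the first two windows, one after the other.
    fetchPage(0, 100, function (err, first) {
      if (err) throw err;
      fetchPage(100, 200, function (err, second) {
        if (err) throw err;
        console.log(first.length, second.length);
      });
    });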





    Problem:

    Our data can grow, and if we want to paginate through it, or if we have other applications that would like to reuse some of the console's functionality, they may experience problems such as:





    There were other problems that the community resolved, but the main problem was that we were sending too much data to the UI. I have experience working with search engines, and we had issues like this when our database grew too large. As a result I teamed up with Arthur Naseef to get this quickly and efficiently resolved.








    Solution:
    Note: since the deeper we go into the queue, the longer it takes, I've added a progress bar to help our users know something is happening:



    This is how the results look when we get back the messages.






    Conclusion:

    I think this will open the doors to new possibilities for our web console. We have a great community driving these efforts. You can expect to see more innovation coming soon.

    If you are interested in test driving this new functionality then you can clone my repository:

    https://github.com/zmhassan/activemq.git

    Note: This new UI will live in the "pretty-UI" branch.


    This design may appear in later releases, but I'm currently in discussion with the community about whether this is the direction we want to go.


    I will put in a pull request tonight with just the basic pagination, and I'm only going to include the code that is required to patch this issue. The extra UI design will have to wait.

    Jira Issue:
    https://issues.apache.org/jira/browse/AMQ-5024

    Pull Request:
    https://github.com/apache/activemq/pull/16


    Thanks,
    Zak
    @Prospect1010
    Software Developer | Research Assistant,
    Center of Open Technology - Research Department,
    Seneca College, Toronto, Ontario

    by Zak Hassan (noreply@blogger.com) at April 29, 2014 10:42 PM