New, cool, or handy web applications
You may have noticed it already, but the QR generator has been updated. Apart from the aesthetic changes, it is now possible to create codes for data other than URLs: general text, phone numbers, email addresses, meCard, vCard and WiFi settings. The same export options are still available: PNG, SVG, PDF and EPS.
It also makes use of the latest version of the ZF Matrixcode component, which will make it easier to add more features and functionality in the future.
Check it out at http://qrcode.littleidiot.be
Last week I visited the PHPUK2010 conference in London, and I had a great time with colleague developers and fellow web addicts. I’m not gonna go into detail about every single talk, because it’s probably easier and more interesting to head over to slideshare.net or phpconference.co.uk and check it out for yourself. Let me just say that it was a very well organised conference with a load of great speakers at a superb location. So great job, PHP London!
Having said that, there was one specific talk I was really looking forward to. Lorna Mitchell of iBuildings was going to explain to us “Best practices in web service design”. I’m currently working on a REST web service, and there are some aspects of it that still raise questions. For instance, unlike SOAP, REST does not really have a description language, a language with a vocabulary that can describe the service. So how do you deal with that? Another thing is the output format. A web service can offer a variety of formats. JSON and XML are probably the most popular ones, but in the case of XML, would you define your own tags, or would you rather pick XHTML?
It’s the latter issue I’d like to focus on in this blog post.
Lorna’s talk was really interesting, and she obviously has quite some experience with building web services. It’s also great and inspiring to hear someone talk in such a passionate way as she did. Unfortunately she didn’t mention anything about the XML/XHTML question. Very understandable of course, as her time was very limited and there are so many aspects to this subject.
So afterwards I went to see her to check what her opinion was about this, and I must say I was a little bit surprised and confused by her answer. Her reasoning was that no markup should be used in the output of a web service, so one should definitely use general XML.
You can’t really agree with that argument, because XML is a markup language as well. So then I was thinking that maybe I misunderstood and she actually meant there is no need to use a language that has ‘styled’ tags, but that wouldn’t make much sense either, because robots that consume web services don’t apply styles to tags; only browsers do. Or maybe she meant that (X)HTML results in too many tags compared to custom XML, but even that could easily be proven incorrect.
Unfortunately, since I had a couple of other questions to ask her and other people were waiting with more questions, I couldn’t continue the discussion.
To me, XHTML is first and foremost XML, though with a predefined and limited tag set, and I can’t see many arguments against using it. On the contrary, this is why I think XHTML is in fact the better format:
- XHTML tags, although predefined and limited, will most likely fit all your needs for structuring your data. If the whole web is built with that limited set of tags, you could expect it to be sufficient for your service.
- The “HyperText” feature of XHTML, or in other words the possibility to link content together, could actually be very useful for web services as well (see code sample below).
- XHTML tags are semantic, and since every developer knows their meaning, they are easy to interpret and read. You can easily use a <dl> for key-value pairs, an <li> to represent a list item, or for example a <span> if nothing else fits. A “class” attribute can be used to give additional meaning.
- As Lorna mentioned in her talk, documentation is extremely important. When using XHTML, you can just check out a web service in your browser and actually see how it works. The browser will know exactly how to render the responses. And if the service respects a ROA approach, you can even browse from one resource or service to another by clicking around. The web service would almost become the documentation on its own.
- If every web service used the same (XHTML) tags, it would save developers a lot of work when parsing responses.
To finish my point, let me just give you an example of a service response in both XML and XHTML, and judge for yourself. I took an extract of a Twitter response containing some user information:
XML (actual Twitter response):

<?xml version="1.0" encoding="UTF-8"?>
<user>
  <id>1401881</id>
  <name>Doug Williams</name>
  <location>San Francisco, CA</location>
  <description>Twitter API Support. Internet, greed, users, dougw and opportunities are my passions.</description>
</user>

XHTML:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
  "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
  <head>
    <title>User 1401881 - Doug Williams</title>
  </head>
  <body>
    <dl>
      <dt>name</dt> <dd>Doug Williams</dd>
      <dt>location</dt> <dd>San Francisco, CA</dd>
      <dt>description</dt> <dd>Twitter API Support. Internet, greed, users, dougw and opportunities are my passions.</dd>
    </dl>
    <form id="searchUsers" method="get" action="">
      <p>
        Search for other users:
        <input id="term" name="q" />
        <input type="submit" />
      </p>
    </form>
  </body>
</html>
I added some hyperlinking to tie different but related services together, which is why the XHTML version is a bit longer. But is it more complicated? Is there more overhead because of the markup? Would it be more difficult to parse? And just try rendering both versions in your browser and tell me which one teaches you the most.
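To illustrate the parsing point: because the XHTML version uses a fixed, well-known tag set, even the most basic text tools can pull the data out of it, whatever service it comes from. A rough sketch (the response file and its contents are mocked up for the illustration):

```shell
# Mock up a fragment of an XHTML service response
cat > response.html <<'EOF'
<dl>
  <dt>name</dt> <dd>Doug Williams</dd>
  <dt>location</dt> <dd>San Francisco, CA</dd>
</dl>
EOF

# Every value lives in a <dd> tag, so extracting them is trivial:
# print each <dd> match on its own line, then strip the tags
grep -o '<dd>[^<]*</dd>' response.html | sed 's/<[^>]*>//g'
```

With custom XML you would have to learn each service’s tag names first; here the same one-liner works for any service that sticks to the XHTML tag set.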
If you’d like to read more on this matter, and especially the ROA approach for RESTful services, I would recommend “RESTful Web Services” by Leonard Richardson and Sam Ruby (O’Reilly). The idea behind the book is “web services are web sites for robots”, which is really an interesting way of looking at it.
Kaltura is an open-source video platform that gives you all the tools you need to publish your own videos: asset management, transcoding, video processing, batch uploading, streaming, widgets …
Kaltura offers a premium solution that includes full service, full support and full customization, but besides that they also released a free community edition (KalturaCE) of their server side application last year which looks very promising.
For a new project I’m working on I wanted to test drive KalturaCE and investigate all the tools and features, and especially their API.
After going through the installation guide I was a bit disappointed that the application is only guaranteed to work on Ubuntu or Debian Linux distributions. No Fedora, no CentOS (although there are some third-party instructions to get it working on CentOS). On the other hand, the software was only released about seven months ago, so I guess I shouldn’t be too demanding yet.
Our production server runs on CentOS, but obviously I prefer to do the testing on the local server, which runs on Fedora 10. So I decided to just give it a try; who knows, maybe it was my lucky day.
The installation guide instructs you to unpack the application in the webroot. However, to prevent messing up my accurately ordered web server (ahum…), I preferred to run Kaltura in its own virtual host.
This is what my virtual host looks like (the hostname and paths are placeholders for my local setup):

<VirtualHost *:80>
    ServerName kaltura.local
    DocumentRoot /var/www/kaltura

    <Directory /var/www/kaltura>
        Options Indexes FollowSymLinks ExecCGI Includes
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
When using a virtual host, you might want to add a line to your hosts file so you can browse to it afterwards.
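For example, assuming you named the virtual host kaltura.local (a made-up hostname), the entry would look like this:

```shell
# Map the made-up hostname kaltura.local to the local machine
echo "127.0.0.1   kaltura.local" | sudo tee -a /etc/hosts
```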
If you unpack the tar file (tar xvfzp [source] -C [destination]) in a location other than your document root and move all the files afterwards, DON’T FORGET to explicitly move the .htaccess file located in the application root. Just doing, for example, a “mv ./* ../” will skip hidden files, and without the .htaccess file the application won’t be very amusing to work with, I can tell you.
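You can see the pitfall for yourself in a throwaway directory (the directory and file names here are just for the demonstration):

```shell
# Reproduce the pitfall: the * glob does not match hidden files
mkdir -p src dest
touch src/.htaccess src/index.php

mv src/* dest/     # moves index.php, silently leaves .htaccess behind
ls -A src          # .htaccess is still sitting here

# So move the hidden file explicitly
mv src/.htaccess dest/
ls -A dest         # now both files are in place
```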
Kaltura also needs a MySQL database, so create one, and make sure the database user has all the required permissions as instructed in the installation guide!
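Something along these lines should do; the database name, user name and password below are placeholders, and you should check the installation guide for the exact privileges Kaltura requires:

```shell
# Create the database and a user with full privileges on it
# (MySQL 5.x syntax; names and password are placeholders)
mysql -u root -p <<'SQL'
CREATE DATABASE kaltura;
GRANT ALL PRIVILEGES ON kaltura.* TO 'kaltura'@'localhost' IDENTIFIED BY 'change_me';
FLUSH PRIVILEGES;
SQL
```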
Once the files and the virtual host are in place (don’t forget to restart Apache) and a database is created, it’s time to run the installation script by browsing to the virtual host.
The process is a piece of cake. Fill out some forms and that’s it. However, when arriving at the final page, I got this message:
“Kaltura Community Edition Server installation failed”. Bummer!
I remembered that in the last step, I checked the “Free Registration with Kaltura.com” option, so I decided to repeat the installation process and skip that option. Result: Server up and running!
I don’t know exactly why the registration option messes up the installation; the error did not specify any details. But who cares, it’s running!
However, I still could not preview any videos in the application. I kept receiving errors like “THE FILE DOES NOT CONTAIN LAYOUTS ELEMENTS …”. Running the script “run_replace_root.sh” (located in the install directory) fixed this issue, though.
After adding and editing some content it seems that Kaltura runs smoothly on Fedora.
Next steps are enabling H.264 encoding, and investigating the possibility to run the platform in the Amazon Cloud. To be continued.
I must admit, I’ve never been a huge fan of Drupal. I know it’s a powerful application framework. I know it is very extensible. I know there are lots of free modules around that fit almost every need. So what’s not to like then? I’ll tell you: the interface. I’ve used all major content management systems in the last couple of years, and Drupal is the clear winner in terms of clunky and annoying interfaces. It was clear that user experience was not much of a concern for the Drupal team.
So for that simple reason, I didn’t consider Drupal that often when starting on a new website project.
But when Drupal 7 was announced, I got interested again. A Drupal User Experience Project was set up to improve the, eeeehm… user experience, so it looked like they were aware of the problem and were getting ready to tackle it.
Two years of development later, Drupal 7 Alpha has finally been released, so it’s time to forget about the past and give it another try.
The install procedure is very nice and easy, and pretty similar to other systems. Then, once installed, the interface showed up and… it was a fail!
They did a good job of better organising the different actions into categories that are clear and make sense, but what keeps annoying me is that the interface is still not intuitive and, in my opinion, a bit of a mess. I don’t really know. When I saw the new WordPress 2.7 interface some time ago, I was really excited by the huge progress they had made. With Drupal 7, I can’t say I’m impressed.
Maybe my expectations were too high. But, you see, I’m a huge fan of Metalab. They do fantastic jobs on web interfaces. Simple, beautiful, and efficient. Or take for example the CampaignMonitor software. It shines in simplicity and user friendliness. If you are used to working with those kinds of applications, the Drupal 7 interface is pretty disappointing. I do realize that Drupal is a much more complex system than most other applications, since Drupal is more ‘generic’ and closer to a framework. But still, I believe they could have done a much better job. So I’m afraid that even the new interface will continue to scare beginners, and not much progress will be achieved in this field.
So for now I’m still not going to use Drupal 7 for regular websites, and maybe when the stable version is out I’ll give it another go. Until then I’ll stick with one of the alternatives.
I’m evaluating Drupal here primarily as a CMS for websites, not so much as a framework for building applications. For websites, I just want it to work out of the box, looking good and simple for the client, without the need for custom coding.
I’m also aware that you’re free to design your own interface, but not having to do that looks like one of the major advantages to me.
A few days ago I talked about Foxmarks for keeping your bookmarks synchronized across all the computers you use. Well, for the same reason I was looking for a similar solution for the contacts in my address book. Same story as with the bookmarks: there were already some nice online solutions for this, and with applications like Plaxo or LinkedIn your data is always kept up to date by the contacts themselves, but I like to have that same data offline, preferably integrated in my application of choice.
Now the good thing was that I didn’t have to search for such a service. I received an e-mail from the Plaxo team notifying me about their new Plaxo Pulse application, which was able to do all the stuff I wanted. Great!
And I must admit, the guys at Plaxo did an amazing job. The most interesting features for me were that I can import my contacts from my LinkedIn profile, and that I can synchronize bi-directionally with my Mac’s Address Book. Now I have the contact data on my Mac updated by the contacts themselves.
Besides the address book, you also get a handy online calendar, and even that can be synchronized with client calendar software like iCal. Just like in iCal or Outlook, you can create multiple calendars, and for each calendar you can decide whether to make it publicly accessible or not.
Another nice feature of the Plaxo Pulse service is that it gives you a sort of online business card. It’s always up to date, as it is synchronized with my Mac’s Address Book, and by registering a username you get a short and easy URL to link to it. I’ve always hated printed business cards, as you need to buy like a thousand at once (of which 900 end up as post-its) and every year you have to buy new ones because some phone number or job title has changed.
Now I just print my name and the MyPlaxo url on my business cards so I can use them forever.
One remark: you might not want to use your openID to register your Plaxo Pulse account, because with an openID you cannot use the synchronization plugins in the client software. The Plaxo team was aware of that, though, so it may have been resolved by now.
Hope you enjoy it as much as I do.