One Week With OmniWeb 5

I said I’d write up my thoughts on spending one week with OmniWeb 5. Before starting, here’s the Coles Notes version: I bought the upgrade license, and have switched to using OmniWeb 5 (almost) 100% (see below for reasons why it’s not at a full 100% yet). Here goes:

Things I love about OmniWeb 5

  • The tab implementation freaking ROCKS – almost exactly what I mocked up in January 2003, with the added bonus of page thumbnails… And the thumbnails are absolutely amazing and gorgeous. I can glance at 10 thumbnails and see exactly where I need to go next… It also shows the status of pages loading in the background (a green checkmark appears on pages when they’ve finished loading).
  • Saving the workspace as you work – so if you quit, or the browser crashes, all tabs are restored right where you left off the next time you launch (even remembering scroll position).
  • Textarea form elements on web pages have this cool widget where you can get a larger sheet for more effective text entry – with an “Import from File” button. Awesome if you fill out a lot of forms (like, say, while developing a web app…)
  • Site Preferences – override application prefs on a per-site basis
  • View: View in Source Editor! You can edit the HTML for a page in a source editor, and REDISPLAY THE EDITS LIVE, without having access to the page’s server. Awesome for debugging stuff.
  • Ad blocking is awesome. And it’s highly customizable, too. Be gone, annoying flashing banner ads! Related to this is the ability to control animation of GIFs on a page – I’ve got mine set to allow animation for no more than 20 seconds.
  • URL Shortcuts. You can add shortcuts for any URL, and as an added bonus, you can add parameters for pages that accept them (like search engines, etc…). Slashdot is just “slash” now. No waiting for autocomplete… I worked up a shortcut that allows me to search CAREO from the navigation bar (like Alan’s shiny new MLX Firefox plugin)
  • Regular expressions in the Find utility. Holy crap! Not that I’m a RegEx expert or anything, but any app that cares enough to put that kind of functionality at my fingertips deserves some serious kudos.
  • The “Page Info” panel. Every bit of detail of every single part of a page is available in a report. Files grouped by type (images, style sheets, scripts, frames, etc…) with full stats (file size, last modified date, expiry date of cache…). And, the ability to display each individual item, or save it separately, or view the source for it. Wow. Awesome for debugging websites…

Things I’m Ambivalent About

  • RSS implementation. Thought I’d love it. Thought it would be the greatest thing since sliced bread. But it’s just nowhere near NetNewsWire, only showing titles of items in a feed, and not tracking read/unread state.
  • Bookmarks. Maybe I just haven’t given it enough time, but it doesn’t seem dramatically better than Safari’s bookmark implementation (and has a wrinkle: you can’t drag a page’s URL proxy from the address bar into the in-browser-window bookmarks display – you get the bookmarks:/ URL instead). There’s a workaround, but that’s not the point…
  • Shared Bookmarks – not really useful unless everyone on your LAN is using OmniWeb 5 – I’m the only one I’m aware of here…
  • Multiple Workspaces – thought I’d really use these, but opening bookmark folders in tabs within the current default workspace works better for me.
  • Bookmarks syncing – I’ve been a huge fan of this with Safari, so it’s not groundbreaking, but it’s great to know it’s there.

Things I’m Not a Great Fan Of

  • Out-of-date WebCore. Doesn’t work with GMail (that’s the one thing I keep Safari running on a second machine for…)
  • The “Save Window Size” command works great for single-display systems. But if I have a window on my secondary display while at work (on an external monitor plugged into my TiBook) and set the window to prefer that screen by “Saving Window Size” on it, then when I go home (without the external monitor) the window still tries to open on the now-missing external monitor. I can grab the edge of the title bar and drag it back onto the only screen, but it’s a huge pain. It would be cool if the app were smart enough to record the number of screens, and their relative positions, when saving the size…
  • The Navigation Bar. Separate Stop and Reload buttons are redundant – and waste space on the nav bar. I can only do one or the other, never both… Also, I really miss Safari’s progress-bar-under-location-field display. It’s just so handy to be able to tell that a page is about half loaded by seeing a blue bar in my peripheral vision – without having to look up from what I’m reading, and without having to open an Activity Viewer. It’s nice to have the Activity Viewer, but it shouldn’t be the primary way to display page status.
  • No way to export my bookmarks. It will import them fine. Great. Now what? It will read my Safari bookmarks (but not write to them). I’ve been using Safari Bookmark Exporter to export my Safari bookmarks to every browser on my system. Can’t do that anymore, since bookmarks will be going into my OmniWeb bookmarks file, which is in a different format…

Anyway, I’m sure it’s far from a complete list in each of the three categories. Bottom line: I love OmniWeb 5, and plan to be a long-time user.

DirectorWeb is 10 years old!!

The DirectorWeb website is 10 years old! Holy cow. Time flies. I used to spend a LOT of time on this site, using their Direct-L listserv archive/search utility, back when I was doing ~100% Director stuff. Alan did an awesome job on DirectorWeb, so much so that I considered it essential to doing Director development. I still remember many of the regulars (WTHMO – Warren (The Howdy Man) Oleshko, Zav, Zac, John Dowdell, Warren The Audi Man (from Integration.qc.ca, IIRC), and many more…). Direct-L searches for “howdy” show that WTHMO must still be active ;-) I appear to have been dropped from the online archives – or, perhaps more accurately, the archive doesn’t go back far enough to include me ;-)

It’s some kind of synchronicity thing: I first “met” Alan Levine almost 10 years ago via DirectorWeb and Direct-L, and now I deal quite a bit with him on Pachyderm and other things Learning Object. What a long, strange trip it’s been… ;-)

Thanks for the update on DirectorWeb, Alan! (although, like yourself, I haven’t seriously touched Director for several years now…)

OmniWeb 5 Update

It’s only been a day, but I’m really liking OmniWeb 5. I had one crash, but other than that it’s been flawless.

The extra features are great (edit HTML then refresh the browser window – on any website!). Love the thumbnail tab view. Love the speed and ad-blocking. Hate the lack of Safari’s cool progress bar. Hate the separate Reload and Stop buttons. Hate that GMail doesn’t work in it… I did pay my upgrade fee for OW5 already, though ;-)

JavaEOXMLSupport is now working!

I finally got a version of the JavaEOXMLSupport.framework working to the level that it could actually be used in a project. The previous version was usable for read-only cases, but was pretty useless for editing/writing XML.

I had to rethink the strategy a bit. The previous strategy treated every Element in a document as an individual EO. That works conceptually, because it’s easy to model (you can model the XML schema in EOModeler, and use that, in theory – King even wrote a tool to generate an EOModel from a schema!) It’s harder to implement this, however, because individual EOs are somewhat separated from the DOM – they don’t know where they are within the DOM, etc… Also, in this model, it was very hard to add new elements to an existing document. Say a document didn’t have a keyword when it was pulled from the database. This strategy makes it difficult to add a classification.keyword.string if there isn’t one already… (how do you add an element into a DOM tree when you don’t know where you are in that tree?)

The current strategy is to treat the XML document itself as the EO, to teach the Key Value Coding methods in that EO class (EOXMLRecord) to dig into the DOM as needed, and to provide a nicely wrapped EOXMLElement class for each DOM Element (the EOXMLElement wrapper provides a Key Value Coding interface on the DOM Element so it plays nicely with WebObjects). This two-class strategy (one class to integrate with EOF, one to integrate with the DOM) also makes it drop-dead simple to add new elements – they’re created on the fly when their values are requested, so it should Just Work if you try to add an element where there isn’t one (like adding a new classification.keyword.string, for instance…)
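
In rough terms, the two-class idea looks something like the sketch below. This is a minimal, hypothetical illustration only – the class and method names mirror the ones mentioned above, but the real EOXMLRecord/EOXMLElement classes plug into WebObjects’ key-value coding machinery and handle far more than this:

    // A minimal, hypothetical sketch of the two-class strategy - NOT the actual
    // JavaEOXMLSupport code. Uses DOM4J for the DOM side.
    import org.dom4j.Document;
    import org.dom4j.DocumentHelper;
    import org.dom4j.Element;

    // Wraps a single DOM Element and exposes KVC-style access to its children.
    class EOXMLElement {
        private final Element element;

        EOXMLElement(Element element) {
            this.element = element;
        }

        // Return the child element for a key, creating it on the fly if it is
        // missing - this is what makes adding new elements "Just Work".
        public EOXMLElement valueForKey(String key) {
            Element child = element.element(key);
            if (child == null) {
                child = element.addElement(key);
            }
            return new EOXMLElement(child);
        }

        public void setText(String value) {
            element.setText(value);
        }

        public String text() {
            return element.getText();
        }
    }

    // Plays the role of the EO: owns the whole document and digs into the DOM
    // by walking key paths like "classification.keyword.string".
    class EOXMLRecord {
        private final Document document;

        EOXMLRecord(Document document) {
            this.document = document;
        }

        public EOXMLElement valueForKeyPath(String keyPath) {
            EOXMLElement current = new EOXMLElement(document.getRootElement());
            String[] keys = keyPath.split("\\.");
            for (int i = 0; i < keys.length; i++) {
                current = current.valueForKey(keys[i]);
            }
            return current;
        }

        public static void main(String[] args) {
            Document doc = DocumentHelper.createDocument();
            doc.addElement("lom");
            EOXMLRecord record = new EOXMLRecord(doc);
            // Setting a value on an element that doesn't exist yet just creates it.
            record.valueForKeyPath("classification.keyword.string").setText("physics");
            System.out.println(doc.asXML());
        }
    }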

I’m 100% sure there are some Hummer-sized caveats and special cases and bugs looming in there, but the hard part is done – it works! I’ll clean up the code, write some documentation and sample apps, and update the Sourceforge project as soon as I get a chance (may not be until September, though, since I’m being simultaneously buried by the Pachyderm…)

Trying a switch to OmniWeb5 for a week

Prior to Safari, I was a die-hard OmniWeb 4 (then 4.5) user. I really liked OmniWeb, but Safari was much better (IMHO) at things like bookmark management.

I’ve been following OmniWeb’s development, and really like some of the new stuff (using WebKit means pages render correctly, the new tab implementation looks pretty sweet, workspaces should be useful, RSS feeds(?) …).

So, I’m going to try using OmniWeb 5 for a week, and see how it works in the field. I’ve switched my default browser to it so apps like NetNewsWire open it automatically. (btw, the tab thumbnails are awesome for opening stuff from NNW!). I’ll post again in a week with some thoughts…

XML Retrieval and Processing Comparison

I’m clearing my whiteboard, and need the space this was occupying, so I’m dumping it here for future reference. The following table compares the time it takes to retrieve XML from various sources (XML databases and the like) as well as to perform various types of processing (nothing, save as file, convert to DOM…). This table was very useful when we were coming up with our current XML storage strategy.

XML Store  | Retrieving                        | Process              | Time (ms) (289 records per query) | Time (ms) (per record)
-----------|-----------------------------------|----------------------|-----------------------------------|-----------------------
XStreamDB  | Minimal XML (handful of elements) | DOM via DOM4JKVC     | 3038.9                            | 10.52
XStreamDB  | Minimal XML (handful of elements) | NSDictionary         | 2684.6                            | 9.28
XStreamDB  | Full LOM (entire XML Document)    | DOM4JKVC             | 3371.5                            | 11.66
XStreamDB  | Full LOM (entire XML Document)    | null (no processing) | 1854.3                            | 6.41
XStreamDB  | Full LOM (entire XML Document)    | W3CDOM               | 3513.8                            | 12.16
XStreamDB  | Full LOM (entire XML Document)    | NSDictionary         | N/A *                             | 14.28 *
JUD        | Full document                     | Save File            | 33 minutes / 4056 records         | 488.17

* This process was flaky at best, and refused to convert some records to NSDictionary objects, so the multiple conversion method failed.

All XStreamDB tests were performed using a WebObjects application that ran some Java code to perform an XQuery against an XStreamDB database containing a copy of the CAREO repository (4056 records).

DOM4JKVC: a simple version of what has become the JavaEOXMLSupport.framework. It uses DOM4J to provide a Key Value Coding interface around a DOM Element, and involves parsing an XML string into a DOM Element.

NSDictionary: Uses WebObjects’ built-in XML-NSDictionary conversion (without a mapping file)

W3CDOM: Simple conversion of the XML document into a W3CDOM Document.

Save File: Just save the XML string to the filesystem with no additional processing.
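
For reference, the per-record “process” step for the DOM rows boils down to parsing an XML string into a document. The harness below is a hypothetical illustration (not the code used to produce the table – the class name and the synthetic records are made up) of timing the DOM4J parse vs. the W3C DOM parse over a batch of 289 strings:

    // Hypothetical timing harness for the "convert to DOM" processing step -
    // not the original test code, just an illustration of what was measured.
    import java.io.StringReader;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.dom4j.io.SAXReader;
    import org.xml.sax.InputSource;

    public class XMLProcessTiming {

        // Parse with DOM4J (the first step of the DOM4JKVC path).
        static void parseWithDOM4J(String xml) throws Exception {
            new SAXReader().read(new StringReader(xml));
        }

        // Parse with the standard W3C DOM APIs (the W3CDOM row in the table).
        static void parseWithW3CDOM(String xml) throws Exception {
            DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            builder.parse(new InputSource(new StringReader(xml)));
        }

        public static void main(String[] args) throws Exception {
            // Synthetic stand-ins for the 289 records returned by one XQuery.
            String[] records = new String[289];
            for (int i = 0; i < records.length; i++) {
                records[i] = "<lom><general><title><string>Record " + i + "</string></title></general></lom>";
            }

            long start = System.currentTimeMillis();
            for (int i = 0; i < records.length; i++) {
                parseWithDOM4J(records[i]);
            }
            long dom4jMillis = System.currentTimeMillis() - start;

            start = System.currentTimeMillis();
            for (int i = 0; i < records.length; i++) {
                parseWithW3CDOM(records[i]);
            }
            long w3cMillis = System.currentTimeMillis() - start;

            System.out.println("DOM4J:   " + dom4jMillis + " ms total, "
                    + (dom4jMillis / (double) records.length) + " ms per record");
            System.out.println("W3C DOM: " + w3cMillis + " ms total, "
                    + (w3cMillis / (double) records.length) + " ms per record");
        }
    }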

The JUD test was performed using a Python script that pulled every document out of the live CAREO repository and saved each one to the local filesystem as an .xml file.

All tests were performed on my PowerBook, with the XML being retrieved from a separate server (XStreamDB on our commons webserver, JUD on the U of C IT appserver).

This was nowhere near a rigorous, controlled test – extra variables were popping in all over the place. The goal was just to get an order-of-magnitude sense of which strategy was fastest and which was slowest. Basically, I needed to see if pulling the full LOM from XStreamDB and converting the whole shebang into a DOM would kill us. Turns out it’s over an order of magnitude faster than what CAREO had been doing… I also needed to get an idea of the additional time it would take to wrap a DOM Element with the Key Value Coding interface. Turns out that didn’t add anything, and somehow actually shaved some time off (although that is due to the DOM4J vs. W3C class performance).

I was surprised to find that the combination of XStreamDB + DOM was approximately 41 times faster than the JUD (a MySQL database that stores any XML document by breaking it into Elements, Attributes, and some other meta stuff, and reconstituting it on the fly via a PHP script) – 11.66 ms per record vs. 488.17 ms per record.

Once I’ve got JavaXStreamDBAdaptor.framework and JavaEOXMLSupport.framework polished off a bit more, I’ll add the metrics for their performance to this chart.

JavaXStreamDBAdaptor.framework Almost Fully Functional

Tonight, I got the XStreamDB EOAdaptor firing on almost all cylinders. It has been able to query and retrieve XML documents for quite some time now, but it can now also update (replace existing documents with edited versions) and insert new documents. It’s not extensively tested yet, so there may be some pitfalls or errors (likely an error or two, or more likely some overgeneralized assumption or the like).
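
Once the adaptor is hooked up to an EOModel, inserting a new document should flow through the normal EOF machinery. The snippet below is purely a hypothetical usage sketch – the “LearningObject” entity name and the key path are made up, and it assumes the model (and the JavaEOXMLSupport key-path digging) is already in place:

    // Hypothetical sketch of an insert flowing through EOF and down into the
    // XStreamDB adaptor - entity name and key path are invented for illustration.
    import com.webobjects.eoaccess.EOUtilities;
    import com.webobjects.eocontrol.EOEditingContext;
    import com.webobjects.eocontrol.EOEnterpriseObject;

    public class AdaptorUsageSketch {
        public static void main(String[] args) {
            EOEditingContext ec = new EOEditingContext();

            // Create a new EO for a (made-up) entity backed by the XStreamDB adaptor.
            EOEnterpriseObject record = EOUtilities.createAndInsertInstance(ec, "LearningObject");

            // With the JavaEOXMLSupport approach, key paths dig into the document's DOM.
            record.takeValueForKeyPath("A shiny new learning object", "general.title.string");

            // saveChanges() is where the adaptor gets asked to insert the new
            // XML document into XStreamDB.
            ec.saveChanges();
        }
    }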

It’s got a hard-coded reference to the database and root at the moment, but it won’t take long to switch to code that pulls those from the EOModel for the Entity in question. Also, there isn’t a way to delete documents via the adaptor – but for now I’m more than fine with hitting XStreamDB Explorer to do that manually…

I plan on updating the Sourceforge project sometime next week, after I polish the adaptor a bit more, and fix JavaEOXMLSupport to enable addition of new elements to a document…

What a relief this is. I’ve been feeling like a total bumbling moron for the past few weeks. And, with this working, I can get back to work on the rest of the Pachyderm…

International Knowledge Sharing Conference in St. Thomas

Thanks to Rick’s Cafe, I came across this awesome-sounding conference: KNOWLEDGE SHARING AND COLLABORATIVE ENGINEERING (KSCE 2004) – November 22-24, 2004, St. Thomas, US Virgin Islands.

From the conference website:

The International Conference on Knowledge Sharing and Collaborative Engineering (KSCE 2004) will highlight advances in the research and present day applications of knowledge sharing and collaborative engineering, and will also attempt to forecast future trends and developments. Presentations of recent technical developments and demonstrations of current product applications will allow for the exchange of ideas amongst international researchers and practitioners. Highlights of the week will include paper sessions, tutorials, and keynote addresses.

KSCE 2004 will be held at the Frenchman’s Reef & Morning Star Marriott Beach Resort on the south side of the island of St. Thomas. Known for its breathtaking scenery, mountains, green fields, and some of the most beautiful beaches in the world, St. Thomas has plenty to offer any visitor, making it the ideal venue for KSCE 2004.

Aside from the obvious interest in the content of the conference, it would be so cool to be back in the USVI – I got married in Charlotte Amalie in 1997, so it would be great to get back to Magen’s Bay! St. Thomas is so beautiful…

The conference is being organized by the Calgary-based IASTED (International Association of Science and Technology for Development) – what a cool coincidence!

This sounds like a job for the Three Amigos! ;-) Maybe if I’m really good and promise not to go to any more conferences for a couple years…

Now to go buy lots of lottery tickets…