Marble Blast: Marble Madness lives!!!

I grew up with Commodore computers, starting with the VIC-20, graduating through the C-64 and C-128, and finally, a smoking Amiga 1000. One of the best things I remember from the Amiga was a game called Marble Madness. A truly great game, but it’s been basically lost since those days.

I got a nice little announcement from Apple today. As a bonus for .Mac subscribers, they are throwing in a license for the game Marble Blast. It had “Marble” in the title, so I had to check it out.

Marble Blast is what Marble Madness would have become with several years of evolution. It’s like Marble Madness meets first-person shooter: the marble game from the perspective of the marble. Very well done, from graphics to animation to sound. I’m hooked. I wonder what’s different in Marble Blast Gold…

(Screenshots: Marble Madness and Marble Blast)

UPDATE: I played a couple rounds this afternoon, and 2 separate people walking past my pod said “Cool! Marble Madness!” So, it’s not just me…

Macromedia Contribute 2

I’ve been playing around with Macromedia Contribute 2, which is now available for Mac OS X. It seems like an extremely useful utility for editing existing websites, but falls short for creating them. That’s a reflection of the market they’re targeting: newbies editing content (change a phone number on an intranet, update an image…).

For that, it rocks quite nicely. I’m curious to see how (or if) it mangles a modern web page, since it seems to have some heavy table-editing tools, but nothing for divs or CSS…

I’ve been pointing it at a dummy page to see what it can do. It handled the image upload and display attributes extremely well.

Computer Vision-based Searching

I just got back from a very cool demo by Brad Behm, who works in Dr. J.R. Parker’s Computer Vision Lab here on campus. Very VERY cool stuff. They’ve got an app set up so that you can feed it an image, and it will search its database of several hundred images and return similar images. They run several comparisons simultaneously, checking edges, colours, etc…

It’s not perfect yet, but they’re batting over .500, which is much higher than what other computer-vision search groups are getting (apparently Brad’s software averages about 57% accuracy, though this can climb to almost 100% depending on the images in the database and the source query image).
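The lab’s actual algorithms weren’t shown in detail, but one of the cues they mentioned (colour) is easy to illustrate. Here’s a minimal sketch of a colour-histogram comparison in plain Python; the function names and the pixel-list representation are mine, purely for illustration, not anything from Brad’s system.

```python
# Sketch of one similarity cue (colour histograms); the real system combines
# several comparisons (edges, colours, etc.) run simultaneously.

def colour_histogram(pixels, bins=4):
    """Normalized 3D colour histogram of a list of (r, g, b) tuples."""
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        # Quantize each 0-255 channel into `bins` buckets, flatten to one index.
        i = ((r * bins // 256) * bins + (g * bins // 256)) * bins + (b * bins // 256)
        hist[i] += 1
    total = float(len(pixels))
    return [h / total for h in hist]

def similarity(hist_a, hist_b):
    """Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint."""
    return sum(min(a, b) for a, b in zip(hist_a, hist_b))

def search(query_pixels, database):
    """Rank database images (name -> pixel list) by similarity to the query."""
    q = colour_histogram(query_pixels)
    scored = [(similarity(q, colour_histogram(px)), name)
              for name, px in database.items()]
    return sorted(scored, reverse=True)
```

Run a query image against the database and the best match comes back first; a real system would obviously precompute the database histograms rather than rebuilding them per query.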

We talked about some of the issues and implications of this type of technology, and it could be an amazingly powerful utility. We’re going to be talking with Dr. Parker about the possibility of hooking it up to CAREO (no promises, but we’re interested – he’s out of town at the moment, though) to provide another set of learning objects, with metadata generated automatically and on the fly by their application.

Full Screen Safari

I’m finally able to run Safari in full screen mode, thanks to a little InputManager plugin called Saft. Very cool. I now have Safari running full screen in desktop #2 on my 18-desktop VirtualDesktop setup. Works like a champ. It even drops the dock and menu bar out of the way (have to remember the menu keyboard shortcuts…)

Saft is a little, well, quirky. It’s not fully integrated, and appears to do some funky patching of the running Safari app. But, it works.

Experiments in Online Learning

Rick Lillie is planning an interesting experiment: this fall he will teach three sections of the same class, using different methodologies to compare their effects, namely a traditional (lightly blended) format versus two online learning scenarios. He’ll post info and results as the quarter unfolds.

From Rick’s post:

Since mid-June, I have been teaching a course for UCLA Extension that uses new eLearning courseware.  This has been a very interesting project in that it involves interaction between three learning/delivery resources (i.e., Blackboard, my course website, and eLearning courseware). In September, this project will move into its next phase.  I will teach three sections of the class.  Two sections will be at CalState and one section will be at UCLA Extension.
  • One class section will be taught in a somewhat traditional format (i.e., classroom delivery supported by my website and virtual office hours).
  • One class section will be taught in an online format using Groove as the learning management system, eLearning courseware, my course website, and virtual office hours.
  • One class section will be taught in an online format using Blackboard as the learning management system, eLearning courseware, my course website, and virtual office hours.
All three sections will follow the same class schedule, cover the same content, and take the same quizzes and examinations.  The research design includes several interesting variables.  I will write about this project as the Fall Quarter 2003 unfolds.

Thanks to Bruce Landon for the link!

Merlot Conference 2003

I had initially planned on handing my Merlot conference co-presentation on RSS and Learning Objects over to Mike, but I just bit the bullet and got approval to attend the conference myself. I’m planning on arriving in Vancouver on Aug. 4, and leaving the evening of the 8th. I’m hoping I can book a room in the Hyatt Regency, where the conference is taking place, but it may be a bit late to get a room there…

It’s also turning into a bit of an EduSource shindig, so there should be lots of opportunity to blab about ECL, CAREO, Repositories, etc…

UPDATE: I arrive in Vancouver around noon on Monday Aug. 4, and leave Friday afternoon. Staying at the Hyatt Regency.

Inter-Application Communication Options?

We’re working on a new architecture for the next version of the software that drives CAREO et al. One thing we want to do is rip some of the core data functionality into a separate framework (actually a set of frameworks) and have a server host application manage communication between core data and client apps (i.e., CAREO, ALOHA, etc…).

It would be nice if, in addition to web services (SOAP, XML-RPC), we could use a higher-performance messaging system for client applications residing on the same box (or at least on the LAN), avoiding the overhead SOAP imposes (latency, marshalling, etc…)

Ideally, it would not be Java-specific (so we could write funky C or Objective-C apps that can communicate with Core Data), and it would at least be compatible with the SOAP implementation (so we don’t have to duplicate everything underneath the communication layer).

The options I’ve come across are (with big thanks to Steve Zellers’ WWDC2003 presentation):

  • Mach Messaging
  • POSIX pipes/sockets
  • AppleEvents
  • WebServices (SOAP/XML-RPC)
  • JMS

Some of those are platform-specific (AppleEvents, Mach) or language-specific (JMS), others aren’t exactly capable of running at native application speeds (WebServices).

I’ll keep digging around looking for options. Any glaring omissions?
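For a sense of what the POSIX pipes/sockets option looks like in practice, here’s a minimal sketch of length-prefixed message framing over a local socket, which is the kind of low-latency, language-neutral channel described above (any language that can open a socket can speak it). The framing scheme and function names are illustrative assumptions, not part of CAREO or any of the listed systems; Python stands in for whatever language a client would actually use.

```python
# Sketch: tiny message frames over a local POSIX socket, as a faster
# same-box alternative to SOAP. Frame = 4-byte big-endian length + payload.
import socket
import struct

def send_msg(sock, payload):
    """Send one framed message (bytes) over the socket."""
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def _recv_exact(sock, n):
    """Read exactly n bytes, looping until the socket delivers them all."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def recv_msg(sock):
    """Receive one framed message, returning the payload bytes."""
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)
```

The payload itself could still be the same XML the SOAP layer uses, so the serialization code underneath the communication layer wouldn’t need to be duplicated; only the transport changes.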

Embryology Slide Viewer

I’m working on a prototype application for viewing embryology slides (and having HUGE flashbacks from when I took the course back when the earth was young).

As a result, I get to play around in Director MX again, and am rather amazed at how quickly my Mad Lingo Skillz are coming back. Imaging Lingo rocks. 3D Lingo rocks.

I’m building two versions of the app. The first used Imaging Lingo to do a zoom/pan over a high-resolution image. That worked well, but took a good chunk of CPU power to do the manipulations. The second version uses 3D Lingo to power an OpenGL image viewer, with a high-resolution bitmap used as a texture on a plane, which is then moved in front of a camera to generate smooth panning and zooming (à la Mac OS X Quartz). Very cool. Now, to play around with controlling the movement of the plane.
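The camera-in-front-of-a-plane trick boils down to simple projection math: the farther the camera sits from the plane, the more of the image is visible. Here’s a hedged sketch (plain Python rather than Lingo, and the field-of-view value is an arbitrary assumption) of how camera distance maps to zoom level:

```python
# Sketch of camera-Z zoom for a textured plane viewed head-on.
# visible width at distance z = 2 * z * tan(fov / 2), so zooming is
# just moving the camera along its Z axis.
import math

def visible_width(camera_z, fov_degrees=30.0):
    """Width of the plane (in world units) visible at distance camera_z."""
    return 2.0 * camera_z * math.tan(math.radians(fov_degrees) / 2.0)

def camera_z_for_zoom(image_width, zoom, fov_degrees=30.0):
    """Camera distance so that (image_width / zoom) units fill the view.

    zoom=1.0 fits the whole image; zoom=2.0 shows half of it, i.e. 2x."""
    target = image_width / zoom
    return target / (2.0 * math.tan(math.radians(fov_degrees) / 2.0))
```

In the Director version, the slider would drive `camera_z_for_zoom` and panning is just translating the plane in X/Y, which is why the motion stays smooth: the GPU re-renders the texture instead of the CPU resampling the bitmap.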

There are two challenges remaining. First, how to keep the images of various sections of an embryo registered appropriately, to prevent pixel shifting as the student moves from slide to slide (they are all slightly different image sizes, and they are not perfectly centered on each slide…). Then, I get to figure out how to overlay text and graphic labels on top of the plane. That should be fun…

UPDATE: Here’s a screenshot of the current prototype (UI is very rough, but almost functional):
Embryology Slide Image Viewer prototype

I’ve got it working quite well for panning/zooming and moving through slices of a sample, using a 3D plane with the slide scanned in as a bitmap texture. Performance absolutely ROCKS on my TiBook (and, as an aside, I just tried it on a Dell laptop, where performance sucked complete ass. It’s so nice finally being on the right side of the performance curve…) I’m using the camera’s Z position in the 3D world to drive the zoom, and the plane can be physically dragged around to drive the panning: dragging the image pans it, moving the slider zooms in and out. Very cool.

The next big challenge is keying the registration points of each slide image so moving between slides isn’t jumpy (i.e., if you’re zoomed way in on the notochord, you should stay on the same structure as you move to the next/previous slide).
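One plausible approach (my sketch, not a decided design): store a per-slide registration point, say the pixel coordinates of a structure marked by hand on each slide, and translate each slide so its registration point lands at a common view-space origin. The function names and coordinate convention here are hypothetical.

```python
# Sketch of slide registration by translation: each slide records the pixel
# coordinates of a common landmark (e.g. a hand-marked structure), and slides
# are offset so that landmark always maps to the same view-space position.
# This handles the varying image sizes and off-center scans, but not rotation.

def view_offset(reg_point, view_center):
    """Translation putting this slide's registration point at view_center."""
    return (view_center[0] - reg_point[0], view_center[1] - reg_point[1])

def slide_to_view(pixel, reg_point, view_center):
    """Map a pixel on one slide into the shared view space."""
    dx, dy = view_offset(reg_point, view_center)
    return (pixel[0] + dx, pixel[1] + dy)
```

With that in place, moving to the next slide just means swapping the texture and applying that slide’s offset to the plane’s position, so whatever structure is centered stays centered.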