Web Devout tidings


Tech Center Current blog

April 20th, 2007 by David Hammond

If you’re interested in more of my technology-related writings, I’ve recently been posting on Tech Center Current, the blog for the California Community Colleges Technology Center where I currently work. Most of the posts are less advanced and less industry-centric than what I typically write here, and posts are divided into three different levels of technical familiarity, so the blog reaches a wider audience. Although I may be going to work for either Microsoft or Mozilla in the near future, I’ll continue as an invited expert on the Tech Center Current blog for a while afterward.

Safari displays 1×1 alphatransparent PNGs too dark

April 20th, 2007 by David Hammond

I finally figured out why Safari was displaying the heading backgrounds on the main Web Devout site too dark: In general, Safari 2.0 seems to screw up the brightness or gamma correction on 1-pixel by 1-pixel alphatransparent PNGs. This is even true for PNGs which don’t have any gamma correction information included. Interestingly, if you change the image size to anything else, the brightness problem goes away. Why does Safari decide to darken 1×1 PNGs? Your guess is as good as mine.

I was using a repeating 1×1 alphatransparent PNG as the background in order to simulate an RGBA value in a CSS 2.x-compatible way. To fix the problem in Safari, I simply changed the image size to 2×1.
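For anyone unfamiliar with the trick, the CSS looks roughly like this (the selector and file name here are my own illustration, not the actual Web Devout styles):

    /* Simulate an rgba(0, 0, 0, 0.5) background in CSS 2.x by tiling
       a semi-transparent PNG of that color. */
    h2 {
        background-image: url(black-50pct-alpha.png);
        background-repeat: repeat;
    }

Regenerating the same pixel as a 2×1 image, with identical color and alpha in both pixels, leaves the CSS untouched and sidesteps the Safari bug.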

I just wanted to point this out in case anyone else runs into it and gets stumped like I was for a while. The problem seems unique to Safari/WebKit; Konqueror doesn’t exhibit it.

Frankly, this is just one of a seemingly endless list of bang-your-head-on-the-desk bugs I regularly find in Safari in quite basic areas. Another one that bothered me for a while: if a box is shorter or narrower than its background image, Safari repeats the background even when you have background-repeat: no-repeat;, which you’ll notice if you also use background-position (see the sketch below). This just shows that being the first to pass something like Acid2 doesn’t necessarily mean you’re the cream of the crop. Please exterminate these weird bugs.
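To illustrate, a rule along these lines (the selector and image name are made up) can still produce a tiled background in Safari whenever the box is smaller than the image:

    div.callout {
        /* Expectation: draw the image once, anchored at the bottom right. */
        background-image: url(flourish.png);
        background-repeat: no-repeat;
        background-position: right bottom;
        /* If div.callout is shorter or narrower than flourish.png,
           Safari repeats the image anyway. */
    }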

Job opportunities: an interesting dilemma

April 12th, 2007 by David Hammond

This week, I was approached with job opportunities from both Mozilla and Microsoft’s Internet Explorer team. It turns out this is a tougher decision than I thought it would be.

Anyone who knows me knows what I think of Internet Explorer. Let me briefly summarize what, in my mind, are the two biggest problems with Internet Explorer as a product and what I feel are the primary sources for those problems:

First in my mind is standards support. Internet Explorer has by far the worst standards support of any major web browser, period. Anyone serious about web development knows this. Over time, Microsoft has been accused of things like not caring about standards and what have you. But I don’t think that’s really the core issue. I honestly believe that the IE developers fully intend to follow standards whenever they’re available. IE’s nonstandard event model wasn’t the result of deliberately deviating from the standard; there was no event model standard when IE added support. A lot of the so-called “nonstandard behavior” with CSS properties is the result of bugs and design flaws that the IE developers intend to fix. The main problem isn’t that they don’t care.

What I believe is the primary cause of IE’s currently miserable situation with standards support is the fact that Microsoft disbanded the platform development team back in 2001, and thus, aside from security updates, IE layout engine development was completely abandoned for five years. Five years. Half a decade. Roughly half of Internet Explorer’s entire life to date was spent sitting idle. IE 6 wasn’t a bad browser when it first came out, but other browsers have now had twice the time IE had to add standards support, fix bugs, and generally snazz up their engines. Internet Explorer was simply neglected for too long.

The second main problem with Internet Explorer as a product is its security record. Every piece of software as complex as a web browser will have plenty of security problems. And naturally, if you have 80% or higher market share, there will be lots of people trying to pick apart your browser piece by piece. But this isn’t the main problem.

The main problem with IE’s security is the security response process. Microsoft simply takes too long to fix IE’s vulnerabilities, and it leaves too many of them unfixed. On average, Internet Explorer vulnerabilities have taken several times as long as Firefox vulnerabilities to be patched. We just passed the fourth Patch Tuesday of the year, yet according to Secunia, 78% of IE 7’s known vulnerabilities are still unfixed. That isn’t even counting the several-year-old IE 6 vulnerabilities that were never fixed and probably still exist in IE 7. Microsoft says that this is all due to their quality assurance process, but I dunno… I’ve heard about as many cases of IE patch problems as Firefox patch problems. Too many issues are swept under the rug. It’s another case of neglect.

So here I am with an opportunity to help do something about this. I have a chance to help give IE attention where it needs it. Internet Explorer is used by around 75% to 80% of the Internet population. It is, in many or most cases, the single immediate factor holding back professional web developers from doing their jobs as quickly, correctly, and efficiently as they otherwise could.

Meanwhile, I may also have the opportunity to work for Mozilla. Mozilla is an entirely different situation. They already have the groundwork laid out. They have an engine that is, relatively speaking, very well in line with the standards. I have little doubt that the Gecko engine code is much more consistent, well-structured, and mature than the Trident code in Internet Explorer. Mozilla isn’t struggling to correct a lot of broken foundation; it’s working to perfect its well-written engine and to lay the new groundwork for future standards.

Working on Internet Explorer would mean bringing a dated but important engine into the present, while working at Mozilla would mean leading a modern but not-quite-as-prominent engine into the future. Both are very important tasks, and both are tasks I would very much like to be a part of. But alas, there is only one of me, and I have to make a choice. I feel I would enjoy the work and atmosphere at Mozilla more, but I might be able to make a bigger near-term impact on the Web by working on Internet Explorer. If, in the end, both options are available to me, what should I do?

Webpage Test tool updates

March 4th, 2007 by David Hammond

The Webpage Test tool has been updated with a new look and a few new features.

The biggest improvement is the ability to temporarily save test pages, similar to services like pastebin. Saved pages remain on the server for at least two days before being deleted to save space. The saved page’s URL also uses a short identifier rather than encoding the entire source the way the (now removed) “Link” feature did.

You may also specify a base URL outside the HTML. This is useful if you’re trying to offer someone corrected webpage source with URLs relative to the original URL but don’t want to confuse the person by including a base element in the HTML.
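For comparison, embedding the base URL in the markup itself would mean handing over something like this (the URL is an example), which the recipient would then have to remember to strip out:

    <head>
        <!-- Resolves the page's relative URLs against the original site. -->
        <base href="http://www.example.com/some/original/page/">
        <title>Corrected page</title>
    </head>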

Basic HTML templates for HTML 4.01 Strict, XHTML 1.0, and XHTML 1.1 are available via links at the top of the page. Note that the XHTML templates are served with the correct content type (application/xhtml+xml) so browsers handle them as real XHTML. Because Internet Explorer doesn’t support true XHTML, IE will present its usual download dialog instead of rendering the page.
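To give a rough idea, a minimal XHTML 1.1 document along the lines of these templates looks like this (my own sketch, not necessarily the tool’s exact markup; it only behaves as true XHTML when served as application/xhtml+xml):

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
        "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
        <head>
            <title>Untitled</title>
        </head>
        <body>
            <p>Page content goes here.</p>
        </body>
    </html>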

Finally, the system has been updated with a snazzier look. The new look is supported by Firefox, Opera, Safari, Konqueror, and other modern browsers. Internet Explorer currently falls back to a simpler look.

I have recently resumed work on a PHP-based SGML parser and syntax highlighter I’ve been developing, which I will eventually try to incorporate into this system. It aims to support much more of the SGML standard than the common alternatives, and it flags some common errors such as invalidly placed elements and unrecognized character entities, elements, and attributes. It will not, however, attempt to be a complete validator.

Validity and well-formedness

February 20th, 2007 by David Hammond

I’ve just published a new web development article called Validity and Well-Formedness, which explains the distinction between valid and well-formed XHTML.

If the W3C HTML Validator says your XHTML page is valid, that means it’s also well-formed, right? Wrong! The article includes several examples of XHTML documents that are perfectly valid yet not well-formed, and that won’t even load in an XML parser.
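To sketch the general idea (this particular case is my own illustration and may not be one of the article’s examples): SGML rules permit a raw “<” inside an attribute value literal, so a validator parsing by those rules can accept a document like the one below, yet to an XML parser that “<” is a fatal well-formedness error:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
        <head>
            <title>Valid, yet not well-formed</title>
        </head>
        <body>
            <!-- The raw "<" in the title attribute below is the problem. -->
            <p title="1 < 2">An XML parser stops here with a fatal error.</p>
        </body>
    </html>

The article itself walks through its real cases and explains why validators can miss them.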