
leaves falling off the tree
For me, one of the promises of [PSGI and Plack]() is to get away from programming with global variables in Perl, and particularly away from being able to modify the request object after it has been created. Long ago, CGI.pm replaced direct use of a global
hash with a "query object", but that object has essentially been used as an always-accessible read/write data structure. At first look, it would appear that improving on this is a fundamental part of the new system: a lexical environment hashref is explicitly passed around, as seen here:

    Plack::Request->new($psgi_env);

vs. the old implicit grab from the environment:

    CGI->new;

You might hope to be escaping some of the action-at-a-distance badness of global variables, like having a change to the environment alter your object *after the object is created.* You might also like changes made to your object to avoid altering the global environment. Not only does Plack::Request require an explicit environment hash to be passed in, but nearly all of its methods are read-only, including a param() method inspired by CGI.pm's read/write method of the same name. That's all good.

This would all seem to speak to safety and simplicity for using Plack::Request, but the reality turns out to be far muddier than you might hope. I encourage you to download and run this short, safe [interactive perl script](/blog/2011/02/) which illustrates some differences. It shows that:

* Plack::Request objects can be altered after they are created by changing the external environment.
* Modifying a Plack::Request object can potentially alter the external environment hash (something which CGI.pm explicitly does not allow).

In effect, the situation is in some regards worse than with global variables. Plack::Request gives the impression of a move away from action-at-a-distance programming, but the fundamental properties of being affected by global changes and locally creating them are still present.

On the topic of surprising read/write behavior in Plack::Request, you may also be interested to note that the behavior of query\_parameters(), body\_parameters() and parameters() is not consistent in this regard. I submitted [tests and a suggestion to clarify this](), although that contribution has not yet been accepted.
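To make the aliasing mechanics concrete, here is a minimal sketch in plain core Perl (not Plack itself; My::Request is a made-up stand-in) showing how an object that stores a *reference* to the caller's environment hash stays exposed to later external changes:

```perl
use strict;
use warnings;

# A request object that, like Plack::Request, keeps a reference to
# the caller's environment hash rather than taking a copy of it.
package My::Request;
sub new  { my ($class, $env) = @_; return bless { env => $env }, $class }
sub path { return $_[0]->{env}{PATH_INFO} }

package main;

my %env = ( PATH_INFO => '/before' );
my $req = My::Request->new( \%env );

print $req->path, "\n";    # '/before'

# Action at a distance: changing the environment *after*
# construction changes what the object reports.
$env{PATH_INFO} = '/after';
print $req->path, "\n";    # '/after'
```

Copying the hash in new() (`{ %$env }`) would be enough to close this particular hole.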
Here's the deal: the hashrefs returned by query\_parameters(), body\_parameters() and parameters() are all read/write -- subsequent calls to the same method return the modified hashref. However, modifying the hashes returned by body\_parameters() or query\_parameters() does not modify the hashref returned by parameters(), which claims to be a merger of the two. It seems that either all the return values should be read-only (always returning the same values), or, if modifying them is supported, the parameters() hash should be updated when either the body\_parameters() or query\_parameters() hash is updated.

## Reflections

An incoming HTTP request to your server is by its nature read-only. It's analogous to a paper letter being delivered to you by postal mail. It's a perfect application for the immutable object design that Yuval Kogman [eloquently advocates for](). Plack::Request comes close to implementing the idea with mostly read-only accessors, but falls short. The gap it leaves unfortunately carries forward some possibilities for the action-at-a-distance cases that have been sources of bugs in the past. I'd like to see Plack::Request, or some alternative to it, with the holes plugged: it should copy the input rather than modify it by reference, and parameter-related methods should also return copies rather than references to internal data structures.
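The copy-on-return behavior I'm asking for can be illustrated in a few lines of plain Perl. This sketch uses hypothetical accessors, not Plack::Request's actual code, to contrast handing back an internal hashref with returning a fresh shallow copy on each call:

```perl
use strict;
use warnings;

# Two styles of parameter accessor. The "leaky" one hands back its
# internal hashref; the "safe" one returns a shallow copy per call.
my %params = ( color => 'red' );

sub params_leaky { return \%params }
sub params_safe  { return { %params } }    # fresh copy every time

my $leak = params_leaky();
$leak->{color} = 'blue';
print params_leaky()->{color}, "\n";  # 'blue' -- caller mutated internals

my $copy = params_safe();
$copy->{color} = 'green';
print params_safe()->{color}, "\n";   # still 'blue' -- internals protected
```

A shallow copy is enough for flat parameter hashes; nested structures would need a deep copy to get the same guarantee.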
A midwinter box bike ride
Recently I've been reviewing how various Perl frameworks and modules generate HTTP headers. After reviewing several approaches, it's clear that there are two major camps: those which put the response headers in a specific order and those which don't. Surely one approach or the other would seem more spec-compliant, but RFC 2616 provides [conflicting guidance on this point](). The bottom line is that the spec says that *"the order in which header fields with differing field names are received is not significant"*. But then it goes on to say that it is a "good practice" (and it puts "good practice" in quotes) to order the headers a particular way.

So, without strict guidance from the spec about the importance of header ordering, it would be interesting to know if header order causes problems in practice. The [Plack::Middleware::RearrangeHeaders]() documentation suggests there is some benefit to strict header ordering: *"to work around buggy clients like very old MSIE or broken HTTP proxy servers"*.

You might wonder what the big deal is -- why not just stick to the "good practice" recommendation all the time? The difference can be seen in the benchmarks provided by [HTTP::Headers::Fast](). By ignoring the good-practice header order, an alternate implementation was able to speed up header generation to about twice as fast. Considering that a web app needs to generate a header on every single request, making header generation smaller and faster is potentially a tangible win, while also still being spec-compliant.
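To see where the trade-off comes from, here is a plain-Perl sketch of the two camps (not any module's actual code): a hash hands you arbitrary header order for free, while honoring the RFC 2616 "good practice" order (general headers, then response headers, then entity headers) means carrying the ordering around explicitly, for example as a list of pairs:

```perl
use strict;
use warnings;

# Camp 1: headers in a hash. Iteration order is arbitrary, which
# RFC 2616 says is still fine for differing field names.
my %unordered = (
    'Content-Type' => 'text/html',
    'Date'         => 'Tue, 01 Feb 2011 00:00:00 GMT',
    'Server'       => 'Example',
);
my $fast = join '', map { "$_: $unordered{$_}\r\n" } keys %unordered;

# Camp 2: an ordered list of pairs, following the "good practice"
# order: general, then response, then entity headers.
my @ordered = (
    'Date'         => 'Tue, 01 Feb 2011 00:00:00 GMT',  # general
    'Server'       => 'Example',                        # response
    'Content-Type' => 'text/html',                      # entity
);
my $strict = '';
while ( my ($name, $value) = splice @ordered, 0, 2 ) {
    $strict .= "$name: $value\r\n";
}

print $strict;
```

The hash version skips the bookkeeping entirely, which is the kind of shortcut that shows up in the HTTP::Headers::Fast benchmarks.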
baby sleeps again

I've spent a lot of time recently [triaging bugs for CGI.pm](). I've enjoyed the process, and respect CGI.pm as a widely used Perl module. Still, I'm not in love with all aspects of the module. I don't use or recommend the HTML generation features; I recommend using HTML template files and [HTML::FillInForm]() for filling them. Whenever I think about how I'd like CGI.pm to change, what I have in mind is often the same choice that [CGI::Simple]() made. There was a time years ago that I focused my attention on CGI::Simple and tried it in production, only to be bitten by a compatibility issue, so I reverted back to CGI.pm. I don't remember what the specific issue was, and it's likely been fixed by now. But the pragmatic point remained with me: CGI::Simple may have clean code and a good test suite, but it's not necessarily free of defects, and in particular it lacks the vastly larger user base that CGI.pm has to provide real-world beta testing.
new bikes-at-work trailer

Get off the couch and pull your weight--
There's a bug with your name on it.
There were nearly 150 active entries in the CGI.pm bug tracker when I was recently approved as a new co-maintainer. As I had time in the evenings after the baby was asleep, I went through and reviewed every one of these bug reports. Many had already been addressed by Lincoln some time ago. Those were simply closed. Still, I found about 20 fairly ready-to-go patches, and those have now been processed and [released today as CGI.pm 3.45](). Whenever code changes were made, I also strived to make sure new automated tests were added to cover those cases. You may be surprised how many methods in CGI.pm have no automated tests at all. There are still about 50 open issues in the [CGI.pm bug tracker](). For these, I have tried to use the subject line to give some summary indication of what is needed to move each one forward, like "Needs Test:", "Needs Peer Review:" or "Needs Confirmation". Generally, I plan to wait patiently for volunteers to help with these. If you use CGI.pm, consider helping to move one of these forward.
This weekend I spent some quality time with the HTTP cookie specs ([RFC 2109]() and [RFC 2965]()), and looked closely at how cookie parsing and handling is done in three Perl frameworks: [Titanium](), [Catalyst]() and [Mojo](). Titanium uses [CGI::Cookie]() by default, while Catalyst uses [CGI::Simple::Cookie]() and Mojo uses built-in modules including [Mojo::Cookie::Request](). I'll look at these solutions through the filters of Standards, Security, and Convenience.

## Standards: Max-Age, Set-Cookie2 and commas

Max-Age is a cookie attribute which gives the expiration time as a relative value. It is considered a more secure replacement for the "Expires" attribute, which gives the time as an absolute value, making it vulnerable to clock skew on the user's system. CGI.pm and Mojo support it, but CGI::Simple does not. This is potentially an issue for Catalyst users if they believe they have Max-Age support because the documentation refers them to CGI::Cookie, when they actually don't because they are using CGI::Simple::Cookie.

Set-Cookie2 is a standard from 2000 intended to replace Set-Cookie, which became a standard in 1997. Mojo is the only one of the three that supports it. However, Set-Cookie2 [never caught on](): Firefox 3 doesn't even support it, and neither does IE 6. Still, I like the idea of deciding for myself about supporting new standards, rather than having tools that only support older ones. Mojo wins here.

The RFCs say that servers should accept a comma as well as a semicolon between cookie values. CGI.pm and Mojo comply here; CGI::Simple does not. (I've submitted a [patch to address this](), along with fixes for a few other places where I felt CGI::Simple cookie parsing lagged behind.)

## Security

CGI::Simple cookies are potentially less secure because they lack "Max-Age" support. Mojo's cookie implementation appears to be vulnerable to an injection attack in which untrusted data in a cookie value can write a new HTTP body. I have notified the developers of my findings there.
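To illustrate the kind of injection risk involved, here is a minimal sketch in plain Perl (not any framework's actual code) of how percent-encoding a cookie value keeps CR/LF out of the raw Set-Cookie header:

```perl
use strict;
use warnings;

# Percent-encode everything outside the URI "unreserved" set.
# A hand-rolled stand-in for URI::Escape, kept dependency-free.
sub encode_cookie_value {
    my ($value) = @_;
    $value =~ s/([^A-Za-z0-9\-_.~])/sprintf('%%%02X', ord $1)/ge;
    return $value;
}

# An attacker-controlled value that tries to smuggle in a header break.
my $evil   = "x\r\nSet-Cookie: injected=1";
my $header = 'Set-Cookie: session=' . encode_cookie_value($evil);

print $header, "\n";
# The CR/LF is now the literal text "%0D%0A", so the value can no
# longer terminate the header and start a new one.
```

This is why an encoding step on cookie values doubles as an injection defense: the dangerous bytes simply cannot survive it.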
CGI.pm and CGI::Simple both avoid the injection attack by URI-encoding the cookie values (a spec-compliant solution).

## Convenience

CGI.pm and CGI::Simple share several convenient user-interface features which Mojo currently lacks. They allow you to set multiple values for a single cookie, including setting a hashref. They also provide a convenient shorthand for giving expiration times, like "+10m" for "10 minutes in the future". Mojo lacks these features. If you have a Catalyst app that uses the multiple-values feature, a port to Mojo could mean a painful cookie transition, since Mojo does not have a built-in understanding of the format CGI.pm uses to store cookie values. (This detail is not dictated by the cookie spec, so both value formats are "spec compliant".)

## Conclusions

Sebastian Riedel, the Mojo author, promotes Mojo as being focused on standards. From my findings here, I have to agree that Mojo is a leader on that front, though currently at the expense of a potentially serious security issue, and lacking some usability features that the others offer.

CGI::Simple has a reputation as being a lighter and better engineered version of CGI.pm. Certainly the overall design and focus of CGI::Simple is an improvement. But the reality is that CGI::Simple was forked from CGI.pm in 2001. CGI.pm has received many improvements since then, including improved cookie handling, like adding support for "Max-Age". However, CGI::Simple doesn't seem to make a point of tracking and merging improvements that originate in CGI.pm. CGI::Simple is perhaps better thought of as a lighter, tighter alternative to CGI.pm as it existed several years ago.

The mature-but-maligned CGI.pm comes out faring the best for cookie handling in my opinion. It did not have any of the potential security issues I found with the other two, and it has a range of convenient methods for cookie access. But as a final note, I encourage you to check with the specific projects for the most current information, as some of the deficiencies I found here may already have been addressed.
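For the curious, the relative-expiration shorthand works roughly like this. The sketch below is modeled loosely on the behavior of CGI::Util's expire_calc(), not its actual code; note that lowercase "m" is minutes and uppercase "M" is months:

```perl
use strict;
use warnings;

# Seconds per unit, following CGI.pm's shorthand conventions:
# s=seconds, m=minutes, h=hours, d=days, M=months, y=years.
my %offset = (
    s => 1, m => 60, h => 3_600,
    d => 86_400, M => 2_592_000, y => 31_536_000,
);

# Turn a spec like "+10m" or "-1d" into a signed number of seconds.
sub expire_seconds {
    my ($spec) = @_;
    my ($sign, $num, $unit) = $spec =~ /^([+-]?)(\d+)([smhdMy])$/
        or die "unrecognized expiration spec: $spec";
    my $seconds = $num * $offset{$unit};
    return $sign eq '-' ? -$seconds : $seconds;
}

print expire_seconds('+10m'), "\n";    # 600
print expire_seconds('+1d'),  "\n";    # 86400
```

The resulting offset is then added to the current time to produce an absolute Expires date, or used directly as a Max-Age value.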