Ah, that’s probably a better process than our house’s trick of having two separate MA EZPass accounts simultaneously associated with the same car in order to get a second transponder I can use when I rent.  (Our way makes it hard to predict which account gets billed on the rare occasions that they have to fall back to the license plate because a transponder read for the double-associated car fails.)

My sample size is not huge, but personally I’ve never had a problem with associating mine with a rental using the timestamps of my rental contract and indicating that the car is a rental.

Fair enough, although I put a little less weight on the undesirable precedent because I think that precedent is already largely being set today. (Once we have precedents for regulating specific functionality of both operating systems and individual websites, I feel like it’s only technically correct to say that the case for similar regulation in browsers is unresolved.)

Also, the current legal standard just says that websites must give users a choice about the cookies; it doesn’t seem to say what the mechanism for that choice must be. The interpretation that the choice must be expressed via the website’s interface and cannot be facilitated by browser features is an interpretation, and I’d argue against that interpretation of the directive. I don’t see why browsers couldn’t create a ‘Do-Not-Track’-style preference protocol today for conveying a user’s request for necessary cookies vs all cookies vs an explicit prompt for selecting between types of optional cookies, nor any reason why sites couldn’t rely on that hypothetical protocol to avoid showing cookie preference prompts to many of their users (as long as the protocol specified that the browsers must require an explicit user choice before specifying any of the options that can skip cookie prompts; defaulting users to “necessary cookies only” or the all-cookies-without-prompts setting would break the requirement for user choice).
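To make the idea concrete, here is a minimal sketch of how a site might honor such a protocol. The header name `Cookie-Consent` and its values are my own invention for illustration; no such standard exists today.

```python
# Sketch of a hypothetical consent-preference protocol, in the spirit of
# Do-Not-Track. The "Cookie-Consent" header name and its values are
# assumptions for illustration, not part of any real standard.

BANNER_SKIPPING_VALUES = {
    "necessary-only",   # user explicitly chose essential cookies only
    "all",              # user explicitly accepted all cookies
}

def needs_cookie_banner(headers: dict) -> bool:
    """Return True if the site should fall back to its own consent prompt.

    The protocol would require browsers to send a banner-skipping value
    only after an explicit user choice; a missing or "unset" header means
    the site must still ask.
    """
    value = headers.get("Cookie-Consent", "unset")
    return value not in BANNER_SKIPPING_VALUES

# A browser that has recorded an explicit choice can skip the banner:
assert not needs_cookie_banner({"Cookie-Consent": "necessary-only"})
# A browser that has not (or does not implement the protocol) cannot:
assert needs_cookie_banner({})
assert needs_cookie_banner({"Cookie-Consent": "unset"})
```

The key design constraint is in the default: only an explicit user choice may map to a banner-skipping value, which preserves the directive’s requirement that the user actually chose.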

But we don’t see initiatives like that, presumably in large part because browsers don’t expect to see much adoption if they implement such a feature, especially since it’s the type of feature that requires widespread adoption from all parties (browser makers, site owners, and users) before it creates much value. Instead, lots of sites show cookie banners to you and me while we browse the web from American soil using American IP addresses, seemingly because targeting different users with different website experiences is just too sophisticated for many businesses. They evidently see this as a compliance requirement to be met at minimal cost rather than prioritizing the user experience. I don’t see how the current dynamic changes as long as websites still see this purely as a compliance cost to be minimized and as long as each website still needs to maintain its own consent implementation.

In theory, yes. Do you have particular knowledge that things would likely play out that way if the regulations permitted, or are you reasoning that this is likely without special knowledge? If the former, then I’d want to update my views accordingly. But if it’s the latter, then I don’t really see a likely path for your regulatory proposal to meaningfully shift the market in any way other than market competition forcing all major browsers to implement the feature, in which case it doesn’t practically matter whether the implementation requirement has legal weight.

Once you’re willing to mandate browser features to bolster privacy between multiple users on the same device, I’d get rid of website-implemented cookie banners altogether (at least for this purpose) and make the browser mandate more robust instead.  I could see this as a browser preference with three mandated states (and perhaps an option for browsers to introduce additional options alongside these if they identify that a different tradeoff is worthwhile for many of their users):

  • Single user mode:  this browser (or browser profile) is only used by one user, accept local storage without warning under the same legal regime as remote storage of user data.
  • Shared device mode:  this browser (or browser profile) is shared among a constrained set of users, e.g. a role-oriented computer in an organization or a computer shared among members of a household.  Apply incognito-inspired policies to ensure that local storage cannot outlive a particular usage session except for allowlisted domains, and require the browser to provide a persistent visual indication of whether the current site is on the allowlist (similar to how browsers provide a persistent indication of SSL status).
  • Public device mode:  this browser (or browser profile) is broadly available for use by many people who do not necessarily trust each other at all, e.g. a machine in a school’s computer lab or in a public library.  Apply the same incognito-inspired policies as in shared device mode, but without the ability to allowlist specific sites that can store persistent cookies.  The browser must also offer the ability for the computer administrator to securely lock a browser in this mode to prevent untrusted users from changing the local-storage settings.
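The storage-policy split between the three modes could be sketched roughly as follows. The mode names and the exact policy boundaries here are my own framing of the proposal above, not any real specification.

```python
# Illustrative sketch of the three mandated browser states described
# above. Names and the exact policy split are assumptions, not a spec.
from enum import Enum

class DeviceMode(Enum):
    SINGLE_USER = "single"    # one user per browser profile
    SHARED_DEVICE = "shared"  # constrained set of mutually-trusting users
    PUBLIC_DEVICE = "public"  # many untrusted users (library, lab)

def may_persist_storage(mode: DeviceMode, domain: str,
                        allowlist: set) -> bool:
    """Can local storage for `domain` outlive the current usage session?"""
    if mode is DeviceMode.SINGLE_USER:
        return True                 # same legal regime as remote storage
    if mode is DeviceMode.SHARED_DEVICE:
        return domain in allowlist  # only allowlisted domains persist
    return False                    # public mode: never persist

allow = {"mail.example.org"}
assert may_persist_storage(DeviceMode.SINGLE_USER, "any.example", allow)
assert may_persist_storage(DeviceMode.SHARED_DEVICE, "mail.example.org", allow)
assert not may_persist_storage(DeviceMode.SHARED_DEVICE, "ads.example", allow)
assert not may_persist_storage(DeviceMode.PUBLIC_DEVICE, "mail.example.org", allow)
```

In public device mode the allowlist is simply ignored, which matches the requirement that untrusted users can’t opt a site into persistent storage.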

Good to know, thanks!

(And thanks in particular for linking to the original text — while your excerpt is suggestive, the meaning of “similar device” isn’t entirely clear without seeing that the surrounding paragraph is focused on preserving privacy between multiple users who share a single web-browsing device.  I feel like that is still a valid concern today and a reasonable basis for regulations to treat client-side storage slightly differently from server-side storage, even though it’s not most people’s top privacy concern on the web these days and even though this directive doesn’t resolve that concern very effectively at all.)

(I'd love to see the regulations changed here: there's no reason to single out storing data on the client for special treatment…)

I haven’t personally needed to pay super close attention to the e-Privacy regulations but I thought they exclusively focused on cookies as a specific technology?  The web has client-side data storage that is not cookies, and cookies are more privacy-invasive than simple client-side storage because they’re also automatically transmitted to the server on every matching request without any further interaction from either the user or the website.

It seems to me that it’s much easier to respect user privacy when using other mechanisms for client-side storage and for transmitting data from the client to the server.  I’ve also generally found that the cookie-free approaches tend to result in more maintainable and debuggable code, without incurring additional overhead for many use cases.  (An exception:  document-centric use cases where the documents themselves are access controlled generally do benefit from cookies, and low-JS sites have more legitimate use for a non-JS mechanism for storing and transmitting authentication information; but both of those seem to be somewhat niche use cases relative to the current web as a whole.)  Thus, I’m a bit annoyed that there hasn’t been more movement across the industry to migrate from cookies toward other more-targeted technological solutions for many use cases requiring data storage on the client — particularly for those use cases that would be legitimate banner-free uses of cookies according to e-Privacy.
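A toy model of the distinction I’m drawing: a cookie jar attaches stored data to every matching request automatically, while plain client-side storage only leaves the client when code explicitly opts in. This is purely illustrative — the function names and header choices are mine, not any particular API.

```python
# Toy model: cookies ride along on every matching request automatically;
# plain client-side storage is transmitted only by explicit choice.
# Purely illustrative; names are assumptions.

def request_with_cookies(url_host: str, cookie_jar: dict) -> dict:
    """Cookies for the host go out whether or not the page intended it."""
    headers = {}
    if url_host in cookie_jar:
        headers["Cookie"] = cookie_jar[url_host]
    return headers

def request_with_local_storage(local_storage: dict,
                               send_token: bool) -> dict:
    """Stored data leaves the client only via an explicit decision in code."""
    headers = {}
    if send_token and "token" in local_storage:
        headers["Authorization"] = f"Bearer {local_storage['token']}"
    return headers

jar = {"tracker.example": "uid=12345"}
store = {"token": "abc", "ui_theme": "dark"}

# The cookie goes out on every request to the matching host:
assert request_with_cookies("tracker.example", jar) == {"Cookie": "uid=12345"}
# Local storage stays put unless the page explicitly sends part of it:
assert request_with_local_storage(store, send_token=False) == {}
assert request_with_local_storage(store, send_token=True) == {"Authorization": "Bearer abc"}
```

Note that in the second model, non-transmitted state like `ui_theme` never reaches the server at all, which is the privacy property cookies lack by construction.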

Yeah.  Other folks have already mentioned that the degree of enforcement leeway in the U.S. increased when the federal government made artificially lower speed limits a requirement of federal highway funding in the 1970s.  Which I can’t confirm or refute, but does make sense: I imagine that some states that disagreed with the change might have grudgingly set the formal limits in line with the federal policy, and then simply used lax enforcement to allow the speeds that they preferred all along.  I have noticed that it’s often seemed politically unpalatable for officials to stick to a program of stricter enforcement to rein in a particular area’s entrenched driving culture after speed limits were increased in the 1990s, though.

In any case, if folks think that part of the reason for lax enforcement is measurement error then that could be used as an input toward designing a separate maximum speed designation.  One could keep the “speed limit” enforceably defined in terms of the actual vehicle speed, while defining a new parallel “maximum speed” constraint strictly in terms of a measurement taken by law enforcement equipment that passes a particular calibration standard within a particular window of time before and after issuing the citation.  Then you’d end up with one standard that gives the benefit of the doubt on measurement error to the driver and another that gives the benefit of the doubt to the enforcement record, and thus there’s a logical reason for (at least some of) the spread between those two thresholds.  (This legal system might also make it easier to move toward maximum-speed enforcement that works more like existing license-plate-based tolling systems, allowing for a much more pervasive enforcement regime to push the culture toward compliance without the downsides of setting up lots of direct conflicts between irate drivers and law enforcement officers.)
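A quick sketch of how the two parallel standards would differ in practice. The specific limits and the ±3 mph error margin are made-up numbers for illustration only.

```python
# Sketch of the two parallel enforcement standards described above.
# All numbers are made up for illustration. "speed limit" gives the
# benefit of the doubt on measurement error to the driver; "maximum
# speed" is defined directly on the calibrated instrument reading.

def cite_under_speed_limit(measured_mph: float, limit_mph: float,
                           instrument_error_mph: float) -> bool:
    """Cite only if the true speed must exceed the limit even assuming
    the instrument read high by its full error margin."""
    return measured_mph - instrument_error_mph > limit_mph

def cite_under_maximum_speed(measured_mph: float,
                             maximum_mph: float) -> bool:
    """Cite on the calibrated reading itself; no error allowance."""
    return measured_mph > maximum_mph

# With a 65 mph limit, a 75 mph maximum, and a +/-3 mph calibrated error:
assert not cite_under_speed_limit(67.0, 65.0, 3.0)  # could really be 64
assert cite_under_speed_limit(69.0, 65.0, 3.0)      # at least 66 either way
assert not cite_under_maximum_speed(74.0, 75.0)
assert cite_under_maximum_speed(76.0, 75.0)
```

The spread between the two thresholds (here, 65 vs 75) then has a principled component: part of it is just the error margin moved from one side of the scale to the other.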

Probably not, since some U.S. states do post minimum (fair-weather) speeds on Interstate highways.  Section 2.2 of this paper includes a slightly dated map indicating the minimum speeds in each state (where applicable).

Personally, I’m more familiar with folks creating entirely new nonprofit media outlets to focus on reporting in an area that they believe to deserve better coverage (many of which then seek to partner with traditional publishers on specific projects once they have a demonstrated body of work), rather than directly funding that coverage at an existing paper.

I think Religion News Service is basically an older representative of this approximate model, and topic-focused non-profit journalism organizations like this seem to be popping up more frequently as traditional models of funding journalism come under increasing strain. More current examples that appear to fit this approximate pattern include The Intercept for coverage on surveillance and adjacent issues, The Marshall Project for issues relevant to criminal justice reform, and Anthropocene Magazine for climate change solutions.

(Back in 2017 I asked for examples of risk from AI, and didn't like any of them all that much. Today, "someone asks an LLM how to kill everyone and it walks them through creating a pandemic" seems pretty plausible.)

My impression from the 2017 post is that concerns were framed as “superintelligence risk” at the time.  The intended meaning of that term wasn’t captured in the old post, but it’s not clear to me that an LLM answering questions about how to create a pandemic qualifies as superintelligence?

This contrast seems mostly aligned with my long-standing instinct that folks worried about catastrophic risk from AI have tended to spend too much time worrying about machines achieving agency and not enough time thinking about machines scaling up the agency of individual humans.
