The foundation of a more secure web: Google Trust Services (googleblog.com)
323 points by noinsight on Jan 26, 2017 | 164 comments



You can now have a website secured by a certificate issued by a Google CA, hosted on Google web infrastructure, with a domain registered using Google Domains, resolved using Google Public DNS, going over Google Fiber, in Google Chrome on a Google Chromebook. Google has officially vertically integrated the Internet.


What's remaining is: server written in Go, running on a Google server OS, located in a Google-designed server appliance, which is centrally controlled by a Google-designed microprocessor, which is finally manufactured in a Google-owned semiconductor foundry. Oh, and the sand used for silicon purification is sourced from a Google-owned stretch of beach.

I haven't considered the internals of the datacenter though...


Go poke around: https://research.google.com/pubs/papers.html

You will see lots of custom stuff that Google does. Many things are better outsourced to third parties, but many are better done in-house because the solutions just don't exist or cost too much at the volume they need. Some examples:

Network routers for CLOS topology (there are pictures of some of the hardware): https://research.google.com/pubs/pub43837.html

Custom built SSDs (custom firmware/controller with a mix of flash chips from various vendors): https://www.usenix.org/system/files/conference/fast16/fast16...

Then there's the custom TPU chip for running TensorFlow.


Sorry, but it's Clos, which is named after Charles Clos. Pronounced "Cloh". This only bugs me because I've seen this done so many times - specific domain knowledge nerd rant over.


Interesting stuff. I've heard about the TPU chip but not the rest.

They're still eons away from e.g. Intel or Samsung when it comes to vertical integration on the hardware side.

I believe that Samsung is the only company in the world that is capable of building an entire computer (e.g., laptop or smartphone) from scratch completely in-house. They can design software and OSes AND manufacture SoCs, memory, LED panels, etc. It's very impressive.


On that note, it would be really interesting to see the percentage breakdown of whose IP made your phone/laptop/etc.


What I am waiting for is a good shopping experience hosted by Google. I can't for the life of me understand why they still haven't done this, because it would solve so many of their problems w.r.t. ads and purchasing.


Google had one years ago, Google Checkout [1]. It was a PayPal competitor that was built to also provide a good API for running shopping cart experiences. Another one for the list of good products that Google built, ran until they got bored with it, then closed.

[1] https://en.wikipedia.org/wiki/Google_Checkout


Amazon has too much of a head start on that one, but it is ironic that Google seems able to crawl and index Amazon better than Amazon can internally.


It has seemed to me for quite some time that Amazon deliberately tries NOT to return what you are searching for. Oh, they'll give you one or two items that fit your search criteria, but they'll also return lots of stuff that they think you'll be interested in and possibly buy.


I do not think this would be allowed within the European Union.


What? There's a big "shopping" tab at the top of every search results page.


But do you use it? Probably not. Is it usable, are the results good? Nope. Is it a good user experience? Not the few times I've tried it.


AOL is dead, long live AOL! /s

For serious though, there's not really any lock-in here (yet). You could replace everything from the certificate through the public DNS with GoDaddy and things would work just fine. I don't really see Google moving to close the web parts of this.


We still need to make choices that guarantee that remains the case in the future. We need to ensure we don't end up with an environment as "diverse" as email, where most people use Gmail; or Linux services, which are all being rewritten around systemd; or the many other cases where we voluntarily choose a monoculture that can constrain our choices in the future...


> We need to ensure we don't end up with an environment as "diverse" as email, where most people use Gmail

Good luck making something that's both 1) the most convenient and 2) not centralized.


good luck? with that attitude you deserve shitty centralized systems that spy on you. decentralization and ease of use aren't contradictory anyway. remember BitTorrent? used to be very popular, even among less technical users. great UX too. click a magnet link and you have your content in a flash.

the problem is that there isn't a multi-billion dollar business case for decentralized user systems. a lot of server/dc tech is both decentralized and distributed because it is a more robust architecture. again, the reason this doesn't extend to the consumer is because it gives them too much control and clogs up the revenue stream.

this isn't a technical or usability problem, as you claim. this is 100% economic.


Spam blocking is a problem that's very difficult to solve without massive scale. Decentralized email used to suck for filtering spam because individual operators simply didn't have the scale needed to recognize spam reliably.

WordPress solved the spam issue for decentralized blog comments by centralizing it, which gives them the scale to solve it well but also gives them the ability to read probably half the blog comments on the web in real time.


No, it really isn't. Download a copy of SpamAssassin, train it on 400 hand-picked spams from your own mailbox, train it on 400 arbitrary hams as well, and you will have very accurate spam filtering. I was shocked how well it worked; I run my own mail personally and use GMail at work, and the results are (subjectively) indistinguishable. A Dovecot plugin that keeps the Bayesian numbers up to date as I move messages in and out of the Spam and Archive (for ham) folder completes the picture.

To go deeper, it turns out that Bayesian filtering is remarkably resilient. Even attacks which try to poison your filters by including ham-like content in spams are ineffective, because spammers cannot very accurately predict what your own particular flavor of ham is like. (People don't often mail me passages from out-of-copyright Victorian romance novels.)

I find that a few botnet-reducing SMTP heuristics plus Bayesian is sufficient; I dallied with some of the fanciness that compares known-spam hashes with other people, but it turned out not to be necessary.
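For anyone curious what that Bayesian layer is actually doing, here's a toy sketch of naive Bayes token scoring in Python (illustrative only; SpamAssassin's real tokenizer and its chi-squared score combining differ):

  # Toy naive-Bayes spam scorer; SpamAssassin's internals are more involved.
  import math, re
  from collections import Counter

  spam_tokens, ham_tokens = Counter(), Counter()

  def tokens(text):
      return re.findall(r"[a-z0-9$]+", text.lower())

  def train(text, is_spam):
      (spam_tokens if is_spam else ham_tokens).update(tokens(text))

  def spam_probability(text):
      s_total = sum(spam_tokens.values()) + 1
      h_total = sum(ham_tokens.values()) + 1
      log_odds = 0.0
      for tok in set(tokens(text)):
          p_spam = (spam_tokens[tok] + 1) / s_total  # +1 smoothing so unseen
          p_ham = (ham_tokens[tok] + 1) / h_total    # tokens don't zero out
          log_odds += math.log(p_spam / p_ham)
      return 1 / (1 + math.exp(-log_odds))           # squash log-odds to [0, 1]

  train("cheap pills buy now", True)
  train("meeting notes attached", False)
  print(spam_probability("buy cheap pills"))         # > 0.5, i.e. spam-leaning

The poisoning resistance mentioned above falls out of the math: ham-like filler only helps the spammer if its tokens match your particular ham distribution, which they can't observe.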


Maybe it has changed but I have vivid memories of training lots of spam a decade or so ago and still getting half-assed results. Google was the first email provider that really did a good job blocking spam.


There is one underdocumented setting which makes a big difference these days. spamc has a default message-size ceiling of 10K; messages larger than that are passed through unchecked. Spammers have started routinely including images just over that threshold to defeat default installs. Bump it up a bit, and your accuracy will go way up.


I find it disingenuous to compare Google vertically integrating the Internet with systemd, a FLOSS product that objectively solves many of the problems init systems had.


> I don't really see Google moving to close the web parts of this.

Google actively restricts which programs and extensions you can install on a Chromebook and Android, Google restricts what you can publish on their infrastructure, and AMP is also becoming somewhat of a problem.

On Android, Google killed all other push notification services (and tries to prevent people from writing open source libraries for theirs), by only allowing notifications from Google Cloud Messaging to work when the device is saving battery (basically always on recent versions).

After trying to fight this for quite a while, I do really see Google moving to close this.


Funded mostly by you looking at Google Ads.


Nope. Funded mostly by you clicking on Google Ads. Looking is free.


Remember they own DoubleClick, the biggest banner ad platform.

https://support.google.com/dfp_premium/answer/177222?hl=en


And monetized and paid for through google ads. It's your life, packaged.


And you have surely earned my favorite comment of the week. Indeed, Google wants to be the complete stack.


And you can discover such website using Google (the search engine).


So, like Amazon.


There's an Amazon ISP?



To clarify, that's a CA, not an ISP


But as Google winds down the Fiber project, they will also stop being one. Apparently being an ISP adds no value to the rest.


I wonder, with Alphabet's investment in SpaceX, if they see satellite as the future instead of fiber.


I don't think this is a bad thing. Instead of a third-party you trust (or rather, your user-agent trusts) vouching that Google's indeed Google, it's now Google vouching for itself, and you trust them by the virtue that they're Google.

This ought not be surprising: presumably, who better to say that Google is indeed Google than Google itself?

The reason everyone doesn't run a root CA is because it's difficult to coordinate trust between parties that may not know about each other ahead of time, and each and every root CA adds more maintenance burden on the part of trust stores. When I self-sign my cert, I am effectively my own root CA, but I lack a compelling value proposition for everyone to add it to their trust stores, and of course there's the initial difficulty of propagating my key fingerprint over a tamper-proof "out-of-band" channel ahead of time, in a way that assures you it's coming from me.
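To make the "I am effectively my own root CA" point concrete, here's a minimal sketch of minting a self-signed root with the pyca/cryptography package (the name and lifetime are placeholders; assumes a recent version of the library):

  # Sketch: create a self-signed root CA certificate.
  import datetime
  from cryptography import x509
  from cryptography.x509.oid import NameOID
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import ec

  key = ec.generate_private_key(ec.SECP384R1())   # P-384, as Google's roots use
  name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "My Root CA")])
  now = datetime.datetime.utcnow()
  cert = (
      x509.CertificateBuilder()
      .subject_name(name)
      .issuer_name(name)                          # subject == issuer: self-signed
      .public_key(key.public_key())
      .serial_number(x509.random_serial_number())
      .not_valid_before(now)
      .not_valid_after(now + datetime.timedelta(days=365 * 20))
      .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                     critical=True)               # marks the cert as a CA
      .sign(key, hashes.SHA384())
  )

The crypto is the easy part; the hard part is everything after, i.e. getting the result into everyone's trust stores.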

Google, on the other hand, is fairly easy to verify as indeed being Google, considering they just published their public keys on their own website. By having a prior web property that's already trusted, they have bootstrapped the trust necessary for fingerprint distribution, and the rest should follow.

When Google's CAs start issuing certs to non-Google parties, we can revisit the 'eggs-in-basket' question.


> When Google's CAs start issuing certs to non-Google parties, we can revisit the 'eggs-in-basket' question.

I'm not really seeing the problem there, either. Unless we trust Google to do this less than we trust every other root CA. All evidence I've seen points to Google caring very much about internet security.


  I don't think this is a bad thing. ... This ought
  not be surprising: presumably, who better to say
  that Google is indeed Google than Google itself?
The problem is that "connecting to a Google property" almost certainly includes their WiFi access points as well as other networking offerings. Which implies the ability to MITM encrypted traffic from products not controlled by Google (other browsers, VPN clients, etc.).

As stated in this Stack Overflow response [0]:

  Especially #2 is rather nasty, even if you
  pay for a highly trusted certificate, your site
  will not be in any way locked to that
  certificate, you have to trust all CAs in the
  client's browser since any of them can generate
  a fake cert for your site that is just as valid.
  It also does not require access to either the
  server or the client.
I wouldn't be surprised if AMP is also in play for this type of MiTM, but do not know for certain. Android native apps, however, are definitely poised to be compromised.

0 - http://stackoverflow.com/a/14907718


Google engineers have been very heavily involved in the CA/Browser Forum (https://cabforum.org), which sets issuance and trust rules for CAs. One of the things the CAB Forum is currently debating is a set of requirements mandating certificate transparency (CT) and obeying certificate authority authorization (CAA) records in DNS.

If implemented for all of these roots (and I don't see why they wouldn't, given their push on it), CT would create an open, unalterable record of every cert published from all of these roots and their subordinates. CAA, as a complement, would create a method by which you as a domain owner could control which CAs are allowed to issue certs for your domain, removing the ability to man-in-the-middle your domain without your permission.
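For reference, CAA is just a DNS record. A zone entry like the following (domain and values illustrative) tells conforming CAs that only the named issuer may sign for the domain, and where to report violations:

  example.com.  IN  CAA  0 issue "letsencrypt.org"
  example.com.  IN  CAA  0 iodef "mailto:security@example.com"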

The first step for both of these is making them mandatory for CAs (which Google is pushing hard on); once they're out there, it's possible to write plugins that will check CAA and CT records and fail closed if something looks wrong. It's a long way from perfect, but it's definitely a step in the right direction. Given Google's strong push for making those mandatory, I'm far more worried about a lot of the CAs already in my trust store than I am about these.


According to [0], when you say:

  CT would create an open, unalterable record of
  every cert published from all of these roots
  and their subordinates.
That provides no substantiation for Google's push to be a Certificate Authority (CA). It arguably makes a case for Google to be a "log server" (which has its own troubling implications, as that could then (now?) hook Google into the verification process of every certificate issued [1]). But absolutely nothing about CT needs/implies/warrants Google becoming a CA.

  Given Google's strong push for making those
  mandatory, I'm far more worried about a lot
  of the CAs already in my trust store than I
  am about these.
The very authority to which you are appealing is precisely the one which is suspect.

0 - https://www.certificate-transparency.org/how-ct-works

1 - From [0]:

  During the TLS handshake, the TLS client receives
  the SSL certificate and the certificate’s SCT.
  As usual, the TLS client validates the certificate
  and its signature chain. In addition, the TLS
  client validates the log’s signature on the SCT ...


For CAA to work, we need fully deployed DNSSEC. Not just the roots and some resolvers, but all local clients too. Otherwise, there are still weak links in the chain.


But that is just it... "when" is not "if" here. There is a "when" it happens, and then a "when" we find out at some later point. Neither of those is a reasonable "if".

There will be tremendous pressure on them to do it for certain parties, or have it done on their behalf unwillingly, or possibly unknowingly.


> I don't think this is a bad thing. Instead of a third-party you trust (or rather, your user-agent trusts) vouching that Google's indeed Google, it's now Google vouching for itself, and you trust them by the virtue that they're Google.

It's like a self-signed certificate.


If you trust Google, that is. The independent third party no longer exists.


> fairly easy to verify that they're indeed Google, considering they just published their public keys on their own website

This doesn't prove anything, unless you already trust another CA which can vouch that you're looking at Google's site in the first place. Or by some sort of consensus with other people that you see the same key.


"If you are building products that intends to connect to a Google property moving forward you need to at a minimum include the above Root Certificates."

The foundation of a more secure web apparently requires you to trust Google with the entire internet, using their properties as leverage to force it to be so.


Is Google less trustworthy than Go Daddy? Or CNNIC? Or the Hong Kong Post Office? Yes the CA system is broken but framing that as an anti-Google argument seems silly.


Google isn't less trustworthy, but it is far closer to being a monopoly. I would like to bias towards a more decentralized infrastructure.

Especially since Google is US based.


I agree in principle, but currently every CA is a single point of failure for the entire Internet, so adding more CAs makes things worse rather than better. We need something like DNSSEC/DANE to enable actual decentralization (where the Hong Kong post office could only sign Hong Kong domain names and so on).
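For the curious, DANE works by publishing a TLSA record in DNSSEC-signed DNS. A record like this one (domain and digest are placeholders) pins the certificate for a given service; "3 1 1" means match the end-entity certificate (DANE-EE) by the SHA-256 digest of its SubjectPublicKeyInfo:

  _443._tcp.example.com. IN TLSA 3 1 1 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef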


Why should the Hong Kong post office only be able to sign HK domain names? Are businesses in Hong Kong not allowed .com addresses?


I would like to see a single responsible CA for each domain (which are allowed to hierarchically delegate). Country-specific agencies should only be able to sign domains within their country, and .com addresses (which should be reserved for genuinely international sites, though that's a separate argument) should be handled by an international CA that can a) apply some consistent international standard for how domain owners are identified etc. and b) be specifically held accountable for dodgy .com certificates


So... one CA for each domain, leaving no competition? And which unwanted domain will LetsEncrypt be left with, then?

Back in the real world, we have multiple CAs who have accountability for lots of overlapping domains. You can wish for some other non-existent situation, everyone else has to make the best of the situation as it stands.


> So... one CA for each domain, leaving no competition? And which unwanted domain will LetsEncrypt be left with, then?

Domains can compete with each other, particularly given the big opening up of TLDs. We could have actual competition between CAs at the end-user-facing level because it'd be visible to the user who the CA was (the CA and the registry ought to be merged - at the moment they're two parallel sets of infrastructure for doing the same thing), and if particular domains/CAs had poor-quality identity checking users might actually start to notice. As opposed to today, where the only one who knows which CA a domain might be using is the domain owner, and so the incentive largely is for the CA to do as little checking as possible.

> Back in the real world, we have multiple CAs who have accountability for lots of overlapping domains. You can wish for some other non-existent situation, everyone else has to make the best of the situation as it stands.

There's a migration path. Enable DNSSEC/DANE with all CAs authorized for all domains initially, then allow countries / TLD owners to start restricting who can sign certificates for their domains. If Hong Kong moved to requiring only Hong Kong Post Office to sign their domains, we could see how well or badly that model works - if it reduces phishing / spying then other countries will follow the same, if it stifles innovative internet businesses then they'll move away from that. But 150+ entities all having the power to own every site on the internet can't possibly be the right model.


What do you mean by 'monopoly'?

Do you simply mean used by the majority of users?

So any sufficiently good product is a monopoly, assuming its goodness is beyond the threshold needed to be favored by the majority of customers?

What do you want to say about Google's monopoly? Is Google going to hurt others and throttle effective competition? Was there any competition in the CA market at all?


Regardless of your personal opinion, the law and the historical record state otherwise.

To answer your later questions:

Too many essential services under one umbrella.

Not quite.

It's competition stifling.

Yes.

Yes.


In a decentralized model, how do you know who to trust? How do you get google's public key? How do you know that public key can be trusted?


FWIW, I think Google is less trustworthy than the HK Post Office.


Why?


I didn't make a claim about whether they are trustworthy. Google has leveraged their properties to force people to trust them with the rest of the internet, regardless of whether you think they are trustworthy or not.


"Google has leveraged their properties to force people to trust them with the rest of the internet"

Google saw the dismal state of the Internet CA system and is forcing the internet to move to a better one. Forcing people to behave better is a good thing, IMHO. If you think the other way around, there will not be a common ground for discussion between you and me.


Google has had an intermediate CA for many years (GIAG2) so, if you don't trust Google, this doesn't make things any worse for you.


It's basically Google scratching their own itch, and their PR people having to polish this stuff by inserting expressions like "more secure" and "moving forward".

It's disgusting but pretty much corporate life 101.


You may want to look into certificate transparency and who's supporting it.


That's a different issue, and doesn't address what I wrote.


Actually it does address your point about trust; CT severely limits the amount of trust we need to place in any single participating CA, including, now, Google.


I think it's related? Since certificate transparency is a way of watching what's going on with all certificate providers (or at least the ones that use it), an organization that thinks Google's root is up to no good has a way of checking.

It's after the fact, to be sure, but it matters for reputation.


Google has announced an effort to move all CAs to Certificate Transparency, here is a Threatpost piece on the topic - https://threatpost.com/google-to-make-certificate-transparen....

They already log their public certificates to CT, and this will continue given their push for it.


"Trust Google Services"


I have no love for most of the major CAs I've interacted with, but this feels wrong, though I can't quite pinpoint why.

Perhaps just a general feeling that all the internet eggs are being put, one by one, into one single Alphabet basket.


As far as I can tell, this is more like Goog gathering their own eggs in their own basket. They are becoming more and more self-sufficient, but don't really seem that interested in taking over the whole market. As long as it stays that way, I don't mind much.


Ponder the meaning of the words in the title: The foundation of a more secure web. There's a clear implication of using market power to exercise universal control, and (as we've seen with email & browsers) this is not a new behaviour for Google.


They have the most popular browser, mobile OS, search engine. They operate popular public DNS servers too.

They add this cert and they control a vast chunk of the internet.


But there's no real lock-in effect. It's very easy to switch search engines, but rarely anyone does it because Google's simply better than Bing. If someone else comes up with better algorithms I'm sure that people would start switching. But very few people switch just to avoid the monopolist.

Same for Google Chrome. Switching to another browser is a matter of a few minutes (including taking the data with you). But as long as Chrome is at least as good as the others, there's no reason to do so.

I think it's important to recognize the differences between monopolists with lock-in (e.g. Microsoft with Windows and Office) and those without lock-in. Even though I'm also concerned about the amount of data that Google has, I'm sure they'll at some point end up like Yahoo or AOL. The only question is how long it takes.

IMO Microsoft is only around because they could generate revenue from lock-in effects during the years when their new products were really bad. They've caught up now and have done a lot right in recent years (and already benefit from it financially). But if they'd only had a portfolio like Google's, I'm not sure they would still play a role.


> If someone else comes up with better algorithms I'm sure that people would start switching.

DDG has yielded results far superior to Google's for years (IMHO; this is a bit subjective), but I don't see people moving over because Google is simply "what they know".

A better alternative won't make people move, they need further motivation.


This is different from all the things you mentioned by virtue of not offering CA services to anyone (except themselves). So they are not really entering/disrupting any new market with this move.


They control a popular public DNS server, a CA, a global content cache, and the most popular browser and phone OS. They have all the pieces to do an almost seamless MITM.

"We see you haven't signed up yet for AMP, so we've done the work for you"

Yes, I get they wouldn't do this, but the fact that they could is a little scary.


What's even more scary is Google being able to decide what site is valid/secure. Imagine the implications if they limited who could get certificates, or decided the information on your site was harmful.


> this feels wrong, though I can't quite pin point why.

It's unusual for a root CA to be run by a service that otherwise has nothing to do with CA issuance, for the primary purpose of issuing certificates for that service's first-party sites, and not for third-party sites. I can't think of a single other example of a single-purpose root CA like this.

(The announcement mentions that they might use this to operate as a CA for third-party sites as well, but right now it exists to certify first-party sites.)


> I can't think of a single other example of a single-purpose CA like this.

Department of Defense: http://www.disa.mil/enterprise-services/identity-and-access-...


There are numerous examples. Microsoft has its own subordinate CA that it operates for its own certificates. Amazon has its own root CA: https://www.amazontrust.com/repository/. There are more as well.


Amazon is using their CA to issue customer ELB/CloudFront certificates, not just their own stuff, and not even all of that: the cert I see at https://amazon.com is Symantec-issued.


I guess if the NSA/FBI forces Google to hand over the CA keys, they can orchestrate undetectable MITM attacks.

I wonder why browsers won't automatically store the fingerprint of every HTTPS certificate they encounter and throw up a fuss to the user if a certificate changes without any good reason?


Why aren't new certificates for the same domain signed by the old (perhaps expired) certificate (recursively) in addition to the whole CA model?

This proves that whoever has the new cert used to have the old cert. A browser would save a copy of a certificate the first time it visits a site, then when it visits again later it could request the chain of certs back to the first one it ever encountered. Past certs verify new certs; this check wouldn't even involve a CA.

This basically overlays trust-on-first-use security model on top of the CA security model and would make it much more difficult to perform a MITM on sites that the user regularly visits (which are probably the most valuable targets).
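A rough sketch of what that browser-side check might look like (purely illustrative; the pin store and the "signed by previous cert" proof are hypothetical, since the proposal isn't implemented anywhere):

  # Hypothetical trust-on-first-use overlay, layered on top of CA validation.
  import hashlib, json, os

  PIN_STORE = os.path.expanduser("~/.cert_pins.json")   # hypothetical store

  def fingerprint(cert_der: bytes) -> str:
      return hashlib.sha256(cert_der).hexdigest()

  def check_continuity(host: str, cert_der: bytes, signed_by_old: bool) -> str:
      pins = json.load(open(PIN_STORE)) if os.path.exists(PIN_STORE) else {}
      fp = fingerprint(cert_der)
      if host not in pins:
          pins[host] = fp                               # first visit: remember it
          json.dump(pins, open(PIN_STORE, "w"))
          return "first use: pinned"
      if pins[host] == fp:
          return "ok: same cert as before"
      # Cert changed: accept silently only if the old cert signed the new one.
      return "ok: continuity proven" if signed_by_old else "WARNING: unverified change"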


You'd need a backup plan for sites that have transferred ownership, or for sites that needed to revoke the old key due to compromise. And once you have that backup plan, how would you decide whether to care if that additional signature exists?


But ownership change and compromise should be communicated to the user. Maybe an "Unverified Identity" warning shows up for a while and triggers stronger checks in the browser against CT and revocation lists.


Such a MITM attack could only happen once (per CA); doing so would burn a CA, as browsers would then stop trusting it. Certificate Transparency (which many CAs already do and which will become mandatory for all CAs in 2017) ensures that browsers only trust certificates whose issuance gets publicly logged.


It's risky for browsers to do it automatically since it's not something that sites are expecting. Some might be using different certificates on different servers for the same domain, for example.

It's possible for sites to instruct browsers to do it though, but that's opt-in. https://en.wikipedia.org/w/index.php?title=HTTP_Public_Key_P...


It's opt-in by the site, but that's what public key pinning is: https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning
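For reference, HPKP is a response header; the hashes below are placeholders (the spec requires at least one backup pin, and max-age is in seconds):

  Public-Key-Pins: pin-sha256="<base64 SPKI hash>"; pin-sha256="<backup hash>"; max-age=5184000; includeSubDomains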


That's a bit different though, because you must pin an issuing key, not the actual cert.


Because the browser (or the user) has no way of knowing if the certificate changed for a good reason. Certificate pinning tries to tackle this at the CA level but it's not perfect (in a nutshell, browsers know that google.com can be signed only by a certain small subset of CAs).


The effort to prove a certificate is being changed for a good reason should lie with the site owner, so perhaps the standard could build in some sort of sign-by-previous-cert scheme combined with mandatory information fields.

Certificate pinning at the CA level is not that useful.

Google rotates a lot of certs, but I bet 95% of the internet uses one cert per server until it expires. Google could fall in line.


Perspectives for Firefox does this. But Google rotates their certs, thus effectively disabling the approach and forcing most users to trust anything "Google".


This is in fact not all that unusual.


I've seen many examples of non-root CAs for such purposes, but it seems unusual (though not completely unheard-of) among root CAs. I dug through Mozilla's standard certificate bundle, and found very few such certificates. Amazon has one, but they also use that to issue certificates through AWS. Someone elsewhere in the thread mentioned a DoD root CA. The certificate store has some certificates from companies like Dell; no idea what they use those for. Who else has a root CA used exclusively for first-party certification?

In any case, I was only trying to suggest why this seemed odd, based in part on the availability heuristic: whether this is a common practice or not, it doesn't seem like something people (even people who regularly read about changes or events in the CA landscape) would see regularly.


It's actually quite common; another example is Amazon, which operates its own root for its SSL certificate needs. Additionally, there are minimal risk-profile differences between an unconstrained subordinate CA (like GIAG2 or the equivalent Microsoft subordinate) and a root. One could argue the risk is in fact reduced when a large issuer is independent, because fewer entities can negatively impact operations.


I guess I'm not the only one who's noticed the steady stream of "Do x with google" on the frontpage. This is what's unsettling me, they seem to want to seep into every last crack of our lives, starting with all things web.

Edit: At least it felt like a slow stream to me. Search isn't being very cooperative towards my cause right now... The only item matching my memory is https://news.ycombinator.com/item?id=13013494 , but I'll be damned if there weren't others.


On the other hand, they still lag behind Microsoft and Amazon when it comes to cloud efforts. Yes, they clearly dominate search and ads, but the topics discussed here are mostly cloud-services related. And there, most is done via AWS and increasingly also via Azure (esp. enterprises). Google Cloud doesn't dominate that in any way.


Let's hope they stick to publishing all certificates into the certificate transparency logs (Merkle trees).


They should be - they're making certificate transparency logs mandatory in 2017 [1].

[1] https://casecurity.org/2016/11/08/google-certificate-transpa...


If they use their browser dominance to gain the upper hand in the certificate issuance market that seems like a violation of anti-trust law.


If they use their data centers to imprison detractors, that also sounds illegal. It doesn't make data centers sketchy on their own, though.


I love that you can just buy a CA and devices will trust the new owner. That’s not messed up or anything.


WoSign/StartCom got a bit of a smackdown about their stealth acquisition so there is some level of oversight.


Only because they made the mistake of sharing their infrastructure (hence, their quirks) and got caught. I wouldn't call that oversight.

CAs should be required to announce ownership or large administration changes, and trust in said CAs should be revoked upon change unless/until they have been re-audited.


That is effectively how both the Mozilla and Microsoft root store programs work.


How could you design a system that works otherwise? Computer security is always about "this key says", not "this legal entity says".


Certificate is more than just a key. Especially EV certs are pretty close to "this legal entity says". CAB forum could have made a policy that root CAs are non-transferable and should remain in complete independent control of the entity that created it.


Well you could do something like have browser vendors require a legally binding document that the ownership of a CA cannot change without notice (at which point they can reassess the CA). Not that hard actually since there are only a few browsers that matter.


Peer review and regular key rolling should be built into the system.

It should not be based on "root" certificates, but rather on something more like a blockchain for generating security keys, where each roll/session generates a new key.

IANAEE but if building a currency is possible without it being possible to create fake money then it should be possible to protect websites in a similarly decentralised way.


We have peer review in the current CA system in two forms: Certificate Transparency being the more visible one, and the CAB Forum operating more in the background.


It is interesting to see that Google opted for the NIST P-384 curve for the root certs, which will be valid until 2036.

Brian Smith has argued for supporting only P-256, P-384 and Curve25519: https://briansmith.org/GFp-0. That said, Mozilla decided to continue to advertise support for P-521 in NSS (https://bugzilla.mozilla.org/show_bug.cgi?id=1128792).

P-256 and P-384 are widely supported in various TLS libraries (SChannel, SecureTransport, OpenSSL, NSS), whereas Curve25519 doesn’t yet seem present in Microsoft or Apple’s libraries. I suppose with TLS 1.3 support perhaps we may see it implemented?

Unfortunately it seems none of the NIST curves (P-*) are considered “safe” by DJB and Tanja Lange: https://safecurves.cr.yp.to/.


The P-curves are "unsafe" (according to that rubric) in the sense that there are several ways to make mistakes writing curve libraries with them; you have to be more careful using them than you do with Curve25519.


No real problem with Google running their own CA, but I can't help but think that the same people who provide the browser, the search engine and the OS now also provide the certificates that decide who and what to trust.

As much as we might trust Google, shouldn't there be something like separation of powers as a safeguard?


There is: they are intertwined with other regimes:

  * The international banking regime
  * The US legal regime for corporate purposes
  * Many other national regimes for operating purposes


You trust the hw vendor, the os vendor and the browser vendor - and you trust the CA. Google already had two out of four in many cases. You're not safe from the OS vendor by using a different browser, or CA.


It feels like a new age of the internet when we have stuff like Google's private .goog gTLD with domains signed by Google's private root CA. It's not strictly bad (and I'm not complaining), but it feels a bit silly/weird/scary/... .


Most of us ascribe too much value to TLDs. It's an artificially scarce resource, and most people here learned about the Internet when these TLDs were even scarcer.


Well, personally I think that most of us ascribe too little value to TLDs. The point of DNS is to be hierarchical instead of one flat space, so imho all legacy TLDs should have been deprecated immediately the day ccTLDs were introduced, and countries should have been encouraged to maintain a second-level hierarchy (somewhat like .uk had for some time).

But of course that train left the station 30 years ago... in the current landscape I don't mind gTLDs any more than the general mess that is current DNS "hierarchy".


Many websites are not country specific. They started somewhere and are headquartered somewhere, but it is not something users should need to remember.


Using the global TLDs for a global site is fine. But I wish there was a rule against using .com/.org/... domains for sites that are restricted to a single country (e.g. a shop that only ships to the USA should be under .us)


This is the case in nearly all countries except the US. In the UK, nearly all shops will have a .co.uk domain. In Germany .de domains, etc. It's just that .com is believed to be the US equivalent for that, not an international domain.

That's also how large sites handle it. Amazon.com will lead you to the US page, not to an international section (e.g. where you could choose where to go).


I'm going to be 'that guy', but: Great, another page that requires JavaScript to display text. Hey, Google, you know what's more secure? Browsing plaintext sites without JavaScript.


I mean people don't trust Google's motives but I trust the certificate authorities less...

How do we (or Google) know that the CIA and FBI can't create certificates from all the CAs because they have stolen/demanded the Root CA for them?

If I was a TLA I'd want the ability to perfectly MITM anyone.

I think these questions imply that there needs to be a better way to think about security and trust for web endpoints in the days of the state as a bad actor.


> How do we (or Google) know that the CIA and FBI can't create certificates from all the CAs because they have stolen/demanded the Root CA for them?

Well, we know they don't do this on scale, because we'd spot the rogue certs in the certificate transparency logs.
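You can do that spot-check yourself. Here's a quick sketch against crt.sh's JSON interface (endpoint and field names are as observed on that site, so treat them as assumptions rather than a stable API):

  # Sketch: list certs that CT logs have recorded for a domain, via crt.sh.
  import json, urllib.request

  def ct_entries(domain):
      url = "https://crt.sh/?q=%s&output=json" % domain
      with urllib.request.urlopen(url) as resp:
          return json.load(resp)

  for entry in ct_entries("example.com"):
      # An issuer you don't recognize here deserves a closer look.
      print(entry["not_before"], entry["issuer_name"])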


> How do we (or Google) know that the CIA and FBI can't create certificates from all the CAs because they have stolen/demanded the Root CA for them?

Certificate Transparency.


Which is why Google knew that someone had issued a cert for their website: https://security.googleblog.com/2015/09/improved-digital-cer...

Might be part of the reason they are becoming their own CA.


That's why the NSA opened Let's Encrypt, isn't it?


Let's Encrypt respects CT: https://crt.sh/?Identity=%25&iCAID=7395


I think SSL certificates need to be replaced. Security can NOT be designed with the "good guy" in mind. If it can be broken at all, we need an alternative.


The certificates are OK. The issue is the way they are signed and distributed.

Many of the issues with the current PKI are mitigated by certificate transparency.


> The certificates are OK.

No, the certificates are pretty terrible too. Take a look at Peter Gutmann's presentations, or read the SPKI RFCs.

Among other things, certificates conflate identification, authentication & authorisation; they are based on a flawed, centralised, global phone book model; they are ASN.1; in one case, I believe that the meaning of a single flag has been inverted because of a mistake in a (Microsoft?) library that everyone has had to be bug-compatible with.

Some folks think that XPKI is so broken precisely in order to discourage its use (others claim the same thing about IPsec). I don't actually think that's true, but sometimes when I'm banging my head against some stupidity in XPKI, I wonder. I really do.


The main issue I see is ease of MITM for corporate environments. In a corporate environment a trusted root is installed, then an appliance can intercept all SSL certs and re-create the trust chain to introduce their own trusted root so they can read all SSL traffic and your browser says "SECURE". That is broken IMO.


It is very difficult, if not impossible, to protect against attackers who are in a position to install their root cert into your browser. If they can do that, then they can do a lot more besides. At least in the current system such tampering is generally easily detectable.


It's broken that device owners can make them behave according to their intentions?

Shall we eliminate software freedom too? Otherwise companies can just install a fork of Firefox/Chrome with MITM support added back in to their managed endpoints.


Are there any conceivable architectures where whoever owns the computer couldn't MITM themselves? If only for debugging purposes, which would immediately be used by corporate IT for the usual purpose.


This.

Thank you for constructing the words I could not.


The certificates have their own, independent problems (who thinks X.509 is a good format?)


The NSA


it is too hard for me to believe that root CAs have not been compromised when anyone working at these companies could likely take the keys without anyone noticing. I do not think transparency has anything at all to do with it.

I think an encryption solution that cannot be "broken" for decryption is far more necessary than one designed with the "good guy" in mind. I do not find the current one an acceptable solution for critical data.


Not any transparency. Certificate transparency.

https://en.m.wikipedia.org/wiki/Certificate_Transparency


I think we are talking along different lines of thought. I am not concerned with certificate transparency... as the article you point out says, it can take a long time [years] before a compromise is found. The fact of the matter is, if SSL decryption is possible on the fly, we need a different solution for encryption; this includes the use of credit card chips.

An encryption scheme cannot be designed to be breakable and still be expected to keep everything "secure".

EDIT: I am not being allowed to reply.

Excuse me, I think you need to read what I wrote more carefully. I do not care about certificate transparency. I must not be communicating clearly, so I will try again...

I am not referring to the ability to issue a new certificate.

I'm talking about the ability to perform SSL decryption without the end user knowing. You do not need to issue a new certificate to do this; you just need the end user to have trusted a new root CA... which brings us to this article, where another company is operating a root CA. Do you trust everyone in the "trusted root CAs" list on your computer?

Here are some ways to untrust certs [0][1], and another conversation on this [2]

[0]http://unix.stackexchange.com/questions/285784/untrusting-an...

[1] https://blog.filippo.io/untrusting-an-intermediate-ca-on-os-...

[2] https://news.ycombinator.com/item?id=11781915


Please read it more carefully:

"One of the problems with digital certificate management is that fraudulent certificates take a long time to be spotted, reported and revoked by the browser vendors. Certificate Transparency would help by making it impossible for a certificate to be issued for a domain without the domain owner knowing."


You can pretty much trust all the root CAs that provide Certificate Transparency. If such a CA went evil such an event would be detected.


As I see it, there are a number of issues that need to be cleared up before certs are anything more than snake-oil. Unfortunately, the economics of the current market-reality fight (tooth and nail) against doing so. (Note that I'm limiting this to browser certificate handling.)

- We need clients to authenticate servers as well as the reverse.

- Browsers need to allow better user control over certificates. I know the reasons why this isn't provided, and I don't care. Add a "reset to defaults" if you're worried about people breaking their browsers, but a sensible way for users to control whom they trust is important. See next point.

- As of now, we have a pile of registries with near-zero public view into the operations of people with whom we're literally entrusting our bank accounts. Some of these are overtly in the control of nation-states, and many more are assumed to be at least covertly "assisting". Those of us with a problem with that (which should be everyone - even if you trust your friendly neighborhood intelligence agency, what about all the others?) need much better visibility into the operations of the CAs. I'd argue that they shouldn't even be for-profit operations, but that's not a huge point to me - the important points are knowing which ones are incompetent or compromised by their masters (which are the same thing in one sense, but not in others), and there are various paths to get there.

To the browser apologists: is a balkanized web worse than one that cannot be trusted? And are your actions actively harming people by making them believe it is trustworthy when it is not?


Not an apologist, but a Devil's advocate:

Being trustworthy means fulfilling the expectations of the party that relies on us, expectations which are based on what we promise. Currently, the CA system promises very little, the general idea being that your data is safe from thieves and scammers; certainly not that your communications are safe from law enforcement. Therefore, they are mostly trustworthy.

If you start telling people that you can say which CAs are free from interference from intelligence agencies and other top-level snoopers, you're making a much stronger promise, and therefore any flaws in your assessment are much more dangerous.


Hmm, I wonder if Alphabet will spin up a made at Google alternative to Let's Encrypt?


Disclosure: I am the author of that post and Product Manager for this project as well as other related work like Certificate Transparency and Key Transparency.

While I cannot say what Google will do in the future, I can say we are very supportive of Let's Encrypt. We have provided them funding, and I personally act as an advisor to Let's Encrypt.

In short, we love what Let's Encrypt is doing.


I do too, and I also think that if one reasonably well funded, free CA that has full transparency is great, then two would be awesome :D


They might, and then abandon the project a year later, like many other Google products.


All I can say is this - what if I don't trust Google one iota?


Google already has intermediate CAs, so your browser is already trusting Google anyway.

This new CA doesn't change much in that respect.


Remove the Google root certificates from your system once they deploy them.


and don't visit any Google sites, which are the only ones deploying Google certificates.


You're Google, and you still post a screenshot of a spreadsheet instead of using an HTML table to display tabular information in a blog post.


What's next? Chrome marking certificates not signed by Google CA as "Maybe not secure"?


Couple of notes:

* pki.goog does not enforce TLS

* Why use .goog instead of .google?


Some PKI-related services cannot use SSL, due to user agent behaviors. For example, consider OCSP: if fetching an OCSP response required an SSL connection, and the library doing SSL performed an OCSP check to verify that SSL cert, you could end up in an infinite loop.

While it would be ideal for that not to be the case, one has to build out infrastructure that supports the way UAs behave today.
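A toy model of the regress (all names made up) shows why the endpoint has to stay plain HTTP:

  # Toy illustration: if OCSP were fetched over TLS, validation would recurse.
  def validate_tls(host, depth=0):
      if depth > 3:
          raise RecursionError("OCSP check never bottoms out")
      # Validating any TLS connection requires a revocation (OCSP) check...
      return ocsp_check(host, depth)

  def ocsp_check(host, depth):
      # ...and if the responder is itself reached over TLS, that connection
      # needs validating too, and so on forever.
      return validate_tls("ocsp.example", depth + 1)

  validate_tls("pki.example")   # raises: the recursion never terminates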


Unless they launch a satellite far from Earth with locked keys on it, I don't see how this is anything more than a corporate NSL.


Are we going to see preferences in Chrome regarding certificates that link to a different root CA?

/cynical off


i don't see how this is real security, considering trump's election and google's track record of bending to government demands. even more disappointing is their use of a NIST curve.

but then again, government players plague every security system we have.


I have thought for a good while now that DNS is the primary weakness of the current incarnation of the internet as the public knows it. I think perhaps we should ponder the benefits of replacing it, and of using raw IPs more often.

In the meantime, DNSCurve would be a great start, versus the major issues I have found with DNSSEC.


looks like the fox is guarding the hen-house.


I don't trust Google.


The proverbial fox guarding the hen house.




