
2. Schrems I on Safe Harbour and Schrems II on Privacy Shield
Following the Snowden revelations, the adequacy decision covering data transfers to the USA under the safe harbour principles (2000/520/EC) had been questioned even by the Commission itself. However, rather than revoking the decision, the Commission opted to try to strengthen the principles (COM/2013/846), but these strengthening measures remained mostly recommendations in nature (e.g. aiming for new umbrella agreements for law enforcement purposes, expecting better supervision by US authorities of the self-certified companies, limiting the national security exceptions to what is proportionate, extending safeguards of US residents to all EU citizens etc.; see also COM/2013/847).
However, in the same month as the revelations, a private person submitted a complaint to the Irish data protection authority (the Commissioner) regarding a social media company’s transfer of his data to the USA. After the complaint was rejected, the complainant brought an action before the High Court, which referred the case to the Court of Justice of the EU. The direct question was whether data protection authorities were bound by the findings of the European Commission (2000/520/EC) or whether they could examine in detail whether the safe harbour decision ensured adequate protection in line with Articles 7, 8 and 47 of the Charter of Fundamental Rights of the European Union.
Regarding the direct question, in case C-362/14 (Schrems I), the Court of Justice of the EU answered that data protection authorities can examine such claims of non-compliance, but they cannot adopt measures contrary to a decision of the Commission. However, the Court of Justice went further and, in order to give the referring court a full answer (C-362/14 para 67), held that it was also necessary to examine whether the 2000/520/EC decision complied with the requirements of the data protection directive in effect at the time (95/46/EC). In that part, the Court of Justice highlighted that (a) the safe harbour principles were intended for self-certified organisations, and (b) national security and similar requirements have primacy over the safe harbour principles, with no limitation on such interference and without any safeguards for (non-US-resident) EU citizens against it.
Under the case law of the Court of Justice, interference with the fundamental rights of Articles 7 and 8 requires legislation, first, to lay down clear and precise rules governing the scope and application of a measure and imposing minimum safeguards and, secondly, to be limited to what is strictly necessary. Furthermore, Article 47 requires that individuals have the right to an effective remedy before a tribunal, and that they be able to pursue legal remedies in order to have access to personal data relating to them, or to obtain the rectification or erasure of such data.
After the invalidation, the High Court of Ireland referred the issue back to the Irish data protection Commissioner, whose investigations found that Facebook was by then using the standard contractual clauses (“SCCs”) as published by the European Commission for data transfers. For that reason, the Commissioner also asked the complainant to reformulate his complaint.
After the invalidation of 6 October 2015, the European Commission adopted, within a short time, a new adequacy decision (Commission Implementing Decision (EU) 2016/1250), which explicitly contained provisions on the use of personal data by US public authorities as well, including limitations on use for national security purposes (such as Presidential Policy Directive 28) and references to individual redress (cf. recitals 111-124 of CID (EU) 2016/1250). The US government committed to creating a new oversight mechanism, independent from the US “Intelligence Community”, called the Privacy Shield Ombudsperson.
The reformulated complaint of December 2015 now also covered the SCC mechanism for data transfers and claimed that even these measures were not in line with Articles 7, 8 and 47 of the Charter. This time, the Commissioner was of the same opinion: personal data of EU citizens would be processed by the US authorities in a manner incompatible with Articles 7 and 8 of the Charter, and US law still did not provide citizens with legal remedies compatible with Article 47 of the Charter. The Commissioner found that the SCCs are not capable of remedying this defect, since they confer only contractual rights on data subjects against the data exporter and importer, without binding the United States authorities.
At the request of the Commissioner, the High Court of Ireland again referred the question to the Court of Justice, highlighting its finding that non-US persons are still not granted effective (personal) remedies (neither under FISA Section 702, that is, 50 U.S. Code § 1881a, nor under the presidential policy directive mentioned above, PPD-28). Such persons do not have access to litigation based on the Fourth Amendment of the US Constitution, and in relation to surveillance measures it is almost impossible for claimants to prove that they have a sufficient interest in such matters (locus standi) for the court to decide the merits of the dispute. Furthermore, the High Court was also of the opinion that the newly introduced Privacy Shield Ombudsperson cannot be considered a tribunal under Article 47 of the Charter.
In the next decision, C-311/18 (Schrems II), the Court of Justice also examined the validity of the Privacy Shield decision, again “in order to give the referring court a full answer” (para 161), and found it invalid.
The main reasons for the invalidity were the failure to comply with either the proportionality requirements for limitations under Article 52(1) of the Charter or the requirement under Article 47 of the Charter of a fair and public hearing by an independent and impartial tribunal.
The first condition was not met because surveillance programmes under Section 702 of the FISA or E.O. 12333 (even with PPD-28) contain no limitations on the implementation of such surveillance programmes with regard to non-US persons, and the US admitted that even PPD-28 does not grant data subjects any rights actionable before US courts.
The second condition was not met because submitting a claim to the Privacy Shield Ombudsperson was not found to be equivalent to bringing legal action before an independent and impartial court (in order to have access to one’s personal data, or to obtain the rectification or erasure of such data). The political commitment by the US Government was that any element of the intelligence services would be required to correct violations detected by the Ombudsperson, but the Court of Justice found that no legal safeguards had been implemented to back this promise. The Ombudsperson does not have the power to adopt decisions binding on the intelligence services and is not independent from the Secretary of State.
3. Is it really different this time?
Let’s now take a look at the new Executive Order. In its section 2, it provides wide-ranging principles for signals intelligence activities, regarding their legal basis, and stresses that they are always subject to appropriate safeguards, proportionality etc. It also gives an explicit list of objectives (which may of course later be updated), as well as a list of prohibited objectives, including “suppressing or restricting a right to legal counsel” or collecting trade secrets to give US companies a competitive advantage. Regarding the privacy safeguards, somewhat generic safeguards are clearly present in section 2(c), requiring a prior determination for a specific signals intelligence collection activity and the necessity of such collection to advance an intelligence priority, as well as elements of proportionality (“shall consider the availability, feasibility, and appropriateness of other less intrusive sources and methods for collecting the information necessary”, “feasible steps taken to limit the scope of the collection to the authorized purpose”).
With regard to bulk collection, more detailed requirements are included (section 2(c)(ii)), including a finding that the same purposes cannot be accomplished with targeted collection, and that “reasonable methods and technical measures” should be applied “in order to limit the data collected to only what is necessary”. For bulk surveillance, the list of objectives is shorter, but it still includes such widely applicable objectives as “protecting against cybersecurity threats created or exploited by … a foreign government, foreign organization, or foreign person”.
Besides these safeguards, a definite redress mechanism covering all signals intelligence activities is also included (section 3). The redress mechanism has two levels. The first level of redress starts with the receipt of “qualifying complaints” sent by public authorities in a qualifying state. However, this does not mean that only public authorities may submit such complaints; rather, it means (under the definitions) that public authorities of qualifying states (including those of the EU) will first have to verify the identity of the complainant and that the complaint satisfies the requirements of the EO.
The complaint will be investigated by the Civil Liberties Protection Officer (CLPO) of the Office of the Director of National Intelligence in the US, who is entitled to have access to all the necessary information from the intelligence community, and is expected to document the whole review, including factual findings, and to create classified reports. There are now explicit provisions in the EO that the decisions of the CLPO will be binding on the intelligence community and its agencies. The independence of the CLPO from the Director of National Intelligence is also clearly stated in the EO, including a prohibition on removing the CLPO other than for misconduct etc.
It’s interesting to highlight that the complainant will not receive any information regarding the intelligence activities they were subject to, including whether they were subject to such activities at all: the reply will only state whether any violations were identified and remediation ordered, or not.
Either the complainant or the intelligence community element may apply for a second level of review, a review by the newly established “Data Protection Review Court”. This court will be composed of judges who were not employed by the government at the time of their selection and who, once appointed, may not be removed other than in accordance with general rules of judicial conduct. The court shall operate in panels of three judges and will mostly work on the basis of the complaints and the documentation prepared by the CLPO. Similarly to the CLPO, this court is also limited in what it may tell the complainant.
In summary, we can say that, in contrast with the Privacy Shield and its annexes of letters, the new EO has really clarified the legal basis of and the safeguards for such intelligence activities. However, despite all the best intentions and all the efforts of the European Commission in a new adequacy decision, we may still end up in a situation where the Court of Justice again finds these safeguards insufficient. It is not trivial how the above redress mechanism in the new EO can ensure access to data by the data subjects as required by Article 8(2) of the Charter, or whether such short answers by the CLPO/DPRC will amount to a right to an effective remedy and a fair trial.
But we also have to keep in mind some other aspects:
a) this kind of access to classified data is not unconditionally ensured under national law in the EU either with regard to intelligence activities;
b) compared with the lack of a clear definition of national security in EU countries (see the CCBE Recommendations on the protection of fundamental rights in the context of ‘National Security’, pp. 12-16) and the lack of transparency of such activities in the EU, the attempt of this US EO to give a list of objectives for signals intelligence activities and to provide safeguards is clearly a step in the right direction.
Even if the EU is not a federal state like the US, from the viewpoint of the protection of fundamental rights under the Charter, at least a modest degree of harmonisation and transparency could be ensured with regard to limitations related to national security as well.
Being a lawyer means being obsessed with the right definition. It’s no surprise that the term “cloud computing” begs for definition, because even a cursory practical acquaintance with the term shows that many people understand quite contradictory things under the same term.
It’s also important for lawyers to note that the meaning of cloud computing has extensive and potentially costly consequences for regulatory reasons. For example, for banks, a special regulatory regime was introduced by the European Banking Authority (EBA/REC/2017/03) for cases where a bank decided to outsource a service to cloud service providers. Since 2019 (EBA/GL/2019/2), there are no longer specific regulatory provisions on cloud services for banks at the EU level (other than recording this fact), but the terminology remains in place at the EU level, and a number of special national regulatory requirements still apply to cloud services only.
For defining cloud computing, the EBA used the definition of the standardisation body of the United States of America, NIST (SP 800-145). This is the definition used by Eurostat as well.
The Communication from the Commission “Unleashing the Potential of Cloud Computing in Europe” (COM/2012/0529) also refers to this definition (preamble 5).
There are no further definitions of cloud computing at the EU level that I know of. What’s more, even the definition used in Saudi Arabia by the Communications and Information Technology Commission (CITC) in its Cloud Computing Regulatory Framework (section 2-2-1) follows the NIST definition.
In summary, it’s hard to find any detailed definition other than the NIST one. There are regulatory acts that, for their own purposes, clearly use a cloud computing term different from the NIST definition, but they never make any attempt at a definition, such as the Digital Single Market Copyright Directive (see the definition of online content-sharing services).
At first glance, the NIST definition in SP 800-145 seems rather simple, yet it is still confusing. It defines “cloud computing” as a “model” that enables access to computing resources in accordance with a list of specific characteristics. However, at the same time, it also defines the cloud computing model as a composite of three further concepts: “five essential characteristics”, “three service models” and “four deployment models”.
This double definition is difficult to understand, but for us, the first definition seems more salient for the moment. This first definition is the same as the “five essential characteristics”: the five essential characteristics spell out in a bit more detail those specific characteristics that make a model a cloud computing model. The next most important question is: what does it mean that cloud computing is defined as being a model? How can we use the definition of cloud computing to decide whether a given computer service or solution is cloud computing or not? The answer comes from the rest of the first definition: a “model for enabling … access to computing resources”. (Here, resources mean applications, servers, storage, processor capacity etc.) That is, model has a very generic meaning; it stands for the standard of “cloud computing” itself: if we have to decide whether something is cloud computing or not, can this something be called cloud computing? Is this a cloud computing service, is this a cloud computing strategy, can this solution be called a cloud computing solution, is this product really a cloud computing product? (Actually, later NIST publications replace the term model with the term service.)
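To make this double definition easier to survey, here is a minimal sketch of the SP 800-145 taxonomy as plain Python data; the dictionary layout is my own illustrative choice, and only the terms inside it come from the NIST publication.

```python
# A sketch of the SP 800-145 "double definition" as plain Python data.
NIST_CLOUD_MODEL = {
    "essential_characteristics": [   # all five must be present
        "on-demand self-service",
        "broad network access",
        "resource pooling",
        "rapid elasticity",
        "measured service",
    ],
    "service_models": [              # how the capability is offered
        "Software as a Service (SaaS)",
        "Platform as a Service (PaaS)",
        "Infrastructure as a Service (IaaS)",
    ],
    "deployment_models": [           # who the infrastructure is provisioned for
        "private cloud",
        "community cloud",
        "public cloud",
        "hybrid cloud",
    ],
}
```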
Of the five essential characteristics, only one seems self-evident: we can only talk about cloud computing if the model relies on network access available from many places (“broad network access”). The other four characteristics are both difficult to understand in detail and confusing: many current usages of the term cloud computing do not contain any reference to these characteristics, and furthermore, in many cases it is all but impossible for the users themselves to decide whether a given service complies with these characteristics.
For example, “resource pooling” says that the computing resources in all cases have to be pooled “to serve multiple consumers using a multi-tenant model”. But what about cloud computing uses that rely on the deployment model called a “private cloud”? The answer lies in the definition of the “private cloud” deployment model, which is about a single legal entity (organisation) with multiple “internal consumers”: different departments within the same entity are considered different consumers of the service. This is clearly not in line with an average lawyer’s expectations of what “multi-tenant” or “multiple consumers” would mean.
There are similar twists in the other essential characteristics as well, so answering most of the questions needed to decide whether a service is a cloud service is not something that the cloud user (consumer) can do without the help of the cloud provider.
(It is also worth noticing that besides the cloud computing definition based on these five essential characteristics, other definitions of the NIST in the same paper might also have regulatory importance, such as where a certain regulatory regime does not apply when the cloud service is a private cloud.) It is for this reason that NIST has issued an explanatory publication as well, NIST SP 500-322, about the evaluation of cloud computing services. This publication also acknowledges that in many cases vendors label their offerings as cloud services even if they do not fulfil the five essential characteristics. The publication gives both generic guidance beyond that of NIST SP 800-145 and useful examples of how each characteristic should be understood in practice.
Software runs on computers, whether the computer running the software is intended to serve a single person working at her desk or multiple other computers connected all across the Internet. Software may work in such a way that it serves browser software (or other software using the same protocol that a browser uses): this is what makes a “web application”. So the purpose of a web app is usually to serve remote users connecting to it by way of their own browsers, from a remote location. The web app itself may be very similar in function to an application installed on a local device (e.g. running natively on a mobile telephone or a desktop computer), meaning that a web app is not necessarily any different in use to a user, other than that the user does not have to install it on a local device.
The user accessing the web application does not have much control over the web app or the infrastructure hosting it. Also, if a user uploads a document to the web app, the uploading user’s effective control over that document will also definitely diminish. Often, web apps are created because they are intended to serve multiple users at the same time, and the costs of maintenance and support are significantly decreased this way.
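To make the notion concrete, here is a deliberately tiny web application using only Python’s standard library. It serves any browser connecting over HTTP, but, as argued below, this alone does not make it a cloud app; the handler name and port are arbitrary choices for the illustration.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# A deliberately tiny "web application": it serves any browser connecting
# over HTTP, yet it runs on a single machine with no elasticity, no resource
# pooling and no metering, so it is a web app without being a cloud app.
class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"Hello from a web app that is not a cloud app\n")

if __name__ == "__main__":
    # Serve on localhost:8000 until interrupted.
    HTTPServer(("localhost", 8000), HelloHandler).serve_forever()
```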
Despite all the similarities, such a web app intended to be used by many users (resource pooling) is not necessarily a cloud app as well. A web app is only a cloud app if it also fulfils all the required characteristics of “cloud computing”, including the following (see the sketch after this list):
a) a consumer may use such a web app without specific human interaction (when requesting the service; “on-demand self-service”);
b) the infrastructure running the web app is set up in a way that enables rapid scaling of the resources used by the web app, without the consumer noticing a thing (“rapid elasticity”); and
c) use of the web app by consumers is measured, at least by the number of simultaneous users (monitored, controlled and reported; “measured service”).
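As a rough illustration of this checklist, the sketch below (with invented attribute names) tests a web app against the conditions above together with the remaining essential characteristics; in practice, as noted earlier, most of these facts can only be confirmed with the provider’s help.

```python
from dataclasses import dataclass

@dataclass
class WebAppFacts:
    # Invented, illustrative attributes; in practice most of these facts can
    # only be confirmed by the provider, not observed by the consumer.
    self_service: bool          # provisioned without specific human interaction
    broad_network_access: bool  # reachable from many places over the network
    multi_tenant: bool          # resource pooling across multiple consumers
    rapid_elasticity: bool      # scaling infrastructure behind the app
    measured_service: bool      # usage monitored, controlled and reported

def is_cloud_app(app: WebAppFacts) -> bool:
    """A web app is a cloud app only if all five characteristics hold."""
    return all([
        app.self_service,
        app.broad_network_access,
        app.multi_tenant,
        app.rapid_elasticity,
        app.measured_service,
    ])

# A web app on a single, manually managed server fails the test:
print(is_cloud_app(WebAppFacts(True, True, True, False, True)))  # False
```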
Of these conditions, for a web app to ensure “rapid elasticity”, it has to be installed in an environment designed to ensure that capability. It is not enough that a single webserver is up and running and answers requests from remote users, thus making the web app available for their use. Rather, the web app must be deployed on a configuration of devices (including hardware and software: hypervisors, virtual machines, virtual data storage and other computing resource abstractions) that makes this elasticity possible. Yet another explanatory NIST publication, SP 500-292, calls this a “resource abstraction and control layer” (sitting between the physical resources and the service layer that defines the functions offered to the consumers of the cloud service). Without such a layer present, the web app will not be a cloud app either.
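As a conceptual, heavily simplified sketch of what such a control layer does, the toy loop below resizes a pool of virtual instances so that capacity follows a measured demand signal; all names and numbers are invented, and real layers consist of hypervisors, schedulers and orchestrators rather than a few lines of Python.

```python
import math
import random

# A toy control loop suggesting what a "resource abstraction and control
# layer" does conceptually: it watches a measured signal (here, simulated
# concurrent users) and resizes a pool of virtual instances so that capacity
# tracks demand.
USERS_PER_INSTANCE = 100  # assumed capacity of a single virtual instance

def desired_instances(current_users: int) -> int:
    # Rapid elasticity: capacity follows the measured-service signal.
    return max(1, math.ceil(current_users / USERS_PER_INSTANCE))

instances = 1
for tick in range(5):
    users = random.randint(50, 1000)  # the "measured service" input
    target = desired_instances(users)
    if target != instances:
        print(f"tick {tick}: {users} users, scaling {instances} -> {target}")
        instances = target
    else:
        print(f"tick {tick}: {users} users, keeping {instances} instance(s)")
```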
A common misunderstanding is that any service hosted away from the user’s premises and accessible remotely is already cloud computing. This is a misunderstanding from multiple aspects:
a) a cloud computing service may be operated at the consumer’s premises (onsite cloud computing) or outside of it (outsourced cloud computing, as clarified in SP 500-322);
b) location independence of a service is an important factor for the characteristics of “broad network access” and “resource pooling”, but it is not sufficient in itself, and the other essential characteristics have to be fulfilled as well, including “rapid elasticity”;
c) a single device running a remotely accessible service will probably never meet the “rapid elasticity” and “measured service” requirements;
d) similarly, the multi-tenant requirement arising from “resource pooling” is also highly unlikely to be complied with if there are not multiple consumers, even internal ones such as in-house departments (which is necessary for a private cloud as defined in NIST SP 800-145).
Finally, an even more confusing problem may be caused by the cloud service provider itself. A community cloud is defined by NIST as follows: “The cloud infrastructure is provisioned for exclusive use by a specific community of [cloud consumers] from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations).” If a cloud service provider has a service that can be understood as a cloud computing service (model), merely defining a specific cloud product for a specific group of professionals (such as lawyers) will not make it a community cloud. The cloud service will be a community cloud offering only if the cloud infrastructure used for that cloud computing service serves specifically just that group. So if a new cloud product targets a specific group of professionals, but at the same time that “cloud product” differs only in terms of pricing, availability, applicable contractual provisions and e.g. support, and otherwise uses the same infrastructure as the old cloud product, serving everyone outside the specific group of professionals the same way, then it will not be considered a community cloud.
It is evident from the above that it is highly improbable that lawyers can find out for themselves whether a service provider’s offering is indeed a cloud service or not. Often, the providers themselves will not know either, and will probably use the term cloud computing based on commercial considerations.
Therefore, it might be advisable to consider whether “cloud computing” is the appropriate term on which to base regulation for lawyers.
If we decide to use a different term, we have to agree on what this more expressive term should be, such as “lawyers using information society services”, “lawyers using outsourced computing services” or “lawyers using online platforms” etc. This depends on which risks we want to focus on (see below).
If we go on with cloud computing, we have to make it crystal clear that we use a meaning of the term “cloud computing” that is different from that of NIST.
So both approaches in some way require that we ourselves give a new meaning to “cloud computing”.
Regarding the risks lawyers face, all the new risks actually relate to decreased control over data and IT tools. But using any kind of IT tool (or any complex technology) necessarily leads to some decrease in control, owing to the methods of automation or the expertise needed to ensure appropriate use of a given technology, and a certain degree of training and competence will have to counterbalance this.
With cloud computing, the problem is that it is very difficult for lawyers to notice and understand the ways in which their control is decreased. Cloud service providers may not be objectively transparent in this respect (sometimes due to a lack of knowledge of the tools they themselves use and offer, sometimes due to a lack of knowledge of what lawyers need to understand or would expect them to disclose before the lawyer has to choose a service). Sometimes lawyers have no choice in selecting the right cloud service, because the selection has already been made for them by their client, a counterparty, other regulatory choices or other dominant technological means. And sometimes the solution is just too complex for the lawyer to be able to make a free choice.
a) Geographical distance from the physical location of IT resources (tools, software): even if the site of operation is a dedicated place (such as a specific secure data center), lawyers may no longer have the special protection they have over documents held at their own premises.
b) Access to secure data centers is highly regulated: a direct consequence of having more physical and administrative controls in place is that actual access by users takes longer; of course, increased security inevitably restricts how many identified persons are able to access the resources, and this also necessarily affects how inspections may be carried out at the premises and by whom. The more users use a specific secure data center, the more unlikely it becomes that these users will be able to access their resources without compromising operational security.
c) Abstraction from the physical devices (virtualisation): virtualisation enables digital (virtual) sharing of the physical infrastructure. It has profoundly changed how servers work since the early 2000s, thanks to the commoditisation of servers and the increase in their capacity. It has become a dominant force of transformation first in back-office IT tools, later in desktop computing and development. This resulted in considerable savings in management and ownership costs, but at the same time also pointed toward building more and more secure data centers, with robust disaster recovery plans etc. However, economies of scale can only be achieved if the physical devices subject to virtualisation are shared between more users, resulting in a considerable loss of control for these users and the need for a separate provider of the aggregated virtual machines.
d) Abstraction of the physical location of IT resources: the introduction of virtualisation, the homogeneity of virtual resources and improvements in network capabilities enable small and large cloud providers to manage their IT infrastructure more flexibly, due to fewer constraints on the physical location of that infrastructure. While this also creates more efficiency and cost savings, it means that users are no longer able to control where their IT resources are provided from. Considering that many regulations and laws are location-specific, this abstraction creates yet another layer of problems for lawyers.
e) For the users, simpler to configure and use: relying on a virtual host is simpler for the user, because they don’t have to deal with the peculiarities of the physical system and configure it. For many users, it is desirable to simplify the environment in which they run their applications, hiding the complexities of the IT when the applications themselves are complex enough. Thus, rather than installing directly on virtual hosts and configuring all the details, for many users it is better to run applications on higher-level computing platforms, in containers or even in browsers (such as web apps). This makes the operational environment for the application easier to start and, in most cases, more reliable. The downside is that should any problems arise, the user has less possibility to identify the causes and correct them. The higher-level the platform, the more the user will be at the service provider’s mercy, and the user may not be able to do anything about this (regardless of the user’s competence).