We do not believe that this is going to be another steep correction followed by an equally swift V-shaped recovery like we saw at the outset of the pandemic.
Sequoia Capital is famous for the memos and presentations it shares with its portfolio companies during macroeconomic crises. In 2008, that took the form of a 56-slide survival guide to the Great Recession entitled “R.I.P. Good Times.” In early 2020, as the pandemic began upending the economy, Sequoia sent its founders a grim memo entitled “Coronavirus: The Black Swan of 2020.” Over the years, the venture firm behind Google, Apple and Airbnb has developed a reputation for these crisis-era warnings. Its latest, shared with 250 founders on May 16th, is called “Adapting to Endure.” In other words: don’t expect a recovery from the current market downturn to happen quickly.
Its latest warning to its portfolio companies takes the form of a 52-slide presentation in which:
Sequoia describes the current combination of turbulent financial markets, inflation and geopolitical conflict as a “crucible moment” of uncertainty and change;
Sequoia tells founders not to expect a speedy economic bounce-back akin to what followed the start of the pandemic because, it warns, the monetary and fiscal policy tools that propelled that recovery “have been exhausted.”
The firm suggests founders move fast to extend runway and fully examine the business for excess costs. “Don’t view [cuts] as a negative, but as a way to conserve cash and run faster,” the firm wrote.
You can view the deck here and it is worth a skim to see what a top-tier Silicon Valley VC thinks about the current macro climate.
For over a decade I have been thinking about further study, whether a master’s, an MBA, an LLM, or even a PhD. For various reasons I haven’t pressed the button on anything, although in 2019/20 I did get close.
I had just read a brilliant book called “The Prosperity Paradox” by Clayton Christensen, which discusses why so many investments in economic development fail to generate sustainable prosperity, and how investing in market-creating innovations can create lasting change.
I was immediately hooked, although I was biased: I had focused my undergraduate business honours thesis on an earlier book by the same author, “The Innovator’s Dilemma”.
I found a few universities with suitable programmes and sent off applications. In the end I didn’t proceed with the offers, but I thought it would be worthwhile to share the summary application and the proposed research topic, approach and key areas to investigate. I may still explore this topic in the future, albeit in a different way (e.g. research, articles, consulting).
Proposed PhD Research title
“Developing Market-Creating Innovations That Drive Prosperity in Emerging Markets”
The historic approach to improving outcomes and prosperity in emerging economies has typically focused on ‘poverty alleviation’, whereby private-sector companies and start-ups exploit existing markets at the top or ‘bottom of the pyramid’ (Prahalad 2006), or on initiatives that ‘push’ international aid, grants, loans, outsourcing, or incremental (‘sustaining’) improvements to existing offers for established customer bases.

More recently, a number of leading management researchers, led by Clayton Christensen (2019), have argued that more successful approaches may lie in creating, or ‘pulling in’, new market innovations that enable significant numbers of non-consumers to easily and affordably find a product or service that helps them overcome daily struggles or solve an important problem. Pursuing this strategy (distinct from other types of innovation, including ‘sustaining’ and ‘efficiency’ innovations), established firms and founders see opportunity in the struggles of their respective frontier markets by targeting non-consumption in the broader market, creating not just products and services but entire ecosystems, enabling infrastructure, networks and jobs that promote stability, prosperity and sustainable economic growth.

Despite this opportunity, in 2016 alone the OECD estimated that $143 billion was spent on official development assistance. Christensen (2019), however, asks what might happen if this were instead channelled into direct market-creation efforts in developing countries, even where the circumstances seemed unlikely. Some examples of market-creating innovations (MCIs) are listed below:
M-PESA: A mobile money platform that enables the storage, transfer and saving of money without owning a bank account;
MicroEnsure: Affordable insurance for millions of people living on less than $3 a day;
Celtel: A pay-as-you-go mobile phone service that enables customers to purchase cell phone minutes from as little as 25 cents;
Galanz: An inexpensive microwave oven for the average Chinese citizen;
Tolaram: A tasty, inexpensive, easy-to-cook meal in Nigeria that can be prepared in less than three minutes;
Grupo Bimbo: Affordable, quality bread for Mexicans;
Ford Model T: An affordable car for the average American in the 1900s;
My PhD research will seek to build on these themes and the work of Christensen (2019) and others (Prahalad 2006; Auerswald 2012; Quadir 2014) to better understand the following key questions: How do established firms and start-ups successfully build market-creating innovations (“MCIs”) in emerging markets? Why are some firms successful and others not?

The research will address gaps in understanding highlighted by Christensen (2019): further defining the process by which new markets are created, the characteristics that set market-creating innovators apart, and the role of non-consumers (the ‘non-consumption economy’) in this process. In addition, my research will improve understanding of the relative importance of external factors that facilitate (or inhibit) success, including government, ecosystems, NGOs, investors, skilled labour, infrastructure, networks, and partners. The extent of the benefits that MCIs deliver for society, in terms of driving inclusive, sustainable and prosperous development across sectors including education, health, financial services, energy, and communications, will also be analysed.

Finally, the findings will deliver practical guidance, frameworks and insight for a wide range of international companies, entrepreneurs, governments, investors, think-tanks, and NGOs that pursue (or are looking to pursue) strategies and investments in emerging markets, or that wish to apply the learnings in more developed contexts.
C.K. Prahalad, The Fortune at the Bottom of the Pyramid: Eradicating Poverty Through Profits (Upper Saddle River, NJ: Prentice Hall, 2006)
Philip Auerswald, The Coming Prosperity: How Entrepreneurs Are Transforming The Global Economy (Oxford University Press, 2012), 58
Provide a statement of your research interests and intended research topics:
My research interests focus on how organisations innovate (across processes, practices, products, partnerships) in various contexts, including geographical (e.g. emerging or developed markets), new markets (e.g. non-consumption economy, consumer insight, go-to-market), operational (e.g. outsourcing, resource allocation, incentives, portfolio management, projects, change), offerings (e.g. new product development), technological (e.g. emerging technology), competitive (e.g. start-ups, business models), strategic (e.g. organic, M&A, JVs), human (e.g. leadership, culture, talent, skills), ecosystems (e.g. networks, partnerships, knowledge, public-sector), and sectoral (e.g. education, health, financial, energy).
I will use my many years of relevant professional experience working across most of the above topics (whether as an academic, lawyer, consultant, or founder) to ensure that the PhD research makes a substantial contribution to the academic research (see research questions), and provides practical insight for critical strategic and investment challenges for industry stakeholders (e.g. multi-national companies, investors, public sector, NGOs, etc).
My PhD research will seek to build on the themes of my research interests, and the work of Christensen and others to help answer the following question: How do established firms and start-ups successfully build market-creating innovations (“MCIs”) in emerging markets?
What is the process by which these new markets are created?
What is the MCI development process within established and new (start-up) firms? For example, opportunity identification, development, investment, launch and scaling;
Why are some firms and efforts successful, and others are not?
What role do non-consumers (the ‘non-consumption economy’) play in this process?
What are the qualities that set market-creating innovators and firms apart? For example, the ability to identify possibilities where there seem to be no customers;
What are the characteristics of the most successful (and unsuccessful) MCIs? For example, business models, attributes, targeting non-consumption, value networks, ecosystems, partnering;
What are the most important internal and external conditions which facilitate or inhibit this process?
What commonalities exist across nations, sectors, firm size, age, or other variables?
What is the role of other key stakeholders in MCI development? For example, government, NGOs, investors, ecosystems, networks;
What are the key benefits for society, sectors (e.g. education) and stakeholders (e.g. government) from MCIs which deliver inclusive, sustainable and prosperous development?
What are the future implications for private and public sector organisations (e.g. companies, government, investors, NGOs etc) who wish to facilitate the future development of MCIs, or take the learnings into other developing (or developed) markets?
The diagram below describes the research focus areas and the relevant questions to be asked:
Some anticipated research parameters may include a focus on:
Products/services and ventures which create new markets (“MCIs”) and benefits for large segments of the population, as opposed to product improvements (“sustaining innovations”) or efficiency gains (“efficiency innovations”).
Sectors that play key roles in prosperity development including education, health, financial services, communications, food and water, energy, and technology;
Data collection across a wide selection of geographies, including the BRIC nations and both developing and developed nations (e.g. the US), although the feasibility of this may prove problematic, requiring a narrower approach (e.g. focusing on a few nations);
A time horizon of MCIs created post-2000 to capture more recent examples of MCI development;
An inter-disciplinary research approach given the wide-ranging research topic, building on academic researchers in fields including strategic management, strategic marketing, disruptive innovation, new product development, consumer insight, technology and operations management, innovation, organisational behaviour, leadership, emerging market strategy, international and economic development, and public policy;
Hybrid data collection strategy: whilst the research scope (e.g. companies, countries, sectors) and data collection strategy have yet to be defined, it is expected that a hybrid approach mixing qualitative and quantitative methods with primary and secondary research will be the most appropriate. For example, face-to-face interviews, online surveys and case studies can help collect primary data to define firms’ MCI development processes. However, assessing firm performance and development benefits (e.g. social, economic, and sectoral) will require quantitative analysis of public records and databases, as well as any additional internal data from private companies or government agencies.
Examples of successful market-creating companies include Celtel (Africa), Grameen Bank (Bangladesh), M-Pesa (Kenya), MicroEnsure (Africa), Jio (India) and Ford Motors (US) in the early twentieth century.
I have a range of sub-research questions but, in the interests of brevity, I have not included them here.
As AI continues to transform many industries, including legal services, experts widely predict exponential growth in AI as a paramount technology for bringing new tools and features that improve legal services and access to justice. Already, many aspects of the estimated $786B market for legal services are being digitised, automated and AI-enabled, whether discovery in litigation (e.g. Relativity), divorce (e.g. Hello Divorce), dispute resolution (e.g. DoNotPay) or contract management (e.g. Ironclad).
As with many disruptive technologies, there are many experts who believe that AI will significantly disrupt (rather than extend) the legal market:
“AI will impact the availability of legal sector jobs, the business models of many law firms, and how in-house counsel leverage technology. According to Deloitte, about 100,000 legal sector jobs are likely to be automated in the next twenty years. Deloitte claims 39% of legal jobs can be automated; McKinsey estimates that 23% of a lawyer’s job could be automated. Some estimates suggest that adopting all legal technology (including AI) already available now would reduce lawyers’ hours by 13%”
The real impact will be more nuanced over the long term: whilst AI will eliminate certain tasks and some legal jobs, it will also augment and extend the way legal services are provided and consumed. In doing so, it will drive new ways of working and operating for both established firms and new entrants, who will need to invest in new capabilities and skills to support the opening up of new markets, new business models and new service innovations. In the past few decades, we have seen the impact of emerging and disruptive technologies on established players across many sectors, including banking (e.g. FinTechs), media and entertainment (e.g. music, movies, gambling), publishing (e.g. news), travel (e.g. Airbnb) and transportation (e.g. Uber). It is very likely that traditional legal providers will face the same disruptive challenges from AI and AI-enabled innovations bundling automation, analytics, and cloud with new business models, including subscription, transaction or freemium.
Although AI and AI-enabled solutions present tremendous opportunities to support, disrupt or extend traditional legal services, they also present extremely difficult ethical questions for society, policy-makers and legal bodies (e.g. Law Society) to decide.
That tension is the focus of this article, which sets out a summary of these issues and is structured in two parts:
Current and future use cases and trends of AI in legal and compliance services;
Key issues for stakeholders including legal practitioners, society, organisations, AI vendors, and policy-makers.
A few notes:
This article is not designed to be an exhaustive, comprehensive or academically detailed review of the existing AI and legal services literature. It is first and foremost a blog post (albeit a detailed one) on a topic of personal and professional interest to me, and should be read in that context;
Sources are referenced in the footnotes and acknowledged where possible; any errors or omissions are my own;
Practical solutions and future areas of research are touched on lightly in the conclusion but are not the focus of this article.
Part 1 – Current and future use cases of AI in legal and compliance services
Historically, AI in legal services has focused on automating tasks via software to achieve the same outcome as if a law practitioner had done the work. However, increasing innovation in AI and experimentation within the legal and broader ecosystem have allowed solutions to accelerate beyond this historical perspective.
The graphic below provides a helpful segmentation of four main use cases of how AI tools are being used in legal services:
A wider view of use cases, which links to existing legal and business processes, is provided below:
document and contract management
legal research and insight
transactions and deals
access to justice
Further context on a selection of these uses is summarised below (note, there is overlap between many of these areas):
E-Discovery – Over the past few years, the market for e-discovery services has accelerated beyond the historical litigation use case and into other enterprise processes and requirements (e.g. AML remediation, compliance, cybersecurity, document management). This has allowed for the development of more powerful and integrated business solutions enabled by the convergence of technologies including cloud, AI, automation, data and analytics. Players in the legal e-discovery space include Relativity, DISCO, and Everlaw.
Document and contract management – The rapid adoption of cloud technologies has accelerated the ability of organisations across all sectors to invest in solutions that better solve, integrate and automate business process challenges, such as document and contract lifecycle management. Contracts need to be initiated (e.g. templates, precedents), shared, stored, monitored (e.g. renewals), and searched and tracked for legal, regulatory or dispute reasons (e.g. AI legaltech start-ups like Kira, LawGeex, and eBrevia). In terms of drafting and collaboration, the power of Microsoft Word, Power Automate and G-Suite has expanded, along with a significant number of AI-powered tools and sites (e.g. LegalZoom) that help lawyers (and businesses or consumers) find, draft and share the right documents, whether for commercial needs, transactions or litigation. New ‘alternative legal service’ entrants have combined these sorts of powerful solutions (and others in this list) with lower-cost labour models (non-legal talent and/or lower-cost legal talent) to provide a more integrated offering for Fortune 500 legal, risk and compliance teams (e.g. Ontra, Axiom, UnitedLex, Elevate, Integreon);
Expertise Automation – In the access-to-justice context, there are AI-powered services that automate contentious or bureaucratic situations for individuals, such as utility bill disputes, small claims, immigration filings, or fighting traffic tickets (e.g. DoNotPay). Other examples include workflow automation software that enables consumers to draft a will (for a fixed fee or subscription), or chatbots that give employees access to answers to common questions in a specific area, such as employment law. It is foreseeable that extending this at scale in a B2C context (using AI voice assistants like Siri or Alexa) with a trusted brand (Amazon Legal, perhaps?), bundled into your Prime subscription alongside music, videos and same-day delivery, will make getting legal help as easy as checking the weather or ordering an Uber.
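The employment-law chatbot idea above can be sketched in a few lines. Everything here is invented for illustration: the questions, the answers and the matching approach (simple word overlap); a production system would use proper NLP and a curated, jurisdiction-specific knowledge base.

```python
# Minimal sketch of an expertise-automation FAQ bot: match a query against a
# small knowledge base using word overlap. All Q&A content is hypothetical.

def tokenize(text: str) -> set[str]:
    """Lowercase a question and split it into a set of words."""
    return set(text.lower().replace("?", "").split())

FAQ = {
    "How many days of annual leave am I entitled to?":
        "Statutory minimum leave depends on your jurisdiction; check your contract.",
    "What notice period applies if I resign?":
        "Notice periods are usually set in your employment contract.",
    "Can I request flexible working arrangements?":
        "Many jurisdictions give employees a right to request flexible working.",
}

def answer(question: str) -> str:
    """Return the answer whose stored question shares the most words with the query."""
    query = tokenize(question)
    best, best_overlap = None, 0
    for stored_q, stored_a in FAQ.items():
        overlap = len(query & tokenize(stored_q))
        if overlap > best_overlap:
            best, best_overlap = stored_a, overlap
    return best or "Sorry, please contact the legal team."
```

The escalation fallback in the last line matters as much as the matching: a real deployment must hand off to a human rather than guess when confidence is low.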
Legal Research – New technologies (e.g. AI, automation, analytics, e-commerce) and business models (e.g. SaaS) have enabled the democratisation of legal knowledge beyond the historic use cases (e.g. find me an IT contract precedent or Canadian case law on limitation of liability). New solutions make it easy for clients and consumers (as well as lawyers) to find answers or solutions to legal or business challenges without interacting with a lawyer. In more recent times, legal publishing companies (e.g. LexisNexis, PLC, Westlaw) have leveraged legal sector relationships and huge databases of information including laws and regulations in multiple jurisdictions to build different AI-enabled solutions and business models for clients (or lawyers). These offerings promise fast, accurate (and therefore cost-effective) research with a variety of analytical and predictive capabilities. In the IP context, intellectual property lawyers can use AI-based software from companies like TrademarkNow and Anaqua to perform IP research, brand protection and risk assessment;
Legal and predictive analytics – This area aims to generate insights from unstructured, fragmented and other data sets to improve future decision-making. A key use case is tooling that analyses all the decisions in a domain (e.g. software patent litigation cases), takes the specific factors of a case as input (e.g. region, judge, parties), and produces a prediction of likely outcomes. This may significantly change how the insurance and medical industries operate in terms of risk, pricing, and business models. For example, Intraspexion leverages deep learning to predict and warn users of their litigation risks, and predictive analytics company CourtQuant has partnered with two litigation financing companies to help evaluate litigation funding opportunities using AI. Another kind of analytics reviews a given piece of legal research or a legal submission to a court and helps judges (or barristers) identify missing precedents. In addition, there is a growing group of AI providers offering what are essentially do-it-yourself toolkits that let law firms and corporations create analytics programs customised to their specific needs;
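To make the outcome-prediction idea concrete, here is a deliberately tiny sketch: a handful of case factors with invented weights, squashed into a win probability by a logistic function. Real products like those mentioned above learn their weights from large corpora of historical decisions; every factor name and number below is an assumption for illustration only.

```python
import math

# Illustrative (invented) weights for a few case factors; a real system would
# learn these from thousands of historical decisions, not hand-pick them.
WEIGHTS = {
    "favourable_judge_history": 1.2,   # judge has ruled for similar claimants
    "strong_precedent": 1.8,           # on-point precedent in the same region
    "experienced_counsel": 0.6,
    "weak_documentation": -1.5,        # poor evidentiary record
}
BIAS = -0.5

def win_probability(case: dict[str, bool]) -> float:
    """Logistic model: sum the weights of the factors present, squash to (0, 1)."""
    score = BIAS + sum(w for factor, w in WEIGHTS.items() if case.get(factor))
    return 1 / (1 + math.exp(-score))
```

A case with a favourable judge and strong precedent scores well above one with only a weak evidentiary record, which is exactly the kind of relative ranking litigation funders use when triaging opportunities.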
Transactions and deals – Although no two deals are the same, similar deals do require similar processes of pricing, project management, document due diligence and contract management. Even so, for various reasons, many firms will start each transaction with a blank (or sparsely populated) sheet of paper or sale and purchase agreement. AI-enabled document and contract automation solutions, along with other M&A and transaction tools, are providing efficiencies at each stage of the process. In more advanced cases, data room vendors, in partnership with law firms or end clients, are using AI to analyse large amounts of data created by lawyers on previous deals. This acts as an enormous data bank for future deals, from which the AI can learn in order to:
Make clause recommendations to lawyers based on previous drafting and best practice.
Identify “market” standards for contentious clauses.
Spot patterns and make deal predictions.
Benchmark clauses and documents against given criteria.
Support pricing decisions based on key variables.
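The clause-benchmarking idea in the list above can be illustrated with a minimal sketch: compare a draft clause against a small bank of clauses from past deals using Jaccard word similarity, and surface the closest match as the "market" wording. The clause texts are hypothetical, and a real system would use trained language models over a far larger corpus.

```python
# Minimal clause-benchmarking sketch: Jaccard similarity between word sets.
# The clause bank below is invented for illustration.

def jaccard(a: str, b: str) -> float:
    """Similarity of two clauses as overlap of their word sets (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

CLAUSE_BANK = [
    "Liability is capped at the total fees paid in the twelve months preceding the claim.",
    "Either party may terminate for convenience on ninety days written notice.",
    "All intellectual property created under this agreement vests in the customer.",
]

def closest_market_clause(draft: str) -> tuple[str, float]:
    """Return the most similar past-deal clause and its similarity score."""
    scored = [(clause, jaccard(draft, clause)) for clause in CLAUSE_BANK]
    return max(scored, key=lambda pair: pair[1])
```

Scoring a draft limitation-of-liability clause against the bank surfaces the liability cap as the nearest precedent, which is the same retrieval step that underpins clause recommendations and market-standard benchmarking.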
Access to justice – Despite there being more lawyers in the market than ever before, the law has arguably never been more inaccessible. From a consumer perspective, there are thousands of easy-to-use, free or low-cost apps and online services that solve many simple or challenging aspects of life, whether buying property, consulting a doctor, making payments, finding on-demand transport, or booking household services. However, escalating costs and increasing complexity (both in the law itself and in the institutions that apply and enforce it) mean that justice is often out of reach for many, especially the most vulnerable members of society. With the accelerating convergence of various technologies and business models, technology is starting to play a role in (i) opening up the provision of legal services to a greater segment of the population and (ii) replacing or augmenting the role of legal experts. From providing quick, on-demand access to a lawyer via video conference, to accelerating the time to key evidence, to bringing the courtroom to even the most remote corners of the world and digitising many court processes, AI, augmented intelligence, and automation are dramatically improving the accessibility and affordability of legal representation. Examples include:
Part 2 – Key issues for the future of AI-powered legal and compliance services
There are many significant issues and challenges for the legal sector when adopting AI and AI-powered solutions. Whilst every use case of AI-deployment is unique, there are some overarching issues to be explored by key stakeholders including the legal profession, regulators, society, programmers, vendors and government.
A sample of key questions include the following:
Will AI in the future make lawyers obsolete?
How does AI impact the duty of competence and related professional responsibilities?
How do lawyers, users and clients and stakeholders navigate the ‘black box’ challenge?
Do the users (e.g. lawyers, legal operations, individuals) and clients trust the data and the insights the systems generate?
How will liability be managed and apportioned in a balanced, fair and equitable way?
How do organisations identify, procure, implement and govern the ‘right’ AI-solution for their organisation?
Are individuals, lawyers or clients prepared to let data drive decision outcomes?
What is the role of ethics in developing AI systems?
Other important questions include:
How do AI users (e.g. lawyers), clients or regulators ‘audit’ an AI system?
How can AI systems be safeguarded from cybercriminals?
To what extent do AI-legal services need to be regulated and consumers be protected?
Have leaders in businesses identified the talent/skills needed to realise the business benefits (and manage risks) from AI?
To what extent is client consent to use data an issue in the development and scaling of AI systems?
Are lawyers, law students, or legal service professionals receiving relevant training to prepare for how they need to approach the use of AI in their jobs?
Are senior management and employees open to working with or alongside AI systems in their decisions and decision-making?
Below we further explore a selection of the above questions:
Obsolescence – When technology performs better than humans at certain tasks, job losses for those tasks are inevitable. However, the dynamic role of a lawyer — one that involves strategy, negotiation, empathy, creativity, judgement, and persuasion — can’t be replaced by one or several AI programs. As such, the impact of AI on lawyers in the profession may not be as dire as some like to predict. In his book Online Courts and the Future of Justice, author Richard Susskind discusses the ‘AI fallacy’ which is the mistaken impression that machines mimic the way humans work. For example, many current AI systems review data using machine learning, or algorithms, rather than cognitive processes. AI is adept at processing data, but it can’t think abstractly or apply common sense as humans can. Thus, AI in the legal sector enhances the work of lawyers, but it can’t replace them (see chart below).
Professional Responsibility – Lawyers in all jurisdictions have specific professional responsibilities to consider and uphold in the delivery of legal and client services. Sample questions include:
Can a lawyer discharge professional duties of competence if they do not understand how the technology works?
Is a legal chatbot practicing law?
How does a lawyer provide adequate supervision where the lawyer does not understand how the work is being done or even ‘who’ is doing it?
How will a lawyer explain decisions made if they do not even know how those decisions were derived?
To better understand these complex questions, the summary below sets out some of the key professional duties and how they are being navigated in various jurisdictions:
Duty of Competence: The principal ethical obligation of lawyers when advising or assisting clients is the duty of competence. Over the past decade, many jurisdictions have begun specifically requiring lawyers to understand how (and why) new technologies such as AI impact that duty (and related duties), including a requirement that lawyers develop and maintain competence in ‘relevant technologies’. In 2012, the American Bar Association (the “ABA”) in the US explicitly included the obligation of “technological competence” within the general duty of competence in Rule 1.1 of its Model Rules of Professional Conduct (“Model Rules”). To date, 38 states have adopted some version of this revised comment to Rule 1.1. In Australia, most state solicitor and barrister regulators have incorporated this principle into their rules. In the future, jurisdictions may consider it unethical for lawyers or legal service professionals to avoid technologies that could benefit their clients. A key challenge is that there is no easy way to obtain an objective, independent analysis of the efficacy of any given AI solution, so neither lawyers nor clients can easily determine which of several products or services actually achieves the results promised. In the long term, it will very likely be one of the tasks of the future lawyer to assist clients in making those determinations and in selecting the most appropriate solution for a given problem. At a minimum, lawyers will need to be able to identify and access the expertise to make those judgments if they do not have it themselves.
Duty to Supervise – This supervisory duty assumes that lawyers are competent to select and oversee team members and the proper use of third parties (e.g. law firms) in the delivery of legal services. However, the types of third parties used have expanded in recent times due to the liberalisation of legal practice in some markets (e.g. the UK, where the ABS rules allow non-lawyers to operate legal services businesses). Alternative service providers, legal process outsourcers, tech vendors, and AI vendors, for example, have historically been outside the remit of solicitor or lawyer regulators (this is changing in various jurisdictions, as discussed in the sections below). By extension, the question becomes more than just a duty to supervise what goes on with third parties: it extends to how those third parties provide services, especially where technologies and tools are used. In such cases, potential liability issues arise if client outcomes are not successful: did the lawyer appropriately select the vendor, and did the lawyer properly manage the use of the solution?
The Duty to Communicate – In the US, lawyers also have an explicit duty to communicate material matters to clients in connection with the lawyers’ services. This duty is set out in ABA Model Rule 1.4, and other jurisdictions have adopted similar rules. Thus, not only must lawyers be competent in the use of AI, but they will also need to understand its use sufficiently to explain to clients the selection, use, and supervision of AI tools.
Black Box Challenge
Transparency – A basic principle of justice is transparency: the requirement to explain and justify the reasons for a decision. As AI algorithms grow more advanced and rely on increasing volumes of structured and unstructured data, it becomes more difficult to make sense of their inner workings or how outcomes have been derived. For example, as Michael Kearns and Aaron Roth write in Ethical Algorithm Design Should Guide Technology Regulation:
“Nearly every week, a new report of algorithmic misbehaviour emerges. Recent examples include an algorithm for targeting medical interventions that systematically led to inferior outcomes for black patients, a resume-screening tool that explicitly discounted resumes containing the word “women” (as in “women’s chess club captain”), and a set of supposedly anonymized MRI scans that could be reverse-engineered to match to patient faces and names”.
Part of the problem is that many of these AI systems are ‘self-organising’, operating inherently without external supervision or guidance. The ‘secrecy’ of AI vendors, especially those in a B2B and legal services context, regarding the inner workings of their algorithms and data sets only makes the transparency and trust issue more difficult for customers, regulators and other stakeholders. For lawyers, to what extent must they know the inner workings of that black box to ensure that they meet their ethical duties of competence and diligence? Without addressing this, these problems will likely continue as the legal sector’s reliance on technology increases, and injustices will, in all likelihood, continue to arise. Over time, many organisations will need a robust, integrated AI business strategy, designed at board and management level, to guide the wider organisation on these issues across areas including governance, policy, risk, and HR. For example, when procuring AI solutions, buyers, stakeholders and users (e.g. lawyers) must consider broader AI policies and mitigate these risk factors during vendor evaluation and procurement.
Algorithms – There are many concerns that AI algorithms are inherently limited in their accuracy, reliability and impartiality. These limitations may be the direct result of biased data, but they may also stem from how the algorithms are created. For example, the choices software engineers make about which variables to include in an algorithm and how to use them – say, whether to maximize profit margins or maximize loan repayments – can lead to a biased algorithm. Programmers may also struggle to understand how an AI algorithm generates its outputs: the algorithm may be unpredictable, which makes validating the ‘correctness’ or accuracy of those outputs difficult when piloting a new AI system. This brings up the challenge of auditing algorithms:
“More systematic, ongoing, and legal ways of auditing algorithms are needed. . . . It should be based on what we have come to call ethical algorithm design, which begins with a precise understanding of what kinds of behaviours we want algorithms to avoid (so that we know what to audit for), and proceeds to design and deploy algorithms that avoid those behaviours (so that auditing does not simply become a game of whack-a-mole).”
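The auditing approach described above – deciding in advance what behaviour to avoid, then checking for it – can be made concrete with even a minimal sketch. The example below is illustrative only: the groups, decisions and the 10% flag threshold are all hypothetical, and a real audit would use far richer fairness metrics.

```python
# Minimal, illustrative audit sketch: check a pre-defined behaviour
# (here, a large gap in approval rates between two groups).
# Group labels, decisions and the 0.1 threshold are hypothetical.

def approval_rate(decisions, groups, group):
    """Share of positive decisions (1 = approve) for members of `group`."""
    members = [d for d, g in zip(decisions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_gap(decisions, groups):
    """Absolute difference in approval rates; assumes exactly two groups."""
    rates = [approval_rate(decisions, groups, g) for g in sorted(set(groups))]
    return abs(rates[0] - rates[1])

# Hypothetical model outputs and group membership.
decisions = [1, 1, 0, 1, 0, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # audit for the behaviour we decided in advance to avoid
    print("Audit flag: approval rates differ materially between groups")
```

The point is not the metric itself but the workflow: the behaviour to avoid is specified before deployment, so auditing does not become the “game of whack-a-mole” the quote warns about.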
In terms of AI applications, most AI algorithms within legal services are currently able to perform only a very specific set of tasks based on data patterns and definitive answers. Conversely, they perform poorly when applied to abstract or open-ended situations requiring judgment, such as the situations lawyers often operate in. In these circumstances, human expertise and intelligence are still critical to the development of AI solutions. Many are not sophisticated enough to understand and adapt to nuance, respond to expectations and layered meaning, or comprehend the practicalities of human experience. Thus, AI is still a long way from the ‘obsolescence’ issue for lawyers raised above, and further research is necessary on programmers’ and product managers’ decision-making processes and methodologies when ideating, designing, coding, testing and training an AI algorithm.
Data – Large volumes of data are a critical part of AI algorithm development, as both training material and input material. However, data sets may be of poor quality for a variety of reasons. For example, the data an AI system is ‘trained’ on may well encode systemic ‘human’ bias, such as recruiters’ gender or racial discrimination against job candidates. In terms of data quality in law firms, most are slow at adopting new technologies and tend to be “document rich, and data poor” due, in large part, to legacy on-premise (or hybrid cloud) systems which do not integrate with each other. As more firms and enterprises transition to the cloud, this will accelerate the automation of business processes (e.g. contract management) with more advanced data and analytics capabilities to enable and facilitate AI system adoption (in theory, at least; there are many constraints within traditional law firm business and operating models which make the adoption of AI-enabled solutions at scale unlikely). However, third-party vendors within the legal sector – including e-discovery, data rooms, and legal process outsourcers – or new tech-powered entrants from outside of the legal sector do not have such constraints and are able to innovate more effectively using AI, cloud, automation and analytics in these contexts (although other constraints exist, such as client consent and security). In the court context, public data such as judicial decisions and opinions are either not available or so varied in format as to be difficult to use effectively. Beyond data quality issues, significant data privacy, client confidentiality and cybersecurity concerns exist, which raises the need to define and implement standards (including safeguards) to build confidence in the use of algorithmic systems – especially in legal contexts.
As AI becomes more pervasive within law firms, legal departments, legal vendors (including managed services) and new entrants outside of legal, a foundation with strong guidelines for ethical use, transparency, privacy, cross-department sharing and more becomes even more crucial.
Implementation – Within the legal sector, law firms and legal departments are laggards when it comes to adopting new technologies, transforming operations, and implementing change. Business models based on hours billed (e.g. in law firms) may not incentivize the efficiency improvements that AI systems can provide. In addition:
“Effective deployment of AI requires a clearly defined use case and work process, strong technical expertise, extensive personnel and algorithm training, well-executed change management processes, an appetite for change and a willingness to work with the new technologies. Potential AI users should recognize that effectively deploying the technology may be harder than they would expect. Indeed, the greatest challenge may be simply getting potential users to understand and to trust the technology, not necessarily deploying it.”
However, enterprises (e.g. Fortune500), start-ups, alternative service providers (e.g. UnitedLex) and new entrants from outside of legal do not suffer from these constraints, and are likely to be more successful – from a business model and innovation perspective – in adopting new AI-enabled solutions for use with clients (although AI-enabled providers must work to overcome client concerns as discussed above).
Liability – There are a number of issues to consider on the topic of liability. Key questions are set out below:
Who is responsible when things do go wrong? Although AI might be more efficient than a human lawyer at performing these tasks, if the AI system misses clauses, mis-references definitions, or provides incorrect outcome/price predictions, all parties risk claims depending on how they apportioned liability. The role of contract and insurance is key; however, this assumes that law firms have the contractual means of passing liability (in terms of professional duties) onto third parties. In addition, when determining relative liability between the provider of the defective solution and the lawyer, should a court consider the steps the lawyer took to determine whether the solution was the appropriate one for use in the particular client’s matter?
Should AI developers be liable for damage caused by their product? In most other fields, product liability is an established principle. But if the product is performing in ways no-one could have predicted, is it still reasonable to assign blame to the developer? AI systems also often interact with other systems, so assigning liability becomes difficult. AI solutions are also fundamentally reliant on the data they were trained on, so liability may exist with the data sources. Equally, there are risks where AI systems are vulnerable to hacking.
To what extent are lawyers, or will they be, liable for when and how they use, or fail to use, AI solutions to address client needs? One example explained above is whether a lawyer or law firm will be liable for malpractice if the judge in a matter accesses software that identifies guiding principles or precedents that the lawyer failed to find or use. It does not seem a stretch to believe that liability should attach if the lawyer’s failure to use that kind of tool leads to a bad outcome for the client and the client suffers injury as a result.
Regulatory Issues – As discussed above, addressing the significant issues of bias and transparency in AI tools, and, in addition, advertising standards, will grow in importance as the use of AI itself grows. Whilst the wider landscape for regulating AI is fragmented across industry and political spheres, there are signs the UK, EU and US are starting to align. Within the legal services sector, some jurisdictions (e.g. England, Wales, Australia and certain Canadian provinces) are in the process of adopting and implementing a broader regulatory framework. This approach enables the legal regulators to oversee all providers of legal services, not just traditional law firms and/or lawyers. However, in the interim the implications of this regulatory imbalance will become more pronounced as alternative legal service providers play an increasing role in providing clients with legal services, often without any direct involvement of lawyers. In the long run, a broader regulatory approach is going to be critically important in establishing appropriate standards for all providers of AI-based legal services.
Ethics – The ethics of AI and data uses remains a high concern and key topic for debate in terms of the moral implications or unintended consequences that result from the coming together of technology and humans. Even proponents of AI, such as Elon Musk’s OpenAI group, recognise the need to police AI that could be used for ‘nefarious’ means. A sample of current ethical challenges in this area include:
Big data, cloud and autonomous systems provoke questions around security, privacy, identity, and fundamental rights and freedoms;
AI and social media challenge us to define how we connect with each other, source news, facts and information, and understand truth in the world;
Global data centres, data sources and intelligent systems mean there is limited control of data outside our borders (although regimes including GDPR are addressing this);
Is society content with AI that kills? Military applications including lethal autonomous weapons are already here;
Facial recognition, sentiment analysis, and data mining algorithms could be used to discriminate against disfavoured groups, or invade people’s privacy, or enable oppressive regimes to more effectively target political dissidents;
It may be necessary to develop AI systems that disobey human orders, subject to some higher-order principles of safety and protection of life;
Over the years, the private and public sectors have attempted to provide various frameworks and standards to ensure ethical AI development. For example, the Aletheia Framework (developed by Rolls-Royce in an open partnership with industry) is a recent, practical one-page toolkit that guides developers, executives and boards both prior to deploying an AI and during its use. It asks system designers and relevant AI business managers to consider 32 facets of social impact, governance, trust and transparency, and to provide evidence which can then be used to engage with approvers, stakeholders or auditors. A new module added in December 2021 is a tried and tested way to identify and help mitigate the risk of bias in training data and AIs. This complements the existing five-step continuous automated checking process, which, if comprehensively applied, tracks the decisions the AI is making to detect bias or malfunction in service and allow human intervention to control and correct it.
Within the practice of law, while AI offers cutting-edge advantages and benefits, it also raises complicated questions for lawyers around professional ethics. Lawyers must be aware of the ethical issues involved in using (and not using) AI, and they must have an awareness of how AI may be flawed or biased. In 2016, The House of Commons Science and Technology Committee (UK Parliament) recognised the issue:
“While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now”.
In a 2016 article in the Georgetown Journal of Legal Ethics, the authors Remus and Levy were concerned that:
“…the core values of legal professionalism meant that it might not always be desirable, even if feasible, to replace humans with computers because of the different way they perform the task.”

This assertion raises questions about what the core values of the legal profession are and what they should or could be in the future. What is the core value of a solicitor beyond reserved activities? And should we define the limit of what being a solicitor or lawyer is?
These are all extremely nuanced, complex and dynamic issues for lawyers, society, developers and regulators at large. How the law itself may need to change to deal with these issues will be a hot topic of debate in the coming years.
Over the next few years there can be little doubt that AI will begin to have a noticeable impact on the legal profession and consumers of legal services. Law firms, in-house legal departments and alternative legal services firms and vendors – plus new entrants outside of legal, perhaps unencumbered by the constraints of established legal sector firms – have opportunities to explore and challenges to address, but it is clear that there will be significant change ahead. What is required of a future ‘lawyer’ (a term that may mean something different in the future) or of a legal graduate today – let alone in 2025 or 2030 – versus new lawyers of a few decades ago will likely be transformed in many ways. There are also many difficult ethical questions for society to decide, for which the legal practice regulators (e.g. the Law Society in England and Wales) may be in a unique position to grasp the opportunity of ‘innovating the profession’ and lead the debate. On the other hand, as the businesses of the future become more AI-enabled at their core (e.g. Netflix, Facebook, Google, Amazon etc.), the risk that many legal services become commoditised or a ‘feature set’ within a broader business or service model is a real possibility in the near future.
At the same time, AI itself poses significant legal and ethical questions across all sorts of sectors and priority global challenges, from health, to climate change, to war, to cybersecurity. Further analysis on the legal and ethical implications of AI for society, legal practitioners, organisations, AI vendors, and policy-makers, plus what practical solutions can be employed to navigate the safe and ethical deployment of AI in the legal and other sectors, will be critical.
AI could contribute up to $15.7 trillion to the global economy in 2030, more than the current output of China and India combined. Of this, $6.6 trillion is likely to come from increased productivity and $9.1 trillion is likely to come from consumption-side effects.
I recently came across a brilliant guide from Plexus, a legal software and services provider based in Australia. Although most GCs will rate ‘investing in technology and automation’ as their top priority, many initiatives will never get past the starting line.
There are many reasons for this and everyone’s context is different. It is certainly not for lack of intent, ambition, need, or interest.
According to Plexus, the biggest challenge functional leaders have is when they are required to rapidly work and operate in a cross-functional way. These skills are a new ‘core’ capability for a legal function or executive:
If you ask a GC how they select and sign up a new law firm to spend $100,000, the answer is often as simple as a few emails and a signed engagement letter. Ask them how they will spend half that amount on technology, and they will scratch their head… even though – because of the ‘sunk cost’ nature of professional services spend – it is far more likely that it will not generate value.
Management consultants, whether McKinsey, BCG or Accenture, have made an industry out of identifying best practices and applying them to specific company challenges.
Although most in-house legal and compliance departments have remained immune from this for many decades, the tide has been turning for some years now, with many legal departments building out higher-performing teams, operations, and services. Leveraging best practice insight – from across all sectors, not just legal teams – has been a key way to support this.
“While every company and team has its own unique needs, the guidance in these functional areas – known as the “Core 12” – applies to many environments and requirements towards operational excellence”.
The Core 12 can be seen below:
Essentially these are the operations, services or capabilities which define the legal function. CLOC provide more context below:
“Legal operations” (or legal ops) describes a set of business processes, activities, and the professionals who enable legal departments to serve their clients more effectively by applying business and technical practices to the delivery of legal services. Legal ops provides the strategic planning, financial management, project management, and technology expertise that enables legal professionals to focus on providing legal advice.
The Core 12 allows any legal department leader or third-party consultant to assess their current state of performance maturity, map it to the ideal state, and then decide and plan which improvement steps make sense for their specific context and constraints.
The last aspect is critical as the Head of Legal in a Series A-funded start-up will have completely different challenges, requirements and objectives to a Fortune 100 legal team.
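The assess-then-plan process above can be sketched very simply. As a toy illustration only – the 1–5 maturity scale, area names and all scores below are hypothetical, not part of CLOC's framework – an assessment can amount to scoring current versus target state per area and ranking the gaps:

```python
# Toy gap analysis over a few Core 12-style areas.
# The 1-5 maturity scale and all scores are hypothetical.
assessment = {
    "Technology":           {"current": 2, "target": 4},
    "Financial Management": {"current": 3, "target": 4},
    "Project Management":   {"current": 1, "target": 4},
}

def ranked_gaps(assessment):
    """Return (area, target - current) pairs, biggest gap first."""
    gaps = {area: s["target"] - s["current"] for area, s in assessment.items()}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

for area, gap in ranked_gaps(assessment):
    print(f"{area}: maturity gap {gap}")
```

A Series A start-up and a Fortune 100 team would plug in very different targets, which is exactly the context-dependence point: the framework only tells you where your gaps are, not which ones are worth closing.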
When selecting one of the 12, you can deep-dive further into that area of competence. For example, with Technology, CLOC provide the following high-level (and non-exhaustive) detail to help understand what good generally looks like:
TECHNOLOGY: Innovate, automate, and solve problems with technology.
Current reality: Teams often rely on manual, time-consuming, and fragmented point solutions. They may lack an overall technology vision and are deploying costly applications that are underused and disconnected from the team’s workflow.
Desired state: Create a clear technology vision that spans all of the needs of your organization. Automate manual processes, digitize physical tasks, and improve speed and quality through the strategic deployment of technology solutions.
Create and implement a long-term technology roadmap
Incorporate connected tools for e-billing, matter management, contact management, IP management, e-signature, and more
Automate repetitive or time-consuming manual processes
Determine where to build and where to buy
Evaluate new vendors, suppliers, and solutions
Assess emerging technology capabilities and incorporate into your long-term strategic planning
Structure an effective partnership with your corporate IT team
Although the CLOC Core 12 isn’t of itself a usable tool as far as detailed diagnostics, business analysis or benchmarking are concerned, it does provide a helpful introduction for legal leaders looking to learn more about what good looks like in terms of legal operations and capabilities.
CLOC have a download guide with more information on the Core 12, which you can access here.
This week I came across an article from Gartner (Feb 21) covering future trends within in-house legal departments. I think the list still holds true, although it will be interesting to check when the 2022 version comes out soon.
Obviously most lawyers have long been resistant to and risk-averse about new technologies and automation, but the effects of the pandemic forced many to shift gears in 2020 and pursue — or at least actively consider — more extensive automation of certain legal activities, especially those around major corporate transactions. According to Gartner, the challenge now is deciding which technologies to embrace to drive real business outcomes.
“The new pressures brought about by the coronavirus pandemic certainly have acted as a catalyst for this shift,” says Zack Hutto, Director, Advisory, Gartner. “Legal and compliance teams have rarely been frontrunners to modernize, digitalize and automate. The pandemic has flattened staffing budgets and increased legal workloads; technology is the most obvious solution for many legal departments.”
Here are the Top 5 trends:
Trend No. 1: By 2025, legal departments will increase their spend on legal technology threefold
Trend No. 2: By 2024, legal departments will replace 20% of generalist lawyers with non-lawyer staff
Trend No. 3: By 2024, legal departments will have automated 50% of legal work related to major corporate transactions
Trend No. 4: By 2025, corporate legal departments will capture only 30% of the potential benefit of their contract life cycle management investments
Trend No. 5: By 2025, at least 25% of spending on corporate legal applications will go to nonspecialist technology providers
Last Wednesday BBC R4 hosted the first of 4 weekly lectures given by Professor Stuart Russell, a world-renowned AI expert at the University of California, Berkeley. The talks (followed by Q&A) examine the impact of AI on our lives and discuss how we can retain power over machines more powerful than ourselves.
I think this area (e.g. AI commercialisation, AI governance, AI safety, AI ethics, AI regulation etc) is going to be one of the hot topics of the next decade alongside trends including climate change, fintech (crypto), AR/VR, quantum computing etc. Accordingly I couldn’t wait to hear Professor Russell speak.
The event blurb states the following:
The lectures will examine what Russell will argue is the most profound change in human history as the world becomes increasingly reliant on super-powerful AI. Examining the impact of AI on jobs, military conflict and human behaviour, Russell will argue that our current approach to AI is wrong and that if we continue down this path, we will have less and less control over AI at the same time as it has an increasing impact on our lives. How can we ensure machines do the right thing? The lectures will suggest a way forward based on a new model for AI, one based on machines that learn about and defer to human preferences.
As I write, I have heard 2 of the talks, both of which have been absolutely fascinating (and, quite honestly, scary – especially regarding military applications of AI, which are already here). I didn’t take notes; however, the BBC interviewed Professor Russell ahead of the talks. I have provided a summary of the Q&A below, which is well worth a read:
How have you shaped the lectures?
The first drafts that I sent them were much too pointy-headed, much too focused on the intellectual roots of AI and the various definitions of rationality and how they emerged over history and things like that.
So I readjusted – and we have one lecture that introduces AI and the future prospects both good and bad.
And then, we talk about weapons and we talk about jobs.
And then, the fourth one will be: “OK, here’s how we avoid losing control over AI systems in the future.”
Do you have a formula, a definition, for what artificial intelligence is?
Yes, it’s machines that perceive and act and hopefully choose actions that will achieve their objectives.
All these other things that you read about, like deep learning and so on, they’re all just special cases of that.
But could a dishwasher not fit into that definition?
It’s a continuum.
Thermostats perceive and act and, in a sense, they have one little rule that says: “If the temperature is below this, turn on the heat.
“If the temperature is above this, turn off the heat.”
So that’s a trivial program and it’s a program that was completely written by a person, so there was no learning involved.
All the way up the other end – you have the self-driving cars, where the decision-making is much more complicated, where a lot of learning was involved in achieving that quality of decision-making.
But there’s no hard-and-fast line.
We can’t say anything below this doesn’t count as AI and anything above this does count.
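Russell's thermostat sits at the trivial end of that continuum: a rule written entirely by a person, with no learning involved. A minimal sketch (the setpoint values are hypothetical):

```python
# The thermostat end of Russell's continuum: decision-making as a
# hand-written rule, with no learning involved. Setpoints are hypothetical.
def thermostat(temperature, low=18.0, high=22.0, heating_on=False):
    """Return whether the heating should be on."""
    if temperature < low:
        return True    # "If the temperature is below this, turn on the heat."
    if temperature > high:
        return False   # "If the temperature is above this, turn off the heat."
    return heating_on  # between the setpoints, keep the current state

print(thermostat(15.0))  # below the low setpoint: heat on
print(thermostat(25.0))  # above the high setpoint: heat off
```

A self-driving car sits at the other end of the same definition – perceive, act, pursue an objective – but with decision-making that is largely learned rather than hand-written, which is Russell's point that there is no hard line between the two.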
And is it fair to say there have been great advances in the past decade in particular?
In object recognition, for example, which was one of the things we’ve been trying to do since the 1960s, we’ve gone from completely pathetic to superhuman, according to some measures.
And in machine translation, again we’ve gone from completely pathetic to really pretty good.
So what is the destination for AI?
If you look at what the founders of the field said their goal was, general-purpose AI, which means not a program that’s really good at playing Go or a program that’s really good at machine translation but something that can do pretty much anything a human could do and probably a lot more besides because machines have huge bandwidth and memory advantages over humans.
Just say we need a new school.
The robots would show up.
The robot trucks, the construction robots, the construction management software would know how to build it, knows how to get permits, knows how to talk to the school district and the principal to figure out the right design for the school and so on so forth – and a week later, you have a school.
And where are we in terms of that journey?
I’d say we’re a fair bit of the way.
Clearly, there are some major breakthroughs that still have to happen.
And I think the biggest one is around complex decision-making.
So if you think about the example of building a school – how do we start from the goal that we want a school, and then all the conversations happen, and then all the construction happens, how do humans do that?
Well, humans have an ability to think at multiple scales of abstraction.
So we might say: “OK, well the first thing we need to figure out is where we’re going to put it. And how big should it be?”
We don’t start thinking about should I move my left finger first or my right foot first, we focus on the high-level decisions that need to be made.
You’ve painted a picture showing AI has made quite a lot of progress – but not as much as it thinks. Are we at a point, though, of extreme danger?
I think so, yes.
There are two arguments as to why we should pay attention.
One is that even though our algorithms right now are nowhere close to general human capabilities, when you have billions of them running they can still have a very big effect on the world.
The other reason to worry is that it’s entirely plausible – and most experts think very likely – that we will have general-purpose AI within either our lifetimes or in the lifetimes of our children.
I think if general-purpose AI is created in the current context of superpower rivalry – you know, whoever rules AI rules the world, that kind of mentality – then I think the outcomes could be the worst possible.
Your second lecture is about military use of AI and the dangers there. Why does that deserve a whole lecture?
Because I think it’s really important and really urgent.
And the reason it’s urgent is because the weapons that we have been talking about for the last six years or seven years are now starting to be manufactured and sold.
So in 2017, for example, we produced a movie called Slaughterbots about a small quadcopter about 3in [8cm] in diameter that carries an explosive charge and can kill people by getting close enough to them to blow up.
We showed this first at diplomatic meetings in Geneva and I remember the Russian ambassador basically sneering and sniffing and saying: “Well, you know, this is just science fiction, we don’t have to worry about these things for 25 or 30 years.”
I explained what my robotics colleagues had said, which is that no, they could put a weapon like this together in a few months with a few graduate students.
And in the following month, so three weeks later, the Turkish manufacturer STM [Savunma Teknolojileri Mühendislik ve Ticaret AŞ] actually announced the Kargu drone, which is basically a slightly larger version of the Slaughterbot.
What are you hoping for in terms of the reaction to these lectures – that people will come away scared, inspired, determined to see a path forward with this technology?
All of the above – I think a little bit of fear is appropriate, not fear when you get up tomorrow morning and think my laptop is going to murder me or something, but thinking about the future – I would say the same kind of fear we have about the climate or, rather, we should have about the climate.
I think some people just say: “Well, it looks like a nice day today,” and they don’t think about the longer timescale or the broader picture.
And I think a little bit of fear is necessary, because that’s what makes you act now rather than acting when it’s too late, which is, in fact, what we have done with the climate.
Legal tech companies have already seen more than $1 billion in venture capital investments so far this calendar year, according to Crunchbase data. That number smashes the $510 million invested last year and the all-time high of $989 million in 2019.
While dollars are higher, deal flow is a little behind previous years, with 85 funding rounds being announced so far in 2021, well behind the pace of 129 deals last year and 147 in 2019.
Some of the largest rounds in the sector this year include:
San Francisco-based Checkr, a platform that helps employers screen job seekers through initiating background checks, raised a $250 million Series E at a $4.6 billion valuation earlier this month;
Boston-based on-demand remote electronic notary service Notarize raised a $130 million Series D in March at a reported $760 million valuation.
According to various start-up founders:
“This mainly is a paper-based industry. However, COVID exposed inefficiencies and it forced people to look at everything you do and explore new ways.”- Patrick Kinsel, founder and CEO at Notarize
“There’s no doubt COVID provided huge tailwinds for legal tech growth,” said Jack Newton, co-founder and CEO at Vancouver-based legal tools platform Clio, which raised a $110 million Series E at a $1.6 billion valuation. “It was the forcing factor for firms that had put off their transformation.”
1. Impact of the Cloud: Just as in many industries, the cloud and other new tech had been slowly changing the legal world for more than a decade. However, after COVID caused offices to close and legal processes and documents to go virtual, adoption of those technologies skyrocketed. Investors started to eye technologies that took many firms’ “in-house” processes and moved them to the cloud – many involving documentation and filings as well as tools to help better communicate with clients.
2. Cloud-first generation: Many general counsels now come from a “cloud-first” generation and know the importance of things such as data insights that can help predict outcomes. Just as data and AI have changed marketing, sales and finance, the legal community is now catching on, and many don’t just want to be a cost centre.
3. Increasing investor knowledge: The growing market and scaling legaltech start-ups are causing VCs to take note. While many investors eyed the space in the past, more investors now have knowledge about contracts and legal tech, and founders do not tend to have to explain the market.
However, the market is still small albeit growing and no ‘goliaths’ exist in the space. With no large incumbents, how investors see returns remains a popular question.
This may change if, for example, horizontal software companies like Microsoft or Salesforce become interested in the space – as legal tech has data and analytics those types of companies find useful, Wedler said.
Some companies in the space have also found private equity a viable exit, with firms like Providence Equity rolling up players such as HotDocs and Amicus Attorney several years ago.
Perhaps more interesting to some startups, the legal tech space even saw an IPO this year, with Austin, Texas-based Disco going public on the New York Stock Exchange in July. The company’s market cap now sits at $2.8 billion.
One thing most seem certain about is that while the legal world’s tech revolution may have been brought on by a once-in-a-century event—there is no turning back.
This week I attended a virtual Summit hosted by CLOC (Corporate Legal Operations Consortium). One interesting session covered lessons learned from developing and implementing legaltech and operation changes within legal and compliance teams of large corporates and SMEs.
Panelists included a range of lawyers, project managers and legalops experts from Netflix, Salesforce and GE, and covered topics including:
*How to manage change and being comfortable with being uncomfortable
*Avoiding risky big-bang deployments in favour of POC/MVP and more agile approaches to change
*Learning how to carve some budget off the total, use it for experimentation, and be prepared to fail
The below is a blurb introducing the session:
“…On the path to success, failure is not only an option, it’s inevitable. Mistakes, missteps, and misunderstandings are opportunities to acquire new skills and knowledge that can contribute to your professional growth. The growing and evolving legal operations profession is filled with opportunities to evolve beyond errors.
In this honest and impactful session, legal operations professionals will share a key moment of failure and how they learned and grew from it. After hearing from the panelists on their vulnerable moments of growth, we will spend time as a group sharing our own stories and offering our peers perspectives and possible solutions for overcoming some of their failures…”
Below I captured a few nuggets of gold from the panellists:
Give a purpose to failure; this helps to gain buy-in from users and clarify the bigger picture
Allow the community to own the new way to work rather than push to them
Leadership (e.g. town hall) to set the tone
Wider business context, and show why the change is important
Be transparent – there will be failure. Expect it. Tolerate it!
If leading the change, you need to become comfortable with being uncomfortable, but not too uncomfortable. Be mindful of the current state of the culture and empathise with users
Balance focus on the big picture using storytelling, sales skills and so on, as you can't control every detail of the change
Experimental in communication, design thinking, courageous leadership, state of culture a huge consideration on how to balance approach
Big-bang projects are risky compared with POC/MVP and more agile approaches
Here is a list of great resources I have started to compile recently. I'll continue to add to it over time and hopefully build up a good catalogue for anyone interested in optimising and transforming legal services, whether legal and compliance teams, law firms, or other service providers.
As the list gets bigger I'll add headings to make it easier to follow. If you come across any interesting resources, frameworks, guides, research etc., be sure to email me at firstname.lastname@example.org
In it, the author compiles an inventory of questions to help you better understand the problem you are trying to solve for the customer: how important it is in the context of their life, how they currently solve it (or not), and so on.
Having founded start-ups and advised many other founders, it still surprises me that many people do not take the time to do this work to the quality and depth required. Many do not even know about it or value its importance, which really baffles me. Obviously it is more ‘exciting’ to get on with it and ‘start building’, although this is fraught with serious risks.
In any case, the tool above is certainly a great way to have better conversations with potential customers and shape propositions accordingly.
This week I came across a blog post from ImpactMyBiz which compiled a list of great statistics, use cases and market data pertaining to the current state of technology in the legal sector.
In sum, there's a lot of good progress, but the sector is still subject to a lot of hype and extremely slow adoption compared to other sectors. This is especially true in the B2B space, with B2C innovation being adopted and improved at a faster rate.
Perhaps the continued challenges presented by COVID around the world, increasing regulatory complexity, competitive pressures from alternative legal service providers (ALSP) and new entrants, remote working, client cost pressures, access to justice, and other key drivers will continue to move the needle forward.
25 legal tech stats to shed light on where the industry is heading in the new decade:
1. In 2018, legal tech investments broke the $1 billion mark. That figure was topped in 2019, with $1.23 billion in funding by the end of the third quarter alone.
2. With the help of AI, a contract can be reviewed in less than an hour, saving 20-90% of the time needed to perform this work manually without sacrificing accuracy.
3. AI legal technology offerings for businesses increased by nearly two-thirds in 2020 compared to 2019.
4. JP Morgan launched their in-house program, COIN, which extracts 150 attributes from 12,000 commercial credit agreements and contracts in a few seconds. This is equivalent to 360,000 hours of legal work by lawyers and loan officers per year.
5. Cloud usage among firms is 58%, with smaller firms and solos leading the way.
6. Security measures are lacking, with no more than 35% of firms using precautionary cybersecurity measures to protect their businesses. A staggering 7% of firms have no security measures at all.
7. Despite some reservations, lawyers continue to use popular consumer cloud services like Google Apps, iCloud and Evernote at higher rates than dedicated legal cloud services. Clio and NetDocuments ranked the highest among the legal cloud services.
8. The percentage of the ABA 2019 Legal Technology Survey participants answering “Yes” to the basic question of whether they had used web-based software services or solutions grew slightly, from 55% to 58%. 31% said “No”, a small decrease.
9. When asked what prevented their law firms from adopting the cloud, 50% cited confidentiality/security concerns, 36% cited the loss of control and 19% cited the cost of switching.
10. 26% of respondents in a 2019 survey report that their law firms have experienced some sort of security breach
11. In 2018, just 25% of law firms reported having an incident response plan. In 2019, this figure had risen to 31%, and we expect the same for 2020.
12. Interest in cloud services from law firms is high, but expectations of adoption among them remain low, with just 8% of firms indicating they will replace existing legacy software with cloud tools.
13. Only one-third of lawyers (34%) believe their organizations are very prepared to keep up with technology changes in the legal market.
14. Firms described as “technology leading” fared better, with 50% prepared to meet digital technology demands in the industry.
15. 49% of law firms report that they are effectively using technology today, and 47% say they can improve technology adoption and plan to do so.
16. Over half (53%) of lawyers in the US and Europe say their organizations will increase technology investment over the next three years.
17. While over half of lawyers expect to see transformational change in their firms from technology like AI, big data and analytics, fewer than one quarter say they understand them.
18. The biggest trends cited by lawyers that are driving legal tech adoption are “Coping with increased volume and complexity of information” and “Emphasis on improved productivity and efficiency.”
19. It is estimated that 23% of work done by lawyers can be automated by existing technology.
20. 27% of the senior executives at firms believe that using digital transformation is not a choice, but a matter of survival.
21. The top challenges for corporate legal departments today include reducing and controlling outside legal costs; improving case and contract management; and automating routine tasks and leveraging technology in work processes.
22. 60% of lawyers believe their legal firm is ready to adopt new technology for routine tasks.
23. According to research conducted by Gartner, only 19% of law firms’ in-house teams are ready to move forward with enterprise-level digital strategies.
24. A recent study uncovered that 70% of consumers would rather use an automated online system or “lawbot” to handle their legal affairs instead of a human lawyer because of three important factors—cost, speed, and ease of use.
25. 70% of businesses indicated that “using tech to simplify workflow and manual processes” to cut costs was a top priority going forward.
At the time, it was unusual for a major corporate firm to be experimenting in different areas.
The question for the presentation was as follows:
I focused on 2 main themes of (a) Changing the mind-set and (b) Managing innovation.
In the six years since, a lot of innovation has been introduced into the legal sector. However, the bar has been fairly low for many years, with the legal sector 'glacial' when it comes to change and technology.
Certainly the ‘legaltech’ and/or ‘lawtech’ markets have received significant injections of VC to build next generation B2C and B2B solutions. Most large firms are now experimenting with different AI and automation solutions, running incubators, offering flexible resourcing arrangements, investing in start-ups, and so on.
To better support Fortune 500 general counsels with their efficiency challenges, the Big 4 are building services and capability at scale, as are legal process outsourcers and ALSPs.
Many of these ideas were referenced in the presentation.
However, the critical question is: has anything really changed in how legal services are delivered, bought and sold? How much of this is 'innovation theatre' and nibbling around the edges versus real change?
Does the partner in the Freshfields office in Hong Kong work any differently than they did as a trainee 20 years ago?
Are the skills and requirements of a newly qualified lawyer any different?
Does the single lawyer law office in Bristol run their practice any differently?
Does the COO of a regional law firm run the business any differently?
Do consumers who need a family lawyer do this any differently?
Does the barrister or judge involved in a trial do this any differently?
The short answer I think is not a great deal of change across the industry as a whole. However there has been a tonne of experimentation and innovation in some fragmented areas, especially in B2C (e.g. DoNotPay). COVID-19 has certainly accelerated this, and that can only be a good thing.
I think what we are seeing is a marathon, not a sprint. In fact, it is more like the start of a triathlon where there’s a washing-machine effect as participants fight their way forward before a steadier state emerges.
We see this with most new technologies: disruption often takes much longer than expected. In retail and e-commerce, it is only recently that the Internet has begun causing significant challenges for traditional players, almost 20 years after the dot-com crash of 2001.
One thing is for sure – the next 10 to 15 years in the legal sector will be fascinating.
Today I sat (and passed) the MoP Foundation Exam run by PeopleCert on behalf of AXELOS. I’ll do the final Practitioner exam next week. I bought the on-demand training via SPOCE, a UK training firm specialising in project management (“PM”) certifications such as PRINCE2, Agile, MSP etc.
Although I’ve had over 15 years experience with PM (including courses in PRINCE2, ITIL), it has been an extremely worthwhile exercise to build and consolidate knowledge on best practices around managing change portfolios.
For those not familiar with portfolio management, it helps organisations to make better decisions about implementing the right changes to their business as usual (BAU) activity via projects and programmes.
In my experience – and supported by many studies and anecdotal evidence – most change initiatives fail or do not realise their intended benefits. There are many reasons for this, but high-performing organisations certainly invest in the right initiatives and implement them properly.
In other words, such organisations do the right things and realise the intended benefits.
The Practitioner Exam next week will be significantly tougher than the Foundation. I better get back to studying.
In July last year I published research and, later, an eBook called REIGNITE! From Crisis To Opportunity In A COVID World. In light of a recent lockdown where I live (Guernsey), I thought it worth reflecting on what I wrote back then. To help, I've pasted an infographic containing 8 areas where leaders should focus to rebuild their organisations.
Six months on, most (if not all) of the recommendations still stand, from prioritising digital investments to pushing ahead with smarter working policies and leading with empathy. Whether organisations have implemented some or all of these is likely another story.
“Science, technology and innovation (STI) are universally recognized as key drivers for economic growth, improving prosperity, and essential components for achieving the Sustainable Development Goals (SDGs)” – UN Conference on Trade and Development (2019)
A few months ago I wrote down some thoughts and questions after being inspired by political events where I live (Guernsey) and internationally (e.g. US). In both jurisdictions, the balance of power has dramatically shifted for different reasons but both against a backdrop of major crises including health (COVID), rising inequality, and skills gaps.
In essence, I was trying to think through two key questions for the new Government and ecosystem players (e.g. businesses, investors, educators etc.): what are some key STI areas of focus, and what questions would I ask?
I have since shared the memo with various stakeholders in the ecosystem, and now I thought it would make sense to post it publicly here. If you have any feedback, be sure to let me know.
Research, analysis and policy development opportunities and questions for the new Government and ecosystem players (e.g. businesses, investors, educators etc)
Business case for an STI economy: the importance of ‘science, technology and innovation’ for Guernsey’s future in driving economic growth and improved prosperity for all citizens
Key STI trends, opportunities and challenges
What is STI/digital, why important, global best practices
Why is it important for Guernsey?
Defining and measuring Guernsey’s existing STI/digital economy
Benefits and impacts to economy, society, prosperity and infrastructure
Jobs, skills, human capital and education
Role of stakeholders e.g. education, govt, business, people etc
Building blocks, what is needed? E.g.
Policy and regulatory frameworks
Institutional setting and governance
Entrepreneurial ecosystems and access to finance
Technical/ICT & R&D infrastructure
Relevance of Sustainability, Green Finance, Solar/Wind, FinTech, RiskTech, RegTech, GovTech
Role of tax policy, skills, FDI, govt, business etc
Other sample areas of ‘innovation policy’ to explore:
Environment: Sustainability, ESG and climate change
What is best practice around the world in small island or communities
What are some potential or viable new business opportunities
Assess current state of initiatives (e.g. Green Funds)
Evaluate new initiatives e.g. Wind, solar etc
International collaboration and trade
How important is it to be more market-focused and rethink and prioritise international partnerships/affairs? E.g. Jersey
A colleague and partner, Chris Brock, covers some of this topic in a recent report here
Regulatory, governance and risk innovation: to what extent do the various regulatory bodies and related private/public sector organisations (e.g. GFSC, Cicra, TISE, DPO etc) need to adopt a more balanced and innovative approach to regulation and new business? How to accelerate existing initiatives and opportunities? e.g. Green Finance
What is the nature of the current approach?
How to balance bureaucracy and risk aversion in the Guernsey ecosystem while at the same time encouraging innovation, FDI and new businesses?
What are best practice examples of innovative regulatory/risk models from competing or similar jurisdictions or around the world?
To what extent could this be useful in Guernsey?
How is the wider market evolving and how will this impact Guernsey?
What are the pros/cons and opportunities/threats?
What new business opportunities would a more innovative approach enable? E.g. FinTech, RegTech
What are practical recommendations going forward, and for which actors?
Role of market-creating innovations to drive prosperity AND economic growth (MCI): What is the opportunity for Guernsey to incubate market-creating innovations for local use and export? And how can Guernsey facilitate the development of MCIs across different sectors – e.g. FS, Infrastructure, Transport, Environment etc – for local use and export to improve income inequality and other social/economic benefits?
Today I came across a brilliant resource from Steve Blank for anyone interested in better understanding ‘lean’. It covers resources helpful for a formal class or for anyone who wants to review the basics. Here is what he provided:
I first came across Tony Hsieh when I read his book Delivering Happiness soon after it was published in 2010. I remember immediately being captivated by his story as a scrappy but ultimately successful tech start-up founder, and then as an early investor and employee at ShoeSite (later Zappos).
There he focused on people and tested ‘radical’ management concepts such as:
*Pay brand-new employees $2,000 to quit
*Make customer service the responsibility of the entire company, not just a department
*Focus on company culture as the #1 priority
*Apply research from the science of happiness to running a business
*Help employees grow, both personally and professionally
*Seek to change the world
*Oh, and make money too . . .
Aside from these techniques, which helped propel Zappos into the hands of Amazon for $1B+, the bigger impact for me was how 'simply' he was able to communicate in the pages of the book. There was a real 'humanity' in the way he wrote, in stark contrast to most other best-selling leadership and business books of that era (e.g. Jack Welch).
You got a real sense that the author genuinely cared about using business as a means to do good, and to make money not just for himself but for colleagues and investors. I later learned that he deployed significant amounts of his wealth into various regeneration and gentrification projects around Las Vegas (according to various reports, some were successful, others not so much).
The world of entrepreneurship is certainly worse-off with Tony’s loss.
For more context on Tony’s life and the impact he had, this NY Times obituary is well worth a read.
This week I have asked the team to get more ‘granular’ to better define, understand and analyse the problem they are focused on solving i.e. identify user pain-points, challenges, jobs to be done.
In the original Uber pitch deck the co-founders demonstrated a good understanding of the problem for the different stakeholders. Once this is done to a satisfactory level, you can then start to ‘test’ with customer research, experiments and MVPs.
To assist the team, I have provided some great videos from Strategyzer below.