Managing Artificial Intelligence Projects: A Framework for Success, Part 3
Problem statement
The lifecycle starts with identifying, clarifying, and formulating the problem. The problem is defined with stakeholders to determine whether an AI solution is suitable. Data and software development teams collaborate with stakeholders to complete a project document that includes the problem statement, goals, and business case; domain experts outside the teams also contribute. When defining the problem statement, consider its typology, environment, expected objectives, stakeholders, systems, processes, and data (a sketch of such a record follows below). Typologies typically fall into four categories: strategic, tactical, operational, and research.
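As a minimal sketch, these elements can be captured in a structured record. The class, field names, and example values below are illustrative, not part of any standard template.

```python
from dataclasses import dataclass, field
from enum import Enum

class Typology(Enum):
    STRATEGIC = "strategic"
    TACTICAL = "tactical"
    OPERATIONAL = "operational"
    RESEARCH = "research"

@dataclass
class ProblemStatement:
    """Illustrative record of the elements a problem statement should cover."""
    description: str
    typology: Typology
    environment: str  # e.g. commercial, industrial, research
    objectives: list[str] = field(default_factory=list)
    stakeholders: list[str] = field(default_factory=list)
    systems: list[str] = field(default_factory=list)
    processes: list[str] = field(default_factory=list)
    data_sources: list[str] = field(default_factory=list)

statement = ProblemStatement(
    description="Reduce stock-outs with demand forecasting",
    typology=Typology.TACTICAL,
    environment="commercial",
    objectives=["Cut inventory holding costs against an agreed baseline"],
    stakeholders=["Supply chain team", "Finance", "Data science team"],
)
```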
The strategic typology adopts a high-level, transformative approach, leveraging AI to address broad challenges in specific areas. Examples include “How can AI be utilized for strategic planning of supply chain logistics to minimize costs and maximize efficiency?” and “Using AI to predict and maximize the long-term value of customers”.
The tactical typology is goal-oriented, seeking to improve current manual, mechanical, technical, or computational methods to enhance productivity or output quality. Examples are “Implementing AI and predictive analytics to optimize inventory management” and “Using AI in the finance sector to analyse transactions in real time to detect and prevent fraudulent activities”.
The operational typology involves the direct application or replication of validated AI capabilities to similar problems or settings. Examples include “AI systems that monitor production lines, predict maintenance needs, and ensure quality control in manufacturing processes” and “Using AI to automate repetitive tasks in business processes, such as data entry or invoicing”.
The research typology focuses on algorithmic innovations that surpass the current state of the art in AI for related problems, such as “Projects to advance neural network architectures and training methodologies”, “Developing new or improving existing Natural Language Processing algorithms for better language understanding”, or “Researching frameworks to ensure ethical AI development and deployment”.
To navigate these typologies, organizations and team leaders should collaborate with internal or external AI experts, ensuring discussions are driven by shared goals and a common vocabulary. Considerations should also include alternative model-based or computational solutions (e.g., using simple statistical models instead; see the sketch below), budgets, planning, risk assessments, and downstream implications such as ethics and societal impact.
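To make the simpler-alternative check concrete, the hedged sketch below compares a naive statistical baseline with a fitted linear trend on a synthetic forecasting task; if the baseline is competitive, the added cost of a full AI solution may not be justified. All data, names, and thresholds here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative weekly demand series: linear trend plus noise.
demand = 100 + 0.5 * np.arange(104) + rng.normal(0, 5, 104)
train, test = demand[:-12], demand[-12:]

# Baseline: forecast every future week as the mean of the last 12 weeks.
baseline_pred = np.full(12, train[-12:].mean())

# Simple statistical model: linear trend fitted by least squares.
t = np.arange(len(train))
slope, intercept = np.polyfit(t, train, 1)
model_pred = intercept + slope * np.arange(len(train), len(train) + 12)

def mae(pred):
    return np.mean(np.abs(pred - test))

print(f"Last-12-weeks-mean baseline MAE: {mae(baseline_pred):.2f}")
print(f"Linear trend model MAE: {mae(model_pred):.2f}")
# If a more complex AI model barely beats these, reconsider the investment.
```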
The problem’s setting influences its objectives. In commercial settings, goals might be reduced operational costs or increased revenue and profit. In industrial settings, the focus may be on process efficiency or productivity, while in research the aim is to expand current knowledge in the scientific literature.
Compliance assessment
At this stage, review the problem formulation, chosen approach, potential solution, and required datasets for security risks and for ethical and legal compliance. Because the AI team is typically technical rather than legally trained, compliance professionals should lead this review. Start with a broad ethics framework and narrow down to specifics such as algorithmic justice across gender, ethnicity, sexuality, and other marginalized groups (illustrated below), data representation, and stakeholder interests.
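One deliberately simplified example of an algorithmic-justice check is comparing a model’s selection rates across demographic groups. The DataFrame, group labels, and threshold below are hypothetical, and a real review would use several complementary fairness metrics.

```python
import pandas as pd

# Hypothetical model outputs: 1 = approved, 0 = declined.
results = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1, 1, 0, 1, 0, 0, 0],
})

# Selection rate per group: a basic demographic-parity check.
rates = results.groupby("group")["prediction"].mean()
print(rates)

# Flag large disparities for the compliance review.
disparity = rates.max() - rates.min()
if disparity > 0.2:  # the threshold is a policy choice, not a standard
    print(f"Warning: selection-rate gap of {disparity:.2f} across groups")
```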
Policymakers worldwide are grappling with ethical and practical issues posed by AI technologies. They aim to balance fostering innovation with protecting individuals and society from risks.
Some governments are adopting a wait-and-see approach, while others propose or enact AI regulations.
The EU Artificial Intelligence Act entered into force in 2024, with its obligations phasing in from 2025 (EU AI Act, 2024). In the US, President Biden’s Executive Order on AI established the U.S. Artificial Intelligence Safety Institute (USAISI), which will develop safety and security standards for advanced AI. Absent broad federal legislation, expect actions from specific agencies in sectors such as healthcare, finance, housing, labour, and child safety, along with further executive orders.
The rise of AI has prompted new governance and security frameworks. ISO/IEC 42001 specifies requirements for an AI management system, while the NIST AI Risk Management Framework (AI RMF) offers guidelines for managing AI-specific risks. Adopting these frameworks can guide organizations in implementing secure and responsible AI solutions.
Technical literature review
The problem formulation provides essential context for reviewing published research, deployed systems, solutions, and libraries used in similar scenarios. Key information sources include research article search engines (e.g., Google Scholar), publishing platforms (e.g., Medium), Q&A sites (e.g., Stack Exchange), and code repositories (e.g., GitHub). Useful resources include literature reviews, articles, case studies, best practices, product documentation, tutorials, demonstrations, API documentation, and Q&A forum interactions.
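Some of these sources can also be queried programmatically. As a hedged example, the public arXiv API accepts keyword queries and returns Atom XML; the search term below is illustrative.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Illustrative query: papers mentioning AI and inventory management.
query = "all:AI+AND+all:%22inventory+management%22"
url = (
    "http://export.arxiv.org/api/query"
    f"?search_query={query}&start=0&max_results=5"
)

with urllib.request.urlopen(url) as response:
    feed = response.read()

ns = {"atom": "http://www.w3.org/2005/Atom"}
for entry in ET.fromstring(feed).findall("atom:entry", ns):
    title = " ".join(entry.find("atom:title", ns).text.split())
    link = entry.find("atom:id", ns).text
    print(f"{title}\n  {link}")
```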
Recent developments in open-access pre-trained AI models, such as GPT, should be examined for potential re-purposing or fine-tuning rather than developing new models from scratch. When using pre-trained models, consider factors such as licensing, model suitability, context disparity, and updates or omitted data. Open-source models often encourage research and application, but the underlying data license may restrict commercial use. Legal advice is recommended to ensure compliance with licensing and usage constraints.
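A minimal sketch of this route, assuming the Hugging Face transformers and huggingface_hub libraries (the checkpoint name and label count are illustrative), is to check a model’s declared license and then load it with a fresh task head for fine-tuning:

```python
from huggingface_hub import model_info
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased"  # illustrative pre-trained model

# Inspect the license declared on the model hub before committing to it;
# note that the underlying training-data license may add further limits.
info = model_info(checkpoint)
print([tag for tag in info.tags if tag.startswith("license:")])

# Re-purpose the checkpoint: keep the pre-trained encoder and attach a
# newly initialized two-class head to fine-tune on the project's own data.
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
# ... fine-tuning (e.g., with transformers' Trainer) would follow here ...
```

Fine-tuning an established checkpoint in this way typically needs far less data and compute than training from scratch, which is why the license and suitability checks should come first.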
By Yesbol Gabdullin, Senior Data Scientist, MSc in Artificial Intelligence, MRes in Engineering, BEng in Electronics and Control Engineering, Microsoft Certified
The article was written for the Forecasting the Future of AI Applications in the Built Environment session of the Project Controls Expo 2024.