How to Install and Run Open-Source AI Models on Your MacBook Pro

Running AI models locally can be a powerful way to experiment with natural language processing or other AI-driven tasks without relying on cloud services.

Below we’ll cover:

1. Models and what they do

2. Environment setup

3. Installing Docker Desktop and Python

4. Deploying Open WebUI

5. Loading each model

Key Tools You’ll Need

Docker Desktop:

- Download for Mac (Apple silicon) from https://www.docker.com/products/docker-desktop/

Python 3.9 or 3.10 (Recommended)

- Download from https://www.python.org/downloads/

(Fact source: Docker Desktop homepage; Python official site.)

1. The Models I’m Running on My M1 MacBook Pro

1. Smallthinker:latest 3.4B

Overview: This is a compact, general-purpose LLM known for fast inference on smaller tasks. Ideal for chatbot interactions and simple summarizations.

Recommended Use: Prototyping, quick content generation where speed is more important than depth.

2. Tinyllama:latest 1.1B

Overview: Even smaller than Smallthinker. Great for extremely resource-constrained environments, though its capabilities are correspondingly basic.

Recommended Use: Testing minimal language model footprints, simple Q&A.

3. Qwen:14B

Overview: Mid-size, more robust language generation, decent for creative tasks, summarization, or coding assistance.

Recommended Use: Balanced performance for broader tasks without requiring a huge system footprint.

4. Phi4:latest 14.7B

Overview: Slightly bigger than Qwen with advanced text-generation capabilities. Good for more coherent long-form writing.

Recommended Use: Detailed conversation, brainstorming, or multi-step reasoning tasks.

5. Llama3:latest 8B

Overview: Part of the Llama family, recognized for strong performance on various NLP tasks.

Recommended Use: Balanced approach for wide-ranging use cases—chatbots, summarizations, translations, etc.

6. Deepseek-r1:latest 7.6B

Overview: A reasoning-focused model that works through problems step by step, showing its "thinking" before the final answer.

Recommended Use: Math, logic, coding problems, and other tasks that benefit from explicit multi-step reasoning.

7. Llama3.2:latest 3.2B

Overview: A lighter version in the Llama3 line, ideal for quick experiments or running multiple models concurrently.

Recommended Use: Testing with limited GPU or CPU resources, small inference tasks.

(Please note these model descriptions are compiled from general references and assumptions. Actual usage may vary, so always consult model-specific documentation.)

2. Setting Up the Environment

1. Install Python

• Go to https://www.python.org/downloads/ and download Python 3.9 or 3.10 for macOS.

• Install and verify by running:

python3 --version

Tip: Use pyenv (optional) if you want to manage multiple Python versions.
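
If you go the pyenv route, here's a minimal sketch (assuming Homebrew is already installed; version numbers are just an example):

brew install pyenv
pyenv install 3.10.13   # any recent 3.9/3.10 release works
pyenv global 3.10.13
python3 --version       # should report the pyenv-managed version

Remember to add pyenv's init lines to your ~/.zshrc (see pyenv's README) so the shims take effect in new shells.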


2. Install Docker Desktop

• For Apple Silicon (M1/M2), grab the installer from https://www.docker.com/products/docker-desktop/

• Once installed, start Docker Desktop. Verify by running:

docker --version

• Docker Desktop for Mac handles the complexities of using containers on Apple Silicon by providing virtualization under the hood.
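
A quick end-to-end check that the engine can actually run containers:

# Pulls a tiny test image and runs it once
docker run --rm hello-world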

3. Deploying Open WebUI


Open WebUI is a self-hosted web interface for running and managing local models, commonly paired with a backend such as Ollama. Assuming you have the Open WebUI Docker image or instructions:

1. Clone or Download

• Check the official documentation (the project lives on GitHub at https://github.com/open-webui/open-webui). For instance:

git clone https://github.com/open-webui/open-webui.git
cd open-webui

(Commands may change between releases; treat this as an example only.)

2. Build or Pull Docker Container

• If the Dockerfile is provided, you can build locally:

docker build -t openwebui:latest .

• Or pull the prebuilt image from the project’s registry (at the time of writing it is published on GitHub Container Registry):

docker pull ghcr.io/open-webui/open-webui:main

3. Run the Container

• Launch the container with the UI port mapped to your host. The project’s README maps host port 3000 to the container’s port 8080; adjust as needed:

docker run -d \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

• Wait for the container to finish booting, then open Open WebUI in your browser at the local URL (e.g., http://localhost:3000/).
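
If you plan to serve the models with Ollama running natively on the host (one way to get Apple-GPU acceleration that Docker containers can’t provide), the Open WebUI docs describe a variant along these lines; port 11434 is Ollama’s default:

docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main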

4. Loading and Running Each Model

Each model (Smallthinker, TinyLlama, Qwen, Phi4, Llama3, Deepseek-r1, Llama3.2) can typically be loaded with a command inside Open WebUI or via environment variables at container startup. Here’s the general approach:

1. Place Model Files

• Download or place the model weights in a known folder. This might be /models or a mounted volume in Docker.

• For Apple Silicon, ensure any native dependencies or libraries are built for the ARM (arm64) architecture.

2. Load Model from Open WebUI

• Within the WebUI, you’ll often see an option to “Choose Model” or “Load Model” on the interface.

• Select the model you want (e.g., Smallthinker:latest 3.4B) and wait for it to load.

• If you’re using the command line, a hypothetical invocation might look like this (check your tool’s documentation for the real entry point and flags):

python openwebui.py --model "/models/Smallthinker-3.4B"

• Repeat this step for each model you want to try, or keep multiple models loaded if your hardware can handle it.
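
Worth noting: the model tags used in this post (llama3:latest, qwen:14b, and so on) follow Ollama’s naming scheme. If your Open WebUI instance is backed by Ollama (an assumption; your setup may differ), you can also pull and smoke-test models straight from the terminal:

# Pull models through Ollama; tags match the ones listed earlier
ollama pull tinyllama:latest
ollama pull qwen:14b
ollama pull llama3:latest

# Quick sanity check from the terminal
ollama run llama3 "Say hello in te reo Maori."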


3. Model-Specific Arguments (Optional)

• Some runtimes expose specialized flags for performance or memory usage, like --low-vram or --cpu-offloading (flag names vary by tool). Adjust to optimize for the M1 chip’s memory constraints.

• Example:

python openwebui.py --model "/models/Phi4-14.7B" --cpu-offloading

4. Monitoring Performance

• Keep an eye on Docker resource usage (CPU and memory). Note that containers on macOS run inside a lightweight VM without access to the Apple GPU, so inference inside Docker is CPU-only.

• The M1 chip has a unified memory architecture, so you might need to tweak the Docker memory allocation in Docker Desktop’s settings (Preferences > Resources).
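
For a terminal-side view of the same numbers, Docker’s built-in stats command shows live per-container usage:

# Live CPU and memory usage for the Open WebUI container
docker stats open-webui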

5. Tips for Apple Silicon M1

Use Conda Environments (Optional)

Some people prefer conda environments for easier package management on Apple Silicon. Check out Miniforge for Apple Silicon if you want a simpler route. (Fact source: Conda Forge GitHub.)
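
A minimal install sketch using the official release assets (the arm64 installer name below matches Miniforge’s current naming; verify on the releases page):

# Download and run the Apple Silicon (arm64) Miniforge installer
curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-MacOSX-arm64.sh"
bash Miniforge3-MacOSX-arm64.sh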

Hardware Acceleration

Tools like Metal Performance Shaders (MPS) can accelerate model inference on Apple Silicon, and recent PyTorch builds support MPS. Note that this applies to processes running natively on macOS; it does not reach inside Docker containers.

- Example: pip install torch torchvision torchaudio (on Apple Silicon, the standard macOS arm64 wheels already include MPS support; the https://download.pytorch.org/whl/cpu index is for CPU-only builds).
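
To confirm a native (non-Docker) Python environment can see the MPS backend:

# Prints True if PyTorch can reach the Apple GPU via MPS
python3 -c "import torch; print(torch.backends.mps.is_available())"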

Memory Planning

If you’re running large models (14B+), consider your RAM usage. MacBooks with 8GB or 16GB memory might hit limits quickly.

- Try smaller variants, or use 4- or 8-bit quantization to shrink the model’s memory footprint, as sketched below.
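
If you work with GGUF weights via llama.cpp, quantizing looks roughly like this (paths are hypothetical, and the tool has been renamed across versions; older builds call it quantize):

# Convert an f16 GGUF to 8-bit; Q4_K_M is a common smaller alternative
./llama-quantize ./models/llama3-8b-f16.gguf ./models/llama3-8b-q8_0.gguf Q8_0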

Conclusion

Running these seven AI models—Smallthinker, TinyLlama, Qwen, Phi4, Llama3, Deepseek-r1, and Llama3.2—on your MacBook Pro M1 using Docker Desktop, Python, and Open WebUI is a streamlined way to experiment with local AI. By containerizing the environment and carefully monitoring your Mac’s resources, you can easily switch between models, test new ideas, and share your findings with others.


Sources & Further Reading:

• Docker Desktop: https://www.docker.com/products/docker-desktop/

• Python: https://www.python.org/downloads/

• Conda Forge (Miniforge): https://github.com/conda-forge/miniforge

• Metal Performance Shaders: https://developer.apple.com/metal/metal-performance-shaders/

Feel free to adjust the commands and paths according to your exact setup. Good luck, and have fun exploring the possibilities of local AI inference on your Mac!

I asked each of the models below the same question, to compare their responses:

Write a long form article on Shadow AI use in the workplace. Be sure to highlight the risks, provide examples and recommend practical solutions. Do all this with a sense on New Zealand as the target audience


Qwen 14B

Title: Navigating the Shadows of AI in New Zealand's Workplace

Introduction

In the technologically advanced landscape of New Zealand, artificial intelligence (AI) has become an integral part of many workplaces. From automating repetitive tasks to enhancing decision-making processes, AI offers significant benefits. However, this technology also casts a shadow, presenting risks that must be addressed to ensure a responsible and ethical use of AI in the workplace.

Risks of Shadow AI in New Zealand's Workplace

  1. Job Displacement: As AI systems automate tasks traditionally performed by humans, there is a risk of job loss or skill mismatch.

  2. Bias and Discrimination: AI algorithms can inadvertently perpetuate biases present in training data, leading to discriminatory outcomes.

  3. Lack of Transparency: Many AI systems operate as "black boxes," making it difficult for employees and managers to understand how decisions are being made.

  4. Cybersecurity Threats: As AI systems become more interconnected and reliant on data, they also expose organizations to increased cybersecurity risks.

Practical Solutions for Managing Shadow AI in New Zealand's Workplace

  1. Invest in Training and Reskilling: Organizations should provide training opportunities for employees whose jobs may be at risk due to AI adoption.

  2. Implement Fairness and Bias Mitigation Techniques: Organizations should prioritize the development and implementation of techniques that address bias and ensure fairness in AI systems.

  3. Promote Transparency and Explainability: Organizations should work on designing AI systems that are transparent, explainable, and auditable.

  4. Strengthen Cybersecurity Measures: Organizations should invest in robust cybersecurity measures to protect AI systems, data, and the workplace itself.

Conclusion

As AI continues to reshape New Zealand's workplace, it is essential for organizations to address the risks associated with shadow AI. By implementing practical solutions such as training programs, fairness techniques, transparency efforts, and strengthened cybersecurity, New Zealand can ensure a responsible and ethical use of AI in the workplace.

DeepSeek R1

Thought for 113 seconds

Okay, so I need to write a long-form article about Shadow AI usage in the workplace, targeting New Zealand as the audience. Let me start by understanding what Shadow AI is. From my knowledge, Shadow AI refers to artificial intelligence that's not fully integrated into systems but still has significant impact—like in recruitment, HR analytics, compliance, etc.

First, I should outline the structure of the article. The user provided a sample, which includes an introduction, risks section with examples, practical solutions, and a conclusion. That seems logical. Maybe I can follow a similar flow to ensure coverage.

In the introduction, I need to explain what Shadow AI is and its relevance in New Zealand's workforce. I should mention how it affects different sectors like tech, finance, retail, etc., because New Zealand has diverse industries. It’s important to set the stage for discussing the risks without making it too technical.

Next, the risks section. The user mentioned privacy breaches, bias, job displacement, and ethical concerns as key risks. I should elaborate on each with specific examples relevant to New Zealand. For instance, recruitment agencies using AI could inadvertently target individuals based on sensitive data like ethnicity or sexual orientation, which are protected in New Zealand.

I need to provide real-world examples. Maybe discuss a case where AI in HR led to unfair practices due to biased data. Also, how employers might mishandle personal data from job applicants. It's crucial to highlight that while Shadow AI can be beneficial, it comes with inherent dangers if not managed properly.

Then, the practical solutions section should offer actionable advice for businesses and policymakers. Best practices like ethical AI development, transparency in decision-making, regular audits, and employee training are good points. I should suggest ways New Zealand companies can adopt these without overhauling their operations immediately.

In the conclusion, I need to summarize the importance of addressing Shadow AI risks proactively while emphasizing the potential benefits once managed responsibly. It's a balance between embracing innovation and ensuring ethical practices.

I also need to make sure the language is clear for a general audience in New Zealand—maybe avoid too much jargon but still cover all necessary points. Including some statistics or references might add credibility, though I should check if that's possible without access to external data sources.

Wait, the user mentioned not to use specific examples from other regions but to focus on New Zealand. So I have to think about New Zealand-specific regulations and cultural contexts when discussing risks like bias in AI systems. For example, New Zealand has the Racial Discrimination Act 1975 which protects against discrimination based on race, ethnicity, etc., so an AI targeting individuals without proper consent could be illegal.

Also, considering the digital divide in New Zealand—some regions might have less access to technology, which can exacerbate risks if AI is used in ways that aren't widely understood or if data privacy isn't prioritized.

I should structure each section with clear headings and subpoints to make it easy to follow. Maybe start each risk section with a bold heading and then bullet points for clarity.

I need to ensure the article flows logically from introduction to risks, solutions, and conclusion. Each part should build on the previous one, providing a comprehensive view of Shadow AI's impact in New Zealand.

Potential challenges include balancing depth with brevity since it's a long-form article but not overly technical. I have to make sure each point is explained clearly without overwhelming the reader. Also, ensuring that the solutions are practical and feasible for businesses across different sectors in New Zealand.

I should also consider including recent trends or technologies that interact with Shadow AI in the workplace, such as advancements in AI-driven tools that might be particularly relevant now, like chatbots or predictive analytics in customer service.

Lastly, I need to make sure the article is engaging and informative, highlighting both the challenges and opportunities. It's important for readers to understand that while Shadow AI can drive innovation, careful management is essential to navigate the complexities of modern workplaces in New Zealand.

Navigating Shadow AI: Risks and Solutions in the New Zealand Workforce

In an era where technology is rapidly advancing, artificial intelligence (AI) has become a pivotal force across various sectors. Among these, Shadow AI—AI systems that are not fully integrated into processes but exert significant influence—poses unique challenges, particularly within New Zealand's dynamic and diverse workforce.

Understanding Shadow AI

Shadow AI refers to AI tools that operate outside mainstream systems but still have notable impacts on businesses and individuals. Examples include recruitment agencies using AI for candidate matching or HR analytics platforms that provide insights into employee performance. These systems can streamline operations while also presenting risks if not managed responsibly.

The Impact of Shadow AI in New Zealand

New Zealand's varied industries—ranging from tech startups to traditional sectors like banking and retail—are all influenced by Shadow AI. Its applications span recruitment, HR analytics, compliance monitoring, customer service automation, and more. While offering efficiency gains, Shadow AI also necessitates careful navigation of potential risks.

Key Risks of Shadow AI

  1. Privacy Breaches: The use of AI in HR can inadvertently involve sensitive personal data from job applicants. Without proper consent or clear privacy controls, this data could be misused or exposed, leading to legal challenges and reputational damage.

  2. Bias and Discrimination: AI systems trained on historical data may perpetuate biases present in the data sets. For instance, recruitment algorithms might unfairly target certain groups based on protected attributes like ethnicity or sexual orientation, as New Zealand's legal framework protects such groupings from discrimination.

  3. Job Displacement: As AI tools optimize processes and reduce the need for human intervention, there is a risk of job displacement. This can lead to economic strain and increased inequality among workers who lose their livelihoods due to automation.

  4. Ethical Concerns: The lack of transparency in AI decision-making processes raises ethical questions. Employees may feel unfairly treated if their decisions are based on algorithmic bias or insufficient data, leading to decreased morale and higher turnover rates.

Case Studies: Risks in Action

  • Recruitment Bias: A recruitment agency using AI for screening candidates was accused of unintentionally targeting job applicants based on sensitive attributes like sexual orientation, as protected by New Zealand law.

  • Privacy Mismanagement: An employer mishandling personal data from job applicants due to inadequate safeguards, resulting in data breaches and potential legal action.

Practical Solutions: Embracing Shadow AI Responsibly

To mitigate these risks, businesses and policymakers must adopt proactive measures:

  1. Ethical AI Development: Ensure AI systems are developed with ethical considerations, incorporating diverse perspectives to prevent bias. Regular audits can help identify and rectify any emerging biases promptly.

  2. Transparency in Decision-Making: Enhance transparency by making AI decision-making processes understandable to employees. Clear guidelines on data usage and limitations can build trust and accountability.

  3. Robust Data Handling Protocols: Implement strict data protection measures, including obtaining explicit consent for AI-driven HR practices, to safeguard employee information from misuse.

  4. Employee Training and Awareness: Provide ongoing training to employees on AI ethics, compliance laws, and digital citizenship. This empowers workers to advocate for their rights and understand their impact in the workplace.

  5. Regulatory Collaboration: Engage with local authorities and regulatory bodies to align AI practices with New Zealand's legal frameworks, ensuring compliance without stifling innovation.

Conclusion: Balancing Innovation and Responsibility

Shadow AI offers significant potential to enhance efficiency and decision-making across New Zealand's workforce. However, its adoption necessitates a balanced approach that prioritizes ethical considerations, transparency, and employee well-being. By addressing the inherent risks proactively, businesses can harness the benefits of AI while maintaining accountability and fairness. As technology continues to evolve, so too must workplace practices, ensuring that innovation is responsibly integrated into New Zealand's dynamic business landscape.


Phi4

The Rise of Shadow AI in the Workplace: Implications for New Zealand

Artificial Intelligence (AI) is transforming workplaces across the globe, offering unprecedented opportunities to enhance productivity, innovation, and efficiency. However, as AI becomes more ubiquitous, it also introduces a phenomenon known as "Shadow AI" – unsanctioned or unregulated use of AI technologies within organizations. For businesses in New Zealand, understanding this emerging trend is crucial for mitigating risks while harnessing the benefits of AI.

Understanding Shadow AI

Definition and Scope: Shadow AI refers to AI systems developed by employees without formal approval or oversight from their organization's IT department. These can range from small-scale applications like chatbots created on a personal project basis, to more sophisticated machine learning models designed to automate specific tasks.

Examples of Shadow AI in New Zealand Workplaces

  1. Automated Reporting Tools: Employees in finance departments might develop scripts using AI tools like Excel macros or Python libraries to automatically generate reports.

  2. Customer Support Bots: A customer service representative may create a chatbot to handle routine queries, freeing up human agents for more complex issues.

  3. HR Analytics: HR personnel could use machine learning algorithms to analyze employee performance data and predict turnover risks.

Risks Associated with Shadow AI

While Shadow AI can foster innovation, it also presents several risks:

  1. Security Vulnerabilities: Unauthorized AI applications may not adhere to company security protocols, leaving sensitive data exposed to breaches or leaks.

  2. Compliance Issues: Many industries are subject to strict regulations regarding data use and privacy (e.g., GDPR in Europe, similar principles under the New Zealand Privacy Act). Shadow AI initiatives might inadvertently violate these laws.

  3. Quality Control and Reliability: Without formal oversight, AI systems may produce inconsistent or unreliable outputs, potentially leading to costly errors or reputational damage.

  4. Resource Allocation: Unmonitored development of AI applications can lead to inefficient use of company resources, diverting attention from prioritized projects.

Case Study: A New Zealand Perspective

Consider a hypothetical scenario at a medium-sized tech firm in Auckland. An ambitious software developer creates an internal recommendation engine using machine learning tools to suggest products for upselling to existing customers. While initially successful, the tool inadvertently begins recommending incompatible or overpriced products due to flawed data inputs.

This led to customer dissatisfaction and increased returns, ultimately impacting sales revenue. Moreover, since this project was unauthorized, it bypassed essential security checks, risking exposure of sensitive consumer data.

Practical Solutions for Managing Shadow AI

To harness the benefits while mitigating the risks of Shadow AI, organizations in New Zealand can adopt several strategies:

  1. Foster a Culture of Transparency: Encourage employees to share their ideas and innovations with IT departments. Regular hackathons or innovation labs could provide structured opportunities for exploration within safe boundaries.

  2. Develop Clear Policies: Establish comprehensive guidelines on the use of AI technologies, including approval processes, security protocols, and compliance requirements. Ensure all staff are aware of these policies through training sessions.

  3. Implement Monitoring Mechanisms: Use monitoring tools to track unusual data usage patterns that might indicate Shadow AI activity. This can help identify unauthorized projects early, allowing for timely intervention.

  4. Promote Collaborative Innovation: Create cross-departmental teams including IT, legal, and compliance experts to evaluate employee-driven AI initiatives. This approach ensures that innovative ideas are developed with the necessary oversight and expertise.

  5. Provide Resources and Support: Offer employees access to official channels and resources for developing AI projects. By providing support through sanctioned platforms, companies can guide innovation within safe and productive frameworks.

  6. Regular Audits and Reviews: Conduct regular audits of all AI applications in use within the organization. This helps ensure compliance with company policies and regulatory requirements, as well as maintaining data integrity and security.

Conclusion

Shadow AI presents both opportunities and challenges for New Zealand workplaces. By understanding its implications and taking proactive steps to manage it, organizations can foster an environment where innovation thrives without compromising on security or compliance. Through transparency, collaboration, and clear governance, businesses can effectively navigate the complex landscape of AI in the modern workplace.

As AI continues to evolve, staying informed about trends like Shadow AI will be crucial for New Zealand companies aiming to remain competitive while safeguarding their operations and reputation.



Justin Flitter

Founder of NewZealand.AI.

http://unrivaled.co.nz