At DigitalOcean, our mission is simple: provide the tools and infrastructure needed to scale exponentially and accelerate successful cloud journeys. Many Independent Software Vendors (ISVs) and startups, including Snipitz, ScraperAPI, NitroPack, Zing, and Bright Data, have scaled successfully and grown rapidly on DigitalOcean Kubernetes.
DigitalOcean Kubernetes stands out as a managed Kubernetes platform due to its simplified user experience, fixed and predictable pricing model, ample egress data transfer, and versatile range of virtual machines. These features make it an attractive choice for businesses seeking a reliable and cost-effective solution to deploy and scale their applications on Kubernetes.
In Part 1 of the series, we covered the challenges in adopting and scaling on Kubernetes, as well as “Developer Productivity” best practices. As businesses grow and their applications become more complex, observability becomes a critical component of their production environment. Observability helps identify issues, resolve them promptly, and optimize the environment for better performance and resource utilization.
In Part 2, we focus on observability best practices for DigitalOcean Kubernetes. We will first look at the big picture of observability, discussing its components and their importance. Then, we will explore the common challenges businesses face when implementing observability in a Kubernetes environment. Finally, we will provide a comprehensive checklist of best practices to help you achieve effective observability for your applications running on DigitalOcean Kubernetes.
Observability and monitoring are often used synonymously, but they differ in their approach and scope. Monitoring focuses on tracking predefined metrics and triggering alerts when thresholds are breached, while observability provides a holistic view of your system’s state by combining metrics, logs, events, and traces. Observability in a Kubernetes environment encompasses various components that work together to provide visibility into the health and performance of your applications and infrastructure. Observability enables you to gain deeper insights and diagnose issues more effectively. Traditionally, observability relies on three pillars: metrics, logs, and traces. However, in the context of Kubernetes deployments, events play a significant role in troubleshooting and gaining insights into your cluster’s health. Therefore, we will explore these four pillars of Kubernetes observability as shown below.
container_cpu_usage_seconds_total{container_name="oauth-server", namespace="production"}[5m]

This metric measures the total CPU time consumed by the oauth-server container in the production namespace over the last 5 minutes.

2024-03-14T10:00:00Z ERROR [oauth-server] Failed to connect to database: timeout exceeded.

This log shows the oauth-server failing to connect to its database because a connection timeout was exceeded.

2024-03-14T10:05:00Z INFO [kubelet] Successfully pulled image "myapp:latest" for pod "myapp-pod" in namespace "production".

This log shows kubelet's success in pulling the latest image for "myapp-pod" in the "production" namespace.

Trace ID: 12345. Operation: GET /api/v1/users. Duration: 250ms. Status: Success.

This trace captures a successful GET request to the /api/v1/users endpoint, taking 250 milliseconds to complete.

Alert: CPU utilization for pod "api-server" in namespace "production" exceeds 80% for more than 5 minutes.

This alert notifies that the CPU utilization of the "api-server" pod has been above 80% for over 5 minutes, potentially indicating an issue.
Observability spans multiple layers:
The underlying platform (Kubernetes control plane, worker nodes, networking, and storage)
Your applications (microservices, containers, and workloads)
Business data (application logs, user interactions, and domain-specific metrics)
By capturing and correlating data from the above layers, you can gain a comprehensive understanding of your system’s behavior and detect issues more effectively.
Another crucial aspect to consider is whether you operate a single cluster or multiple clusters. In a multi-cluster environment, observability becomes even more critical as you need to aggregate and correlate data across different clusters, potentially spanning multiple regions or cloud providers.
For an ISV or startup, it’s essential to strike a balance between the observability data you collect and the insights you need. Developers may require more granular data for debugging and optimizing specific components, while operators and site reliability engineers (SREs) may focus on higher-level metrics and events that provide a comprehensive view of the overall system’s health.
With this big picture in mind, let us walk through an example of how to use observability data to troubleshoot an issue, find the root cause, and take action.
Imagine you’re running a popular e-commerce application on Kubernetes, and during a peak sales period, you start receiving complaints from customers about slow response times and intermittent errors when adding items to their shopping carts. How do you go about identifying the root cause of this issue and resolving it?
Let’s walk through this hypothetical scenario:
Metrics reveal performance degradation: Your monitoring dashboard shows a spike in the 95th percentile response times for the shopping cart microservice, indicating potential performance issues. Additionally, you notice increased CPU and memory utilization on the nodes running this service.
Logs provide context: By analyzing the application logs, you discover that the shopping cart service is logging frequent errors related to database connection timeouts. This could potentially explain the performance degradation and intermittent errors experienced by customers.
Traces highlight latency: You turn to distributed tracing and notice that requests to the shopping cart service are taking significantly longer than usual, with most of the latency occurring during the database interaction phase.
Events point to resource contention: Reviewing the Kubernetes events, you find that several nodes in the cluster have been experiencing high memory pressure, leading to frequent kernel OOM (Out-of-Memory) events and pod evictions.
Correlation and root cause identification: By correlating the information from metrics, logs, traces, and events, you can piece together the root cause; the increased traffic during the peak sales period has led to resource contention on the nodes hosting the shopping cart service and its database. This resource contention has caused database connection timeouts, resulting in slow response times and intermittent errors for customers.
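To make step 4 concrete, here is a sketch of how you might inspect recent events and node pressure with kubectl (the node name is a placeholder):

```bash
# List recent events across all namespaces, newest last, to spot OOM kills and evictions
kubectl get events -A --sort-by=.lastTimestamp | tail -n 20

# Inspect a suspect node's conditions (e.g., MemoryPressure) and allocated resources
kubectl describe node <node-name>
```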
With this insight, you can take immediate action to resolve the issue, such as scaling out the shopping cart service and its database. Additionally, you can set up appropriate alerts and notifications to detect similar issues proactively in the future.
This example demonstrates the power of observability in quickly identifying and diagnosing issues within complex distributed systems. By leveraging metrics, logs, traces, and events, and correlating data from these sources, you can gain deep visibility into your application’s behavior and pinpoint the root cause of performance problems or failures, ultimately enabling faster resolution and better user experiences.
Implementing effective observability in a Kubernetes environment can present several challenges, especially for startups and ISVs with limited resources. Here are some common challenges and considerations:
Data volume and signal-to-noise ratio: Kubernetes environments can generate a vast amount of observability data, including metrics, logs, traces, and events. Sifting through this deluge to identify relevant signals and actionable insights can be overwhelming and consumes time a small team cannot spare.
Storage costs: Storing and retaining observability data for extended periods may not be justified unless needed for security or compliance reasons. Finding the right balance between data retention policies and storage costs is crucial to ensure optimal cost-efficiency while maintaining necessary historical data for analysis and compliance.
Data correlation and context: Observability data from different sources (metrics, logs, traces, events) can be siloed, making it challenging to correlate and derive meaningful insights. Proper dashboards and alerts are key to getting good insights.
Alerting and notification management: Defining appropriate alerting rules and managing notifications effectively can be a challenge: rules that are too sensitive cause alert fatigue, while rules that are too lax let real incidents go unnoticed.
Scaling and multi-cluster observability: As businesses grow and their Kubernetes footprint expands across multiple clusters or regions, observability becomes increasingly complex. Aggregating and correlating observability data from multiple sources while maintaining visibility and control can be a significant challenge for ISVs with limited resources.
Security and compliance: Observability data can contain sensitive information, such as application logs or user-related data. ISVs must ensure proper access controls, data encryption, and compliance with industry regulations and standards, which can add complexity and overhead to their observability implementations.
To address these challenges effectively, ISVs should consider adopting observability best practices tailored to their specific needs and constraints as discussed in the following section.
Implementing effective observability in a Kubernetes environment requires a structured approach and adherence to best practices. Here’s a checklist of key recommendations.
Observability is an ongoing process; it is not a one-time implementation cost. Your observability needs will change as your Kubernetes environment evolves. Embrace an iterative approach, and continuously refine and optimize your observability practices to adapt to new requirements, emerging technologies, and changing workloads.
Before embarking on your observability journey, define clear goals and objectives. These goals can be simple and focused, such as:
Enhancing visibility into the system and applications
Improving Mean Time to Detection (MTTD) for issues
Reducing Mean Time to Resolution (MTTR) for incidents
Metrics are an absolute necessity for any observability strategy. Begin your observability journey with metrics; they provide a foundation for understanding system behavior and performance.
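To make this concrete, here are a few illustrative PromQL starter queries; the first two use standard metrics from cAdvisor and kube-state-metrics, while http_request_duration_seconds is an assumed application-level histogram, not something Kubernetes exposes by default:

```promql
# Per-pod CPU usage in the production namespace, averaged over 5 minutes
sum by (pod) (rate(container_cpu_usage_seconds_total{namespace="production"}[5m]))

# Containers that restarted in the last hour (kube-state-metrics)
increase(kube_pod_container_status_restarts_total[1h]) > 0

# 95th percentile request latency, assuming your app exposes a
# http_request_duration_seconds histogram
histogram_quantile(0.95, sum by (le) (rate(http_request_duration_seconds_bucket[5m])))
```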
Gradually incorporate logs and events into your observability stack as you mature along your observability journey. Logs provide detailed information about application behavior and can help in troubleshooting and root cause analysis. Events offer insights into the state changes and significant occurrences within your Kubernetes cluster.
It’s generally not recommended for ISVs at SMB scale to start with distributed tracing unless you have a clear understanding of its complexities and benefits.
Leveraging a SaaS observability platform is a best practice for ISVs and startups with limited resources because it allows them to focus on their core business objectives while benefiting from enterprise-grade observability capabilities. By outsourcing the observability infrastructure to a managed service provider, teams can reduce operational overhead, minimize the need for specialized expertise, and ensure scalability and reliability of their observability stack.
SaaS observability platforms offer a wide range of features and benefits, including:
Centralized data collection for metrics, logs, and events.
Scalability and reliability by handling large volumes of observability data without the need to manage the underlying infrastructure.
Pre-built integrations with popular Kubernetes distributions, monitoring tools, and logging frameworks.
Powerful querying and visualization with pre-built dashboards.
Alerting and notifications.
Collaboration among team members through shared dashboards, alerts, and insights.
Most ISVs and startups have limited resources and need to focus on core business. Leveraging Software-as-a-Service (SaaS) observability solutions is a good option. Managed services like Logtail, Papertrail, Datadog, New Relic, Elastic Cloud, or Grafana Cloud can provide a comprehensive observability platform with minimal operational overhead, allowing you to focus on core business objectives while benefiting from scalable, enterprise-grade observability. When evaluating SaaS observability platforms, consider factors such as pricing, ease of use, integrations with your existing tools and platforms, and customer support.
Using the kube-prometheus-stack is a best practice for self-hosted observability because it provides a battle-tested and integrated solution tailored specifically for Kubernetes environments. By leveraging this stack, teams can quickly set up a robust monitoring and alerting system without the need for extensive configuration and integration efforts. The stack follows best practices and provides a solid foundation for Kubernetes observability.
The kube-prometheus-stack is a collection of Kubernetes manifests, Grafana dashboards, and Prometheus rules that provide a comprehensive and easy-to-deploy monitoring and alerting stack. The stack includes popular open-source tools such as Prometheus, Grafana, and Alertmanager with best-practice alerts, preconfigured to work seamlessly with Kubernetes. The stack can be extended to monitor and analyze Kubernetes events and logs, providing valuable insights into cluster state and resource changes. We recommend the Kubernetes starter kit (chapter 4 - Observability) tutorial for customizing your installation, including data management.
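As a minimal sketch (the release and namespace names are illustrative), installing the stack from the community Helm chart looks like this:

```bash
# Add the community chart repository and install the full stack
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install kube-prom-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```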
We recommend Loki for logs with Grafana. Loki is a scalable and highly available, multi-tenant log aggregation system by Grafana Labs focusing on simplicity and efficiency. It aims to provide a cost-effective solution for storing and querying large volumes of log data (in S3/Spaces store). Unlike traditional log aggregation systems that index the contents of the logs, Loki allows users to search logs using labels rather than requiring full-text search. This design choice significantly reduces the storage and computational requirements. Loki integrates seamlessly with Grafana, enabling rich querying and visualization capabilities.
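As a hedged sketch, one common installation path is Grafana's Helm charts (the release name and namespace are illustrative):

```bash
# Install Loki plus Promtail, which ships node/container logs into Loki
helm repo add grafana https://grafana.github.io/helm-charts
helm install loki grafana/loki-stack --namespace monitoring \
  --set promtail.enabled=true
```

In Grafana, a LogQL query then filters by labels first and text second; for example (label names depend on how your logs are shipped):

```
{namespace="production", app="oauth-server"} |= "timeout exceeded"
```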
To further enhance the alerting capabilities of the kube-prometheus-stack, consider integrating tools like Robusta. Robusta can enrich alerts from Alertmanager and Kubernetes events, providing additional context and streamlining alert management. It helps in identifying and responding to issues proactively.
When using Grafana dashboards, it’s recommended to tailor them to cater to different user personas. Developers may require more granular information for debugging and optimization, while operators and SREs might benefit from higher-level views of system health and performance. Customizing dashboards based on user roles improves productivity and provides actionable insights.
Keeping observability costs under control involves implementing strategies to manage and optimize the storage and retention of observability data. As Kubernetes environments grow and generate increasing amounts of metrics, logs, and events, the storage requirements for this data can quickly escalate, leading to substantial costs if not properly managed.
To understand the importance of cost control, let’s consider an example of a 10-node Kubernetes cluster. Suppose each node generates an average of 100 MB of log data per day and 100 metrics per minute. In this scenario, the daily storage requirements would be:
Log data: 10 nodes × 100 MB/day = 1 GB/day
Metrics data: 10 nodes × 100 metrics/minute × 1,440 minutes/day × 8 bytes/metric ≈ 11.5 MB/day
This works out to approximately 30 GB of logs and 0.35 GB of metrics per month. In practice, nodes often expose thousands of series scraped every 15-30 seconds, so real volumes are far higher and can quickly add to your costs.
To keep costs under control, consider the following strategies:
Data collection optimization: Select the metrics, logs, and events that are critical for your observability needs. Leverage filtering and aggregation techniques to reduce data volume before storage.
Data retention policies: Define clear data retention policies based on your observability requirements and compliance needs. Implement tiered retention policies, storing high-resolution data for a shorter period and aggregated data for longer durations.
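As one concrete example, in the kube-prometheus-stack these retention limits map to chart values like the following (the numbers are illustrative, not recommendations):

```yaml
# values.yaml excerpt for kube-prometheus-stack
prometheus:
  prometheusSpec:
    retention: 15d       # drop raw samples older than 15 days
    retentionSize: 9GB   # also cap total on-disk TSDB size
```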
Many ISVs operate multiple Kubernetes clusters. While you can still manage with standalone deployments of the kube-prometheus-stack and good alerting (e.g., Slack integration), centralizing observability becomes a best practice in these situations.
Centralizing observability provides the following benefits:
Unified visibility: By aggregating observability data from multiple clusters, you can obtain a single pane of glass view of your entire Kubernetes environment.
Simplified troubleshooting: Centralized observability allows you to quickly identify and investigate issues that span across multiple clusters.
Consistent monitoring and alerting: With a centralized observability solution, you can define and enforce consistent monitoring and alerting policies across all your clusters.
Efficient resource utilization: Centralizing observability helps you optimize resource utilization by providing insights into the performance and scalability of your applications across clusters.
The above diagram depicts such an architecture. To centralize observability in a multi-cluster environment, you can leverage tools like Grafana Mimir or Thanos. These tools are designed to aggregate and federate observability data from multiple Prometheus instances, which are commonly used for monitoring Kubernetes clusters.
Grafana Mimir is a highly scalable, distributed time-series database that can ingest and store metrics from multiple Prometheus servers. You connect Mimir to Grafana as a single data source, which saves configuration effort and means you do not have to expose the Prometheus service on each cluster. This gives you a global query view across all connected clusters, enabling cross-cluster analysis and visualization. Mimir also offers horizontal scalability, high availability, and long-term storage capabilities.
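In practice, each cluster's Prometheus ships its metrics to Mimir via remote_write. A minimal sketch, assuming a central Mimir endpoint (the URL is illustrative) and one tenant per cluster:

```yaml
# prometheus.yml excerpt on each workload cluster
remote_write:
  - url: http://mimir.central.example.com/api/v1/push
    headers:
      X-Scope-OrgID: cluster-a   # Mimir tenant ID identifying this cluster
```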
When centralizing observability, consider the following aspects:
Data aggregation: Determine the metrics and logs that need to be aggregated from each cluster and configure your observability tools accordingly.
Query performance: Ensure that your centralized observability solution can handle the query load and provide fast response times, even when dealing with large volumes of data from multiple clusters.
Data retention: Define data retention policies for your centralized observability system, taking into account the storage requirements and the need for historical data analysis.
Access control: Implement proper access control mechanisms to ensure that users can only access and view observability data relevant to their roles and responsibilities.
Observability is an ongoing journey; continuous improvement and adaptation are key to success. Regularly review and refine your observability practices to align with evolving business needs and technological advancements.
As we continue to explore the ISV journey of Kubernetes adoption, our ongoing blog series will delve deeper into the resilience, efficiency, and security of your deployments.
Developer Productivity (Part 1): Maximize developer productivity by streamlining the development and deployment process in Kubernetes environments.
Observability (this post): Unpack the tools and strategies for gaining insights into your applications and infrastructure, ensuring you can monitor performance and troubleshoot issues effectively.
Reliability and scale (Part 3): Explore how to manage zero-downtime deployments, readiness/liveness probes, application scaling, DNS, and CNI to maintain optimal performance under varying loads.
Disaster preparedness (Part 4): Discuss the importance of having a solid disaster recovery plan, including backup strategies, practices and regular drills to ensure business continuity.
Security (Part 5): Delve into securing your Kubernetes environment, covering best practices for network policies, access controls, and securing application workloads.
Each of these topics is crucial for navigating the complexities of Kubernetes, enhancing your infrastructure’s resilience, scalability, and security. Stay tuned for insights to help empower your Kubernetes journey.
Ready to embark on a transformative journey and get the most from Kubernetes on DigitalOcean? Sign up for DigitalOcean Kubernetes: start here.
If you’d like to see DigitalOcean Kubernetes in action, join this OnDemand webinar to watch a demo and learn more about how DigitalOcean’s managed Kubernetes service helps simplify adoption and management of your Kubernetes environment.
DigitalOcean App Platform is loved by developers and startups for its simplicity and hands-free experience. It is a fully managed platform-as-a-service (PaaS) solution that allows users to effortlessly deploy their applications by simply providing their code (via a git repository) or a pre-built container image. App Platform takes care of the entire application lifecycle, from building and deploying to monitoring and scaling, removing the complexity of managing the underlying infrastructure.
In the past, customers had to manually scale their app or write their own scripts for automating the scaling. This made managing dynamic apps a difficult experience on App Platform.
App Platform now offers CPU-based autoscaling, a powerful feature that allows you to automatically scale your application components horizontally based on CPU utilization metrics. This capability helps ensure that your applications can seamlessly handle fluctuating demand while optimizing resource usage and minimizing costs. You can configure autoscaling using either the user interface or via appspec.
CPU-based autoscaling works as follows:
Metric collection: App Platform continuously collects CPU usage metrics from the containers running your application components.
Threshold monitoring: The autoscaling system compares the average CPU utilization across all containers for a given component against the configured CPU threshold.
Automatic scaling: When the average CPU usage exceeds the configured threshold, App Platform automatically scales up the component by cloning the current deployment and adding more container instances. Conversely, if the CPU usage falls below the threshold, the system scales down by removing excess instances. Scaling always stays within the configured minimum and maximum instance counts.
Configuring CPU-based autoscaling is straightforward. It is supported for any App Platform component with dedicated instances. You can use either the user interface or the appspec to configure the minimum and maximum instance counts and the CPU threshold.
The App Platform console provides a user-friendly interface to configure autoscaling settings for any component with dedicated instances, as follows.
You can also configure autoscaling parameters within your appspec (via Create a New App or Update an App). In the example below, the my-service component will automatically scale between 2 and 10 instances, based on the average CPU utilization across all instances. If the average CPU usage exceeds 80%, the system will add more instances, and if it falls below 80%, instances will be removed.
alerts:
- rule: DEPLOYMENT_FAILED
- rule: DOMAIN_FAILED
ingress:
  rules:
  - component:
      name: sample-nodejs
    match:
      path:
        prefix: /
name: plankton-app-2
region: nyc
services:
- autoscaling:
    max_instance_count: 10
    min_instance_count: 2
    metrics:
      cpu:
        percent: 80
  environment_slug: node-js
  github:
    branch: main
    deploy_on_push: true
    repo: digitalocean/sample-nodejs
  http_port: 8080
  instance_size_slug: professional-xs
  name: sample-nodejs
  run_command: yarn start
  source_dir: /
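If you keep the spec in a file, applying it with doctl might look like this (the app ID and filename are placeholders):

```bash
# Validate the spec locally, then push it to an existing app
doctl apps spec validate app-spec.yaml
doctl apps update <app-id> --spec app-spec.yaml
```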
For more information on configuring autoscaling, please refer to the product documentation.
We can’t wait to see how you leverage this new feature to build and scale your applications more efficiently. If you have any questions or feedback, please don’t hesitate to reach out to our support team.
Happy scaling!
Now, with almost 300 DigitalOcean Marketplace 1-Click Apps, we are excited to offer software license subscriptions on DigitalOcean Marketplace as Add-Ons. Customers will no longer have to purchase their software licenses through a third-party vendor; instead, they will be able to purchase the licenses directly in the Marketplace and have them injected into their Droplets.
Initially launching in collaboration with Plesk, customers will now be able to purchase their Plesk Obsidian software license directly through DigitalOcean Marketplace. Plesk is one of the leading web hosting control panels and management platforms, providing users with a simple yet performant platform developed for modern website hosting. With Plesk on DigitalOcean, customers will have access to a modern and lightweight application to manage their servers and websites through one intuitive browser-based interface.
One of the biggest challenges that small and medium businesses face is building, scaling and managing their technology stacks. At DigitalOcean, we have an expansive selection of 1-Click Apps and Add-Ons to help you accelerate your growth as a company. With just a single click and a few minutes, you can deploy apps and add-ons that seamlessly integrate into your DigitalOcean cloud environment, eliminating the need for complicated setups and extensive configuration.
With this update, our goal is to make it easier for customers to provision the license they need to unlock the software they are already deploying and using. The addition of license add-ons allows customers to scale their usage of the Marketplace by centralizing their license billing and management through DigitalOcean. Users will be able to purchase licenses using their trusted DigitalOcean account, manage their licenses all within the DO UI, and have the ability to upgrade/downgrade/end their license subscriptions easily and at any time through the DigitalOcean console.
We are actively working on the vendor experience so that more companies can list their software license on DigitalOcean Marketplace. This new feature will allow vendors that offer their product through licensing to expand rapidly, enabling countless product integrations and streamlining access to the DigitalOcean user base.
If you or your team offers licensed software that can be hosted on a Droplet, wants to generate a new revenue stream, and is excited to reach 600,000+ passionate builders across the world, then DigitalOcean Marketplace is an ideal fit for you.
Visit the DigitalOcean Marketplace Vendor Page for information and to register today.
Starter Plan: For anyone who wants general guidance and troubleshooting. This plan is included for all customers.
Developer Plan: For teams developing and testing with non-production workloads.
Standard Plan: For teams deploying and maintaining production workloads.
Premium Plan: For businesses serving large customer bases with mission-critical applications.
To continue providing value and support to our customers as they work to achieve their dreams in the cloud, we are excited to announce new, enhanced features coming to the DigitalOcean Support Plans at no additional cost.
Coming to all plans:
Coming to the Standard Plan and Premium Plan:
Coming to the Premium Plan:
Customer monthly report: An individualized customer report that provides a monthly overview of account usage and activity. This report offers invaluable insight to customers about their resource usage and patterns on the DigitalOcean platform each month. This in-depth report will include details on everything that the customer runs on DigitalOcean with monthly usage analysis for the past 3-12 months.
Higher API limit: Customers can now enjoy a higher hourly limit per API OAuth token. This higher default limit improves the functionality of highly interactive DigitalOcean platforms for customers with large amounts of data.
These new features, in conjunction with our existing features, will offer customers the support they need to help ensure their workloads are secure, running efficiently, and accounted for. With DigitalOcean Support Plans, customers can have peace of mind knowing that our knowledgeable team of engineers provides both support and problem resolution by tending to workload needs. We know how much you rely on DigitalOcean to host your applications and protect your data, which is why we are excited to offer these extended features to our support plans. These features will provide you with greater peace of mind so you can continue to serve your customers with ease.
To take advantage of DigitalOcean’s extensive support plans and enhanced features, please review the support plan details and pricing structures. To learn more about which support plan is right for your business, contact our sales team to discuss further.
These innovators greatly benefit from adopting DigitalOcean’s Kubernetes platform to efficiently and effectively scale their applications. By leveraging the potential of Kubernetes, ISVs can easily manage containerized applications and automate the deployment, scaling, and management of those applications. This allows ISVs to rapidly scale their services without worrying about the underlying infrastructure, enabling them to focus more on developing their software. In addition, Kubernetes on DigitalOcean provides features such as auto-scaling, load balancing, and self-healing capabilities, which cater directly to the needs of ISVs looking to maintain reliable and high-performance applications for their customers. As a result, ISVs can streamline their operations, reduce overhead costs, and seamlessly scale their applications as their business grows.
DigitalOcean Kubernetes (DOKS) offers a fully managed, CNCF-compliant Kubernetes service that stands out for its simplicity, affordability, and powerful ecosystem designed to streamline operations beyond initial deployment. DOKS provides an exceptional return on investment for Independent Software Vendors (ISVs), startups, and growing digital businesses through:
Simplified user experience: DOKS simplifies the Kubernetes experience, requiring just a single command or one click in the UI to create a cluster (see the sample command after this list). This streamlined approach extends to simple configuration for the cluster autoscaler, horizontal pod autoscaler, load balancers, DBaaS, and block storage. DOKS provides production-grade reliability for non-HA clusters through fast control plane repairs, minimizing downtime and technical overhead.
Fixed and predictable pricing model: Customers enjoy a transparent and predictable cost structure with DOKS. There are no control plane fees unless you choose a High Availability (HA) setup. Additionally, there are no fees for surge upgrades, which allow for up to 10 extra nodes to be created during upgrades. Importantly, billing only commences once a node becomes an active part of the cluster, not from the moment it is booted, ensuring you pay only for what you use. DigitalOcean Container Registry (DOCR) has fixed-price tiers, and currently no extra charges for egress.
Ample egress data transfer: Each Droplet in DOKS comes with a generous egress data transfer pool, ranging from 500GB to over 5TB per Droplet. This ample bandwidth allocation ensures that the majority of users will never exceed their free bandwidth quota, reducing unexpected costs.
Versatile range of virtual machines: Catering to a wide array of workloads, DOKS offers a versatile selection of worker nodes (Droplets). Whether you need shared resources for cost-efficiency or dedicated resources for performance, or if your workload demands CPU and memory optimization, DOKS has options to suit your requirements.
Marketplace add-ons for Kubernetes streamline day 2 operations—such as ingress, monitoring, and logging—via DigitalOcean’s Kubernetes 1-clicks. Snapshooter offers backup and restoration for Kubernetes applications. For a managed SaaS experience, users can select from various SaaS add-ons available in the marketplace.
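To illustrate the single-command experience mentioned above, creating a cluster with an autoscaling node pool via doctl might look like this (the cluster name, region, and size slug are placeholders):

```bash
doctl kubernetes cluster create demo-cluster \
  --region nyc1 \
  --node-pool "name=default;size=s-2vcpu-4gb;count=3;auto-scale=true;min-nodes=2;max-nodes=5"
```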
Our customers, finding success in industries ranging from AI & data platforms to web applications, online learning platforms, blockchain, video streaming, gaming, broadcasting, and digital marketing, showcase the versatility and capability of DOKS to support a wide spectrum of services. For instance, Bright Data leverages DOKS for web data indexing at scale, NitroPack accelerates site speeds for CMS-based websites, Atom Learning enhances online education, and Shoppermotion innovates in retail analytics through IoT. These examples highlight how businesses across different industries utilize DOKS to drive efficiency, innovation, and scalability.
“Our load is dynamic, low on weekends and high during the week. We used the DigitalOcean API to create Droplets, put an image on them, and set up our system, but at the end of the day, that’s not as powerful as Kubernetes. We wanted to solve our underutilized Droplets fast, and the only solution that came to mind was DigitalOcean Kubernetes.” - Nir Borenshtein, COO, Bright Data
Discover more about how these ISVs thrive on DOKS by exploring our customer case studies, including architecture diagrams, under the Kubernetes section of the DigitalOcean Customer Case Studies portal. In the following sections, we will focus specifically on the enablement path for the ISV journey.
Efficiency is paramount for Independent Software Vendors (ISVs). Many operate with lean development teams and move from discovery to production in just weeks. Kubernetes has become the go-to platform for containerized workloads because of its scalability, portability, and rich ecosystem. It is increasingly common for new products to be Kubernetes-supported from their first release.
Managed Kubernetes services streamline platform management. Many, however, do not handle any day-2 operations. Here’s a closer look at what these platforms typically handle versus what they leave to users:
| Managed responsibilities | User responsibilities |
|---|---|
| Control plane management: Ensures the central orchestration layer is running smoothly. | Application and connectivity monitoring: Overseeing the performance and health of deployed applications. |
| Platform software upgrades: Keeps the Kubernetes version up to date with the latest stable releases. | Observability: Implementing systems to collect, aggregate, and analyze logs, metrics, and traces. |
| Disaster recovery: Restores the control plane and application configurations in case of catastrophic failures. | Application backups and disaster recovery: Safeguarding application data against loss or corruption. |
Creating a Kubernetes cluster is straightforward—often just a single command away. Yet, deploying and managing production applications requires considerable additional effort. Regular code updates and varying workload demands (e.g., sudden spikes in streaming services) necessitate careful planning and execution. The challenges typically fall into several key areas:
Automation: While initial deployments are simple, maintaining and updating applications can become complex due to unique configurations. Automation is essential for efficiency and consistency. DOKS provides ready-made blueprints and Terraform scripts for automation.
Developer Productivity: Traditional deployment methods can hinder productivity. Fast, efficient development cycles are crucial, requiring optimized processes for building and deploying images.
Observability: Kubernetes generates a vast amount of logs and metrics. Although it simplifies log and metric collection, a dedicated platform for aggregation and analysis is essential. DOKS provides Marketplace 1-clicks for observability (the Prometheus/Grafana/Loki stack) and a 1-click Kubernetes Dashboard. Additionally, Cilium Hubble with flow logs is enabled by default for network troubleshooting.
Scale: Applications may require DNS scaling or rapid autoscaling. High pod density on nodes demands optimal performance from the underlying CNI, and common scaling challenges center on the cluster autoscaler, DNS, and CNI performance. DOKS runs its control plane components as containers, so the control plane is designed to scale rapidly when a spike in business drives cluster scaling.
Troubleshooting: Issues can emerge in various areas, including ingress setup, cluster upgrades, storage connections, and resource allocation. Preparedness and knowledge are key to effective resolution.
Disaster preparedness: While the cloud provider secures the control plane, application data and configurations need separate backup and recovery strategies. SnapShooter (part of DigitalOcean) now supports DOKS cluster discovery and backup.
Security: Good security practices should be an integral part of the developer and cluster lifecycle.
Fortunately, addressing these challenges primarily requires a one-time investment in automation and planning, except for ongoing troubleshooting efforts.
In the following section and future blog posts, we will explore patterns and best practices derived from our experiences with a diverse range of customers, aiming to navigate these challenges successfully on DigitalOcean.
Maximizing developer productivity involves streamlining the development and deployment process in Kubernetes environments. This checklist provides targeted recommendations to enhance efficiency and reduce overhead.
Explore and adopt tools that improve productivity. Some examples include the following, ordered by usefulness; a few sample invocations follow the list.
k9s: This terminal-based UI tool improves cluster management by providing a real-time view of cluster activity and resources, making it easier to monitor and manage applications.
stern: Tail multiple pod logs concurrently with stern. It’s invaluable for debugging complex issues that span multiple services.
k8sgpt: Leverage AI to help with troubleshooting.
kubectx/kubens: Quickly switch between clusters and namespaces, streamlining workflow when managing multiple environments.
k8syaml: A tool that helps generate and validate Kubernetes YAML files, ensuring your configurations are correct and ready for deployment.
kustomize: Embrace configuration as code by customizing application resources without altering their original manifests, facilitating a more manageable and repeatable deployment process.
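A few sample invocations of these tools (cluster, namespace, and pod names are illustrative):

```bash
kubectx production-cluster             # switch kubectl context to another cluster
kubens production                      # make "production" the default namespace
stern "shopping-cart-.*" --since 15m   # tail logs from all pods matching the regex
k9s -n production                      # open the interactive terminal UI
```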
Traditional Kubernetes deployment cycles involve multiple steps: containerizing an application, pushing the image to a registry, updating the manifest, and applying it to the cluster. This process can significantly slow down development, especially during rapid iteration phases. The Inner Loop Development approach optimizes the cycle of writing code and observing its effects in a live environment; enabling developers to see immediate feedback on each change dramatically increases efficiency.
Some tools for inner loop development include the following; a minimal Skaffold sketch follows the list.
Skaffold: Automates many of the tasks involved in building, pushing, and deploying applications, making it easier to iterate on code changes.
Tilt: Focuses on optimizing the development cycle by monitoring file changes and automatically updating the environment in real-time.
Telepresence: Creates a bidirectional network bridge between your local development environment and the Kubernetes cluster, allowing for seamless testing and debugging.
DevSpace: Offers a streamlined workflow for developing and deploying applications to Kubernetes, including powerful features for building, testing, and debugging directly in the target environment.
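As a minimal sketch of the inner loop with Skaffold (the image name and manifest paths are illustrative), a skaffold.yaml like the following lets `skaffold dev` rebuild and redeploy on every file change:

```yaml
# skaffold.yaml
apiVersion: skaffold/v4beta6
kind: Config
build:
  artifacts:
    - image: registry.digitalocean.com/my-registry/my-app   # built from ./Dockerfile
manifests:
  rawYaml:
    - k8s/*.yaml   # Kubernetes manifests that reference the image above
deploy:
  kubectl: {}
```

Running `skaffold dev` then watches your source tree, rebuilds the image, and redeploys to the current kubectl context, closing the feedback loop.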
CI/CD readiness is not merely about selecting the right tools; it is a journey that combines DevOps practices and automation to significantly influence success.
The pathway from code to deployment can be divided into four key stages:
Git Repository Branching Strategy: Choosing the right branching strategy is crucial. For SMBs with smaller teams, GitHub Flow is often ideal due to its simplicity and the pull request (PR) based approach to code commits. This fosters a collaborative and iterative development process, encouraging code review and feedback. Note that there are other branching strategies that are also widely used, for example GitLab Flow, and Trunk based Development.
Build Pipeline Execution: Triggered by a PR or on a scheduled basis, this stage involves building the container image from the committed code and pushing the image to your container registry. It’s essential for ensuring that the code is packaged correctly and ready for deployment (see the workflow sketch after this list).
Application Manifests Configuration: Once the new image is built, updating the application manifests with the latest image configuration is necessary. This step ensures that the deployment will use the correct version of your application.
Rollout Strategy: Deploying the image to the cluster is the final stage of your CI/CD pipeline. Strategies like direct deployment, blue-green deployments, or canary rollouts can be employed based on the risk tolerance and requirements of the project.
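As a hedged sketch of stage 2, a GitHub Actions workflow that builds an image and pushes it to DigitalOcean Container Registry might look like this (registry and image names are placeholders):

```yaml
name: build-and-push
on:
  pull_request:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Authenticate doctl with a DigitalOcean API token stored as a repo secret
      - uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
      - run: doctl registry login --expiry-seconds 600
      - run: docker build -t registry.digitalocean.com/my-registry/my-app:${{ github.sha }} .
      - run: docker push registry.digitalocean.com/my-registry/my-app:${{ github.sha }}
```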
Maintaining a 24x7 staging environment in a separate cluster simplifies testing and operations. It allows for thorough vetting of changes in a production-like environment without impacting actual users.
Implement distinct roles for staging versus production environments. Restrict access to production to minimize risks and enforce stricter controls over changes.
Utilize your CI pipeline (e.g., GitHub Actions) or tools like Kaniko for building container images. Automate deployments to staging to facilitate continuous testing and validation.
For production deployment, you should start with manual approval and then progress to continuous deployment.
Manual approvals for production: Although automation streamlines operations, incorporating manual approvals into your production deployment process adds a layer of quality assurance. This extra step helps maintain the stability and quality of your production environment.
Continuous deployment: Once you’ve established robust testing procedures and automation, consider moving to a continuous deployment model. This approach enables seamless transitions from development to production, reducing time-to-market for new features and fixes.
Adopting GitOps practices, where the cluster configuration is kept in sync with a Git repository, offers a robust method for managing deployments. Tools like ArgoCD and Flux automate the synchronization, providing a clear audit trail and simplifying rollback procedures if needed.
The following diagram shows an illustrative example of deploying code to multiple environments using ArgoCD. Changes are first applied to the staging environment for testing. Once approved, the same changes are promoted to the production environment, maintaining consistency and reliability across deployments.
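A minimal sketch of such an ArgoCD Application for the staging environment (the repository URL, path, and names are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-manifests.git
    targetRevision: main
    path: overlays/staging          # environment-specific overlay
  destination:
    server: https://kubernetes.default.svc
    namespace: staging
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from Git
      selfHeal: true  # revert manual drift back to the Git state
```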
As we continue to explore the ISV journey of Kubernetes adoption, our upcoming blog series will delve deeper into resilience, efficiency, and security of your deployments.
Observability (Part 2): Unpack the tools and strategies for gaining insights into your applications and infrastructure, ensuring you can monitor performance and troubleshoot issues effectively.
Reliability and scale (Part 3): Explore how to manage zero-downtime deployments, readiness/liveness probes, application scaling, DNS, and CNI to maintain optimal performance under varying loads.
Disaster preparedness (Part 4): Discuss the importance of having a solid disaster recovery plan, including backup strategies, practices and regular drills to ensure business continuity.
Security (Part 5): Delve into securing your Kubernetes environment, covering best practices for network policies, access controls, and securing application workloads.
Each of these topics is crucial for navigating the complexities of Kubernetes, enhancing your infrastructure’s resilience, scalability, and security. Stay tuned for insights to help empower your Kubernetes journey.
Ready to embark on a transformative journey and get the most from Kubernetes on DigitalOcean? Sign up for DigitalOcean Kubernetes: start here.
With Scalable Storage, you can now add more storage with a single setting. Easily add disk storage in 10 GB increments per node, priced at just $2/month per increment. Think of it as adding data buckets to your Kafka cluster, one by one, for the perfect fit.
Scalable Storage features include:
Increased disk storage capacity: All Kafka plans now come with a range of disk storage options, which can be used when starting a new plan or upgrading an existing one. Available storage can be increased to between two and five times the plan’s starting amount.
Kafka Clusters now have more storage: Kafka Clusters now scale up to 1.5 TB of storage, enabling users to future-proof their Kafka deployments and ensure they can handle large production workloads.
Spin up a cluster in minutes: Provision highly available Kafka clusters quickly via the UI, CLI, or API within minutes (see the CLI sketch after this list). Save time and reduce the operational overhead of setting up and connecting nodes, as well as a separate ZooKeeper node. Apache ZooKeeper is open-source software that enables highly reliable distributed coordination; distributed systems commonly rely on it for configuration management, naming, synchronization, quorum and state, consensus, leader election, and group management.
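For instance, provisioning a three-node Kafka cluster from the CLI might look like this (the cluster name, region, and size slug are placeholders):

```bash
doctl databases create my-kafka \
  --engine kafka \
  --region nyc1 \
  --num-nodes 3 \
  --size db-s-2vcpu-2gb
```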
As your data volume grows and your applications require more and more insights, your Kafka cluster can become overwhelmed. This can lead to performance bottlenecks, user frustrations, and under-utilization of resources. This is where horizontal scaling emerges as a powerful solution, ready to transform your cluster from a straining engine to a smoothly-humming powerhouse.
Think of your Kafka cluster as a network of data processing pipelines. As data demands surge, these pipelines become congested, slowing the flow of information and hindering your ability to extract valuable insights. Horizontal scaling acts like adding more lanes to this data highway, distributing the workload across additional nodes: you add more nodes to your existing Kafka cluster to handle more requests, helping ensure peak performance under load.
RockerBox, a marketing analytics company focused on simplifying multi-channel marketing, took advantage of additional nodes to keep up with spiking demand for their service.
“It was easy for our team to set up a 15-node DigitalOcean Managed Kafka cluster to handle the increased traffic around Black Friday.” - Kevin Hsu, Director of Engineering
This translates to:
Boosted performance: More nodes mean more processing power, increased throughput, and reduced latency.
Scalability on demand: Avoid overspending on a massive cluster from the get-go. Horizontal scaling lets you gradually add nodes as your needs evolve, like expanding your data network one segment at a time. It’s a cost-effective approach to growth! Anticipate data spikes, add nodes, and keep the data flowing like a well-oiled machine.
Enhanced resilience: We understand the criticality of uptime. Horizontal scaling helps ensure service continuity by providing redundant nodes that pick up the slack during planned maintenance or unforeseen outages. Your users will thank you for the uninterrupted data access!
Fault tolerance: More nodes often unlock advanced data replication features, helping to safeguard your information even during major disruptions. Think of it as having multiple copies of your data.
Improved reliability: With scalable storage, customers can improve the reliability of their Managed Kafka clusters by adding additional brokers. Configure your cluster with 3, 6, 9, or 15 brokers for higher reliability in the event of a broker failure.
Remember, scaling isn’t a one-size-fits-all solution. Carefully assess your workload, budget, and operational complexity before adding nodes. However, horizontal scaling can be a game-changer for cloud-based businesses grappling with real-time data demands. It unlocks performance, capacity, resilience, and flexibility, empowering you to extract the full potential of your Kafka cluster.
DigitalOcean Managed Kafka clusters include 3 nodes (also known as “brokers”), but Dedicated-Plan clusters can be easily upgraded to 6, 9, or 15-node configurations.
Designed for growing digital businesses with simplicity and affordability, DigitalOcean Managed Kafka is now available for all your production workloads. Learn more about Managed Kafka in our docs, and start taking advantage of the benefits of Managed Kafka today by signing up for a DigitalOcean account.
Need help regarding Managed Kafka? Contact our sales team or connect with a DigitalOcean Partner who can advise you on architecture reviews, deployments, migration support, and other infrastructure assistance.
As the originator of the AI coding assistant category, Tabnine’s mission has always been to accelerate software development through AI for engineering teams of every size. That makes them an ideal partner for DigitalOcean, which is focused on simplifying development for technical users around the globe. Leveraging Tabnine, every DigitalOcean customer can now accelerate and simplify the entire software development lifecycle without sacrificing privacy, security, and compliance.
Here are some common use cases of Tabnine’s AI coding assistant:
Create - Apart from code generation, you can use Tabnine to answer coding questions, explain code syntax and structure, and get up to speed on new programming languages.
Test - Tabnine automatically generates and runs unit tests. This makes it easier to embrace a test culture and catch issues earlier in development.
Fix - If tests fail or bugs emerge, Tabnine will help with diagnosing the issue and propose code suggestions to fix the problem.
Document - Simply highlight the pieces of code and Tabnine will generate the documentation for you, and thus automate one of the most mundane tasks for your developers.
Maintain - If you need to make changes or improvements to existing code, just select the required code and Tabnine will explain the purpose of the selected code.
In addition to this, Tabnine is context-aware and provides recommendations based on your code and patterns. It understands and applies your coding standards and context-aware guidelines. Check out this demo to see Tabnine in action.
Even though the usage of AI-based coding assistants has proliferated, there are concerns around privacy and protection when using these tools. This is an area where Tabnine is truly different from the plethora of generative AI tools in the market. Unlike other tools, Tabnine is the AI coding assistant that you control. Tabnine’s models are trained exclusively on permissively licensed open source code. This helps ensure that the recommendations from Tabnine will not match any proprietary code.
Similarly, Tabnine has a zero data retention policy - they never store or save your data and never train their models on your data. This offers peace of mind and enables you to focus more on leveraging AI to accelerate your pace of innovation.
With this partnership, DigitalOcean customers will get access to Tabnine Pro plans for 1 to 10 users. To get started, simply choose the Tabnine plan that works best for you and click the ‘Add Tabnine’ button in your DigitalOcean control panel.
Tabnine supports all the major IDEs and languages. You can expedite the building of your app using Tabnine and deploy it easily on DigitalOcean’s platform via Droplets (cloud virtual machines), DigitalOcean Kubernetes, or App Platform, our fully managed platform as a service offering.
DigitalOcean’s simplicity combined with Tabnine’s AI-powered coding assistant will enable startups and developers to increase their productivity, so they can focus more on building innovative applications. We can’t wait to see what you will create with DigitalOcean and Tabnine—sign up for the Tabnine plans here and check out the docs to learn more!
Initiated through a comprehensive selection process, which included a nationwide call for funding proposals, the response was overwhelming, with 11 proposals submitted. Following a rigorous evaluation, two Pakistani non-profit and social enterprises, Behbud and Weecommerce, emerged as standout recipients of this signature initiative aimed at propelling inclusive entrepreneurship in affected regions. Behbud aims to empower women across the nation by providing them with access to health, education, income generation, and vocational training regardless of their social or racial backgrounds. Since its inception, the Weecommerce initiative has successfully launched over 40 websites for women-led businesses from the comfort of their homes, and will continue to help women entrepreneurs realize their full potential. Both partners are focused on opening up as many areas of learning as possible.
“Weecommerce is thrilled to aspire to broaden its impact on women entrepreneurs, elevating them in the field of technology with the support of Digital Ocean & BNU,” said Sabeen Khan, Program Lead, Weecommerce.
“The DO BNU grant approval is positioned to empower Behbud Association, fostering its transformation, efficiency, and long-term sustainability,” said Bushra Prevaiz Kausar, Honorary Senior Vice President, Behbud Association, Karachi. “The training initiative that we are taking with them will also empower women artisans, unlocking their entrepreneurial potential, turning aspirations into reality. This endeavor is set to create a transformative ripple effect, not only shaping the destinies of these women but also positively impacting their families and communities.”
In a demonstration of unwavering commitment, DigitalOcean and BNU will deploy significant investments over the next two years (2024-2025) to fuel the growth and success of entrepreneurs throughout Pakistan.
Selected candidates will benefit from a comprehensive support system, encompassing:
Cash Grants: Grantees will receive cash grants ranging from USD 25,000 to 40,000 (equivalent in PKR), providing essential financial stability and fueling business growth.
Free Infrastructure Credits: DigitalOcean will contribute free infrastructure credits valued at $2,500, allowing grantees to harness advanced cloud solutions for their ventures.
Education and Tutorials: Selected entrepreneurs will have access to tutorials and education provided by DigitalOcean, focusing on crucial aspects such as raising funds and implementing effective marketing strategies.
Mentorship: Selected entrepreneurs will gain invaluable insights and guidance through mentorship from DigitalOcean volunteers, aiding them in navigating the complexities of entrepreneurship.
Marketing and Project Management Support: BNU will provide marketing support to enhance visibility and offer project management assistance for effective execution.
Impact Measurement Support: BNU is committed to providing assistance in measuring the impact of initiatives, ensuring accountability and success.
“This collaboration underscores DigitalOcean and BNU’s shared vision of fostering a more inclusive entrepreneurial landscape in regions such as South Asia,” said Admas Kanyagia, VP Social Impact at DigitalOcean. “By supporting underrepresented entrepreneurs, particularly women, the program seeks to instigate positive change, spur innovation, and contribute to the economic development of Pakistan.”
Read more about the Signature Initiative here.
About Beaconhouse National University (BNU):
Founded in 2003, BNU is Pakistan’s premier not-for-profit Liberal Arts University, committed to excellence in tertiary education. Our mission revolves around fostering innovation, creativity, and critical thinking while promoting values of diversity, inclusiveness, social sensitivity, academic freedom, and a merit-driven, need-oriented admission policy.
About DigitalOcean LLC:
DigitalOcean simplifies cloud computing so businesses can spend more time creating software that changes the world. With its mission-critical infrastructure and fully managed offerings, DigitalOcean helps developers at startups and growing digital businesses rapidly build, deploy and scale, whether creating a digital presence or building digital products. DigitalOcean combines the power of simplicity, security, community and customer support so customers can spend less time managing their infrastructure and more time building innovative applications that drive business growth. For more information, visit digitalocean.com.
Since DigitalOcean took the Pledge 1% commitment, we have identified philanthropy as a key lever for social impact, in addition to our product donations and employee programs. In the past two years, we’ve directed our corporate philanthropy to several initiatives. When we launched DO Impact, we awarded cash grants to nonprofit organizations that were leveraging DO’s technology and products. We also directed cash grants as part of our annual customer conference, Deploy, to support nonprofits in communities where DO employees were based, and provided cash grants to nonprofits nominated by our Employee Resource Groups (ERGs).
As we started to explore ideas for our future philanthropy, we wanted to direct our cash grants to social issues that allow for deeper alignment and connection to the DO business. DigitalOcean’s mission is to simplify cloud computing so that developers and businesses can spend more time building software that changes the world. Most of our customers are entrepreneurs who are building businesses on DigitalOcean. Our cloud infrastructure platform lowers barriers for entrepreneurs so they can grow and scale their businesses with technology.
However, not everyone has an opportunity to be an entrepreneur. Women, youth, migrants, seniors, and the unemployed are often under-represented as entrepreneurs. The lack of access to finance, skills gaps, under-developed networks, and institutional and cultural barriers lead to gaps in entrepreneurship. We know this is an area for incredible societal impact—closing the gender gap in entrepreneurship would increase global GDP by $2.5 to $5 trillion. We believe that DigitalOcean can make a contribution to advancing under-represented entrepreneurs by lending our philanthropy, technology, and expertise.
For that reason, DO Impact is launching our Initiative on Inclusive Entrepreneurship, a three-year philanthropic signature initiative to drive inclusive entrepreneurship by building access and opportunity for under-represented entrepreneurs in key geographies around the world. According to the OECD, inclusive entrepreneurship ensures that all people, regardless of their personal characteristics and background, can have an opportunity to start and run their own businesses. Our new program will provide cash grants, access to DigitalOcean’s technology, and support from employee volunteers to nonprofits, social enterprises, and other civil society organizations that are advancing inclusive entrepreneurship.
We plan to begin our work in Pakistan, where a large group of DigitalOcean employees live and work. In Pakistan, small and micro-enterprises are the backbone of the local economy, but women entrepreneurs remain underserved. We’re pleased to share that for our first grants, DO Impact has partnered with Beaconhouse National University (BNU) to launch this initiative in Pakistan.
BNU holds the distinction of being Pakistan’s first not-for-profit Liberal Arts university, with a mission that revolves around fostering empowered and impactful global citizens within a socially conscious, cross-disciplinary, liberal arts environment. BNU will serve as the local anchor partner to identify, monitor, and support a set of pilot projects that advance women’s entrepreneurship. In November 2023, BNU launched a Request for Proposals (RFP) process to identify the first two pilot projects to advance women’s entrepreneurship in Pakistan. DigitalOcean and BNU are excited to announce our first two grantees:
Behbud Association Karachi stands as a dynamic force for social change, devoted to uplifting and empowering marginalized communities. Entirely run by women volunteers, Behbud was founded in 1967 and is one of the largest and oldest non-profit organizations in Pakistan. For more than fifty years, it has provided thousands of women with access to quality education, affordable health services, vocational training for income generation (focused on crafts), and employment. With support from DigitalOcean and BNU, Behbud will receive a cash grant to expand digital literacy and access to e-commerce platforms for women artisans, and to invest in internal digital transformation.
WeeCommerce is an initiative from tossdown, a website platform that helps businesses digitize sales and operations. In 2022, tossdown launched the WeeCommerce (Women Entrepreneurs in E-commerce) program to extend its services to an under-served community: women entrepreneurs. Since its inception, the initiative has launched websites for women-led businesses selling kitchenware, homemade food, jewelry, and baked goods, all run from the comfort of their homes. With support from DigitalOcean and BNU, WeeCommerce will receive a cash grant to scale this program and provide free websites to women-led businesses in Pakistan.
Both projects will also receive access to DigitalOcean infrastructure credits through the DO for Nonprofits & Social Enterprises program.
We are proud to make these contributions to support women entrepreneurs in Pakistan. Over the next year, DigitalOcean will partner with BNU, Behbud, and WeeCommerce to implement these initial pilot projects. Our priorities will be:
Joint learning: These initial pilot projects will allow us to better understand the barriers that women entrepreneurs face in Pakistan, and the role that technology can play in reducing those barriers. As a result, we will sharpen our approaches to achieve impact and scale.
Storytelling: We plan to share our progress along the way, by elevating the stories of women entrepreneurs and our grantee partners.
Employee engagement: DigitalOcean employees are passionate about helping entrepreneurs and their communities. We plan to engage DigitalOcean employees as mentors and volunteers for the underrepresented entrepreneurs who are engaged as part of these projects.
Community and love have been part of DigitalOcean since the very beginning, and this initiative is a natural extension of our company values. DO Impact is excited to support organizations in new ways as we broaden our social impact reach through inclusive entrepreneurship. We’re excited to share our progress and impact on advancing women entrepreneurs in Pakistan.
DigitalOcean remained committed to its philanthropy and Pledge 1% commitment through corporate giving and employee-driven donations. Between cash grants, employee resource group (ERG)-driven donations, employee donation credits, and matching employee contributions, DigitalOcean gave more than $500K to over 725 organizations in 2023. These dollars were split across:
Initiative on Inclusive Entrepreneurship: DO Impact is launching a new philanthropic initiative to support underrepresented entrepreneurs across the globe. In late 2023, we partnered with Beaconhouse National University (BNU) to direct our initial investments to civil society organizations that are driving women’s entrepreneurship in Pakistan. Please follow our blog to hear more about this initiative soon!
ERG grants: Our employee resource groups identified three nonprofit organizations important to their causes supporting women, early career professionals, and the LGBTQIA+ community. ERGs identified and directed donations to StartOut, Django Girls and Maria’s Scholars, which are all working to expand access to opportunity for underrepresented communities through education and entrepreneurship.
In 2023, we hosted our first-ever DO Day of Service to activate DigitalOcean employees to volunteer remotely and in person across the globe. We also continued to activate employees through our employee gift match program, which is part of our employee benefits package. Below are a few highlights:
Sharks came together on August 18 in Bangalore, Denver, Karachi, New York and remotely to donate 410 hours of service, valued at $14K+, to partner nonprofits like Denver Urban Gardens, Stand Up India Foundation, Dar-ul-Sukun, Bowery Mission and Be My Eyes.
Employees leveraged our generous BrightFunds gift match benefit to support nonprofits all over the world. Collectively, Sharks directed more than $245K of company matching dollars to 719 organizations. In total, $434K was donated to nonprofits through our company matching dollars and employee direct donations.
We continued to activate employees through various funds on our giving platform to allow employees to respond to humanitarian crises and environmental disasters throughout the year.
We had a 30% participation rate for 2023 across our employee giving & volunteering programs.
We re-launched our previous program, Hollie’s Hub for Good, in 2023 with a new name and expanded benefits. DO for Nonprofits and Social Enterprises now provides $2,500 in one-time credits for DigitalOcean’s cloud computing solutions to nonprofits and social enterprises around the world. Since the new program launched in August, we’ve added 240+ nonprofits to the DigitalOcean platform, and provided over $610K in free credits in 2023.
We also continued to highlight how cloud computing at DigitalOcean has expanded the capacity of nonprofits in our program. DigitalOcean’s simplicity, cost effectiveness, reliability, and community resources align well with the needs of time- and resource-strapped nonprofits and social enterprises. We worked with Ersilia, a tech nonprofit leveraging machine learning to address gaps in medical research in under-resourced countries; with the help of employee volunteers, Ersilia used App Platform to quickly deploy its solutions for a research institute in Cameroon. In addition, we worked with BASAibu, a nonprofit that fosters communities in Indonesia with a multilingual wiki that has reached 4M people to date. BASAibu uses DigitalOcean Droplets for VPS hosting, Volumes Block Storage, and Snapshot backups to host their wiki and forums.
We continued our engagement with amazing partners like Fast Forward, helping catalyze and extend our reach to more tech-driven nonprofits worldwide. We also joined our Pledge 1% community at Nasdaq to celebrate Giving Tuesday.
As we move full steam ahead into this new year, we’re excited to focus even more on several areas, including:
Launching projects to drive inclusive entrepreneurship in Pakistan in partnership with BNU, and our nonprofit (civil society) partners, as well as exploring additional geos for future investments
Scaling our nonprofit product donation program so nonprofits can build their capabilities in cloud empowerment with DigitalOcean products, services, and community.
Expanding our storytelling of our nonprofit customers to highlight their good work, and how DigitalOcean’s contributions support them through written stories, videos and social media.
Hosting our second DO Day of Service to activate employees to volunteer remotely and in their communities. We will also support our ERGs to identify additional partners that are expanding access to opportunity for their communities, as well as develop opportunities for them to mentor underrepresented entrepreneurs and students.
There is so much more to come. Stay tuned for more impact updates in the coming months!
Backups are a crucial element of every business’ data protection strategy. DigitalOcean Backups are automatically created disk images of Droplets—essentially copies of your DigitalOcean virtual machines that can be used to recover from data loss. Turning backups on for Droplets enables system-level backups, which provide a way for users to revert to an older state or create new Droplets. We’re pleased to share that we have made powerful enhancements to DigitalOcean Backups that will enable SMBs to build better and more robust business continuity plans.
Here are some key features of enhanced backups that can help you protect your business data:
Daily backups of your Droplet data are good for your business and good for your customers. Here’s why:
Align your data protection with the rhythm of your business. Data often follows business dynamics. As businesses grow, new transactions are added every day, new people join the business, more customers buy from the company, and that key business information is added to an ever-growing pool. Data protection has to match how your business actually runs, which is why flexible, daily backups are so useful for so many companies.
Stay ahead of data loss from security breaches. In a recent survey we conducted, 22% of SMBs mentioned data loss or data theft as their top concern about security. From credential stealing to phishing and many other forms of attacks, the fastest way to recover from a security incident is to have the latest backups handy so you can get your business back on its feet quickly.
Don’t let accidental deletion and user error deter you. From misconfigurations to confusingly similar Droplet names that get deleted accidentally to data lost during internal migrations, data disruptions can occur as part of day-to-day activities—but with daily backups, that risk is heavily mitigated.
Adhere to compliance needs better. With backups performed every day, it’s easier for growing digital businesses to stay in compliance with regulatory requirements for data protection.
Innovate without worries. SMBs are increasingly trying to get their products and services to market faster or innovate their existing solutions quicker. That means their applications, code base, configurations, and tech stack are evolving continuously—which opens the door for accidental data loss to become a huge headache. But not when you have daily backups as a safety net.
“The introduction of daily backups for DigitalOcean’s Droplet virtual machines is an important step for comprehensive data protection, enabling DigitalOcean on its path to becoming the cloud of choice for startups, ISVs & SMBs. This key enhancement positions DigitalOcean to not only deliver stronger data protection, but empowers businesses to recover from disruptions swiftly and provide more consistent and improved experiences for their customers.” - Dave McCarthy, Research Vice President, Cloud and Edge Infrastructure Services, IDC
DigitalOcean Backups are easy to set up, monitor, and manage from our cloud console.
Here’s how DigitalOcean Backups is helping our customers succeed:
“As a small business, DigitalOcean Backups just works and lets me sleep peacefully at night." - Stewart Flood, Founder, IVO Net LLC
“Daily backups help us keep a running repository of our data in our LMS so that anytime should we need to go back because of a failed LMS update, we will only ever have to go back 24 hours at the most, which is hugely beneficial considering that an LMS has data that’s changing and shifting on a daily basis.” - Steve Hampton, IT Client Support Specialist, Toccoa Falls College
DigitalOcean Backups are perfect for businesses that run their workloads on Droplets but may not have extensive resources to implement sophisticated data backup strategies. They provide an easily configurable and automated solution to help ensure the protection and availability of business-critical data.
Daily Droplet backups are now available in recently launched NYC1 and AMS3, as well as NYC3 and SFO3, with availability in other data centers coming very soon. Daily Droplet backups are priced at 30% of the Droplet cost. Read more about backup pricing or contact our sales team.
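To put that pricing in concrete terms, here is a simple worked example using the 30% rate above (the Droplet prices are illustrative): a Droplet billed at $12/month would add $12 × 0.30 = $3.60/month for daily backups, while a $48/month Droplet would add $14.40/month.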
Add daily backups to your Droplet workloads today to help protect your data!
*Actual backup speed gains and performance may vary depending on a variety of factors such as system configuration, I/O load, operating environment, and type of workloads.
Cilium Hubble is an open-source project that builds on top of Cilium, the eBPF-based networking, observability, and security solution. Cilium became DigitalOcean Kubernetes’ new data plane in 2019, and with the addition of Hubble, DigitalOcean Kubernetes users will get full visibility into the network traffic and security within a DOKS cluster.
With the integration of Cilium Hubble into DOKS, users can now gain deeper insights into their Kubernetes deployments. Whether you are tracking real-time network flows, visualizing service dependencies, or detecting potential security vulnerabilities, Hubble equips developers and system administrators with the tools they need for efficient Kubernetes management and troubleshooting. For more information about Hubble visit their repository on GitHub.
Getting started with Cilium Hubble on DigitalOcean Kubernetes is easy: the only requirements are the Cilium and Hubble CLIs and authenticating with your cluster. After that, you’re ready to access the Hubble UI.
Check out our short walkthrough video to see Hubble in action featuring the Star Wars demo.
Cilium Hubble is now included with DigitalOcean Kubernetes version 1.29.x or greater at no additional cost. Hubble will be widely available in all data centers with DOKS clusters. At this time, metrics tracing is not part of Hubble on DOKS.
The addition of Cilium Hubble to DigitalOcean Kubernetes reaffirms our commitment to robust, secure, and easy-to-use infrastructure. Stay tuned for more updates and enhancements from DigitalOcean. For any questions or feedback, reach out to our support team or visit our community page.
DigitalOcean’s Fleet Optimization Engineering team is responsible for fitting as many Droplets as possible on our servers without degrading Droplet performance. In other words, we carefully stack Jenga blocks so that our towers don’t fall over.
What if you want to move one of the Jenga blocks without knocking over the entire tower? That’s where the story of Dolphin begins.
It was a typical day at DigitalOcean. Our servers were serving. Our Droplets were…Dropleting. Business as usual. Lucy Berman, an engineer on the Fleet Optimization Engineering team, received a curious ping from one of her colleagues on the Storage team:
“Hey Lucy, how do I ensure that two of my Droplets in the same region don’t wind up on the same server? I’m sure this question has a very simple answer and won’t result in a complex multi-year project. Thanks!” -Lucy’s coworker, probably
Lucy asked the rest of her team about this, and there wasn’t a clear answer. In our world, one region equals one datacenter, which meant that two Droplets placed in the same region had the potential to wind up in the same rack, or worse, on the same server.
The initial idea of Dolphin was meant to solve that exact use-case: an internal anti-affinity service. Think of anti-affinity as a means of making Droplets allergic to one another based on specified criteria. For example, Dolphin might notice that three Elasticsearch leader Droplets have been placed on the same server, and proceed to distribute two of the leaders to other servers within the datacenter, eliminating the risk of a server failure triggering an outage.
The idea of automatically distributing Droplets to make our systems more reliable served as the foundation of the Dolphin we know and love today.
Internally, we tend to use nautical names for most of our services. Within the Fleet Optimization Engineering team, we like to pick sea creature names, relating the name of the creature to the purpose of the new service.
We needed a service that could interpret many different signals and operate intelligently and gracefully. Dolphins are known for their intelligence as well as their grace—the name was obvious.
The initial design of Dolphin took about a month, and didn’t involve much drama.
We used a form of deliberate design called “event storming.” Event storming helped us form a shared language around the various components of Dolphin. Settling on a shared language upfront allowed us to approach the implementation more systematically and made collaboration easier. The event names that we settled on during planning wound up trickling all the way down into the implementation.
rebalanceCreated := mxRoot.IncCounter(
"rebalance_created",
"increases when dolphin creates a new rebalance",
)
rebalanceCreationFailed := mxRoot.IncCounter(
"rebalance_creation_failed",
"increases when dolphin cannot create a new rebalance",
)
Like most of our internal services, we opted to write Dolphin in Go and deploy it via Kubernetes. We were very thoughtful while designing Dolphin—an automated-Droplet-moving-system-thing had the potential to do a lot of damage very quickly if it failed. With such a huge potential blast radius, safety was paramount.
With this in mind, we had the foresight to design a bunch of safety nets. For example, there is a threshold on the number of Droplets that Dolphin is allowed to move over a certain period of time. Concurrency safety, backoff mechanisms, and a “Big Red Button” have been extremely important for the success of Dolphin.
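To make the safety-net idea concrete, here is a minimal sketch of a migration rate limiter with a kill switch. The names, types, and thresholds are illustrative assumptions for this post, not Dolphin’s actual implementation:

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

var (
	errHalted    = errors.New("migrations halted: Big Red Button engaged")
	errThrottled = errors.New("migration budget exhausted for this window")
)

// migrationLimiter caps how many Droplet moves may start within a rolling
// window and exposes a kill switch that halts all new migrations.
type migrationLimiter struct {
	mu      sync.Mutex
	window  time.Duration
	max     int
	started []time.Time
	halted  bool // the "Big Red Button"
}

// Allow reports whether one more migration may begin right now.
func (l *migrationLimiter) Allow(now time.Time) error {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.halted {
		return errHalted
	}
	// Forget migrations that have aged out of the rolling window.
	live := l.started[:0]
	for _, t := range l.started {
		if now.Sub(t) < l.window {
			live = append(live, t)
		}
	}
	l.started = live
	if len(l.started) >= l.max {
		return errThrottled
	}
	l.started = append(l.started, now)
	return nil
}

func main() {
	l := &migrationLimiter{window: time.Hour, max: 2}
	for i := 0; i < 3; i++ {
		fmt.Println(l.Allow(time.Now())) // the third call reports the throttle error
	}
}

A real system layers several of these nets together—per-window thresholds, backoff after failures, and a global halt—so that no single misbehaving component can move Droplets unchecked.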
Our team has always had a culture of reliability, especially since Roman joined us—we like to play with things like supervision trees, chaos testing, and canaries. It felt very natural for us to carry these reliability practices into the implementation of Dolphin. In fact, the initial Dolphin release was in a sort of “read-only” mode, where it would log its intent without executing. This was an extra cautionary step in the interest of safety, but also made iterative development much easier!
It’s rare for an initial release of anything in the software world to happen without issue, but Dolphin was honestly quite painless. We credit the ease of Dolphin’s release to our deliberate planning, thoughtful execution, and our culture of safety and care.
Remember, our only initial goal was to allow internal teams at DigitalOcean to accomplish anti-affinity for their internal services—making sure their Droplets didn’t wind up on the same server.
For this to work, internal teams needed to manually mark existing Droplets with a “group ID” using an internal tool; if two Droplets shared the same group ID and lived on the same server, Dolphin would notice and move one of them somewhere else.
We named the component responsible for tracking Droplets a “Monitor.” A Monitor is a supervised goroutine that runs a state-machine, receiving inputs from multiple sources (Prometheus, Databases, etc.) and deciding whether any action is necessary. Our first monitor was named the Anti-Affinity Monitor. Here it is on our (virtual) whiteboard:
// antiAffinityDetected holds info for a detected anti-affinity violation
type antiAffinityDetected struct {
Group types.GroupID
HV types.ServerID
Workloads map[types.WorkloadID]struct{}
}
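As a sketch of how the Anti-Affinity Monitor might produce these events, the function below scans a placement snapshot and emits one antiAffinityDetected per violation. It reuses the types identifiers from the struct above, but the placement map is an assumed input shape, not Dolphin’s internal API:

// detectViolations reports every group that has two or more workloads
// sharing a server, which is exactly what anti-affinity forbids.
func detectViolations(placement map[types.ServerID]map[types.GroupID][]types.WorkloadID) []antiAffinityDetected {
	var found []antiAffinityDetected
	for hv, groups := range placement {
		for group, workloads := range groups {
			if len(workloads) < 2 {
				continue // a lone workload on a server is fine
			}
			set := make(map[types.WorkloadID]struct{}, len(workloads))
			for _, w := range workloads {
				set[w] = struct{}{}
			}
			found = append(found, antiAffinityDetected{Group: group, HV: hv, Workloads: set})
		}
	}
	return found
}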
One interesting aspect of our current scheduling model is that Droplets within a region might live on the same server—there is no rule dictating that they can’t, though we try our best to avoid it. Dolphin provides a sort of “safety net,” as it notices when Droplets live on the same server that shouldn’t, and eventually shuffles them around appropriately.
We’ve dubbed this model “Eventual Anti-Affinity,” and it keeps things simple for us internally. Our scheduling system places Droplets on whichever servers it considers to be the best fit for the Droplets’ needs, while Dolphin, always watchful, keeps an eye on things and eventually moves Droplets around when necessary.
The second, more action-oriented component of Dolphin is called the Rebalancer.
// rebalance represents the execution of a rebalance after an imbalance
// is detected.
type rebalance struct {
imbalance types.Imbalance
sourceHV types.ServerID
jobs map[types.JobToken]map[types.WorkloadID]struct{}
completedJobs map[types.JobToken]struct{}
timestamp time.Time
}
The Rebalancer is responsible for moving Droplets around, and was given this name because the live migrations it triggers keep the fleet in a well-balanced state.
So, our Monitor will notice when something needs to change and the Rebalancer can take appropriate action. What about situations where things get a little…cyclical?
Imagine, if you will, a Droplet that is somehow always detected by a Monitor. Dolphin would wind up moving the Droplet between servers endlessly. What if there are tons of these Droplets? We could easily wind up in a situation where we’re just shuffling the same Droplets around forever, wasting valuable resources. Anticipating this possibility, we built a component called the Workload Journey Manager.
This struct reveals the Workload Journey Manager’s responsibilities:
// canMigrateWorkload serves as a query for the workload journey
// manager to assert if a workload may be migrated or not
type canMigrateWorkload struct {
wID types.WorkloadID
respondChan chan bool
}
The Workload Journey Manager does exactly as its name suggests: it manages the journey of a “workload”—also known as a Droplet. We want to keep track of a Droplet’s journey from server to server, so each time Dolphin moves a Droplet to a new home, we add an update to our record of that Droplet’s journey.
Think of the Workload Journey Manager as a travel agent: it makes sure a Droplet’s journey isn’t too difficult, busy, or complicated. To protect a Droplet’s journey, the Workload Journey Manager enforces certain rules that may prevent a Droplet from being migrated for a certain period of time.
Some of these rules are static:
A Droplet can’t be moved more than N times per hour
If a Droplet is brand new, don’t move it (internally, we refer to new Droplets as “Baby Droplets” 👶💧)
Other rules are more dynamic:
If a Droplet has had N failed migrations in X hours, mark it as “unmovable”
If a Droplet has moved around a bunch of times in X hours, prevent it from bouncing around too much by marking it as “bouncing”
const (
// Unmovable represents a journey.Filter that filters workloads
// that have failed too many migrations.
Unmovable FilterTag = iota + 1
// Bouncing represents a journey.Filter that filters workloads
// that have migrated successfully too many times over a window of
// time.
Bouncing
// Baby represents a journey.Filter that filters workloads that
// have a small age.
Baby
)
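Putting these pieces together, here is a sketch of how the Workload Journey Manager might serve canMigrateWorkload queries from a single goroutine. The per-workload tag map is a simplification for illustration, not the production bookkeeping:

// runJourneyManager answers canMigrateWorkload queries until the
// queries channel is closed. A workload may migrate only if no filter
// (Unmovable, Bouncing, Baby) is currently applied to it.
func runJourneyManager(queries <-chan canMigrateWorkload, filters map[types.WorkloadID]FilterTag) {
	for q := range queries {
		_, filtered := filters[q.wID]
		q.respondChan <- !filtered
	}
}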
In short, Dolphin consists of a bunch of state machines that constantly watch our fleet of servers, and rebalances Droplets it thinks should be moved.
After successfully tackling the anti-affinity challenge, we realized that we could extend the “Monitor” mechanism beyond anti-affinity. Maybe, just maybe, we could use a Monitor for other prevalent challenges we face while managing our ever-growing fleet of servers.
One such challenge was a classic operational issue—full disks. At DigitalOcean, we use local server storage for all Droplet root volumes, which means we take some placement bets. Sometimes, we’re wrong, and server storage starts running perilously low—traditionally, our lovely cloud operations folks would remediate this issue by hand, but it made us wonder, “What if Dolphin could remediate this type of operational issue automatically?”
Given Dolphin’s capacity for understanding Prometheus metrics, we took the query typically used to alert our CloudOps team for a full disk and used it as the detection query for a new Dolphin Monitor. This “Full Disk Monitor” helps us detect when a server is close to the disk fill threshold and automatically moves some Droplets away, alleviating the issue without needing CloudOps to manually intervene.
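The exact alerting query is internal, but a representative detection query in the spirit of the Full Disk Monitor, written against standard node_exporter metrics (the mountpoint label here is hypothetical), might look like:

node_filesystem_avail_bytes{mountpoint="/var/lib/droplets"} / node_filesystem_size_bytes{mountpoint="/var/lib/droplets"} < 0.10

When less than 10% of a server’s Droplet storage remains free, the Full Disk Monitor treats it as a signal to start rebalancing Droplets away from that server.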
This new capability made Dolphin multi-functional. It went from being an anti-affinity machine to a sort of fleetwide Swiss Army knife Droplet-moving machine. Our next step was to take that mindset and extend it outwards to benefit external customers.
If you’ve ever rented an apartment in a building with other tenants, you know how important it is to be a considerate neighbor. There’s inevitably that one upstairs neighbor, though, who thinks they’re the only tenant in the building. They’re loud, they leave their belongings in the hallway, and they’re generally unpleasant to live with. So what do you do? Move.
Sharing resources in the cloud is like living in a rental building with other tenants. You purchase a certain space, but sometimes a neighbor makes it unbearable for you to live there.
While customers purchase products with a given set of resources, we’ve observed that most usage profiles are relatively light. As such, to maximize efficiency, DigitalOcean packs as many Droplets as possible onto our servers, with the understanding that most customers will be unaffected by this sharing of resources. However, certain customers—“busy users”—use as many resources as possible.
Because we overcommit our resources, these busy users can impact other users, causing problems like CPU Steal and High PSI. In the case of CPU Steal, a Droplet uses so much CPU that performance is degraded for all customers on the same server. On the other hand, High PSI occurs when a Droplet uses so much disk I/O that file access performance is degraded for all customers using the same disk.
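PSI (“pressure stall information”) is exposed by the Linux kernel and surfaced by node_exporter, so a representative High PSI signal (the threshold is illustrative) could be expressed as:

rate(node_pressure_io_waiting_seconds_total[5m]) > 0.5

This reads as “tasks on this server spent more than half of the last five minutes stalled waiting on I/O”—a strong hint that a noisy neighbor is saturating the disk.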
So what do you do when you have a Droplet that’s unhappy because of a noisy neighbor? Like any savvy renter, Dolphin will notice the Droplet is having a less than optimal experience, and find somewhere better to place it.
Here’s an example of Dolphin automatically rebalancing Droplets that are impacted by High PSI:
The red Droplet and the yellow Droplet are both noisy neighbors, performing high read/write operations that affect other Droplets using the same underlying disk. The blue dashed line indicates when Dolphin’s rebalance starts and the green dashed line indicates the end of the rebalance. We can see that Droplets previously affected by high PSI are no longer being impacted: the yellow Droplet becomes a light blue Droplet after it starts running on a different server.
In the past, this situation was (and, in some instances, still is) remediated with manual intervention from our CloudOps team as a reactive measure. As a proactive system, Dolphin dynamically protects customers from experiencing poor performance, moving Droplets when they’re impacted by the activities of other customers. Sometimes it’s necessary to break your lease!
We tried to wrap up this blog post with a vision for the future of Dolphin. What’s next? How can we make Dolphin solve the world’s problems? Can it bake cookies? Can it solve the 2008 financial crisis? Honestly, we’re still figuring that out. The beauty and success of Dolphin has been our ability to continuously iterate as we build upon the system’s foundations.
We’re excited to figure out what’s next for Dolphin, and we hope you’ll feel the impact of our hard work!
None of this work would be possible without the constant support of Roman Gonzalez, Lucy Berman, James Brennan, Geoff Hickey, Mike Pontillo, and the Fleet Optimization Engineering Team. We also want to shout out Julien Desfossez and Vishal Verma from the Performance Team for providing us with the statistics and numbers that helped make Dolphin a reality. We are also grateful for the contributions of Michael Bullock and Billie Cleek on the anti-affinity work.
We’d also like to thank Becca Robb and Roman Gonzalez for running the Fleet Optimization Engineering Team’s Book Club, where we have read and discussed books that have taught us how to make reliable systems and work with domain-driven software design.
Finally, thanks to Jes Olson and the marketing team for working with us on this blog post and helping us share Dolphin’s story!
Ready to bring a new workload to DigitalOcean? Enjoy three months of the new workload cost on us. Yes, it’s that easy. Contact Sales at DigitalOcean today for more details and next steps.
Greetings for a healthy and inspired 2024! I hope this letter finds you motivated to embark on new and exciting endeavors, leveraging the ever-increasing potential of cloud computing to empower your business objectives. As Chief Revenue Officer at DigitalOcean, I am privileged to witness the incredible impact that our cloud platform continues to have on businesses like yours.
In today’s rapidly evolving digital landscape, choosing the right cloud provider is crucial to your success. While the hyperscalers influence the enterprise market, I want to shed light on how DigitalOcean continues to serve as the superior platform alternative for startups, growing digital businesses, and early to advanced ISVs (Independent Software Vendors).
We strive to stand out from the crowd of providers with our foundational simplicity in platform experience, ease of use, comprehensive solutions, and robust developer community - all tailored to our customers’ growth journey. As the cloud of choice for innovators and developers like you, we are committed to frictionless development processes, competitive cost advantage, and outstanding customer service.
Today, innovators at ISVs consistently choose DigitalOcean over hyperscaler platforms as the game-changer that transforms the way they build, deploy, and extend applications, and that helps them reach a new, robust customer following through the DigitalOcean Marketplace. Along the way, these innovators capture hundreds of thousands of dollars in savings that can be reallocated to key business objectives.
I am proud to share that we serve thousands of ISVs and startups on the DigitalOcean platform today, guided by a simplified platform experience that empowers you to unleash the true potential of your software solutions. If you have already started with Droplets on DigitalOcean, consider the advantages of Premium Droplets and our Managed Databases, which simplify scaling database storage and performance and reduce administrative burden. Affordable, transparent pricing benefits ISVs of all sizes, particularly for services like managed Kafka for data streaming, which is often cost-prohibitive with other providers.
DigitalOcean’s App Platform also brings the advantage of fully managed infrastructure to thousands of ISVs, letting them build, deploy, and scale apps quickly using a simple, fully managed solution. For fast-moving AI developers, we offer the Paperspace advantage: GPU-backed AI development, access to NVIDIA H100 GPUs, and key platform enhancements that let users leverage out-of-the-box solutions in a more seamless experience.
But for many ISVs looking to scale workloads, automate deployment, and take a developer-friendly approach, the game changer is the simplest, most powerful, and most cost-effective Kubernetes capabilities. We designed DigitalOcean Kubernetes to be a powerfully simple managed Kubernetes service, significantly lessening the complexity experienced on the hyperscalers. While users define the size and location of their worker nodes, DigitalOcean provisions, manages, and optimizes the services needed to run the Kubernetes cluster. Setup takes minutes, and we provide a Kubernetes endpoint that you can use with the tools of your choice, from the standard kubectl command line interface (CLI) to the rich and growing ecosystem of Kubernetes services. DigitalOcean’s Kubernetes offering is a true differentiator, enabling ISVs to effortlessly manage containerized applications, scale workloads, and achieve unparalleled flexibility.
Our mission is simple—to provide you with the tools and infrastructure needed to scale exponentially and accelerate your journey to success as validated by many ISVs and startups on the platform today, including Snipitz, ScraperAPI, Nitropack, Zing, and BrightData among so many more.
Together, let’s redefine the boundaries of the digital landscape and build a future where innovation thrives.
Best,
Aaqib Gadit
Ready to bring a new workload to DigitalOcean? Enjoy three months of the new workload cost on us. Yes, it’s that easy. Contact us today for next steps.
Relying on a multi-cloud portfolio or unsure about your expansion strategy? To request a personalized assessment from our cloud experts in support of your cloud journey, start here.
Ready to embark on a transformative journey and get the most from Kubernetes on DigitalOcean? Start here.
Join the DigitalOcean Partner Pod Program today and elevate your business through a collaborative partnership designed for mutual growth. With support at every level of our company, we’re committed to a long-term relationship that helps us achieve more, together. Apply here.
NVIDIA H100 GPUs have accelerated AI/ML adoption with order-of-magnitude performance gains unimaginable a few years ago. Equipped with NVIDIA’s new Transformer Engine built on 4th Gen Tensor Cores, H100 GPUs power some of the most significant innovations in the AI/ML space, such as large language models and synthetic media models.
Paperspace now offers these powerful GPUs as both on-demand and reserved instances.
Sensational AI Performance: Powered by the new NVIDIA Transformer Engine and 4th Gen Tensor Cores, H100 GPUs deliver up to 9x faster AI training and up to 30x faster AI inference on large language models compared to the previous-generation NVIDIA A100 Tensor Core GPUs.
Scale with ease: Multi-node H100 GPU deployment (8x GPUs) enables scaling of GPU power to handle large and complex models. Blazing-fast 3.2TBps NVIDIA NVLink interconnect between these GPUs makes this multi-node GPU setup operate as a massive compute block.
Spin up in seconds: Create an H100 GPU instance in just a few seconds. Our ML-in-a-box configuration provides a holistic compute solution that combines GPUs, Ubuntu Linux images, private network, SSD-based storage, public IPs and snapshots.
Starts as low as $2.24/hr per chip. Paperspace provides both on-demand and guaranteed instances of H100 GPUs. Per-second billing options and unlimited bandwidth help you save costs (see the worked example after this list).
24/7 Reliability: Paperspace’s platform is monitored 24/7 so customers can maintain absolute focus on training their models and not on infrastructure. When you go into production, our extensive customer support options will help you stay on top of high-traffic usage.
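To make the pricing concrete, here is a worked example using the listed on-demand rate and assuming the per-chip price applies uniformly: an H100x8 instance runs 8 × $2.24 = $17.92 per hour, so with per-second billing a 10-minute experiment costs roughly $17.92 × 600/3,600 ≈ $2.99.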
Here’s what’s inside Paperspace H100 instances:
Machine name | GPU memory (GB) | vCPUs | CPU RAM (GB) | NVIDIA NVLink support | GPU interconnect speeds |
---|---|---|---|---|---|
NVIDIA H100x1 | 80 | 20 | 250 | No | N/A |
NVIDIA H100x8 | 640 | 128 | 1638 | Yes | 3.2 TBps |
Our ML-in-a-box configuration enables users to implement everything they require for a powerful user-facing AI/ML app. If your model is already in production, run inference on H100 GPUs on Paperspace to deliver delightful AI experiences to your customers.
Paperspace customers love the performance advantage H100 GPUs bring to their AI/ML models:
“Training our next-generation text-to-video model with millions of video inputs on NVIDIA H100 GPUs on Paperspace took us just 3 days, enabling us to get a newer version of our model much faster than before. We also appreciate Paperspace’s stability and excellent customer support, which has enabled our business to stay ahead of the AI curve.” - Naeem Ahmed, Founder, Moonvalley AI
Spinning up an NVIDIA H100 GPU instance on Paperspace takes just a few clicks. Read our docs to get started with NVIDIA H100 GPUs on Paperspace!
Paperspace’s pricing for NVIDIA H100 GPUs is designed to be flexible. Our transparent, per-second pricing model combined with zero data ingress and egress fees, is perfect for startups and growing digital businesses who want flexibility and predictable pricing while leveraging the power of high-performing GPUs. Learn more about Paperspace pricing here!
Type | Configuration | Get access |
---|---|---|
Guaranteed instances: Reserve H100x8 GPUs with 3.2TBps interconnect speeds for a specific period of time | H100x8 with 3.2TBps interconnect speeds | Click here to get started |
On-demand: Get on-demand access to H100 GPUs from the Paperspace console, with no upfront time commitment | H100x1 (no interconnect); H100x8 with 3.2TBps interconnect speeds | Click here to get started |
In addition to computing power, Gradient Deployments provide AI/ML businesses the ability to deploy models at lightning speed. Recent enhancements include:
Simplified container registry validation: With a complete redesign of the container registry experience, it’s much easier for users to link their container registries to Paperspace. Simply select the container registry vendor (Docker Hub, Azure CR, Google CR, or GitHub CR), fill in the namespace, username/password or access token, and we will prefill the other values required to connect the registry to Paperspace. This streamlines management of existing container registries and adds a new way to bring containers into Gradient Deployments, helping ensure deployments start successfully.
Enhanced security for deployment endpoints: We have fortified the deployment endpoints for Paperspace to provide enhanced security for Gradient Deployments. When creating a deployment, you can choose whether to set the endpoint to public or protected. By selecting protected status, you can easily manage permissions and have more control over who can access the endpoints. You also have the option to switch between public and protected endpoints as needed. Check out our docs to learn more.
We’re excited about the new customer offerings on the Paperspace platform and remain committed to delivering excellent experiences for AI/ML businesses. Click here to get started or contact our sales team to learn more about how Paperspace can help your business grow.
We wanted to hear from you, the community, about what you love about Hacktoberfest and what could be improved as we plan 2024’s event. We sent a survey to everyone who signed up to participate in Hacktoberfest 2023 in an effort to gather direct, transparent feedback from the people who make Hacktoberfest what it is. We introduced the significant change this year of not offering a Hacktoberfest t-shirt to people who completed the challenge, and many of you had strong opinions about it. The results demonstrate some clear trends across all participants. Throughout our survey, respondents expressed genuine appreciation for Hacktoberfest because of its learning, connection, career, and project development opportunities. We also heard that without the t-shirt, folks aren’t as motivated to take time out of their busy schedules to contribute.
In the words of one survey respondent:
“This year’s Hacktoberfest was an incredible experience for me, as it marked my first time contributing to an open-source event. I was initially nervous, but the supportive community and abundance of resources quickly put me at ease. I ended up making [sizable] contributions to different projects, and I learned so much along the way. I’m so grateful for the opportunity to have been a part of Hacktoberfest this year, and I’m already looking forward to participating again next year!”
Despite the changes in reward, of 1,237 respondents, 78% responded positively about the event.
However, many people were unable to complete the challenge, or even a single pull/merge request; the overwhelming majority attributed this to insufficient time amid other work commitments and to not being sufficiently incentivized by the reward.
“As both a contributor and maintainer, the lack of T-shirts was a real bummer. I couldn’t promote HF to prospective contributors as an incentive to start contributing like I’ve done in past years. People really like quality free T-shirts.”
Note: We recognize that the change in rewards this year may have affected whether some folks participated in Hacktoberfest at all and so their feelings about Hacktoberfest 2023 (which we heard and take seriously) aren’t reflected by the answers to this survey question.
Self-identified Contributors made up 86% of respondents to the survey, while 22% identified as Maintainers. Only 3% identified as Event Organizers, overshadowed by the 16% of people who responded to the survey but did not participate this year.
Maintainers
For those who identified as Project Maintainers, Hacktoberfest was considered beneficial to their projects. More than half—59%—agreed or strongly agreed with this statement, “As a maintainer, I find Hacktoberfest to be a valuable event for my project.” However, 31% were neutral, and 10% disagreed or strongly disagreed.
The neutral responses to this question are likely explained by participants who try to “game the system” by creating spam PR/MRs to earn the reward, overwhelming Maintainers with poor-quality or time-wasting submissions. This is one of the key reasons we moved away from offering a free t-shirt, which has had the effect of fueling spam in the past. Additionally, we can’t track Maintainer actions at this time, making it hard to reward their participation beyond self-identification, which is also difficult to verify.
Event Organizers
For those who identified as Event Organizers, 64% said lack of swag was the primary blocker to setting up events, followed by 46% who cited a lack of funds for food and beverages. 31% struggled to get participants, and 31% said finding venue options was difficult; 16% of respondents were confused about where to post and promote their event, which was useful information for us, and we’ll work towards improving the user experience for events on the Hacktoberfest website in 2024; 13% had trouble finding speakers, another area that the team at Hacktoberfest will focus on improving for 2024.
Contributors
For those who identified as Contributors, most respondents were evenly split on what blocks them from contributing: difficulty finding projects to contribute to, and difficulty finding issues at the right skill level. This is something the open source community as a whole could work together on, for example through more detailed tagging or labeling of issues. We also learned that many of you wanted to work on projects that didn’t opt in to Hacktoberfest.
“Specifically about Hacktoberfest: many of the repos I use and would contribute to don’t participate in HF, and searching for projects that do yields spam projects rather than real projects.”
Completing the Challenge
Meanwhile, in the challenge completion question, we sought to learn what prevents participants from completing four accepted pull/merge requests. The two foremost reasons were a shortage of time and motivation. A smaller group encountered obstacles related to their experience level, and some didn’t feel prepared or motivated to participate in the challenge this year.
“I do open source contributions all year round, the t-shirt was a motivation to concentrate more contributions in October.”
“Being a first-time contributor posed challenges in identifying suitable issues to contribute to. Most of the simpler problems were already assigned, and some of the more intricate ones proved difficult to comprehend within the available time.”
Overwhelmingly, participants leaned towards a free t-shirt or other item such as a hat, mug, or poster. Cash prizes edged out trees planted via TreeNation, certificates, and sticker packs. It was encouraging to see how many of you appreciate the tree reward, which had more votes than stickers or digital badges. Carbon offsets and provider credits were equally desirable, even more so than digital badges.
“I was really impressed with the digital badges, though it would be more exciting if there was a badge with our personalized name on it 😇.”
Many respondents, 62%, were enthusiastic about an in-person event, but only if the event were free; 17% would be willing to pay a fee to attend; 17% were undecided; and 4% said no. This question was helpful in gauging general interest; however, whether an in-person event will be held largely depends on how much sponsor support Hacktoberfest receives, and where and when it would occur.
As ever, the Hacktoberfest team is deeply grateful to the open source community for not only continuing to make Hacktoberfest one of the most well-attended open source events online but also for your candid feedback. As we work on Hacktoberfest #11, we’re taking your feedback into consideration and we look forward to making 2024 the best year yet. Thank you for your participation and your thoughtful responses!
DigitalOcean provides fertile ground for ISVs to expand their operations and accelerate their growth trajectory by prioritizing simplicity, cost-effectiveness, and high performance. For instance, Pionect, a software development agency, uses DigitalOcean to build and maintain robust, high-traffic e-commerce platforms for a diverse European clientele:
“As a software builder, our ambition is always to write good code and host our code on good infrastructure so that we can give our clients the best possible experience. We are excited to be a DigitalOcean partner and look forward to continuing our work with them,” says Egbert Wietses, the CEO of Pionect.
This article explores how a partnership with DigitalOcean can address the specific needs of ISVs, fostering an environment where they can thrive by leveraging powerful cloud capabilities and resources.
A range of ISVs use our platform to deploy, manage, and scale their applications easily, leveraging our cloud tools and services suite to heighten performance, improve customer satisfaction, and drive innovation. Here are the specific advantages our platform offers to ISVs:
Partnering with DigitalOcean offers ISVs a cost-effective cloud infrastructure with a transparent pricing model. This contrasts starkly with the often opaque and unpredictable billing practices of larger cloud providers; it’s not uncommon for companies to experience AWS bill shock. DigitalOcean’s straightforward cost structure eliminates surprises, with predictable billing ensuring ISVs can budget and plan confidently.
Our platform’s pricing calculator enables software vendors to forecast expenses by simulating different usage scenarios and service configurations before deployment. This transparency in pricing is complemented by a range of scalable services that allow ISVs to start small and grow their resources without exponential increases in costs. As a result, ISVs can optimize their investment in cloud services, directing funds toward innovation and growth rather than unexpected infrastructure expenses.
At the core of DigitalOcean’s product offerings is a commitment to simplicity, allowing ISVs to scale their applications effortlessly. The intuitive setup of Droplets enables quick scaling of virtual machines without complex configurations. Managed Databases simplify the process of scaling database storage and performance, and they come with the convenience of a managed service to reduce administrative burden. DigitalOcean Kubernetes allows for easy orchestration of containerized applications, automating deployment and scaling with a developer-friendly approach. With Load Balancers, ISVs can distribute traffic across their infrastructure to maintain performance, all managed through a straightforward interface that prioritizes ease of use.
This focus on simplicity ensures that ISVs can scale their solutions with minimal effort, freeing them to focus on their core product development. Nixa, a Canadian web development agency catering to businesses and nonprofits, leverages DigitalOcean’s predictable pricing and straightforward cloud tools to deliver cost-effective and efficient software, websites, and applications.
“DigitalOcean is intuitive and powerful. Using hyperscalers, setting up something like a load balancer can be complex, but using DigitalOcean, setting it up is very simple–you just click and add it. We can keep our team of DevOps and SysAdmins lean to manage our 200+ customers because of the ease of use of DigitalOcean,” says the team at Nixa.
By partnering with DigitalOcean, ISVs connect to a broad and ever-expanding customer base spanning many industries and locations. This partnership allows ISVs to showcase their solutions to more than 630,000 SMBs, startups, and mid-market businesses already using DigitalOcean’s infrastructure, bypassing the initial barrier to entry of building trust.
This integration into DigitalOcean’s ecosystem allows for organic brand recognition and user adoption growth. ISVs can effectively co-market to a ready-made audience looking for complementary tools and services. The increased visibility from this direct access can lead to enhanced market penetration, driving revenue and market share for your brand.
DigitalOcean’s Marketplace is a software innovation hub, attracting a dedicated audience of developers and small to mid-sized businesses searching for cutting-edge tools. By featuring their solutions in the Marketplace, ISVs benefit from increased visibility within a community that values and actively seeks out new technology. This exposure is a gateway to a broader audience, offering ISVs the potential to amplify their customer base significantly.
Unlike other crowded marketplaces, DigitalOcean’s Marketplace offers ISVs a distinct advantage. The platform not only facilitates the discovery of ISV solutions but also streamlines the process of co-selling to this ready-made market. A presence in DigitalOcean’s Marketplace can be a powerful driver for ISV growth, potentially boosting revenue and expanding market share through strategic positioning and easy access to a large pool of prospective customers. Learn how to become a vendor and list on the DigitalOcean Marketplace.
DigitalOcean is committed to the success of ISVs, offering them a supportive and beneficial partnership. When ISVs team up with DigitalOcean, they receive tailored support to help them navigate and make the most of the platform’s features. This partnership is enriched with co-marketing initiatives, allowing ISVs to boost their visibility.
They also receive access to a wealth of resources to speed up their development cycle and go-to-market efforts. DigitalOcean provides ISVs with a practical and supportive experience focused on their growth and success in the cloud ecosystem.
Striving to keep the technological overhead to a minimum, DigitalOcean offers an accessible cloud solution that aligns with the needs of developers focused on their craft:
“We want to focus on creating beautiful software, and hyperscalers are often too complex for what our customers need. DigitalOcean makes it simple for us to manage a lot of customers under one account,” says Egbert Wietses, the CEO of Pionect.
Here’s what ISVs can expect with DigitalOcean:
Build software solutions cost-effectively across dev, test, and production environments
Reach a global audience of over 630,000 SMBs, startups, and midmarket customers
Showcase your solutions in DigitalOcean’s Marketplace, gaining visibility and attracting potential customers
Operate in an uncrowded marketplace, maximizing your exposure to potential customers
Enjoy a premier partnership experience, receiving dedicated support, co-marketing opportunities, and access to resources
We invite ISVs relying on multicloud portfolios, or those unsure about their expansion strategy, to request a personalized assessment from our cloud experts in support of their cloud journey here.
Join the DigitalOcean Partner Pod Program today and elevate your business through a collaborative partnership designed for mutual growth. With support at every level of our company, we’re committed to a long-term relationship that helps us achieve more, together. Become a DigitalOcean partner and unleash your growth potential!
Leverage the power of our expanded CDN integrated with Spaces Object Storage to enhance your web experience. With just a few clicks, you can enable the built-in CDN for your Spaces bucket, allowing you to access content swiftly using an edge URL instead of the origin. This network of edge servers, located at various points of presence (PoPs), ensures that content is delivered from the closest server to the user, significantly boosting speed and reliability.
Beyond accelerating page load times for a potential SEO edge, the CDN provides redundancy, allowing cached content to be served even during origin downtimes. Customize your caching preferences with TTL settings and opt for a custom subdomain with automatic SSL certificate renewal through the DigitalOcean Let’s Encrypt service, ensuring both ease of use and enhanced security.
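For illustration, with a hypothetical bucket named example-bucket in the nyc3 region, the same object is reachable at both the origin and the CDN edge:

Origin: https://example-bucket.nyc3.digitaloceanspaces.com/media/logo.png
Edge: https://example-bucket.nyc3.cdn.digitaloceanspaces.com/media/logo.png

Pointing your site at the edge URL (or at a custom subdomain mapped to it) is what turns on the performance benefits described above.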
In an era where speed and reliability are paramount, expanding our CDN PoPs network is more than just an upgrade, it’s a strategic move to bring your data closer to your users worldwide. This expansion means your applications, websites, and services will benefit from reduced latency and enhanced performance, crucial for real-time data processing and delivery.
Our Spaces CDN network has grown substantially, now boasting a total of 274 PoPs across the globe. This includes significant increases in key areas:
Africa now with 22 PoPs
Asia expanded to 53 PoPs
Europe boosted to 49 PoPs
Mainland China strengthened with 33 PoPs
And substantial growth in other regions including Latin America, the Caribbean, the Middle East, North America, and Oceania.
What does this mean for you? Your content and applications are now more accessible than ever, ensuring a seamless experience for your users, regardless of their location.
Real-world business impact
Our users are at the heart of everything we do. We’ve listened to your feedback and tailored our expansion to meet your diverse needs, ensuring that the free Spaces CDN network serves you better, wherever you are.
Enjoy the expansive reach of the Spaces Content Delivery Network without any extra fees. Your base Spaces subscription includes the built-in CDN along with a generous transfer allowance, encompassing both CDN bandwidth and origin bandwidth. Data transfer between the origin and the edge servers is also included in the allowance.
At DigitalOcean, we’re not just about providing services, we’re about creating solutions that make a real difference in your digital journey. This expansion is a testament to our dedication to delivering world-class services that empower you to focus on what you do best—growing your business and building amazing digital experiences.
Stay connected with us for more updates and enhancements, and thank you for choosing DigitalOcean as your trusted cloud services partner. For business inquiries contact our sales experts.
IDC estimates that the aggregate worldwide Cloud IaaS and PaaS markets will reach $321.9 billion in 2024 with year-over-year growth of 25%. Even in a tough economic climate, spending on cloud remains resilient by allowing customers to conserve cash and eliminate large upfront costs. Cloud also reduces financial risk by scaling up (and down) with the needs of the business.
However, challenges still exist in cloud adoption, especially for startups and small and medium businesses (SMBs). When asked, SMBs often state that cloud providers, largely hyperscalers, are not easy to do business with. There is pressure to make significant volume commitments or agree to long-term contracts in order to receive discounts. Complexity, cost, and the absence of consistent support are other significant factors. SMBs need to deliver new features and functionality with a limited set of resources and constrained cloud funding, and startups have the task of scaling rapidly. It can be overwhelming to navigate the myriad of cloud service options, not to mention develop the skills required to be proficient in them.
This situation has led to a change in the way customers think about selecting a cloud service provider. It is rare to find a business that exclusively works with a single provider. Instead, there is an increasing trend towards multicloud deployments. In the past, the use of multicloud was accidental, usually due to various departments or business units standardizing on different clouds. Today, multicloud is an intentional strategy. It is a recognition that each cloud has its own strengths and weaknesses. Customers are increasingly taking a best of breed approach to cloud architecture, matching workloads to the cloud that best fits those needs. But it is not always just about technology.
The very nature of multicloud means an increase in data transfer between clouds. This can have a significant impact on cloud costs because while data ingress is typically free, data egress is not. It is important to look beyond the standard cloud service rates to understand all of the elements that affect the monthly bill.
The mainstreaming of AI is also having an impact on cloud decisions. Interest in Generative AI (GenAI) has moved the conversation beyond technical circles to senior business leaders. GenAI is viewed as both a productivity tool and a way of building competitive advantage.
The infrastructure needed to train, tune, and deploy GenAI models is very different from traditional workloads. The processing of large volumes of data necessitates hardware accelerators like GPUs. It also requires new software stacks that can facilitate the development of applications that leverage AI capabilities.
For these reasons, IDC predicts that by 2025, 70% of enterprises will form strategic ties to cloud providers for GenAI platforms, developer tools, and infrastructure. This is especially true for SMBs that would find it cost prohibitive to build the infrastructure to support AI on-premises. Cloud becomes the obvious choice as it facilitates access to the latest technology along with a consumption-based pricing model.
Today’s cloud buyers are focused on five major value drivers:
Cost savings and predictable billing. As cloud environments grow, buyers have become keenly aware of the differences in costs between providers. This includes not just the service costs themselves, but other factors that can impact overall pricing.
Community support and robust technical documentation. Developers value the ability to connect with others in the community to learn and share best practices. A robust and active community can influence cloud provider roadmaps.
Ease of use aligned to a range of services. Looking beyond standard compute and storage infrastructure, cloud buyers want a range of services including managed databases, developer tools, and AI platforms that are user friendly and easy to adopt.
Scalability and consistent performance. As more mission-critical applications move to the cloud there is an increased emphasis on scalability, high availability and performance.
Security. Data security, which includes data encryption, access controls, and secure transmission protocols, remains paramount for companies, as they require cloud solutions with robust security measures to protect their sensitive information.
There are more choices than ever in the types of services offered in the cloud and savvy buyers are realizing that bigger is not always better. Organizations are exploring how to leverage multiple cloud providers in a way that optimizes cost and performance and finding success in multicloud architectures.
DigitalOcean continues to remain responsive to the changing needs of cloud buyers. Our foundational simplicity in platform experience, ease of use, comprehensive solutions, robust developer community, and 24x7 technical support are tailored to our customers’ growth journeys. Cost savings is also paramount to an improved customer experience, with many startups like ScraperAPI saving 250% over previous providers like AWS while meeting the same infrastructure needs. As egress costs continue to soar, DigitalOcean emerges as a leader, offering negligible egress fees versus the thousands charged by hyperscalers. In addition, DigitalOcean’s latest compute offering, Premium Droplets, runs on faster Intel and AMD CPUs with NVMe SSDs, making the latest hardware available to users. Further, customers can also use Premium Droplets as worker nodes with DigitalOcean Kubernetes.
Affordable, transparent pricing benefits businesses of all sizes that leverage services like managed Kafka, which is often cost prohibitive from other providers. A game changer for AI developers is the Paperspace offering by DigitalOcean, which empowers GPU-backed AI development and offers out-of-the-box solutions in a more seamless experience. We invite businesses relying on multicloud portfolios, or customers unsure about their expansion strategy, to request a personalized assessment from our cloud experts in support of their cloud journey here.
We want to support you on your cloud journey, which is why Cloudways is currently offering our best deal of the year—40% off our hosting plans for 4 months for new users (terms and conditions apply). Additionally, we’re offering up to 40 free web migrations so that you can move to Cloudways managed hosting quickly and easily. Don’t wait to experience excellent performance and business-ready customer support, sign up for Cloudways today.
Cloudways by DigitalOcean is a managed cloud hosting platform that automates your web setup with just a few clicks. As a subsidiary of DigitalOcean, Cloudways is dedicated to saving your time and enhancing your website’s performance through an intuitive and user-friendly platform, making it even easier for a non-technical user to get their cloud infrastructure up and running. Cloudways is an excellent choice for beginners interested in learning more about web hosting, entrepreneurs, e-commerce providers, and small-business owners looking to grow their digital presence.
With the flexibility to host multiple PHP web projects on multiple cloud providers, lightning-fast servers, and enterprise-level security, Cloudways enables you to focus on building and expanding your web projects while benefiting from transparent and scalable pricing plans.
Each Cloudways plan has pre-optimized servers, automated backups, security patching, 1-click staging environments, free SSL certificates, and server scaling, streamlining your website for maximum performance. This enables you to deploy and migrate your websites quickly, focusing your time on your growth.
Server management can be a complex task, but Cloudways simplifies the process, making it both efficient and user-friendly. The Cloudways platform is designed to eliminate the challenges of server management by providing the pre-installed services necessary for the smooth operation of major PHP applications.
What sets Cloudways apart from other web hosting providers is the freedom it offers you in choosing your servers. With access to multiple cloud providers including DigitalOcean, you can select the provider that aligns with your specific needs, keeping in mind your data server locations, audience traffic, and pricing options.
Furthermore, Cloudways empowers you with the flexibility to scale your servers at any time. This particular feature proves invaluable during critical periods like Black Friday and Cyber Monday, helping to ensure that you can seamlessly adapt to varying workloads and capitalize on peak demand without having any operational hiccups.
“Cloudways’ focus on performance optimization has greatly improved our websites’ speed and loading time by 80%. This has positively impacted user experience, increasing customer engagement and driving higher conversion rates.”
Patrick Abedin, Managing Director, Hellenic Technologies
“Cloudways has proven instrumental in solving many of our hosting and operational issues. For example, we managed to migrate our 15 development sites to Cloudways in under 3 days - a task that had previously been a 2-month ordeal filled with complications at our previous hosting provider”
Kelly Hodsdon, Founder & CEO, 123 Enterprises
Cloudways’ stack includes Apache, NGINX, Varnish, Redis, Memcached, and Elasticsearch for fast and reliable performance, data caching, and product search.
Cloudways protects PHP-based applications with security patches, WAF protection, free SSL certificates, and IP whitelisting.
Cloudways’ staging environment allows users to test web and application changes before deployment, preventing downtime and errors.
Cloudways’ server monitoring dashboard tracks server and application performance to identify and troubleshoot problems.
1-click PHP Application Deployment
Cloudways automates PHP application deployment with a few clicks, saving you time to focus on growing your business.
More traffic: BFCM attracts millions of users, so you can expect to see a significant increase in traffic on your websites.
Higher conversion rates: Users coming to your websites are more likely to make a purchase, as they are actively looking for deals and discounts.
Increased brand awareness: BFCM is a great opportunity for you to reach new customers and raise brand awareness.
Ensure that your website can handle the surge in traffic: Cloudways’ scalable servers can handle even the heaviest traffic spikes, so you can be confident that your website will stay up and running during BFCM.
Improve website performance: Cloudways’ pre-optimized servers and caching features can help to improve your website’s performance, ensuring that your customers have a fast and seamless shopping experience.
Reduce hosting costs: Cloudways offers pay-as-you-go pricing, so you only pay for the resources that you use. This can help you to save money on hosting costs, especially during peak traffic periods such as BFCM.
This Black Friday Cyber Monday (BFCM) season, Cloudways is offering a flat 40% OFF for 4 months with up to 40 free web migrations for new users. This offer enables you to keep your hosting cost to a minimum while your business generates revenue.
In addition to the 40% OFF with free migrations, Cloudways also provides BFCM deals for various applications, including WordPress, PHP, Magento, and more. This all-in-one solution helps you spend less and earn more during the BFCM season.
Take advantage of this offer and scale your business to new heights.
Note: The Cloudways BFCM exclusive offer is valid until December 1st, 2023, and is available for new users only. The discount applies to the first 4 invoices, and migrations must be requested within the next 4 months. Don’t miss out: sign up today!
Hacktoberfest, a month-long celebration of open-source software, has once again joyfully brought individuals together worldwide to contribute to open-source projects during the month of October. Developers, maintainers, and first-time contributors crafted innovative solutions, making technology more accessible to all. We are deeply grateful to those who contributed and helped make this Hacktoberfest a great year, even as we made several changes to the program. Hacktoberfest’s legacy as an open source celebration continues to grow, and we saw many new participants this year!
What stood out was the genuine enthusiasm among participants to work together to contribute to open source projects, driven by a desire for improvement rather than focusing on rewards. At its core, Hacktoberfest’s mission is to inspire more people to get involved in open source and work together to improve the software powering our world today. We’re proud that Hacktoberfest continues to be a source of inspiration, consistently motivating people and successfully fulfilling our mission.
This year we removed the t-shirt reward that folks had come to know and love. While we expected this to impact participation in Hacktoberfest, we were very pleased that 98,855 people from 184 different countries registered for Hacktoberfest compared to 146,891 last year. Participants also completed a total of 118,469 contributions during the month of October and 139,422 repositories opted-in to participate. In fact, Hacktoberfest has contributed 2.4 million accepted pull/merge requests to open source projects in its ten years.
From our partner, OpenSauced: “We were inspired to see our contributors making progress and growing through our repositories and documenting their growth and contributions through the OpenSauced platform during Hacktoberfest. We also had the privilege of onboarding two dedicated community maintainers who supported new contributors with kindness. The commitment and energy of our contributors and maintainers have inspired others to make meaningful contributions to open source that have already continued beyond October.”
This month we held three engaging sessions, ranging from getting started, to what’s new in open-source dev tools, to the future of AI and open source. The AI session, in particular, provides guidance on GitHub Copilot, Hugging Face, and other AI tools, as well as thoughtful advice on how to adjust to the rise of AI in tech.
Satellite Session #1: OS Dev Tools
Satellite Session #2: Future of AI & OS
CHAOSS Project Africa did so much for open source this month!
Check out their X/Twitter Feed
Folks loved personalizing their digital badges. Check out the Holopin Hacktoberfest Badge Board of Fame to see more.
Thank you so much to all of you who responded to our call for stories. Hacktoberfest offers many opportunities to learn and grow in your career, but don’t take our word for it, check out how these great folks have been influenced by participating!
Tuhina Tripathi | LinkedIn
Hacktoberfest has been an incredible journey for me as a woman in tech from India. It introduced me to the world of open source, a realm I was initially hesitant to explore but soon found to be immensely rewarding. I delved into various projects, submitted pull requests, and collaborated with developers from around the globe. I experienced substantial growth in my coding skills, learned the importance of clean and efficient code, and gained insights from experienced contributors. I am especially proud to have been among the first 50,000 participants and earned the privilege of having a tree planted in my name, contributing to environmental sustainability. Hacktoberfest not only allowed me to level up as a developer but also expanded my network, connecting me with like-minded individuals who share my passion for open source. I’d like to express my heartfelt gratitude to Hacktoberfest for this transformative experience, and I look forward to continuing my journey in the open source community.
Mauricio Vargas Sepulveda | GitHub | Website
Open Trade Statistics began around Hacktoberfest, and the funding from the DigitalOcean Credits for Open Source program to host a large database was crucial. Read the full story.
Mohammad-Ali A’râbi | GitHub | Twitter/X
Hacktoberfest has a special place in my heart. I started attending Hacktoberfest in 2017 and experienced something new each year with it:
2017: Opened my first pull request on GitHub.
2018: Created my first public library.
2019: Made my first real open-source contribution.
2020: Received contributions to my repos for the first time.
2021: Organized my first Hacktoberfest event (online).
2022: Organized my first in-person Hacktoberfest event as a Docker Community Leader!
Now I’m a Docker Captain, writing a blog series called Git Weekly, and writing a book on Docker Security. It’s hard to believe that I opened my first PR in 2017 and I was super intimidated by it. Now even my CV is hosted on GitHub and is compiled using GitHub Actions, and… wait for it… it’s called Hacktoberfest-CV. It always surprises me to see how much I learn from Hacktoberfest when I look back.
Amruth Pillai | GitHub | LinkedIn
Hacktoberfest '23 proved to be a significant milestone for Reactive Resume, propelling the project into the spotlight and attracting numerous new developers eager to contribute to a repository. With its user-friendly codebase and a winning combination of popular tech-stack, it emerged as the ideal starting point for aspiring developers venturing into the realm of production-level web applications. Throughout this period, the project experienced a surge in activity, with numerous new features, bug reports, and issues contributed by both users and contributors. This collective effort strengthened the project’s resilience and security. Additionally, the project achieved an impressive user base of 300,000 users, demonstrating its widespread adoption and impact.
This year the Hacktoberfest Discord community grew to 70k members from 40k in 2022. Through the community, we learned that Hacktoberfest helped advance careers, develop new passions, level up skills, and inspired mentorship. Open source projects belonging to startups and young companies that participated received help improving their open source projects, which helped grow their businesses. Keep in touch with our community by joining the over 70,000 developers on the Hacktoberfest community on Discord.
Every year we look for new ways to make Hacktoberfest valuable to the open source community. After a big year of changes, we’d love to get your feedback, including on the change in rewards. The great news: if you take the time to fill out our short survey, we’ll plant a tree for you through our partner, Tree-Nation. You can help improve Hacktoberfest and combat climate change, all with one action!
Every year we are fortunate to receive support both financially through sponsorship and via engagement from our community partners. They are all wonderful, and if you didn’t get a chance to learn about them during Hacktoberfest, we encourage you to check them out now!
We are very grateful for this year’s sponsors and partners who made Hacktoberfest possible and provided great opportunities to learn, grow and connect.
Sponsors: ILLA Cloud, Appwrite, Amplication, Runme and OpenSauced
Community Partners: Major League Hacking, Holopin, Tree-Nation, GitHub, GitHub Education, GitLab, DEV, DagsHub, Hugging Face, Paperspace
Congratulations to all who completed the challenge! Keep coding and contributing.
-Phoebe & the Hacktoberfest Team
Phoebe Quincy, Senior Community Relations Manager
The “Try the API” page uses the popular Swagger UI tool to render DigitalOcean’s OpenAPI spec into a documentation reference. The reference not only documents our API calls, but also allows customers to authenticate, format, and make HTTP requests to their DigitalOcean account from the documentation itself.
For example, you can format a POST request that creates a Droplet from the Droplet section of the doc, or you can make a GET request to retrieve your account’s billing information from the Billing section. Swagger renders the JSON responses on the page.
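If you later want to reproduce the same requests outside the documentation page, a minimal Python sketch using the requests library might look like the following; the Droplet name, region, size, and image slugs are illustrative placeholders.

```python
import os
import requests

# Read the API token from the environment; never hard-code credentials.
API_TOKEN = os.environ["DIGITALOCEAN_TOKEN"]
headers = {"Authorization": f"Bearer {API_TOKEN}"}

# GET request: retrieve account information.
account = requests.get("https://api.digitalocean.com/v2/account", headers=headers)
print(account.json())

# POST request: create a Droplet (all values below are example placeholders).
droplet = {
    "name": "example-droplet",
    "region": "nyc3",
    "size": "s-1vcpu-1gb",
    "image": "ubuntu-22-04-x64",
}
created = requests.post(
    "https://api.digitalocean.com/v2/droplets", headers=headers, json=droplet
)
print(created.status_code, created.json())
```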
To use the “Try it out” functionality, you need a DigitalOcean API key. You can follow the steps in our documentation to create a key and set its permissions to whatever you feel comfortable with.
Once you have an API key, open the Try-It-Now page and click on the “Authorize” button, or any of the lock icons beside the API calls, to open the authorization screen and enter your API key.
Swagger then stores the API key as a JavaScript variable in its local state. This means that the key will be erased from your browser’s memory if the tab is closed or the page is refreshed. For security purposes, the page does not store the key in local or session storage.
Once you have entered the API key, select an API call from the page and click the Try it out! button. Configure any parameters or request bodies as necessary, and then click Execute. The page makes the HTTP request to your account and returns a response. That’s it!
We hope this makes it easier to see what our API can do for you and can’t wait for you to try the “Try it out!” documentation feature!
Since the launch of DigitalOcean Managed Kafka, which is now in General Availability (GA), many customers have found value in its simplicity and affordability. Read on to hear how DigitalOcean customers are leveraging the power of Managed Kafka to grow their businesses.
Setting up, maintaining, and monitoring a self-managed Kafka deployment day to day presents many challenges. However, Managed Kafka provides an experience that has made these daunting tasks much easier for our customers.
Datacake, a low-code IoT platform, can better focus on product development, saying that “DigitalOcean’s Managed Kafka offering has been a game-changer for us at Datacake. By taking care of the operational aspects of running our Kafka cluster, we have been able to focus our attention on what really matters - building a great product.” - Lukas Klein, Chief Technical Officer, Datacake
Kairos Sports Tech, which provides operation and communications for major sports teams, has implemented Managed Kafka for usage in production workloads and has many important customers of their own who rely on their systems.
“… we were self-managing Kafka, and found the day-to-day management of running a multi-node system was time-consuming and full of complexities,” said Daniel Hendrie, Chief Technical Officer.
Thanks to the simplicity of Managed Kafka in handling complicated tasks, Kairos Sports Tech was able to save time and resources, and the ease of use got them to production much faster.
“DigitalOcean Managed Kafka simplified and sped up our management of Kafka—self-managing Kafka, it would take about eight weeks to go from initial development to a production-ready system. With DigitalOcean Managed Kafka, we cut that timeline to just one week.”
Simplicity remains at the core of DigitalOcean Managed Kafka. For a better overview, check out this video on how to get started and see for yourself.
Many managed Kafka services from other providers are extremely expensive and therefore are not viable options for SMBs, who are then forced to use self-managed Kafka, which is costly in the time it takes to administer. However, DigitalOcean Managed Kafka’s starting price of $147/month is perfect for helping SMBs get started.
In addition, other providers have complicated pricing schemes for managed Kafka, with separate costs for each broker, added storage, and additional add-on features. With DigitalOcean Managed Kafka, prices are predictable, with a single, flat-rate price that is shown to you when creating a cluster.
Another key factor for any customer is to pay for what you need and avoid underutilization when going from development to production with Kafka, which can take a couple of months. With other providers, customers might have to pay a higher price for a production-grade cluster when usage is low, or be locked into a short-term trial period.
DigitalOcean provides cost-effective pricing for both testing and production, so anyone can get started with Managed Kafka for testing purposes. When it’s time to go live in production, customers can easily scale up their existing cluster to any of our dedicated vCPU plans and pay only for the additional usage when ready.
Designed for SMBs with simplicity and affordability in mind, DigitalOcean Managed Kafka is now available in GA for all of your production workloads. Learn more about Managed Kafka in our docs, and start taking advantage of the benefits of Managed Kafka today by signing up for a DigitalOcean account.
Need help regarding Managed Kafka? Contact our sales team or connect with a DigitalOcean Partner who can advise you on architecture reviews, deployments, migration support, and other infrastructure assistance.
Have a different cloud provider? You can still leverage the benefits of the DigitalOcean Innovate More program by engaging with one of our cloud experts. Request a cloud bill comparison with DigitalOcean and highlight any critical pain points around pricing or performance. We’ll show you how DigitalOcean can improve your cloud experience and reduce your existing bill—ScraperAPI cut costs by 250% by migrating from AWS to DigitalOcean. To make your cloud adoption pathway even easier, we offer environment assessment, cloud migration support, and cloud rewards to initiate the process.
Many of our customers apply a multi-cloud strategy with DigitalOcean to avoid vendor lock-in, improve risk management, leverage innovative solutions, and reduce reliance on a single provider. A multi-cloud setup enables businesses to optimize costs and services by selecting cloud providers based on specific needs rather than a larger provider with unused services.
Here are ten reasons to consider DigitalOcean as your cloud provider of choice:
DigitalOcean’s developer-friendly environment, characterized by a simple user interface, robust API, and extensive documentation, empowers SMBs to deploy and manage applications quickly. This setup minimizes the entry barrier often encountered with other cloud platforms, enriching the developer experience and making cloud management less daunting for smaller teams.
For businesses, this translates into the capability to maintain a leaner, more efficient team, optimizing operational workflows and, consequently, reducing overheads. “DigitalOcean has enabled our non-DevOps tech staff to easily set up Droplets and configure the network easily,” says Gim Wee, CTO of Sans Paper.
Cloud bills are often complex and unpredictable, challenging teams to pinpoint areas for cost reduction. DigitalOcean’s affordable pricing and transparent billing provide a cost-effective solution that aligns well with the budget constraints of SMBs, ensuring customers get maximum value for their investment. The clear pricing structure eliminates any unexpected costs, making budget management more predictable and enabling better financial planning.
“The simplicity of the billing calculation is there. We can easily forecast and figure out what you’re paying for. In AWS, it is very difficult to figure out, especially with multiple regions,” says Ankit Aggarwal, CEO and CTO of EGLogics. “With DigitalOcean, as the CEO and CTO of the company, I can easily see what is being utilized and how to reduce my billing.”
Try the DigitalOcean pricing calculator to create a custom price quote based on your selection and usage of DigitalOcean services, helping you better understand the costs and align them with your budget.
DigitalOcean makes it easy for businesses to scale their infrastructure up or down as needed. The platform lets you spin up new cloud servers and resources within seconds. This enables businesses to seamlessly accommodate growth, new product launches, seasonal traffic spikes, and other fluctuating demands without downtime or disruptions.
ScraperAPI, a proxy solution for web scraping used by over 10,000 companies, has scaled its business with DigitalOcean. “The best thing here is that if I want to scale up to even 2x, I can do it in about a minute,” says Zoltan Bettenbuk, CTO of ScraperAPI.
DigitalOcean offers robust global infrastructure and reliable network connectivity to businesses worldwide, with 15 distributed data centers across nine regions. This allows businesses to deploy resources close to their customers to reduce latency.
Benefit from exceptional performance with virtual machines, fueled by the latest iterations of Intel Xeon and AMD Epyc processors, coupled with top-tier storage solutions across all data centers. This infrastructure gives businesses the raw computing power needed for modern application development, machine learning, and other performance-critical workloads.
DigitalOcean offers a comprehensive suite of products—across compute, managed hosting, storage, networking, and more—that cater to various SMB requirements. “We see that the response time from our server latency decreases when we use Premium CPU-optimized Droplets,” says Nick Zorin, Co-Founder and CTO of Jiji. “Aside from achieving low latency, the product impacts our backend. They are CPU bound, so it provides lots of additional processing power, which helps us.”
Here’s just a selection of DigitalOcean solutions available to SMBs:
Droplets: Linux virtual machines that allow you to choose CPU plans, RAM, SSD storage, and transfer quotas to meet your needs.
Kubernetes: A managed Kubernetes service that provides uptime, scalability, and portability for containerized applications with a free control plane.
Functions: A serverless solution that runs code on-demand, enabling instant scaling without managing servers.
App Platform: A fully managed platform for quickly building, deploying and scaling apps without worrying about infrastructure.
Cloudways: Fully managed hosting that eliminates middle-of-the-night troubleshooting for your websites.
Spaces object storage: Store and access vast amounts of data without compute server limits using S3-compatible tools (see the sketch after this list).
Volumes block storage: Add block storage volumes to expand capacity for compute servers as needed.
Load balancers: Scale applications easily with load balancing that directs traffic to available resources.
DDoS protection: Defend against network DDoS attacks with always-on and automated mitigation to ensure app uptime.
Paperspace: Cloud GPUs provide accelerated computing power for AI/ML workloads with uncompromised performance.
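As the Spaces item above notes, the service is S3-compatible, so standard S3 tooling works against it. Here is a minimal sketch using boto3 with placeholder region, bucket, and credentials:

```python
import boto3

# Spaces speaks the S3 API, so boto3 only needs a custom endpoint URL.
# The region, bucket name, and credentials below are placeholders.
client = boto3.session.Session().client(
    "s3",
    region_name="nyc3",
    endpoint_url="https://nyc3.digitaloceanspaces.com",
    aws_access_key_id="YOUR_SPACES_KEY",
    aws_secret_access_key="YOUR_SPACES_SECRET",
)

# Upload a file to a Space, then list the bucket's contents.
client.upload_file("report.pdf", "example-bucket", "reports/report.pdf")
for obj in client.list_objects_v2(Bucket="example-bucket").get("Contents", []):
    print(obj["Key"])
```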
DigitalOcean’s One-Click App Marketplace enables SMBs to quickly launch and deploy pre-configured applications, saving time and effort associated with manual setup and configuration. You’ll find a vast array of software, streamlining the process of implementing common frameworks, services, and development stacks.
Popular development stacks like MERN (MongoDB, Express.js, React.js, and Node.js), MEAN (MongoDB, Express.js, Angular.js, and Node.js), and FARM (FastAPI, React, and MongoDB) are at your fingertips. This accelerates the deployment phase, allowing businesses to focus on optimizing their applications for their specific needs, speeding up the overall development cycle and bringing products or services to market faster.
DigitalOcean implements robust security practices and features to safeguard customer data and applications. Servers come with built-in firewalls to control network traffic. Data at rest is encrypted by default for additional protection. Regular automated backups ensure disaster recovery capabilities. DDoS protection and two-factor authentication provide additional layers of security.
Compliance with AICPA SOC 2 Type II, SOC 3 Type II, and ISO 27001 demonstrates DigitalOcean’s commitment to security. Businesses can feel at ease knowing DigitalOcean has implemented industry best practices and passed rigorous audits.
Configuring servers, deploying applications, and managing infrastructure can be complex tasks that divert focus from core business goals. DigitalOcean’s platform is designed for ease of use with features like automatic backups, monitoring, and customizable alerts. This enables SMBs to ensure their infrastructure is running smoothly without having to check dashboards constantly.
Recovery from outages or disasters is simplified with just a few clicks to restore from automatic backups or Snapshots. The focus is on making infrastructure management effortless so that businesses can concentrate on developing applications and innovating products rather than managing servers. DigitalOcean’s automation and simplicity provide true “set it and forget it” cloud infrastructure management.
“This feature gives us the confidence to make frequent releases without worrying that production might go down,” says Gim Wee, CTO of Sans Paper. “We are confident that we can recover it.”
DigitalOcean fosters an active community of developers, customers, and technologists. The DigitalOcean Community provides forums, tutorials, and resources for SMBs to learn, connect, and get help. You can participate in discussions, ask questions, or share expertise about cloud infrastructure, development, system administration, and more.
Experienced community members actively help troubleshoot issues and provide guidance. The collaborative community enables new users to gain proficiency with DigitalOcean’s platform quickly. It also allows SMBs to network and exchange ideas with fellow entrepreneurs and developers.
DigitalOcean offers a vast library of educational resources, including documentation, technical tutorials, startup guides, and community-led initiatives. Step-by-step tutorials provide training on crucial skills like cloud infrastructure management, DevOps, web development, and more. DigitalOcean’s startup building guides offer actionable advice on technical and business aspects of building a company—from cloud monitoring tools to product roadmap prioritization. Events like Hacktoberfest facilitate collaborative learning.
These educational resources accelerate success in the cloud, empowering businesses to skill up their teams with the knowledge needed to leverage cloud technology and work more productively.
DigitalOcean’s simplicity, affordability, developer-centric approach, and extensive features make it the consistent choice for SMBs seeking a reliable and user-friendly cloud provider. We’ve thought through your entire cloud journey. But nothing is more powerful than hearing the cloud success stories from our customers, like Loot.tv. Check out how DO is enabling customer success, and join us today.
Accidental data deletion can happen for a variety of reasons. Here are some of them:
Human error: In today’s IT landscape, it’s common to have multiple environments and collaborate across teams, each of which may have different naming conventions for cloud resources. This can sometimes cause confusion and lead to unintentional deletion of Droplets. While we have measures in place to prevent this such as a two-step deletion process, it’s still possible to mistakenly delete the wrong Droplet.
Data consolidation: Data is more abundant than ever and developers or business owners are often looking to optimize data storage. In an effort to create disk space for new data, you might delete or overwrite old Droplets which are still being used to run your apps or websites.
Administrative errors: IT systems can be complex and require serious testing before any migration, configuration, or other administrative tasks are performed. There are myriad administrative causes, including insufficient training, lack of testing, and security or network misconfiguration, that can all lead to data loss.
DigitalOcean Droplet Backups have traditionally been linked to the Droplet’s status. This meant that if you accidentally deleted a Droplet and had backups enabled, those backups would be deleted along with the Droplet, sometimes causing unintended loss of critical data. To alleviate this issue, we’re introducing independent deletion dates for Droplet backups.
From October 23rd, 2023 onward, backups will be independent of the parent Droplet’s status and will have their own lifecycle.
Regardless of whether the parent Droplet is destroyed or not, backups will have a lifecycle of 4 weeks from the time of creation. You can see the deletion date for each copy in the cloud console.
You can restore your original Droplet or create a new Droplet from your backups even if the underlying Droplet has been destroyed.
Snapshots have always been delinked from the status of their parent Droplet so there will be no change of behavior for Snapshots.
Back up your Droplets in the DigitalOcean cloud console today to prevent business disruptions due to accidental data loss. If you are looking to protect workloads other than Droplets, try out SnapShooter!
For startups and small- and medium-sized businesses (SMBs) with limited resources, having flexible and cost-effective storage options can be as important as having the database itself. That’s why we’re excited to introduce a new solution that can reduce costs for those businesses: Scalable Storage for DigitalOcean PostgreSQL and MySQL Managed Databases. Scalable Storage enables users to increase disk storage without needing to upgrade compute and memory to meet those storage demands.
Alongside Scalable Storage, we’re adding more database configuration options, increasing storage limits on existing configuration options, and implementing a more intuitive database creation and resizing workflow to make using DigitalOcean even simpler.
Scalable Storage gives users the flexibility to add storage to MySQL and PostgreSQL Managed Databases at cost-effective prices with as little friction as possible. That means your business can scale seamlessly with a variety of shared and dedicated configuration options to suit the unique needs of your business.
Add disk storage without adding compute and memory: The main benefit of this offering is that users can add disk storage in 10 GB increments, each priced at $2/month, to meet constantly shifting demand without needing to increase compute and memory, and without any downtime in the process. Users can also change disk storage capacity using the Cloud Console or via the API—simple, intuitive, and practical.
Greater disk storage capacity: All Managed Database plans now come with a range of disk storage options to start with or to upgrade your existing plan, beginning with a minimum amount that can be increased from two to five times the starting amount.
Managed Databases now have more storage—up to 15 TB—so you can future-proof your database and ensure that it can handle the largest of database production workloads.
Monitoring to optimize costs: Monitor your compute, memory, and disk utilization data and set alerts, so you can scale your compute, memory, and storage when it matters most. Only pay for the database compute and storage resources you need, so you can be sure that you’re optimizing costs.
Basic Compute plan updates*: Scalable Storage enables new and existing databases on affordable Basic CPU configuration plans to scale up their disk storage by two to three times the minimum amount, up to a maximum of 580 GB. With solid base performance, this option is a great entry-level choice at the most affordable price for customers with minor to moderate workloads.
[New] Basic Shared Premium plans: Our new Basic Premium Intel and AMD configuration plans are great for users who need cost-effective database configuration options with larger disk storage and higher performance requirements. With NVMe SSD drives, higher compute performance, and disk storage that can be scaled up to five times the minimum amount for all plans up to 5 TB, these plans are great mid-range options for any use case.
Dedicated Compute plan updates: Get the best performance with maximum storage using our updated General Purpose and Storage-Optimized configuration plans, which can now have disk storage scaled up to five times the minimum amount. General Purpose and Storage-Optimized configurations provide greater stability than Basic CPU configurations to more effectively support customers with mission-critical workloads. Storage-Optimized plans provide faster read/write performance via NVMe SSD storage and the largest disk capacities, up to 15 TB, to handle the most demanding production workloads.
We’re splitting the pricing of our database configuration plans into two categories: compute (vCPU and RAM) and disk storage. Each 10 GB increment of disk storage is charged at a flat rate of $2.00 per month with no additional fees. New database clusters start at $15 per month: $13 for compute, and $2 for 10 GB of storage.
All MySQL and PostgreSQL database configuration plans start with a minimum disk storage amount. With Scalable Storage, you can increase that minimum disk storage in increments of 10 GB up to a set maximum limit for that plan. If you need more disk storage than that plan’s limit, you will have to upgrade your compute plan to another plan that provides a higher storage limit.
As always with DigitalOcean, pricing is predictable, so users always pay flat rate prices for compute and storage options regardless of data center locations. At database creation or resizing, users will see a simple, clear cost breakdown, so there are no surprises.
To find out more about the configuration plans, including storage ranges, maximum limits, and total pricing of all database configurations, please refer to the pricing page.
You can scale storage independently when resizing an existing database cluster or creating a database cluster via the UI or API. You can also do so with standby nodes and read replicas.
If you wish to adjust storage via the API, refer to our documentation pages.
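As a rough illustration only, the sketch below shows what an API-driven storage change could look like in Python. The `storage_size_mib` field and exact request shape are assumptions on our part; treat the official documentation as authoritative.

```python
import os
import requests

API_TOKEN = os.environ["DIGITALOCEAN_TOKEN"]
headers = {"Authorization": f"Bearer {API_TOKEN}"}

# Placeholder cluster ID. The storage_size_mib field is assumed here to set
# disk size independently of the compute plan (10 GB = 10,240 MiB).
cluster_id = "your-database-cluster-uuid"
payload = {
    "size": "db-s-2vcpu-4gb",   # keep the same compute plan
    "num_nodes": 1,
    "storage_size_mib": 61440,  # grow the disk to 60 GB
}
resp = requests.put(
    f"https://api.digitalocean.com/v2/databases/{cluster_id}/resize",
    headers=headers,
    json=payload,
)
print(resp.status_code)
```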
Scalable Storage provides flexible and cost-effective options to Managed Databases, which is vital for startups and SMBs with growing and changing business needs. Learn more about Scalable Storage in our docs, and start taking advantage of the benefits today by updating your existing Managed Databases.
Need help regarding Scalable Storage? Contact our sales team or connect with a DigitalOcean Partner who can advise you on architecture reviews, deployments, and other infrastructure assistance.
*We are discontinuing two Basic CPU configuration plans with 600 GB and 1.22 TB of disk storage. Existing customers on these plans will be able to continue using them without disruption. For more information, please refer to Plan Deprecation info in our product documentation.
Here are a few ways you—yes, you!—can support open source:
Prepare and share your open source project for collaboration
Improve a project by contributing pull/merge requests
Mentor others in the open-source community
Donate money directly to open-source projects you love
Did you know that companies and their employees all over the world participate in Hacktoberfest? You might not hear about it because they gather internally. A great example of this is Intuit, which has held internal hackathons for Hacktoberfest since 2019. In their own words,
“Open source is an important part of Intuit tech culture. By fostering a culture of contribution and collaboration within our engineering community, we help advance Intuit’s mission to power the prosperity of more than 100 million customers around the world, across Intuit’s platform and products—including TurboTax, Credit Karma, QuickBooks and Mailchimp. As we deepen our commitment to open source, our engineers are becoming even more open in the way they share, adopt, and use code. Our culture celebrates those with the passion and drive to contribute to open source projects, whether their own or others, as a way to bring new and better digital experiences to life.
That’s part of what makes the open source community so amazing. What started as a small commitment to launch a Hacktoberfest program, turned out to have a huge impact. We each play our own small role, but together, we can achieve things much bigger than we ever imagined.”
Learn more about Open Source at Intuit, and make sure to participate in this year’s Hacktoberfest!
Hacktoberfest sponsors and partners have a host of projects that are open to contributions—some are even incentivizing contributions in pretty cool ways. Learn more about how you can participate with them.
Celebrate Hacktoberfest with DigitalOcean, the originators of this decade-long open-source software celebration. Built on simplicity and cost-effectiveness, DigitalOcean lets you focus on creating apps, not handling infrastructure. Dive into a plethora of DigitalOcean projects participating in Hacktoberfest. Whether you’re versed in nginx/vue or Terraform, or are an Ansible enthusiast, opportunities to contribute await. Dive in now!
ILLA Cloud is an open-source, low-code developer tool with AI Agent features, so developers can build business apps much more easily. It’s really exciting to celebrate the 10th anniversary of Hacktoberfest together with developers. We prepared ILLA packages with badges, stickers, and magnets as physical rewards for contributors, and four digital badges for different PRs merged. Feel free to check out the details here. ILLA Cloud also made a tutorial video for new contributors; please check it out before you start to contribute. Following the Contributing Docs and Discord Community might help you too when you need any technical support from the ILLA Cloud team.
Appwrite is back for the third year in a row, and excited to celebrate the 10th anniversary of Hacktoberfest! We look forward to this amazing open-source community event that celebrates collaboration, creativity, and everything open source. This year we will be looking to the community to help out on a feature we love: Functions. With it being one of our most diverse features, it’s the perfect opportunity to contribute to Appwrite, from creating ready-to-use templates, to code contributions, and much more. Visit our Hacktoberfest website for more information on how to join!
Open source has always been at the core of what we do, and Hacktoberfest is an excellent opportunity to get more developers to join the community and support their favorite projects. There are several ways that you can contribute to Amplication as well. We’re inviting contributions across all our repositories, with an emphasis on the following four repos:
- The main Amplication repository (https://github.com/amplication/amplication),
- Docs (https://github.com/amplication/docs), where our project documentation lives,
- Amplication Plugins (https://github.com/amplication/plugins), which extend our platform’s functionality,
- and finally, our website’s repository (https://github.com/amplication/amplication-site).
This Hacktoberfest season, we’re going beyond just code contributions. We’ve lined up a series of events to make your Hacktoberfest experience even more engaging and rewarding. From an opening ceremony to numerous giveaways, we’ve got a lot in store for the community. Join Amplication’s Discord channel (https://amplication.com/discord). It’s the central hub where all communications will happen, so you won’t miss out on anything. Get plugged in now and let’s make this Hacktoberfest unforgettable!
Runme is a collection of open source repos hosted on GitHub, so there are a number of places to contribute depending on what interests you.
- stateful/runme is the markdown parser and CLI written in Go, and /issues contains many issues outlining feature requests, bugs to fix and maintenance work.
- vscode-runme is the VS Code extension that renders the interactive notebooks. It’s written in TypeScript, and there is always much to do in expanding its utility (more custom cells and integrations, more languages to support, more tests, etc.).
- stateful/docs.runme.dev is our documentation website built with docusaurus and we could always use help making these better!
If you have an idea of something you’d like to contribute, please join Runme’s Discord channel or GitHub and give us a brief description of your plans so we can give you feedback and direction if possible.
At OpenSauced, we believe in redefining the meaning of contributions, and we’re providing the space where contributors meet maintainers in a celebration of open source excellence. This Hacktoberfest, sign up with OpenSauced and elevate your open-source experience. Whether you’re contributing code, writing issues, or crafting blog posts, we’re providing a space for contributors to share their hard work and inspire others. For maintainers, OpenSauced offers unparalleled insights to truly understand your project’s contributions. Dive into analytics, spotlight highlights, and jointly celebrate the diverse efforts that fuel your project’s success. By focusing on genuine contributions and creating highlights, you’re not just getting noticed—you’re setting a standard in the community. Move beyond the “green square” mindset. Get noticed, make a difference, and redefine open source collaboration with OpenSauced.
GitHub is thrilled to celebrate 10 years of Hacktoberfest participants building on GitHub. Look for livestreams on open source projects throughout the month. For students, every week GitHub Education will host engaging livestreams on Campus TV and spotlight the week’s top maintainers and contributors on Community Exchange and across the blog and social channels. For those who want to contribute to GitHub’s own open source repos, GitHub Docs and Octokit are participating this year!
GitLab is proud to once again partner with Hacktoberfest and celebrate the power of open source collaboration all month long. We’re connecting passionate developers with participating communities who’ve added the “hacktoberfest” topic to their projects on GitLab. Celebrate 10 years of Hacktoberfest by browsing our directory to find a project awaiting your contribution. And don’t forget to explore the GitLab project, too! Our open core DevSecOps platform also needs your contributions. Join us for our Hackathon to learn more.
DEV is a welcoming community of software developers that shares coding resources and advice. We’re proud to be powered by Forem, an open-source software designed to empower online communities. For the sixth consecutive year, we’re thrilled to team up with DigitalOcean to support Hacktoberfest, an event that resonates deeply with our values. At DEV, we strongly believe that diversity of perspectives enriches the open-source community, and we’re proud to be part of an initiative that embraces individuals from all walks of life.
This year, we’re rallying all DEV members, whether you’re a seasoned contributor or brand new to the community, for an exciting Hacktoberfest adventure! Participants in Hacktoberfest 2023 can earn multiple badges from DEV, recognizing their contributions as contributors or maintainers, with associated rewards including validation and community engagement.
Earning the Hacktoberfest 2023 DEV badge series is closely linked to your active participation in the DEV community and writing posts on DEV. To begin your journey, create or log into your DEV account now and get ready to showcase your skills, connect with a global community, and earn fantastic rewards! Learn more
DagsHub is where people build AI projects. A centralized platform to host and collaborate on all ML project components such as code, data, models, experiments, and annotations. Built on top of popular open-source tools (MLflow, DVC, Label Studio), DagsHub does the DevOps heavy lifting for you, so you can focus on creating better models. Join us for a month-long celebration of open-source contributions to Machine Learning projects. Gain hands-on experience building datasets, models, pipelines, and more! From non-code contributors to Binary Sorcerers, there are projects for everyone!
Hugging Face is the Open Source AI platform that allows people to build collaborative ML by sharing and using models, datasets, and applications. Use Hacktoberfest as an opportunity to learn about the HF Hub and build your first ML app! Hugging Face has built dozens of OS tools for ML, such as transformers, datasets, and gradio, and contributions are welcomed in all of them! Feel free to head to the organization and explore the projects!
Paperspace recently joined DigitalOcean, expanding our AI capabilities. Paperspace is the compute partner for GPT4All, an open-source, free-to-use, locally running, privacy-aware chatbot. No GPU or internet required. GPT4All is an ecosystem to train and deploy powerful and customized large language models that run locally on consumer grade CPUs. Check out the GPT4All Github repo and contribute!
As Hacktoberfest unfolds, participants can make their contributions even more vibrant by showing them off to the world. If you’ve earned a badge for your accepted PR/MRs, don’t forget to claim it and share it on social media by tagging #hacktoberfest and #holopin, and add your badge board to your GitHub profile. This is not only a testament to your dedication but also a way to inspire other developers in the open source community. For maintainers keen on issuing custom Holopin badges for contributions to their own repositories, head over to holopin.io/pricing to explore the various options available.
Join Major League Hacking for Global Hack Week, a monthly event series where you can learn new skills, build your portfolio, attend fun sessions, and connect with developers from around the world. The Open Source edition of Global Hack Week is taking place October 16th-23rd! Learn how to find and contribute to open-source projects through 50+ workshops. The event is completely free to attend, register here to unlock the sessions and event perks!
Planting trees has been proven to be one of the most efficient solutions to fight climate change, and we have already planted more than 34,000,000 trees with our community. Through the Tree-Nation platform we aim to bring a technological solution to the problem of deforestation, responsible for about 17% of all climate change emissions. Thanks to our reforestation and conservation projects, we help restore forests while also creating jobs, supporting local communities, and protecting biodiversity. Hacktoberfest created a very beautiful forest with us that has more than 4,000 trees planted. Join Tree-Nation, join our mission, and let’s reforest the world together!
We look forward to seeing how you participate this year! Have fun and happy hacking!
Phoebe Quincy, Senior Community Relations Manager
Earlier this year, we launched Premium CPU-Optimized Droplets, a powerful offering which our customers immediately loved. We’re excited to now extend the premium advantage to the General Purpose line of Droplets to bring still more versatile and consistent performance to a variety of cloud-native websites and applications.
Premium General Purpose Droplets come with a host of advanced features, enabling you to reliably deliver stunning product and service experiences.
Premium General Purpose Droplets are available on our simple cloud UI that makes building and scaling cloud infrastructure easy for everyone. With today’s launch, when you go to the control panel to spin up Droplets, you’ll see a new option for Premium Intel within the General Purpose Droplet tab. There are also slugs for use with our CLI, API, or extensions like our Terraform provider. Premium General Purpose Droplets are also available as worker nodes in DigitalOcean Kubernetes.
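For illustration, here is a minimal sketch of creating one of these Droplets through the public API with Python's `requests` library. The size slug shown is an assumption for a Premium Intel General Purpose plan; confirm the exact slugs via the control panel or the `/v2/sizes` endpoint before using them.

```python
import os
import requests

# Assumed Premium General Purpose (Intel) slug -- verify against
# GET /v2/sizes or the control panel before relying on it.
droplet_spec = {
    "name": "web-premium-01",
    "region": "nyc3",
    "size": "g-2vcpu-8gb-intel",
    "image": "ubuntu-22-04-x64",
}

resp = requests.post(
    "https://api.digitalocean.com/v2/droplets",
    headers={"Authorization": f"Bearer {os.environ['DIGITALOCEAN_TOKEN']}"},
    json=droplet_spec,
    timeout=30,
)
resp.raise_for_status()
print("Created Droplet ID:", resp.json()["droplet"]["id"])
```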
Premium General Purpose Droplets are now available in NYC3, NYC1, SFO2, TOR1, FRA1, BLR1, and SYD1 data centers, with more coming soon, so spin up a Premium General Purpose Droplet right now—or switch from regular to Premium. Read more about General Purpose Droplet pricing for your business, or contact our sales team to discuss pricing.
*The benchmark CPU, network and file I/O performance numbers are based on DigitalOcean’s internal testing framework and parameters, using an 8 vCPU Droplet. Actual performance numbers may vary depending on a variety of factors such as system configuration, operating environment, and type of workloads.
To enable small businesses to leverage the power of Kafka without the administrative burden and the associated costs, we are excited to announce the launch of DigitalOcean Managed Kafka, a fully managed event streaming platform as a service. DigitalOcean Managed Kafka removes the complexities associated with self-managing Kafka and has an all-inclusive, flat-rate pricing model starting at just $147/month to help you avoid unexpected costs.
For small businesses, the time spent configuring and maintaining Kafka themselves can often outweigh the cost savings of a self-managed version. The initial setup has a high overhead cost as Kafka is often provisioned as a multi-node cluster with numerous parameters that require proper tuning. After setup, overall management is arduous for a small business as manual updating, securing, scaling, and logging resources are required to prevent issues. When an issue does occur, it can be challenging to troubleshoot due to the system complexity of a multi-node architecture. Any mistake can result in data loss, operational downtime, and security issues, any of which can be devastating to a growing business.
DigitalOcean Managed Kafka is built to alleviate these challenges while enabling even small teams to utilize Kafka. Benefits of DigitalOcean Managed Kafka include:
Simple, all-inclusive, fixed pricing: Starting at only $147 a month, Managed Kafka provides you with simple, predictable pricing, allowing you to control costs and avoid any surprise bills.
Easy scalability: As your customer base grows, it’s important to have a system that can keep up with increased demand. With Managed Kafka, businesses can save time and reduce operational overhead by having the option to scale up or down with a single setting.
Spin up a cluster in minutes: Provision highly available, three-node Kafka clusters quickly via the UI, CLI, or API. This saves time and reduces the operational overhead required for setting up and connecting three individual nodes, as well as a separate Zookeeper node.
Monitor and maintain with ease: Set alerts so you can take better-informed actions to optimize cluster performance and react faster to issues. Logging information can be combined with other DigitalOcean products on a single dashboard for fast and easy troubleshooting.
Get automatic and scheduled updates: DigitalOcean keeps your clusters updated, and you can schedule version updates, keeping Kafka clusters stable and secure. This makes it easy to take advantage of the latest functions of Kafka with minimal disruption to customers.
High availability: In the event of a node failure, Managed Kafka will automatically swap in a healthy node while the rest of your nodes push and pull messages without downtime.
End-to-end security: Since data is critical, it also needs to be secure. Data is encrypted at rest with LUKS and in transit with SSL. Clusters on DigitalOcean run in a VPC (Virtual Private Cloud) by default, making them unreachable via the public internet unless the source is whitelisted. (A minimal connection sketch follows this list.)
Stable performance: You can provision Kafka clusters on virtual machines with 100% dedicated vCPUs. Dedicated vCPUs have guaranteed access to the entire hyper-thread at all times, delivering a consistently high level of performance for your apps and your customers.
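As a rough sketch of what connecting to a provisioned cluster can look like, the snippet below produces one message over SSL using the open-source `kafka-python` library. The broker address and certificate file names are placeholders; your cluster's connection details in the control panel show the actual values and supported authentication mechanisms.

```python
from kafka import KafkaProducer

# Placeholder connection details -- copy the real broker address and
# credentials from your cluster's overview page.
producer = KafkaProducer(
    bootstrap_servers="your-kafka-cluster.db.ondigitalocean.com:25073",
    security_protocol="SSL",
    ssl_cafile="ca-certificate.crt",
    ssl_certfile="user-access-certificate.crt",
    ssl_keyfile="user-access-key.key",
)

# Publish a test event to a topic named "events" (create the topic first).
producer.send("events", b"hello from managed kafka")
producer.flush()
```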
DigitalOcean customers such as Datacake, who have already implemented Managed Kafka, love the benefits that Kafka brings to their applications.
“DigitalOcean’s managed Kafka offering has been a game-changer for us at Datacake. By taking care of the operational aspects of running our Kafka cluster, we have been able to focus our attention on what really matters - building a great product. With this new service, we were able to migrate seamlessly to an event-based architecture while maintaining the highest levels of operational security.” Lukas Klein, Chief Technical Officer, Datacake
DigitalOcean Managed Kafka is designed for SMBs with simplicity, easy scalability, and affordable, flat-rate pricing, starting at $147 a month. Learn more about Managed Kafka in our docs, and start taking advantage of the benefits of Managed Kafka today by signing up for a DigitalOcean account.
Need help regarding Managed Kafka? Contact our sales team or connect with a DigitalOcean Partner who can advise you on architecture reviews, deployments, migration support, and other infrastructure assistance.
We're excited to announce several product enhancements that we're committed to launching in the coming months, starting with DigitalOcean Managed Kafka, which is available right now. A fully managed event streaming platform as a service, Managed Kafka offers peace of mind by removing the burden associated with managing a complex solution like Kafka on your own.
DigitalOcean has the products and expertise startups and small businesses need to scale simply—and we’re constantly improving our platform to suit the evolving needs of every one of our users. Take a look at some recent updates that your business can take advantage of now, and a few exciting product enhancements planned for the near future.
Watch the video to get a summary of the updates, or read the details below:
Start using NVIDIA H100 GPUs on Paperspace: In July, we acquired Paperspace, a leading provider of cloud infrastructure for highly scalable GPU-accelerated applications, to help startups and SMBs simplify the process of developing, deploying, and scaling modern AI applications at a fraction of the cost. With the extremely popular NVIDIA H100 tensor core GPUs available now, startups and SMBs can begin their AI/ML journeys on Paperspace today. Paperspace is integrated with DigitalOcean Spaces so you can easily access and manipulate your Spaces Object Storage through Paperspace's Notebook experience. Plus, with an SSO integration with DigitalOcean, accessing products and computing capabilities across platforms is simple.
More Paperspace updates to come: We will soon introduce other updates, such as simplified onboarding for AI/ML with model deployment templates, streamlined integration with Hugging Face, and improved management tools including metrics, history, and logs. In the coming months, we also plan to upgrade the underlying network infrastructure with up to 400G capabilities to speed up accelerated computing workloads and to boost Paperspace VM connectivity to the internet via a highly available network.
Managed Kafka, a fully managed event streaming platform as a service: Apache Kafka is the de facto standard for building real-time, data streaming applications and is very popular in industries like streaming, IoT, and data analytics. Due to its inherent operational complexity and high costs, though, many small businesses haven’t been able to take full advantage of Kafka. Today we’re introducing Managed Kafka, a fully managed event streaming platform as a service. It removes the significant difficulty of self-managing Kafka and includes benefits such as automatic updates, high availability, and easy scalability. Like all of our products, Managed Kafka offers low, predictable pricing that starts at $147 per month. Check out the Managed Kafka announcement to learn more.
Faster, frequent, and comprehensive Backups: DigitalOcean Backups—automatically created disk images of Droplets at weekly intervals—provide an easy way to revert to an older state or to create new Droplets. Customers often have a need for more frequent backups than we’ve offered in the past to better bolster their data protection strategy. To address this need, we’ll soon introduce the ability to create daily backups and other enhancements. Both daily and weekly backups will be differential, significantly reducing the time required to create backups. We’ll open the beta for daily Droplet backups in the near future.
Scalable Storage for Managed Databases: Many customers run out of storage long before they fully utilize the compute capacity of their database cluster. We will introduce Scalable Storage for Managed PostgreSQL and Managed MySQL that will allow users to easily increase the storage capacity of a database cluster without upgrading the entire cluster configuration and without downtime. This provides flexibility and makes it easier for businesses to support an ever-increasing data footprint. Additionally, by removing the need to increase the vCPU and RAM, which is typically more expensive, costs are better kept in check. We will also introduce database plans with a storage limit of 15TB, a significant increase over our previous limit of 7TB. Scalable Storage for Managed Databases will be available in Q4 2023.
Highly-performant Object Storage in the Bangalore, India data center: With highly-performant and scalable object storage now available in the Bangalore data center (BLR1), businesses in India can store data within the country—boosting performance by having compute and storage in the same location—which may support their compliance strategy. All Spaces buckets in BLR1 support 800 RPS for read/write operations making it ideal for data analytics workflows, training AI models, log files generated by applications, and video streaming applications.
Premium CPU-Optimized Droplets: Whether it's crystal clear audio, glitch-free video, or uninterrupted gaming, everyone loves seamless digital experiences. To better deliver those experiences, we introduced Premium CPU-Optimized Droplets, virtual machines built for high throughput and consistent performance that are ideal for network and computing-intensive workloads such as streaming media, online gaming, machine learning, and data analytics. Premium CPU-Optimized Droplets offer up to five times higher outbound network speeds, 58% higher performance, and 290% faster disk writes than standard Droplets. Since their launch earlier this year, Premium CPU-Optimized Droplets have become extremely popular, with startups such as Validin using them to power their data indexing platform.
Enhanced memory and storage for Basic Premium Droplets: We’re always looking to provide more flexibility to customers so they can better address new use cases for their growing business with the power of cloud computing. Basic Premium Droplets, our shared virtual machines—powered by newer AMD EPYC™ and Intel Xeon® CPUs along with superfast NVMe SSDs—are designed to deliver superior performance at an affordable price. In August 2023, we expanded the Basic Premium Droplet lineup to include new plans with enhanced memory and storage to give businesses more flexibility and a wider choice of virtual machines.
Managed Databases with powerful compute: Basic Premium compute plans are coming to Managed Databases. With faster NVMe SSD storage disks, the Basic Premium compute plans will improve the read and write performance of your database clusters. These plans, available in Q4 2023, are ideal for customers who need cost-effective database configuration options with larger disk storage and higher performance requirements.
More Premium Droplet variants coming: We’re excited to announce that we will be extending the Premium variant to new Droplet types in 2023. Starting in October, users will have access to Premium General Purpose Droplets. These Droplets utilize newer generation CPUs, faster NVMe drives, and offer up to 10 Gbps of outbound data transfer speeds making them ideal for running e-commerce, consumer, and SaaS apps.
Robust WordPress hosting with powerful automation: Businesses on DigitalOcean have told us that they need a WordPress solution that’s hassle-free and delivers fast and reliable WordPress website experiences without any interruptions. We’re introducing Cloudways Autoscale, a new fully-managed WordPress hosting solution which includes autoscaling, load balancing, and high availability. Powered by Kubernetes, Autoscale uses load balancers to automatically distribute traffic efficiently to maximize website speed and performance. Ideal for companies running e-commerce stores, ticket booking sites, and high-traffic influencers, Autoscale easily handles traffic spikes by automatically scaling your hosting resources up or down based on traffic demand. You can request early access to the beta now and be among the first to experience the future of cloud hosting.
We hope you are as excited about these updates and enhancements as we are. Check out this webinar or fill out this form if you want to learn more about any of the product updates. Contact our sales team to learn more about how DigitalOcean can help your business grow.
At DigitalOcean, we have been excited to see the growth and resilience of our nonprofit customers. Our inaugural product donation program, Hollie's Hub for Good, allowed us to offer free infrastructure credits to 2,200 nonprofits from around the world. Nonprofits like Project Sitara, Unicodemy, and Nonprofit Exchange use DigitalOcean for the simplicity, affordability, reliability, and accessibility of the platform. We've been inspired by the work these organizations have done with DigitalOcean's support, and we are excited to be expanding our programs to support even more nonprofits and social enterprises around the world.
DigitalOcean took the Pledge 1% commitment in 2021 to commit $50M over the next ten years for charitable purposes. As part of the commitment, we launched our social impact initiative, DO Impact, in 2022 to empower changemakers around the globe through our products and philanthropy, enable our people to do good in their communities, and ensure our footprint is sustainable.
Today, DO Impact is excited to announce the launch of the DO for Nonprofits & Social Enterprises program. This program builds upon Hollie's Hub for Good by offering more benefits and services to these important customers. Program benefits now include:
$2,500 in free infrastructure credits (one-time credit, valid for one year)
20% discount for Cloudways managed hosting services
Continued access to DigitalOcean’s tutorials & documentation
The DO for Nonprofits & Social Enterprises program allows us to meet the Pledge 1% commitment through the most important lever in our business, our product. Additionally, we have partnered with Percent to scale our application and verification process. If you are a nonprofit and looking for a cloud infrastructure provider, please visit the link above to learn more.
We can't wait to support more organizations doing good, and look forward to sharing their stories in the future. One example of how DigitalOcean is already having an impact is our nonprofit partner Ersilia, a technology non-profit that helps research organizations in African countries develop new medicines more effectively using data science tools and AI/ML models.
With cloud infrastructure credits and the help of DigitalOcean experts, Ersilia used App Platform to quickly and easily deploy their platform on DigitalOcean’s infrastructure, and are now able to easily manage and maintain their deployments themselves. By providing this support, DigitalOcean has helped to bolster Ersilia’s ability to address global health disparities in a significant way. This is the kind of social impact that cloud infrastructure can make.
We remain forever grateful to Hollie Haggans, a former employee who created “Hub for Good” (later named Hollie’s Hub for Good) as a way for DigitalOcean to support our global community during the COVID-19 pandemic. We’re proud to remember Hollie, and her legacy will always live on through DO Impact, and our new DO for Nonprofits & Social Enterprises program.
DigitalOcean: Welcome to DigitalOcean, we're excited to have you and the rest of the Paperspace team on board! Can you first tell us, what were your backgrounds prior to Paperspace and how did the company get started?
Paperspace: We met at the University of Michigan’s architecture school, when we were both getting our Master of Architecture degrees. Michigan’s architecture program is known for its emphasis on digital computation (where software and building overlap), and in our senior year we were doing structural simulation work and realized the power of using graphical processing units (GPUs) for crunching lots of data and building powerful, parallelizable applications.
When we started to dive deeper into GPUs we saw that GPUs could be powerful for not only simulation projects but also for emerging machine learning and other applications, and thought if we could make it easier to access GPUs we could unlock a ton of emerging applications. We decided to start a company around GPUs, and applied to the tech accelerator YCombinator on a whim to help us build the idea and get investors. We got into their 2015 batch with a set of all-star advisors including Garry Tan (who runs all of YC now), Alexis Ohanian (who started Reddit), and Justin Kan (the founder of Twitch – one of the first companies to really use GPUs in the cloud).
DigitalOcean: Did you always plan to focus on AI workloads or has that shifted over time?
Paperspace: We thought GPUs would be big since they were useful for visual streaming, gaming, and more. In the early days of the company around 2016-2017 we started to see that the biggest group of users were using deep learning models, so we started building more tailored tools for AI/ML workloads. The space has changed so much, and a year or two ago we started to see large language models (LLMs) start to take off, with the space growing even more in the past 6 months.
DigitalOcean: How did you first get connected to DigitalOcean?
Paperspace: We’ve always been close to DigitalOcean, as we’re based in New York where DO is also headquartered, and some of our early engineers came from DO. We also took a lot of inspiration from how DO did things—we felt that everyone should be able to sign-up without talking to sales, similar to DigitalOcean’s self-service model. The other thing we took from DigitalOcean was their content marketing - we realized that there were many students and others who wanted to learn AI, so we launched our Paperspace blog modeled after DigitalOcean’s tutorials. We first met Yancey [DigitalOcean’s CEO] four years ago, and have stayed in touch over the years.
DigitalOcean: What are some challenges you had to overcome during Paperspace’s journey?
Paperspace: We had some technical challenges along the way when first building our GPUs, which involved a lot of testing things to get the GPUs to show up on the server. We also realized we had to be super intentional about every dollar we spent, and maintain morale even with a small team. Scaling a business can also be challenging — when you’re really small it’s easy to talk to the right person to get something done, but when you grow you need to build processes while still keeping up the momentum of a smaller team.
DigitalOcean: What excites you most about the AI/ML industry today?
Paperspace: The AI/ML industry is moving so quickly, keeping up with changes is hard. We’re in a radical innovation phase with so many things being tried out. AI models are fundamentally transformative, and they will make their way into everything—if you were a mediocre coder, you can become an advanced coder, and even if you’ve never coded before you can build apps with AI. There are still challenges around it, but we’re very optimistic about the future of AI.
DigitalOcean: Finally, tell us what makes Paperspace unique from other solutions out there.
Paperspace: We're a GPU cloud provider at a high level, with a set of tools added in that are targeted to training, fine-tuning, and ultimately deploying machine learning models into production. Our biggest differentiator is that we're much simpler than other solutions. We wanted to bring radical simplicity to the product—you can do what we offer in the big cloud providers, but it's much harder. We also value transparency in terms of billing so developers and students using the product aren't hit with surprise bills. We know DigitalOcean is also committed to simplicity and billing transparency which made it a great fit for us.
To try out Paperspace GPUs today, sign up on their website! You can also keep up with AI trends and tutorials on the Paperspace Blog, which has tutorials, sample apps, and more created by the Paperspace research team and community.
Beginning in September we encourage you to visit Hacktoberfest.com and mark your calendar for September 26th when registration opens. The Hacktoberfest website offers a wealth of resources for both open source newcomers and seasoned experts. We invite you to join by contributing to open-source projects. You can do this in a variety of ways:
Prepare and share an open source project for collaboration
Contribute to the betterment of a project via technical code contributions
Contribute your non-technical skills and experience such as writing, graphic design, and advocacy
Organize an event
Mentor others
Donate money directly to open source projects
Now in its tenth year, Hacktoberfest is making important changes to ensure its sustainability for the next decade. Most notably, we will be moving away from the t-shirt rewards we have previously provided to a digital reward kit.
Although we know Hacktoberfest t-shirts are loved by the community, producing over 50,000 t-shirts and shipping them around the world has become logistically challenging. Even with the support of external sponsors, almost all of the program’s operating budget in past years has been allocated towards these physical rewards. Furthermore, we’ve run into challenges in many countries with participants being required to pay customs taxes and import duty fees which often exceed the value of the gift itself.
Our commitment remains unwavering towards Hacktoberfest’s primary mission of supporting open source projects. After carefully considering various options for this year and the future, we are excited to introduce an exclusive digital reward kit in partnership with Holopin. We believe that even without t-shirt rewards, the developer community will continue to come together in the same spirit of Hacktoberfest that they’ve always shown.
In previous years we have given participants who completed 4 PR/MRs the option to plant a tree through our partner Tree Nation instead of redeeming a t-shirt. This year we’re excited to share that we’ll be purchasing a tree for the first 50,000 participants that complete their first PR/MR.
This year we’re excited to be partnering with Major League Hacking for Global Hack Week, a monthly event series where you can learn new skills, build your portfolio, attend fun sessions, and connect with developers from around the world.
Join us for a special Open Source edition of Global Hack Week taking place October 16th-23rd! Learn how to find and contribute to open source projects through 50+ workshops. The event is completely free to attend; register here to unlock the sessions and event perks!
To learn more about organizing your own Hacktoberfest events, visit hacktoberfest.com/events.
Share your appreciation of Hacktoberfest with the community through photos, videos, or a blog post! Whether you’re a contributor, maintainer, event organizer, sponsor, or partner we want to hear from you!
VIDEO: Create a short video and share it with the community through your favorite platform. Be sure to tag #hacktoberfest10 and/or #digitalocean. Submit your video here.
BLOG POST: If you'd like to write about your experience participating in Hacktoberfest, we encourage you to create a blog post.
Share how you first heard about Hacktoberfest, how being part of the community has impacted your personal or professional development, and your favorite or most useful hack. If you’re a project maintainer, share how Hacktoberfest contributions have improved your project. Creativity is welcome! Once your post is live, let us know by sharing on your social channels and tagging or hashtagging Hacktoberfest so we can help amplify it.
SOCIAL MEDIA: Share your Hacktoberfest experience on social media! Use the official hashtag #hacktoberfest10 and tell others about your favorite contributions, any swag you’ve received in the past (share a pic!), or a particularly memorable hack. You can also submit your story for us to spotlight.
We look forward to hearing from you and seeing how you’ve been part of the Hacktoberfest community! Submit your story today.
Hacktoberfest wouldn’t be possible without the support of our sponsors—this year, we’re excited to welcome ILLA Cloud and Appwrite as our premium sponsors, as well as Amplication, Runme and OpenSauced. Thanks also to our amazing partners, Holopin, Major League Hacking, Tree Nation, GitHub, GitLab, GitHub Education, DagsHub, Hugging Face, DEV, and Paperspace.
Happy Hacking!
Phoebe Quincy, Senior Community Relations Manager
Today, we are excited to announce the expansion of the Basic Premium line of Droplets with new plans that are equipped with enhanced memory and storage options. Like every other Droplet, these new plans will continue to have storage and network transfer bundled with the virtual machines. These new plans will be available in the AMS3, BLR1, FRA1, NYC3, SFO3, SYD1, and TOR1 data centers.
Choose from these new plans or any of the existing plans we offer for Basic Premium Droplets with AMD processors.
| Processor Type | Memory | vCPUs | Transfer | NVMe SSD | Monthly Price |
| --- | --- | --- | --- | --- | --- |
| AMD EPYC™ | 8 GiB | 2 | 5,000 GiB | 100 GiB | $42 |
| AMD EPYC™ | 16 GiB | 4 | 8,000 GiB | 200 GiB | $84 |
| AMD EPYC™ | 32 GiB | 8 | 10,000 GiB | 400 GiB | $162 |
Choose from these new plans or any of the existing plans we offer for Basic Premium Droplets with Intel® Xeon® processors.
| Processor Type | Memory | vCPUs | Transfer | NVMe SSD | Monthly Price |
| --- | --- | --- | --- | --- | --- |
| Intel® Xeon® | 1 GiB | 1 | 1,000 GiB | 35 GiB | $8 |
| Intel® Xeon® | 2 GiB | 1 | 2,000 GiB | 70 GiB | $16 |
| Intel® Xeon® | 2 GiB | 2 | 3,000 GiB | 90 GiB | $24 |
| Intel® Xeon® | 4 GiB | 2 | 4,000 GiB | 120 GiB | $32 |
| Intel® Xeon® | 8 GiB | 2 | 5,000 GiB | 160 GiB | $48 |
| Intel® Xeon® | 8 GiB | 4 | 6,000 GiB | 240 GiB | $64 |
| Intel® Xeon® | 16 GiB | 4 | 8,000 GiB | 320 GiB | $96 |
| Intel® Xeon® | 16 GiB | 8 | 9,000 GiB | 480 GiB | $128 |
| Intel® Xeon® | 32 GiB | 8 | 10,000 GiB | 640 GiB | $192 |
Starting today, new Droplets can no longer be created from the following plans via the cloud console. However, existing Droplets on these plans will continue to be accessible in the cloud console.
| Processor Type | Memory | vCPUs | Transfer | NVMe SSD | Monthly Price |
| --- | --- | --- | --- | --- | --- |
| Intel® Xeon® | 1 GiB | 1 | 1,000 GiB | 25 GiB | $7 |
| Intel® Xeon® | 2 GiB | 1 | 2,000 GiB | 50 GiB | $14 |
| Intel® Xeon® | 2 GiB | 2 | 3,000 GiB | 60 GiB | $21 |
| Intel® Xeon® | 4 GiB | 2 | 4,000 GiB | 80 GiB | $28 |
| Intel® Xeon® | 8 GiB | 4 | 6,000 GiB | 160 GiB | $56 |
| Intel® Xeon® | 16 GiB | 8 | 9,000 GiB | 320 GiB | $112 |
Try the new Basic Premium Droplets today! If you’d like to have a conversation about using DigitalOcean Droplets in your business, please feel free to contact our sales team.
As a tech startup or small business, managing your expenses is critical to your success. If you're running your business in the cloud, one of your most significant expenses is likely your cloud service bill. But how you pay that bill can make a big difference to your bottom line. In this article, we'll explain why paying for cloud services through ACH bank payments is often a better choice for small businesses. As a small business, you can use ACH payments to streamline your payment processes and reduce transaction costs. For example, you likely already use ACH payments to automatically pay vendors and employees and to collect payments.
An Automated Clearing House (ACH) payment is an electronic payment method that enables you to transfer funds between bank accounts in the US. ACH payments are commonly used for payroll, vendor payments, and direct debits. Setting up ACH payments is as easy as providing your bank information.
Credit card transactions can be vulnerable to fraud and data breaches, which can be devastating for a small business. ACH bank payments, on the other hand, are highly secure and use encryption technology to protect sensitive financial information. By using ACH bank payments, you can reduce your risk of financial losses due to fraud or data breaches.
As a small business, managing your cash flow is critical to your survival. ACH bank payments are processed quickly and can be scheduled in advance, which makes it easier for businesses to manage their cash flow and budgeting. Credit card payments, on the other hand, may take longer to process and can lead to unpredictable cash flow.
Keeping track of your expenses and revenue is essential for any business. However, credit card transactions can be more challenging to reconcile and track in accounting software compared to ACH bank payments. By using ACH bank payments, you can simplify your accounting processes and reduce the risk of errors.
Finally, ACH bank payments are often more reliable than credit card payments, as they are less likely to be declined or flagged for fraud. This can result in better customer service and a more positive experience for businesses and their customers.
As a startup or small business, every expense matters. By using ACH bank payments to pay for your cloud services, you can reduce your costs, increase your security, manage your cash flow better, simplify your accounting, and provide better customer service. If you’re not already using ACH bank payments, it’s worth considering as a payment option for your cloud service bills.
DigitalOcean business customers may qualify to upgrade to ACH payments. If ACH is not enabled on your billing page, our experts can help you understand your payment options. For detailed instructions on how to upgrade to ACH payments at DigitalOcean, click here.
DigitalOcean Spaces Object Storage offers a user-friendly interface and API that simplifies data management in your Kubernetes environment. Object storage is an ideal solution for Kubernetes environments because all pods can access all data at all times. With Spaces, you can easily store and retrieve files, images, and other objects, seamlessly integrating them into your Kubernetes workflows. By leveraging the familiar S3-compatible API, you can effortlessly interact with Spaces, making it an ideal choice for developers familiar with the popular S3 API.
Spaces integrates natively with DigitalOcean Kubernetes, allowing you to leverage its powerful features and streamline your application workflows. By integrating one of the S3 SDKs into your Kubernetes application, you can mount Spaces as a storage bucket within your Kubernetes pods, enabling easy, shared access to object storage. The Spaces documentation includes a number of code examples for interacting with the files in your Spaces bucket.
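For instance, a pod that needs shared object storage might use `boto3`, one of those S3 SDKs, pointed at a Spaces regional endpoint. This is a minimal sketch: the bucket name and region are placeholders, and in a real cluster the access keys would typically be injected via a Kubernetes Secret rather than plain environment variables.

```python
import os
import boto3

# Point the standard S3 client at a Spaces regional endpoint.
# The bucket name and region below are placeholders.
client = boto3.session.Session().client(
    "s3",
    region_name="nyc3",
    endpoint_url="https://nyc3.digitaloceanspaces.com",
    aws_access_key_id=os.environ["SPACES_KEY"],
    aws_secret_access_key=os.environ["SPACES_SECRET"],
)

# Upload a file that any other pod in the cluster can then read back.
client.upload_file("results.json", "my-space", "jobs/results.json")
print(client.list_objects_v2(Bucket="my-space", Prefix="jobs/")["KeyCount"])
```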
Scalability is a critical aspect of any Kubernetes deployment, and Spaces Object Storage excels in this regard. DigitalOcean Spaces offers virtually limitless scalability, ensuring your storage can grow alongside your application needs. Whether you're dealing with small-scale projects or enterprise-level workloads, Spaces can handle it all. The Spaces API supports up to 800 requests per second (RPS) per bucket.
Data reliability is critical for any storage solution in a Kubernetes environment. Spaces Object Storage helps ensure high levels of data resilience and durability. Spaces data is replicated across multiple physical racks, providing built-in redundancy and protection against hardware failures or data corruption. By leveraging Spaces’ robust data protection mechanisms, you can help safeguard your critical data and minimize the risk of data loss.
DigitalOcean and other cloud vendors also offer block storage and file storage products that can be used with Kubernetes. In some use cases, they are appropriate for Kubernetes workflows. However, neither of these storage options offers the combination of features offered by Spaces Object Storage, including virtually limitless scalability and the ability to share files via a worldwide CDN.
DigitalOcean Spaces Object Storage provides an affordable storage solution for your Kubernetes workloads. With a competitive pricing plan and a pay-as-you-go model, you only pay for the storage you need, making it cost-effective for businesses of all sizes. This flexibility allows you to allocate your resources more efficiently, optimizing your infrastructure costs without compromising performance or reliability.
Spaces Object Storage enables efficient collaboration and distribution of data across your Kubernetes environment. Whether you need to share files between pods, deploy static websites, or distribute large datasets, Spaces simplifies the process. By leveraging Spaces’ built-in Content Delivery Network (CDN), you can serve content globally with reduced latency, helping ensure a smooth user experience for your customers, no matter where they are located. The Spaces CDN also includes robust Cross-Origin Resource Sharing (CORS) settings to allow you to share resources across multiple domains.
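Because CORS rules are exposed through the same S3-compatible API, configuring them can be scripted as well. The sketch below, with a placeholder bucket and origin, allows one external site to fetch assets from a Space:

```python
import os
import boto3

client = boto3.session.Session().client(
    "s3",
    region_name="nyc3",
    endpoint_url="https://nyc3.digitaloceanspaces.com",
    aws_access_key_id=os.environ["SPACES_KEY"],
    aws_secret_access_key=os.environ["SPACES_SECRET"],
)

# Allow GET requests from one external origin against this bucket.
client.put_bucket_cors(
    Bucket="my-space",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["https://www.example.com"],
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3600,
        }]
    },
)
```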
Thousands of DigitalOcean customers are leveraging the synergies of DigitalOcean Kubernetes with DigitalOcean Spaces Object Storage. Here are some case studies to learn more:
Zingbrain uses DigitalOcean Kubernetes with Spaces for their AI-powered gaming engine. They like the high availability and cost efficiency of the combined solution.
Datacake uses Kubernetes and Spaces to power their low-code IoT platform. They appreciate how simple it is to spin up and manage services on DigitalOcean.
DigitalOcean Spaces Object Storage empowers DigitalOcean Kubernetes users with a powerful and efficient storage solution. By combining the simplicity and scalability of Spaces with the flexibility of Kubernetes, you can streamline your workflows, reduce complexity, and optimize your infrastructure costs. From simplified data management to seamless integration, Spaces enables you to focus on developing and delivering exceptional applications while leaving the storage complexities to DigitalOcean.
Embrace the benefits of DigitalOcean Spaces Object Storage today and unlock a new level of efficiency and scalability for your Kubernetes deployments. Start leveraging the power of Spaces and take your applications to greater heights. Sign up for DigitalOcean Spaces now or let us help you set up your Kubernetes cluster with Spaces, and witness its transformative impact on your Kubernetes workloads.
Further reading: Spaces Developer Center Tutorials, Real-life example: DigitalOcean Kubernetes and Spaces
With Paperspace as part of DigitalOcean, we are taking our commitment to simplicity to new heights. Paperspace brings extensive capabilities for AI/ML applications powered by high-performance GPUs and will enable our customers to train, build, and scale ML models in the cloud.
Why Paperspace?
The demand for AI/ML cloud solutions has witnessed an unprecedented surge, fueled by the emergence of large language models (LLMs) that have captured the imagination of developers, small businesses, and startups around the world. But smaller businesses face steep financial and technical barriers to entry when developing new AI/ML applications. Paperspace's highly scalable GPU-accelerated infrastructure perfectly complements our existing capabilities, allowing our customers to meet this demand head-on.
Together, DigitalOcean and Paperspace will unlock opportunities for startups and small businesses that have, to this point, been somewhat limited to large enterprises, with large IT departments and R&D budgets. SMBs and startups will now be able to delve into AI/ML applications such as generative media, text analysis, natural language understanding, recommendation engines, and image classifications, all with the support of Paperspace’s cutting-edge capabilities, and DigitalOcean’s famous simplicity and community. Paperspace’s pre-configured Notebook environments provide an ideal test pad for AI/ML model exploration and fine-tuning. And when the time comes to bring those models to life, Paperspace’s streamlined workflow ensures a smooth transition to hosting and scaling for production.
Paperspace customers will benefit from a broader spectrum of DigitalOcean’s cloud services, including databases, storage, and application hosting, seamlessly integrated into their existing ecosystem. They will also gain access to DigitalOcean’s extensive documentation, tutorials and support system, assisting them in their journey to build and deploy AI applications. And for our existing DigitalOcean users venturing into or already operating in the realm of AI/ML, the integration of GPUs alongside CPU workloads opens up a world of possibilities. Together, our customers will have access to a comprehensive suite of cloud offerings, advanced networking capabilities, and the flexibility to leverage the full potential of both GPU and CPU resources as AI/ML technologies continue to evolve.
And beyond technology, Paperspace aligns closely with DigitalOcean in terms of our values. We have a shared passion for nurturing businesses, whether testing an idea, building a product, or scaling their GTM efforts. Through these new and simplified AI/ML offerings, we aim to democratize this powerful technology for SMBs and startups worldwide, enabling them to leverage the power of both AI and the cloud with ease.
What’s Next?
In addition to Paperspace’s technology, we are fortunate to be welcoming their extremely talented team to our family. Our shared values and commitment to customer success will ensure a collaborative work environment focused on continued innovation and service. As we embark on this journey with Paperspace, we expect an even more powerful DigitalOcean experience that exceeds our customers’ expectations.
We promise to keep you informed about the remarkable developments that lie ahead. Together, we are poised to unlock new horizons and fuel extraordinary AI projects from our SMB and startup customers worldwide. At DigitalOcean, we remain committed to our mission to simplify complex technologies, empowering small businesses to focus on what truly matters—building products and software that have the power to change the world.
Spaces is a highly performant and scalable object storage solution with a built-in CDN. It's excellent for storing unstructured files/data for various applications, including data analytics workflows, training artificial intelligence models, log files generated by applications, and video streaming applications. Spaces enables multiple users to access these files simultaneously, preventing performance and productivity bottlenecks. You can also use Spaces to host static assets and user-generated content (e.g., image and sound files) and store log files.
Spaces complements local storage and is designed to help businesses grow and scale on DigitalOcean. Spaces is S3 compatible, allowing you to use the large ecosystem of S3 tools, utilities, and plugins to store and retrieve unstructured data using an HTTP API. Here are some key capabilities:
Spaces also comes with a free built-in content delivery network that caches assets across 45+ global CDN locations, including several in the APAC region such as Mumbai, Singapore, Hong Kong, Sydney, and Melbourne.
Many laws and regulations govern data collection, storage, and sharing. An essential aspect of some of these regulations is that they often prohibit storing certain data outside national boundaries. Using Spaces Object storage for storing unstructured data at scale within the Bangalore data center can be an important part of your company’s overall regulatory compliance strategy.
No company is immune from outages or natural disasters, so businesses must protect themselves and their customer data from unforeseen situations. Today’s cloud landscape is not just about virtual machines, it is also about containers, managed databases, applications, and more. Backing up the data within these applications can help businesses avoid data loss, increase customer satisfaction and ensure customer retention. You can use Spaces as an endpoint to store frequent backups of your DigitalOcean services, including docker containers and managed databases. If your business serves customers globally, you can use Spaces Object storage in the Bangalore data center as an essential part of your disaster recovery strategy.
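As a minimal sketch of that backup pattern, the snippet below creates a bucket in the Bangalore region with `boto3` and uploads an archive to it; the bucket and file names are placeholders.

```python
import os
import boto3

# The Bangalore (BLR1) regional endpoint; the bucket name is a
# placeholder and must be unique within the region.
client = boto3.session.Session().client(
    "s3",
    region_name="blr1",
    endpoint_url="https://blr1.digitaloceanspaces.com",
    aws_access_key_id=os.environ["SPACES_KEY"],
    aws_secret_access_key=os.environ["SPACES_SECRET"],
)

client.create_bucket(Bucket="my-blr1-backups")
client.upload_file(
    "db-backup-2023-08-01.tar.gz",
    "my-blr1-backups",
    "backups/db-backup-2023-08-01.tar.gz",
)
```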
Pricing for Spaces starts at $5 per month, including 250 GB of data storage and 1 TB of bandwidth. Additional storage is charged at only 2 cents per GB. With DigitalOcean's superb bandwidth pricing and a flat bandwidth overage rate of $0.01 per GB, you can easily estimate your monthly bills: for example, storing 500 GB with outbound transfer within the included 1 TB would cost $5 plus 250 GB × $0.02 = $10 per month. Inbound bandwidth to Spaces is free.
To get started, create a Space (bucket) in the Bangalore data center and upload your data. If you have existing data that you need to move into Spaces “buckets” from other locations outside of DigitalOcean or from other DigitalOcean regions, please follow this step-by-step guide to migrate your data over to Spaces using Flexify.IO. Flexify is a DigitalOcean-recommended migration provider and can help migrate your data to Spaces from various online storage accounts and virtual endpoints. If you still have questions, please review DigitalOcean’s security pillars or contact our team.
We are thrilled to announce that, effective today, DigitalOcean Kubernetes (DOKS) customers can leverage SnapShooter to back up their Kubernetes applications. SnapShooter seamlessly discovers your DOKS clusters, allowing you to define your backup policy workflow. Users can then use the backup storage included in their plan or bring their own storage. This service is now available in Early Availability (EA) to all customers, starting with the free tier.
Kubernetes has seen widespread adoption due to its scalability, high availability, and extensive ecosystem, and data protection is critical for any production Kubernetes application. However, until now, users of DigitalOcean Managed Kubernetes have not had a simple way to get a comprehensive backup of all their cloud resources.
SnapShooter, powered by DigitalOcean, is an all-inclusive backup management tool for Droplets (virtual machines), Docker containers, Volumes, files, apps, and databases. Designed with the needs of small- to medium-sized businesses in mind, SnapShooter delivers a simple yet comprehensive and secure backup workflow.
When it came to incorporating a Kubernetes backup solution into SnapShooter, we were faced with a choice: to build our own or to embrace the existing ecosystem. Given Velero’s popularity, we integrated a new backup engine, leveraging Velero, into SnapShooter. This decision enabled us to maintain our existing UI and backup workflow for DOKS backup while simultaneously offering the quality and reliability of Velero.
Watch the below video for a step-by-step walkthrough:
This section assumes that you already have an active SnapShooter account. If you don’t have one yet, you can easily add SnapShooter to your DigitalOcean account through the DigitalOcean cloud console. Once added, you’ll be able to use single sign-on to access the SnapShooter console.
Under your DigitalOcean account in SnapShooter, you will discover a new addition - Kubernetes. Select the cluster you wish to back up and follow the intuitive guide to activate your backup job.
A comprehensive view of all your previous backups for the selected cluster is available, alongside the option to scrutinize the logs in one consolidated space.
The backup policy for the DOKS cluster is purposefully designed. In the initial step, you are presented with the option to back up one or multiple namespaces, including the capability to back up the entire cluster (by leaving the namespace selection blank). Note that you can only have one backup job per cluster in this EA release.
Subsequently, you can customize your backup frequency and retention policy. As the final step, choose your preferred storage option, either bringing your own or utilizing the storage provided by SnapShooter.
SnapShooter is thorough in its backup process, including Kubernetes manifests and persistent volumes (PV). Manifests are stored in the object store (such as Spaces/S3), and volumes are backed up as snapshots within your DigitalOcean account.
SnapShooter charges a management fee based on the number of backup jobs you run. SnapShooter offers various plans, with Kubernetes backup supported across all tiers. For instance, if you're on the Startup tier, you can run 20 backup jobs; creating one backup job for a DOKS cluster counts as a single job, leaving 19 in your quota. In essence, SnapShooter's pricing for backup remains consistent regardless of the job type.
However, it’s important to note that each tier has a limitation on the cluster’s maximum size (number of worker nodes) that can be backed up.
Please remember you will be billed separately for volume snapshots and backup storage (if you choose to bring your own).
Since joining DigitalOcean at the beginning of this year, SnapShooter has introduced a native backup workflow for DigitalOcean Managed Databases, facilitated backups of Cloudways applications, launched an agent for backup support in private networks, and incorporated Docker-based applications. Adding DOKS backup support now rounds out the resources SnapShooter can back up. Indeed, SnapShooter stands among a select few products in the industry that can back up VMs, containers, Kubernetes, files, and databases.
Our decision to support Kubernetes backups directly responds to numerous requests from our valued customers. We eagerly await your experience with this new feature and look forward to hearing your feedback and comments. Our team is always accessible via our support channels or on Discord.
While it's essential to choose the right data center location for your business operations, there are various scenarios where using multiple data centers can help you more effectively grow and scale your business while making it more resilient against risk.
This article will discuss reasons for deploying across multiple data centers rather than relying on a single data center.
Many laws and regulations govern how data is collected, shared, and stored. An essential aspect of some of these regulations is that they often prohibit storing certain data outside national boundaries. For instance, consider an application accessed by customers living in Cleveland, Ohio, and Toronto, Canada. In this situation, it might be necessary to host the application in two data center locations, one in Canada and one in the United States, even if the Toronto data center is closer to both customer bases. Using data centers in different geographies may be an important part of a company's overall regulatory compliance strategy.
Despite the best efforts of cloud service providers, data center failures can occur. Recovering quickly is essential to ensure customer retention and avoid data loss. Data loss can occur for various reasons, including hardware failures, outages, natural disasters, and failure of environmental controls resulting in fire. Businesses must protect themselves, and their customers, from these situations.
While it's important for businesses to back up data, restoring data from backups can be time-consuming and inadequate in some situations. To help mitigate the consequences of a data center failure, you can deploy your application across several data centers. The application hosted in one data center can serve the incoming traffic, and the application hosted in another data center (somewhere relatively close to the primary data center) can serve as a backup node. If the first data center goes offline for any reason, you can reroute the incoming traffic to the application deployed in the second data center, thereby preventing or minimizing data loss.
Deploying across multiple data centers can improve your application’s overall performance by optimizing various parts of the workload, especially when you have a global customer base.
If your application generates and serves up unstructured data (static assets) like media and text files, you can use object storage to store the data and CDN to serve and distribute the data. The combination of object storage and CDN can speed up unstructured data distribution and improve content availability, especially for applications with a global reach. For instance, if you anticipate high user activity (i.e., users uploading/downloading unstructured data) in a secondary region, you can spin up an object storage instance in a data center closer to that region.
Spinning up an object storage instance allows you to speed up the data movement between the CDN endpoint and object storage location and potentially prevent performance bottlenecks. When users upload static assets to the CDN endpoint, the CDN endpoint can quickly write the data to the object storage. Similarly, the static assets stored in the object storage can be promptly synced to the CDN location, reducing the wait time and speeding up the download process.
By scaling your object storage across multiple data centers, you can improve the overall performance of your application and provide exceptional customer experiences.
Suppose you have a centralized application (e.g., a gaming application) with distributed edge nodes that writes to a central master database. In this scenario, you can speed up the read operations by scaling read-only database instances horizontally across multiple data centers, reducing the read latency significantly and making the overall application faster.
The write latency would still be the same, as your application would be writing the data into the central database. In other words, if your application performs more reads than write operations, you can scale up read operations by deploying your distributed edge nodes across multiple data centers, thereby speeding up your application.
On the other hand, if your application requirements dictate that the write latency is as low as possible (e.g., fintech applications), you can deploy your entire application (the main application along with database instances) across multiple data centers such that the application can serve customers regionally while keeping the write latency low. This way, you can optimize the read/write operations at a database level by hosting your database instances across multiple data centers.
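To make the read/write split concrete, here is a minimal sketch using `psycopg2` against hypothetical PostgreSQL hosts: writes always go to the primary region, while latency-sensitive reads are served by a read-only replica in a data center nearer the user.

```python
import psycopg2

# Hypothetical hosts: a primary in one region, a read-only replica in another.
primary = psycopg2.connect(
    host="db-primary.region-a.example.com", dbname="app",
    user="app_rw", password="...", sslmode="require",
)
replica = psycopg2.connect(
    host="db-replica.region-b.example.com", dbname="app",
    user="app_ro", password="...", sslmode="require",
)

# Writes always hit the primary (committed on leaving the block)...
with primary, primary.cursor() as cur:
    cur.execute("INSERT INTO events (kind) VALUES (%s)", ("login",))

# ...while reads come from the nearby replica for lower latency.
with replica.cursor() as cur:
    cur.execute("SELECT kind FROM events ORDER BY id DESC LIMIT 10")
    print(cur.fetchall())
```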
DigitalOcean is focused on making it easier for businesses to deploy and scale their applications. With 15 globally distributed data centers in nine regions, DigitalOcean makes it easier for startups and SMBs to provide exceptional experiences while accelerating growth. If you want to scale your business and discuss your company’s cloud situation with our team of experts, please fill out this form, and someone will get back to you.
DigitalOcean is happy to announce the launch of our new Partner Directory. The Partner Directory provides companies, current customers, and DigitalOcean teams with the ability to find DO partners with the expertise and technical skills to help solve their business problems. With the help of DigitalOcean partners in a range of industries, implementing projects and workloads on DigitalOcean just got even easier.
DigitalOcean Partners offer a wide range of expertise that can be very useful to companies looking to move onto DigitalOcean, or grow their infrastructure as they scale their business. Partners span areas such as web development/hosting/management, migration or DevOps services, Kubernetes, e-commerce, application development, and even infrastructure management. Our partners have developed these skill sets through multiple projects with DigitalOcean customers, and have the depth of knowledge in our products to make for smooth sailing in your DigitalOcean journey!
DigitalOcean customers or those interested in exploring DigitalOcean can search the Partner Directory by expertise area, or they can find a local partner who is versed in the language and policies of a specific country. You can also see detailed views of each partner’s expertise and skills, and examples of the typical projects done by a specific partner. DigitalOcean users can connect with partners directly using the contact form to explore the project work more thoroughly.
DigitalOcean has a wide range of partners around the world that are ready and able to assist companies of all sizes with their business needs. We’re excited to launch the Partner Directory so all of our users can get more benefit from our skilled partners who can help you grow your business on DigitalOcean.
Interested in becoming a DigitalOcean partner? Learn more about the Partner Pod and how to become a partner today!
One way to keep your operational expenses low is to choose the right virtual machines (VMs) to deploy your games. DigitalOcean's Premium CPU-Optimized Droplets combine high-performance CPUs, fast network speeds, NVMe storage and plenty of bandwidth, and are a powerful yet cost-effective way to host your games. Learn more about how gaming companies have found success with Premium CPU-Optimized Droplets below.
As online gaming has gained widespread popularity and both games and the devices they are played on have become more advanced, players have become increasingly focused on latency, sometimes known as ping. Latency is a measure of the time taken by the game server to respond to an action from a player, process the request and actions from other players in case of multi-player games, and send processed information back to the player. High latency, also known as lag, is often a major source of frustration for players. High latency in cloud-served games can cause players and actions to fall out of sync, leading to a poor gaming experience.
DigitalOcean’s Premium CPU-Optimized Droplets are designed for network and CPU-intensive workloads, making them ideal for gaming applications. These virtual machines, our fastest yet, feature up to 10Gbps outbound network speeds, some of the latest generations of Intel® Xeon® CPUs and NVMe SSDs. They deliver up to 5 times faster outbound network speeds, 58% more performance and 290% faster disk writes when compared to Regular CPU-Optimized Droplets. That means you can stream games faster to your players with low latency, minimal data packet loss, and high quality media, resulting in stutter-free gaming experiences.
Another advantage of using Premium CPU-Optimized Droplets for your game deployments is that you’ll see consistent performance when you’re optimizing your virtual machine fleet continuously to meet dynamic demand, even if you’re creating or destroying multiple virtual machines every day.
Chill Gaming, a gaming company that has built hugely popular games such as Combat Quest and One State, is leveraging Premium CPU-Optimized Droplets to deliver high-quality gaming experiences to its players:
“We’ve been trying out the new Premium CPU-Optimized Droplets for the last several weeks and we’re impressed with the performance gains. One of our main goals is having the lowest latency possible, these Droplets have been a breath of fresh air for our .NET applications. So far this is the best DigitalOcean solution we can recommend for high CPU load scenarios and for high network throughput.” - Igor Boyko, Chief Technology Officer, Chill Gaming
Gaming workloads are inherently bandwidth-intensive, and this has become even more true in recent years as games include increasingly complex elements. Unlike virtual machines from other cloud providers, which charge extra for bandwidth and storage, DigitalOcean Droplets come with generous bandwidth and storage included out of the box. Starting at $109 monthly, Premium CPU-Optimized Droplets come in a variety of configurations ranging from 4 vCPUs to 48 vCPUs, with a free bandwidth allowance of 5TB to 11TB and included NVMe storage of 50GiB to 1200GiB.
Our transparent and straightforward bandwidth pricing model is designed to help you scale without paying exorbitant bandwidth charges. Additionally, DigitalOcean Droplet plans give you the flexibility to pool the generous bandwidth allowances across Droplets within the same account. Pooling increases the amount of bandwidth available for your account and helps keep bandwidth overage costs in check. For example, one of our gaming customers has a large monthly bandwidth usage of 479TB, but because the bandwidth for all of their Droplets is pooled, they are provided with 1563TB total bandwidth per month. If they were to host their games with a major hyperscale cloud provider, their bandwidth charges would amount to $24,000 every month, as bandwidth would not be included in their compute costs.
Check out how Playkids built games with millions of users while a team of just three engineers managed the infrastructure.
Unlike many other applications, gaming workloads can be volatile due to sudden variations in player traffic, so it’s important for gaming companies to build strategies to handle dynamic traffic. When you have a small team with a large user base, infrastructure management has the potential to become a roadblock.
Simplicity is a core tenet at DigitalOcean and it extends to everything we do, including the creation and scaling of virtual machines. DigitalOcean takes the complexity out of your gaming infrastructure, helping you scale without a large DevOps or ITOps team. Businesses can save up to 50% of their infrastructure management time thanks to easy-to-use workflows, excellent documentation, faster onboarding, and simplified billing.
Our Premium CPU-Optimized Droplets also make excellent worker nodes for DigitalOcean Kubernetes. This managed Kubernetes solution can automate your infrastructure, letting you provision and deploy as frequently as multiple times a day and scale up or down automatically at a moment’s notice. With Kubernetes, you can optimize your infrastructure and pay only for what you use.
You can also get peace of mind that any issues with your cloud resources will be resolved quickly with DigitalOcean’s Premium Support, which makes resolving issues faster via a dedicated Slack channel and Google Meet calls with technical experts. Our team provides you access to troubleshooting tips, unlimited customized support, and quick response times (as low as 30 minutes), so you can accelerate your time to market.
Our full-featured Premium CPU-Optimized Droplets are the perfect virtual machines to host your gaming applications. Spin up one today and see the difference it makes to your gaming workloads. Our S3-compatible high-performance object storage and managed databases also make it easy to handle data at scale.
Find more resources on computing solutions for game development here. If you’d like to have a conversation about using DigitalOcean in your business, please feel free to contact our sales team.
Over the last six months, we’ve been working to optimize how we notify users of incidents and maintenance. This has included changes to emails, templates, how we Tweet, and of course, our status page.
Today, we are thrilled to announce the launch of our redesigned status page!
As part of this journey, we worked to conduct user research to help inform our design decisions and understand how you, our customers, utilize our status page. The insights we gained allowed us to make some large changes, which we’ll discuss below.
The home page of https://status.digitalocean.com has been simplified and features DigitalOcean Services as the main component. You can now view dropdowns for any regional services (like Networking), and service dropdowns will automatically expand when there is an ongoing incident.
As part of this effort, we temporarily disabled user notifications, but are pleased to announce they’re back and more customizable than ever! Users can subscribe to updates via Email, SMS, Slack, Twitter, and Atom/RSS Feed. Users will be able to opt-in to notifications for any combination of regions and services.
Instead of displaying the most recent incident history on the home page, we’re focusing on showing the current status of our services. Users will be able to see history by navigating to the history page, linked at the bottom left of the home page.
In addition to the above, there have been multiple updates made to the icons, layout, and text formatting used throughout our status page. We’ve focused these changes around improving the accessibility and compatibility of the page with screenreaders and other tools, as well as mobile devices.
On the home page, users will still see the current update for any scheduled/ongoing maintenance and any ongoing incidents, with a link to the dedicated update page for that maintenance/incident.
Our linked Twitter account, dostatus, will remain the same and our team will continue to respond to user queries there.
On the history page, users can still filter the feed to easily find past incidents or maintenance by service and region.
We invite you to head over to https://status.digitalocean.com/ and take a look for yourself, as well as re-establish any desired notifications.
Any feedback is welcome and we’ve set up a dedicated page to submit feedback here: https://ideas.digitalocean.com/interfaces/p/status-page-redesign-may-2023
On behalf of the entire DigitalOcean team, thank you for your patience while we made these updates! We hope they serve you and your business even better and look forward to providing our users with timely and transparent updates via our new status page.
DigitalOcean Kubernetes (DOKS) offers a High Availability (HA) option for its control plane; it’s designed to be durable with a 99.95% Service Level Agreement (SLA).
The HA control plane allows faster cluster creation and recovery because it is containerized, leveraging the latest cloud-native and open-source technologies. It automatically detects and replaces unhealthy components and dynamically allocates CPU and memory resources on demand. In addition, the improved DOKS HA control plane allows for faster feature updates and bug fixes, making it easier to maintain and roll back. The above diagram depicts the new and improved DOKS HA control plane. You can enable HA on a cluster for only $40 monthly with a click, the CLI, or the API. Once HA is enabled on a cluster, it can’t be disabled.
To examine why HA is so important, let’s look at what happens when a control plane fails—take the example of a gaming app running on Kubernetes. In this scenario, the control plane of the Kubernetes cluster is responsible for managing and orchestrating the various components of the game application, such as the game servers, databases, and load balancers. If a control plane fails, it can lead to the game becoming unavailable or unstable. As a result, players may experience server crashes, long load times, or even complete game outages. This can result in unhappy users and potentially lost revenue for the gaming company.
Let’s take a few components in your control plane and follow what happens if they fail. When the API server fails, your cluster stops receiving new API requests, making it impossible to perform new deployments, updates, or scaling operations until the issue is resolved. etcd is the key-value store that Kubernetes uses to hold configuration data, state information, and metadata for all cluster resources; if etcd fails, the cluster can no longer access this data, resulting in a wide range of issues such as loss of control plane functionality, inability to deploy new workloads, and potential data loss. If the scheduler fails, new pods won’t be allocated to nodes, making your services inaccessible. Lastly, when the controller manager fails, changes applied to the cluster won’t be picked up, so your workloads will appear to retain their previous state.
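If you suspect control plane trouble, the API server’s aggregated health endpoints are a quick first check. A minimal sketch with kubectl (the exact list of checks varies by Kubernetes version):

```bash
# Query the API server's aggregated readiness endpoint;
# each control plane check is reported individually.
kubectl get --raw='/readyz?verbose'
# Example output (abridged):
#   [+]ping ok
#   [+]etcd ok
#   ...
#   readyz check passed

# Liveness has a similar endpoint:
kubectl get --raw='/livez?verbose'
```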
The control plane and workers are independent, so a control plane failure won’t knock out workloads already in a healthy state. Fortunately, nodes are among the least frequently changing objects; once provisioned, they need only minor modifications. You can still access existing services even when you can’t connect to your API server, and users won’t notice a short-term control plane outage. However, more extended periods of downtime increase the probability that worker nodes will also face issues.
For example, extended downtime will prevent you from changing your existing, functioning workloads. If a worker node has problems while the control plane is down, it will be impossible to reschedule its pods to another node, causing your workload to go offline. At this point, a control plane failure can impact your customers.
Enabling High Availability (HA) in DigitalOcean Kubernetes is recommended for workloads and environments requiring optimal availability and resilience. This includes mission-critical apps and websites, and services requiring continuous operation with minimal downtime. An HA Kubernetes cluster ensures a resilient infrastructure that can better withstand control plane outages—resulting in improved performance and uptime for users and making it an essential feature for businesses that require continuous operation of their apps and services.
As workloads grow, a resilient infrastructure becomes increasingly important. A minor failure can have cascading effects at scale, leaving you at risk.
Improve uptime and performance
Enabling High Availability in the Kubernetes control plane can mitigate the impact of a control plane failure. It improves performance and reliability for users while reducing the risk of outages.
Meet customer expectations
When the stakes are high and customers demand near-perfect uptime, a highly available control plane helps organizations meet their obligations.
To enjoy the benefits of a highly available control plane, you can easily add it to your DigitalOcean Kubernetes cluster via the UI, CLI, or API.
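For instance, with doctl, DigitalOcean’s command-line tool, enabling HA is a single flag. A sketch, assuming doctl is authenticated (flag support may vary by doctl version):

```bash
# Create a new cluster with a highly available control plane
doctl kubernetes cluster create my-cluster \
  --region nyc3 \
  --ha

# Or enable HA on an existing cluster
# (remember: once enabled, HA cannot be disabled)
doctl kubernetes cluster update my-cluster --ha=true
```

Contact us if you would like expert help with DigitalOcean Kubernetes to modernize your infrastructure.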
Docker is an extremely popular tool in the developer community—according to the StackOverflow 2022 survey, it is the most widely used tool among professional developers. Several factors have contributed to Docker’s popularity, including the simple and efficient way it lets you package and deploy applications. Additionally, Docker has a large and active community of users, which has helped to drive innovation and adoption. In this post, we’ll walk through what data you should be backing up when using Docker and how to use SnapShooter for Docker backups.
When running an application in Docker, there are three types of data that need to be protected.
Application modification is when you modify the container’s data at runtime. Protecting this type of data typically means committing the container to an image and pushing it to a registry. However, you can optimize backups for application modification by configuring the application to use volumes for data storage.
Application configuration contains the description of your specific application. For example, when deploying an NGINX or a WordPress application, you will need to configure the application when it starts. When running Docker on a single host, the recommended practice for production deployment is Docker Compose, which allows you to define the application configuration in a YAML file. You can keep the Docker Compose configuration in a Git repo and deploy it onto any host; you get an identical application whenever you run Docker Compose. However, if you have runtime data, it will be lost when you switch hosts and run Docker Compose again.
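As a minimal sketch of this pattern, here is a hypothetical docker-compose.yml that keeps the configuration portable while pinning runtime data to a named volume (the image, credentials, and volume name are illustrative):

```yaml
# docker-compose.yml: application configuration, kept in Git
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - dbdata:/var/lib/mysql   # runtime data lives in a named volume

volumes:
  dbdata: {}   # the named volume survives container re-creation
```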
For storing the application data, there are several options. Refer to the diagram below from the Docker documentation.
Out of these options, named volumes are considered a best practice for managing storage in Docker. Named volumes are persistent, shareable, and easy to manage across different hosts. They also provide a clear separation between data and containers, making it easier to manage and migrate data independently of the containers themselves.
We recommend backing up Docker volumes to build a robust data protection strategy. Docker Desktop provides an extension for backing up and restoring a volume, and there are some other scenarios to be considered as well.
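If you prefer plain Docker tooling, a common community pattern is to mount the volume read-only into a throwaway container and archive its contents. A sketch, assuming a named volume called dbdata:

```bash
# Archive the dbdata volume into the current directory
docker run --rm \
  -v dbdata:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/dbdata.tar.gz -C /data .
```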
In SnapShooter, we have recently added support for comprehensive backup/restore for Docker-based applications.
Docker-based backup jobs are included as part of all tiers in SnapShooter and our Marketplace add-on. Try Docker backups and let us know what you think!
Docker is a popular container-based tool to package and deploy applications. It has made application development easy, portable, and agile. However, backing up data within a container can be tricky because the host server can’t access these resources easily.
Our new Docker backup support in SnapShooter includes a range of features.
Docker backups can be accessed via backup jobs in SnapShooter. Just point to the host and select the volumes you want to back up. When restoring, first restore the volume, and then run your Docker containers (e.g., with Docker Compose) so the application can use the restored volume right away.
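A sketch of that restore order with plain Docker tooling, matching the backup example above (volume and archive names are illustrative):

```bash
# 1. Recreate the volume and restore its contents from the archive
docker volume create dbdata
docker run --rm \
  -v dbdata:/data \
  -v "$(pwd)":/backup \
  alpine tar xzf /backup/dbdata.tar.gz -C /data

# 2. Start the application so it picks up the restored volume
docker compose up -d
```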
SnapShooter’s agent server is now in early availability. The agent server is a lightweight piece of software that users can install in their environment, giving SnapShooter users enhanced backup coverage and flexibility.
Find out more about using SnapShooter agent server here.
Whether you run an agency, an ecommerce business, or a blog using WordPress, Cloudways makes it easy to build a website that can accelerate your business growth. SnapShooter now enables you to back up your Cloudways application with ease and customize backups to your needs.
DigitalOcean Managed Databases are a powerful and scalable way to host databases for your apps without the hassles of database administration. Managed Databases come with free automated backups and you can restore data to any point within the previous seven days. If you desire more flexibility in backing up your DigitalOcean Managed Databases, SnapShooter provides custom backup policies, longer retention, target storage of your choice, and allows you to download backups.
With SnapShooter’s latest update to database backup, setting up backups of DigitalOcean databases takes only a few clicks.
Get started with simple, frequent, and customizable backups of your cloud data. If you are a DigitalOcean customer, you can add SnapShooter via our marketplace. Our annual plans help you save up to 34% on your SnapShooter subscription. If you’d like to have a conversation about using SnapShooter for your business, please feel free to contact our sales team.
*Early availability releases may not be appropriate for production-level workloads. We encourage users to use simulated test data and avoid running sensitive workloads in early availability releases.
Like most startups using DigitalOcean products, ScraperAPI started with Droplets (VMs) and scaled into the hundreds. Eventually, they found that it wasn’t the right architecture to support their bandwidth-intensive app and rapid growth. They were tired of manually writing code to sort through user agents and rotate IP addresses, and they needed more control, performance, and reliability. The company’s monolithic app also struggled to handle the increasing demand. They opted to migrate to DigitalOcean Kubernetes for scalability and convenience.
“We used to run 100+ Droplets. We converted to DigitalOcean Kubernetes in 2020, going from partially to fully managed DigitalOcean services. High-scale transactional apps run optimally if one instance has a few things [cloud-native patterns]. That’s where Kubernetes comes in. I’m a big fan of smaller pods but many. If you replicate with VMs, you can’t go below one core. Kubernetes allows more granularity.” –Zoltan Bettenbuk, CTO of ScraperAPI
ScraperAPI used the lift-and-shift method to migrate to DOKS: moving existing workloads into containers and running them on Kubernetes without making significant changes to the architecture, configuration, or code.
Today, they still depend on the same monolithic app they migrated to DOKS in 2020. However, they’ve discovered a workflow to break up their monolith and add new features continuously, all while keeping their team small. ScraperAPI operates as a DevOps shop where all the engineers manage the environment and code. Their website’s entry point is a dashboard and API. ScraperAPI has a team of five engineers managing the infrastructure, using DOKS and other managed services to scale and manage their resources.
First, they create one or more proofs-of-concept (POCs) on the DigitalOcean App Platform. When it makes sense, the POCs can be new features or break-off services from their monolith. Sometimes they decide to stay on the App Platform, where their website’s UI and console exist today.
When the team agrees on a POC, they migrate it to DOKS to scale it further, automating the migration from the App Platform to DOKS with GitHub Actions. They appreciate App Platform’s simplicity: GitHub Actions delivers source code to the App Platform, which then containerizes the app and deploys it for you. Bettenbuk loves uploading raw code to the App Platform. “I can start on the App Platform without changing my code,” says Bettenbuk. While the most tedious work for the team involves writing a Dockerfile for their microservices, the team can automate more with tools like buildpacks that don’t require writing Dockerfiles.
Afterward, GitHub Actions creates YAML manifests for Kubernetes and uploads their image to their DigitalOcean Container Registry. At first, they used the GitHub container registry to host their images; after testing, they found the DigitalOcean Container Registry to be faster. The only change to their DOKS cluster is creating a namespace, which is also automated.
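ScraperAPI’s exact pipeline isn’t public, but a sketch of this kind of workflow using DigitalOcean’s published doctl GitHub Action might look like the following (the registry and image names are placeholders):

```yaml
# .github/workflows/build-and-push.yml
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  docr:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
      # Request short-lived registry credentials for this job
      - run: doctl registry login --expiry-seconds 600
      - run: docker build -t registry.digitalocean.com/my-registry/my-app:${{ github.sha }} .
      - run: docker push registry.digitalocean.com/my-registry/my-app:${{ github.sha }}
```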
ScraperAPI’s preference for DigitalOcean is based on managed services’ pricing, scalability, and convenience. Their transition to DOKS has allowed them to scale efficiently and save time.
“At a previous company, we used a bare metal service provider and scaling a database could take upwards of two weeks; the provider had to provision new nodes. One of the best things is I can now scale in less than a minute.” –Zoltan Bettenbuk, CTO of ScraperAPI
They enabled the DigitalOcean cluster autoscaler, which can control costs with automated adjustments to the nodes in your cluster. ScraperAPI also activated the High Availability control plane for its reliability (99.95% uptime SLA for DOKS). Managed services allow ScraperAPI to focus on other essential aspects of its business—building new features instead of managing infrastructure.
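Autoscaling is configured per node pool. A sketch with doctl (the cluster, pool, and node counts are illustrative):

```bash
# Let the pool grow and shrink between 2 and 10 nodes with demand
doctl kubernetes cluster node-pool update my-cluster my-pool \
  --auto-scale \
  --min-nodes 2 \
  --max-nodes 10
```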
DigitalOcean Kubernetes (DOKS) was the right choice for ScraperAPI, providing easy scalability, cost savings, and a reliable managed database service. If you’re considering migrating to DigitalOcean, explore DOKS as an option to manage your infrastructure, enabling you to focus on your business—not on managing servers.
Ask our experts if App Platform or DigitalOcean Kubernetes is right for your business.
Read the full story of How ScraperAPI scaled their data-heavy business with DigitalOcean Managed Databases.
This article will cover how to evaluate the right location, the factors to consider when selecting a data center, and how to factor trade-offs into your decision.
Pick a data center location geographically close to the end users who most frequently access your applications. This improves the overall customer experience by reducing latency. Latency is the time it takes from when a user makes a request to when the response gets back to that same user.
Although the latency numbers may not seem significant, the effects are compounded because of the constant back-and-forth communication between your users and your application. Lower latency means reduced wait time and increased speed for your customers, as web pages will load more quickly. Minimizing your web application’s latency can also help improve SEO performance, making your website more visible in search results and generating more traffic from search.
Many tools, like Google Analytics, can help segment traffic to your application by geography, which can be a good starting point for identifying the location of your end users. Once you determine these high-traffic locations, you can spin up your application in a data center close to the region where the traffic originates.
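You can also measure latency directly from your users’ locations. As a rough sketch, this loop times the TCP connect latency to DigitalOcean’s regional speed-test endpoints (the speedtest-<region>.digitalocean.com hostnames are an assumption here; substitute whatever test endpoints your provider publishes):

```bash
# Compare connect times to candidate regions from a given location
for region in nyc3 fra1 sgp1 syd1; do
  printf '%s: ' "$region"
  curl -o /dev/null -sS -w '%{time_connect}s\n' \
    "http://speedtest-${region}.digitalocean.com/"
done
```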
Network infrastructure and connectivity are important considerations when choosing a data center location, especially for interactive applications with high traffic volumes (e.g., live streaming, audio and video conferencing, VoIP, etc.). How a data center routes traffic can help you understand the overall quality of service (QoS) you can provide to your customers.
Choose a data center that routes customer data on a global private network (if operated by the cloud provider) to provide your customers with a superior experience compared to routing data over the public internet, which is inherently an unreliable channel. Because the cloud provider manages the global private network infrastructure, it can continuously monitor the network’s overall health in a given data center, proactively take action to keep connectivity issues from disrupting customer traffic, and optimize traffic routing based on the situation.
When choosing a data center location, ensure that it has the latest generation of compute, storage, and networking hardware available to host your applications. Using latest-generation hardware ensures that performance bottlenecks are not caused by legacy hardware and provides better reliability.
In the case of storage, NVMe is the latest standard, capable of delivering low-latency, reliable, exceptional performance (IOPS, throughput, etc.), and it is less prone to failure because it has no electromechanical parts. NVMe storage can prevent the application slowdowns, crashes, and other performance issues usually associated with legacy storage hardware, especially if your application relies heavily on transactional databases. By choosing a data center with high-performance, latest-generation hardware, your applications will run smoothly and you can deliver excellent customer experiences.
To reliably host your application, consider the breadth of the products offered by a cloud provider in their data center. Sometimes, a cloud provider may offer only a subset of their products (e.g., compute, storage) in a given data center location. Choosing such a location translates to an additional expense for your business, as you need to hire someone to devise a workaround to address the product gaps in a given data center location.
For example, consider a scenario where a cloud provider only offers compute resources and no storage resources (e.g., block storage). The workaround for such situations would involve combining the compute resources from one cloud provider with a storage solution from another cloud provider. This approach adds complexity and increases costs for your business, as you need to consider the data ingress/egress costs between cloud providers and allocate resources to analyze bills from at least two providers. Your application may also be unavailable when either cloud provider has an outage, putting your reliability and reputation at stake. Choosing a data center that offers a complete solution can minimize costs and help you focus on growing and scaling your business.
DigitalOcean is focused on making it easier for businesses to deploy and scale their applications. With 15 globally distributed data centers in nine regions, DigitalOcean makes it easier for startups and SMBs to provide exceptional experiences while accelerating growth. Learn more about our product availability across our data centers. To have a conversation about using DigitalOcean for your business, please feel free to contact our sales team.
The Sydney data center is designed, implemented, and operated like a global telecommunications service provider-style network built to reduce latency - the time from when a user makes a request to when the response gets back to that user. Our internal performance tests have shown a 6x reduction in round trip time for customers using the Sydney data center from major cities in Australia compared to DigitalOcean’s Singapore data center. Low latency is essential for businesses that host interactive applications such as multiplayer gaming, live streaming, audio/video conferencing, and retail analytics.
The Sydney data center is connected to DigitalOcean’s private internet edge and backbone network, an information superhighway that connects DigitalOcean’s global data centers and encrypts all the traffic for additional security. The Sydney data center is connected with the lowest latency links to California and Singapore with end-to-end physical path diversity. Customers accessing your application hosted in the Sydney data center from Singapore or California can get a superior experience as their requests will be routed on DigitalOcean’s private network internationally instead of routing over the public internet, which is an inherently unreliable channel. The end-to-end physical path diversity ensures that even if a submarine cable gets damaged, the traffic gets re-routed on an alternate path, ensuring that the overall connection stays uninterrupted. To boost domestic connectivity, the Sydney data center is directly connected to Perth using diverse segments, meaning connections that run along the continent over land and underwater.
The Sydney data center uses a blend of peering and transit connectivity from major providers, which provides single-hop access to over 75% of residential service provider networks in Australia, with total connectivity approaching 1Tbps. Single-hop access means less re-routing, which means your data can reach its destination faster. DigitalOcean is committed to providing startups and small businesses with simple, cost-effective solutions, so our customers get all the benefits of this excellent connectivity by default without having to pay extra or decide between various routing options.
The Sydney data center is built from the ground up with a brand-new network design and architecture that can scale to handle the growing traffic to your applications. The data center’s network design is modular rather than monolithic, which allows separate devices (core routers, switches, etc.) to perform a specific task instead of having a single device perform multiple networking functions, as with a monolithic design approach. In the event of a hardware malfunction, the modular design limits the damage a faulty device can cause, minimizing the blast radius. It also allows for easy failovers, which can be carried out in seconds, preventing large-scale network outages that could impact our customers and their end customers.
The modular network choice also increases the overall resiliency of our data centers so we can provide our customers with a great experience. The Sydney data center also uses the latest generation 400G ethernet-enabled routers, which ensure that we can help businesses with high bandwidth/throughput requirements quickly scale and grow on our platform.
DigitalOcean’s Sydney data center is a one-stop solution that offers you all the products you need to build and scale your application, including Droplets, DigitalOcean Kubernetes clusters, Volumes Block Storage, Spaces Object Storage, and Managed Databases. Unlike many cloud providers that provide a limited set of products in any given data center, DigitalOcean’s goal is to provide access to all the products you need in the same data center. This enables you to provide your customers with an exceptional experience while avoiding unnecessary expenditures such as egress costs or paying for extra staff to architect a solution to work around the gaps in a cloud provider’s offering.
The Sydney data center has plenty of capacity to scale, so you can seamlessly provision new instances as the traffic to your application grows. The data center has access to 54MW of power with backup UPS and generator systems, which will seamlessly kick in to provide backup power. This means your applications keep running smoothly in case the main power from the utility companies gets interrupted. All the equipment is deployed in a secure cage with floor-to-ceiling panels and has seven layers of physical and biometric security procedures to ensure complete privacy and security. The hardware racks are custom manufactured for DigitalOcean and have locking cables to avoid issues arising due to human error.
“I was excited to hear that we’ll have DigitalOcean’s reliability coming onshore to Australia. That will make a big impact for local businesses.”—Scott Purcell, Co-Founder, Director, Man of Many
DigitalOcean’s philosophy of flat pricing across all data centers is one of the key reasons startups and SMBs love our platform. Unlike other cloud providers that charge a premium for regional pricing, with DigitalOcean, you will enjoy the same low-cost pricing across all data centers. With affordable price points, industry-leading bandwidth pricing, and flat pricing across all data centers, DigitalOcean removes the guesswork/billing surprises often associated with other cloud providers. According to Forrester’s Total Economic Impact study, a business using DigitalOcean finds a payback of its investment in less than six months. The simple pricing structure means you don’t need to allocate valuable resources for conducting lengthy reviews or hiring external consultants to analyze your monthly spending, so you can focus on growing your business and not costs.
DigitalOcean is focused on making it easier for businesses to deploy and scale their applications, and we are incredibly excited to see what we can build together in Australia. If you want to discuss using DigitalOcean in your business, click here to contact our sales team.
Why did you want to start your own business?
I’ve always pictured myself running a software business — I’m not really sure why, it’s just a happy place for me. I love building, and I love creating things that don’t exist 🙂
What’s the story behind Battlesnake?
Battlesnake began with the realization that it is hard for experienced developers to explore complex tech without a meaningful problem to work on.
If you’re brand new to coding, there’s so much out there to help you learn—courses, puzzle platforms, bootcamps, etc. But if you’re a senior engineer, there’s nothing. The best advice anyone has is “do a side project”, which sucks. Side projects tend to be lonely, unguided, and even boring! I can’t count the number of times I’ve started a side project and quickly abandoned it.
We built Battlesnake to solve this problem for ourselves. It’s not another “coding competition”— it’s all about giving experienced developers an open-ended and unsolvable problem to work on. It’s easy to get started with, and it’s up to you which parts you optimize and over-engineer! And of course, there’s an awesome community to do it with :)
When did you know that your business was going to “make it” or take off?
At first, we didn’t. Battlesnake sort of took off under our noses. We made it for ourselves as something to do with friends to challenge each other and push the limits of our programming skills. It was really fun!
Then professional engineering teams, computer science researchers, and other developers started to play regularly. GitHub projects started popping up. People began writing blog posts and creating YouTube videos about the snakes they were building for upcoming competitions. Someone even created a Discord server—and it was six months before we even knew about it!
People wanted to be involved in what we were building. That passion was impossible to ignore. We knew this was something that needed to exist. If we weren’t working on it full-time, someone else would be.
What’s one thing that you wish you knew before starting?
Developer communities are magical. When given proper support, they can be powerful. It’s been so inspiring to see what they create. Often our community will build complex tooling, design new maps, and even help drive our roadmap with their feedback. Someone even used our unpublished WebSocket API and Raspberry Pi to build an LED array to display Battlesnake matches in real time. Truly incredible stuff.
But communities also have a mind of their own too. If a developer gets an idea in their head, you can bet they’re going to try it—with or without your permission. So you do need to be prepared for that! We’ve learned a LOT from engaging and communicating with our community. It’s become one of our more powerful advantages.
What’s been the biggest challenge, and how did you overcome it?
The biggest challenge for us has been sticking steadfastly to our values and principles. Battlesnake works because we’re our own target audience—our entire team is senior and experienced developers who understand what we’re building and why, and this builds an immense amount of trust between us and our community. Being authentic has been so important to us and our success.
At the same time, we’re constantly approached by organizations and companies wanting to recruit and engage our community directly. Oftentimes, with large price tags attached. And while that’s tempting, we understand deeply that part of the reason Battlesnake is growing is because of the respect and sincerity we show our community.
We only work with wonderful developer-facing teams that understand developers at their core and know how to engage meaningfully and authentically (like DigitalOcean!). Our community means the world to us, and we stand firm in their best interests, always.
What advice do you have for others interested in starting their own business?
My response to this has changed over the years. I think the best advice I’ve heard is to “just start working on the business”, which in most cases means talking to people you want as early customers.
It’s too easy to get bogged down and intimidated by fundraising news cycles, who has hired who, how big someone’s office space is… The best advice is to ignore all of it and just become absolutely obsessed with what your customers care about. By doing this, you’ll naturally make the connections you need for later investment, hires, and sales. Customers first!
How do you measure success? (Most important KPIs?)
We measure success by the number of developers that build multiple Battlesnakes, with at least two different strategies, tech stacks, or algorithms. For us, reaching this milestone implies two things:
a) We’re succeeding in building something that is both entertaining and complex enough to engage developers beyond initial exploration, and;
b) Developers are using Battlesnake to learn new things! Which is a huge win for everyone.
Is there a right time and a wrong time to start a company?
Probably, but it’s nearly impossible to know without trying. My advice here would be instead of waiting for better timing, get started and give it your best effort while listening for signals that something isn’t going to work. It’s way easier to try something and realize it won’t work than agonizing over trying to predict if it will.
Why did you come to DigitalOcean?
Our community demanded it! A surprising number of Battlesnake developers not only use DigitalOcean services to build and deploy amazing Battlesnakes, but they also evangelize it to folks trying Battlesnake for the first time :)
I think DigitalOcean’s support for the broader developer community is second-to-none, and it shows when you speak to their users. We’re lucky to partner with them!
What are some of DigitalOcean’s tangible and measurable benefits?
We were able to scale across multiple data centers and cloud regions incredibly quickly! This was very important to Battlesnake growth in the early days, and remains one of our coolest features :)
Any intangible benefits that are hard to quantify?
Working with DigitalOcean as a partner this year has been fantastic. Battlesnake players love DigitalOcean. We were able to stream live with DigitalOcean streamers, create a Sammy head and tail customization for our Battlesnakes, and even hand out some DigitalOcean swag to competition winners. The reception from our community has been wonderful.
What DO products are you using?
Battlesnake runs thousands of live games per minute. We connect servers worldwide using various platforms and technologies, including DigitalOcean’s App Platform across multiple regions to host our global game engine. This made the game more competitive for players around the world by reducing latency.
What’s next for Battlesnake?
Our goal is to have more than 100,000 developers play Battlesnake in 2023! We’ve also just started exploring what live, in-person Battlesnake can look like, and we’re incredibly excited about what we have planned :)
What technology are you most excited about or focused on for next year?
We’re still looking for our first, top-tier AI Battlesnake. Many developers have tried, but none have succeeded (yet). It turns out that the multiplayer aspect of Battlesnake is incredibly challenging to build generalized models for, but we’re hopeful that we’ll see some emerge throughout the 2023 competitive season.
DigitalOcean solutions including our new Premium CPU-Optimized Droplets are ideal for hosting your game. Sign up for a DigitalOcean account to build your game on DigitalOcean today, or speak to a solution expert about game hosting here.
So how can indie games stay ahead in an ecosystem that includes multibillion-dollar companies and increasing competition? By freeing up developer time to focus on innovative new experiences for your players.
Optimizing cloud infrastructure costs and performance is a good place to start. Reducing infrastructure costs frees up money in the budget for things like Research and Development or hiring more talent. And simpler, more reliable infrastructure allows game developers to move faster and teams to run leaner, freeing up developer time to prioritize what matters most—the game.
Performance will always be top of mind for gaming companies, and cloud providers play a critical role in providing reliable services. Take some time to look into the reliability and support promised by your cloud provider. There’s significant potential for revenue growth—or loss—based on game performance. A game that goes down as its popularity goes up isn’t set up to outperform the competition. And if you need someone actively manning the keyboard to bring things back up, that’s valuable time a developer could use to improve the player experience. Consider how you can automate infrastructure management for deploys, scaling, healing, and any other repetitive tasks.
Implementing automations like Git for version control, CI/CD pipelines for quick updates, and Infrastructure as Code (IaC) for provisioning resources can free up significant time for teams. Gaming companies can benefit from using IaC to help with infrastructure automation, deployment, and changes in order to quickly create as many instances of your entire infrastructure as you need, in multiple provider regions, from your declarative code. This can ensure low latency for players around the world as instances are spun up in locations closest to the players accessing the servers.
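As a sketch of that multi-region pattern with Terraform’s DigitalOcean provider (the region list, names, and size slug are illustrative, and the provider still needs an API token configured):

```hcl
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

variable "regions" {
  # Spin up identical game servers close to each player base
  default = ["nyc3", "fra1", "sgp1"]
}

resource "digitalocean_droplet" "game_server" {
  for_each = toset(var.regions)

  name   = "game-server-${each.key}"
  region = each.key
  image  = "ubuntu-22-04-x64"
  size   = "c-4" # CPU-Optimized slug; pick the size that fits your workload
}
```

A single terraform apply then creates (or re-creates) the same fleet in every listed region.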
Evaluate the cloud provider’s Service Level Agreement (SLA) to ensure that it fits your needs, but don’t stop there. Confirm that data backups, network load balancing, autoscaling, failovers, and other fail-safe mechanisms suit the needs of your business. Finally, explore options for fully managed services through your cloud provider, and weigh the time savings with the likely additional cost of implementation. If internal teams weren’t bogged down with infrastructure maintenance, are there high-value initiatives they could work on instead?
As games become more popular, the infrastructure resources needed to support the application increase. Because cloud providers commonly bill by the hour or second and users are only charged for what they use, bills can fluctuate quite a bit from month to month, especially when games are scaling. By choosing the right virtual machine for their workload and being cognizant of bandwidth charges, organizations can cut costs without sacrificing performance.
The virtual machine (VM) is the foundation for success. Selecting the wrong VM can significantly impact the reliability of your game. Consider the amount of RAM, vCPUs (virtual Central Processing Units), storage, and outbound transfer that you need. By choosing the VM that features appropriate specifications for your needs, you can save money while maintaining high performance. Most gaming companies benefit from DigitalOcean’s Premium CPU-Optimized Droplets, which provide up to 10 Gbps of outbound data transfer, reliable performance, super fast NVMe storage, and dedicated CPU to meet the needs of games serving thousands of users at once.
For many network-intensive applications like gaming, bandwidth costs make up a substantial part of their cloud bill. High bandwidth costs and complex billing systems can lead to unexpected surges in pricing as games become more popular, and a surprise bill could be catastrophic for a growing business. Gaming companies can benefit from transparent and straightforward bandwidth pricing models so they can plan for scale.
Because bandwidth costs are usually listed as pennies per GB, it’s easy to overlook their significance, but choosing a provider with affordable base bandwidth pricing could save you hundreds of thousands or even millions of dollars as your game scales. DigitalOcean has proven especially popular with gaming companies for this reason, as we charge only about 10-20% of what other clouds do for bandwidth.
DigitalOcean’s low bandwidth costs enable gaming businesses to spend less on bandwidth, while our flexible and scalable compute options ensure we’ll support your growth. From Droplet virtual machines to Spaces, our Object Storage offering, and Managed Kubernetes, we provide the tools you need to build and grow your applications.
To speak to a solution expert, click here.
Premium CPU-Optimized Droplets include a range of capabilities out of the box.
Here’s why you should consider adopting Premium CPU-Optimized Droplets:
Enhance user experience. Leverage higher outbound data speeds in Premium CPU-Optimized Droplets to provide a faster and smoother app experience.
Scale your operations seamlessly. The mix of newer CPUs, faster network speeds, and NVMe SSDs makes it easy to scale your apps for data-intensive workloads, especially machine learning and AI applications. Premium CPU-Optimized Droplets are a great choice to train and develop powerful data analysis models for your business and customers.
Maximize the performance consistency of your apps. When running multiple Droplets to power your app, Premium CPU-Optimized Droplets deliver powerful performance across all Droplets, enabling you to provide consistently superior experiences to your users.
DigitalOcean customers such as Validin, who tried out the new Premium CPU-Optimized Droplets, love the performance advantage they bring to their applications.
“We just switched several CPU-intensive data pipelines to DigitalOcean’s Premium CPU-Optimized Droplets from another major cloud provider. This move cut hours per day of processing time out of those pipelines. The combination of raw CPU power, CPU count, and local NVMe disk (with the 2x option) is perfect for us. We’re thrilled that these will be generally available soon for our other data processing workloads.” - Kenneth Kinion, Managing Director, Validin
Premium CPU-Optimized Droplets further our mission to provide a simple and easy-to-use cloud experience for builders and businesses. With today’s launch, when you go to the control panel to spin up Droplets, you’ll see a new option for Premium Intel within our CPU-Optimized plan. You can also find slugs for Premium CPU-Optimized Droplets for use with our CLI, API, or extensions like our Terraform provider.
When paired with our high-performance object and block storage solutions, Premium CPU-Optimized Droplets can help you tackle data-intensive applications with ease. Premium CPU-Optimized Droplets are also available as worker nodes in DigitalOcean Kubernetes.
Premium CPU-Optimized Droplets are now available in the NYC1, NYC3, SFO3, FRA1, AMS3, BLR1, and SYD1 data centers, with more coming soon. Read about DigitalOcean Droplet pricing for your business.
Spin up a Premium CPU-Optimized Droplet now or switch from Regular to Premium. If you’d like to have a conversation about using DigitalOcean in your business, contact our sales team.
*The benchmark CPU, network and file I/O performance numbers are based on DigitalOcean’s internal testing framework and parameters, using an 8 vCPU Droplet. Actual performance numbers may vary depending on a variety of factors such as system configuration, operating environment, and type of workloads.
In 2017, Simon Bennett founded SnapShooter—a backup and recovery solutions provider to back up your servers, databases, and applications. The product quickly found product-market fit. Despite bootstrapped beginnings, the company immediately started generating revenue with just a team of two. Along the way, the SnapShooter team joined the DigitalOcean Hatch program, the start of what would become a long-term partnership with the company.
DigitalOcean recently acquired SnapShooter to better enable startups and SMBs to protect their cloud data across files, apps, and databases. SnapShooter makes cloud backups simple, fast and flexible, offering one system to consolidate all backups so you can be confident in knowing your cloud data is protected.
We spoke with Simon Bennett about his experience building SnapShooter, before getting acquired by DigitalOcean.
Q: How did you come up with the idea for SnapShooter?
Simon: I was a software consultant that specialized in getting founders’ startup ideas built out into MVP so they could go and get their next round of funding. SnapShooter was started as a way to protect a customer of mine that didn’t really have the budget to do active security management. It was cheaper and easier to do backups, and then restore if it got broken. I soon realized that it’s probably something that other people would want; I’d seen hints online that people wanted daily backups of Droplets.
I made the product available to everybody, put billing in front of it, and that was the start of the product. At the end of 2019, I’d already started to expand into database backups primarily for DigitalOcean customers. I went full-time on the product in 2020, and in 2023 we were acquired by DigitalOcean.
Q: How did you decide on the product roadmap for SnapShooter?
Simon: In terms of working out what to build next, I always just came from what customers asked for. I never wanted to waste their time by building stuff that people didn’t ask for. The first move was to add support for DO Volumes and then the next big move, which changed the landscape of the product, was adding MySQL backups.
Q: Why did you apply to the DigitalOcean Hatch program?
Simon: The $10,000 in credits was extremely appealing to us as an early stage startup and I also thought it would be a good opportunity to build a relationship with DigitalOcean employees, who would be beneficial to SnapShooter. I realized that you had the Slack community. And in there, there were quite a lot of DigitalOcean staff members, some had already talked to me before by email and Twitter and LinkedIn. It was a bit more of a direct path for when stuff went wrong.
Q: What other online communities did you gravitate towards as both a founder and a developer?
Simon: Closed Slack communities, Indie Hackers, and thousands of podcasts in the startup space.
Q: As a software developer and technical founder, what did you choose to outsource versus do in-house?
Simon: I’d say marketing is important to do in-house. It’s not fun, I don’t enjoy it, but I don’t think it would be wise to have outsourced it. The first thing I actually outsourced was development; I hired a developer. I could focus more time on the other stuff while we were still at an early stage. I’m a developer, so I can vet a developer, get them on board and get them up to speed. I cannot get a marketing team on board quickly, because I just don’t know how to validate that.
Q: How did you grow your business and find your first customers?
Simon: The first users came directly from networking, people I knew who were DigitalOcean customers. The next thing that worked was jumping on the DigitalOcean community and answering backup related questions there. Really, SEO was the thing that worked—either by content or directly answering backup related questions. More recently, the DigitalOcean app marketplace has been a good source of customers.
Q: What DigitalOcean products have you used for your business?
Simon: We used Managed Database, Managed Redis, Load Balancer, and Droplets.
Q: Having used DigitalOcean, how much of your outcome would you attribute to using the platform?
Simon: I would say it’s pretty fundamental. The whole product was built around helping DigitalOcean customers. Without that, SnapShooter wouldn’t even exist.
DigitalOcean supports all types of applications, from basic websites to complex Software as a Service solutions. From Droplet virtual machines to App Platform, our Platform as a Service offering, and Managed Kubernetes, we provide the tools you need to build and grow your applications. Hatch, our global startup program, can help you power your business with easy-to-use infrastructure and the support you need to grow. To sign up for a DigitalOcean account, click here.
With remote work becoming more popular in recent years, many small- to medium-sized businesses (SMBs) and startups have remote employees and cloud workloads across multiple regions. This can create complex networking challenges that many businesses struggle to solve. The following diagram illustrates the complex network configurations that can result.
In these scenarios, there are two primary questions that businesses are asking themselves.
In the ideal world, businesses have dedicated teams and processes to control and monitor access across internal IT systems. Many enterprises spend millions of dollars to do just that.
For startups and SMBs, there is neither the time nor the budget. Consequently, access to cloud resources is often granted through basic SSH keys or by whitelisting a developer IP address.
This is not scalable as the business grows, and it is also problematic from a security standpoint. Virtual machines and cloud resources are often left open to the internet, with little protection in between.
Ideally, a business could control remote access and secure resources with minimal investment, but many businesses are unsure how to address these issues cost-effectively.
It is easy to connect cloud resources securely within a region by using a virtual private cloud (VPC), but what about between regions? Businesses can connect resources using their public IP addresses and firewall rules. However, this is difficult to scale and has security implications because traffic traverses the public internet.
Ideally, you could treat cross-cloud resources the same way as those within a VPC, as a single, secure subnet, without having to worry about setting up firewalls.
These challenges are important to solve, but the solution often comes with both cost and complexity.
The VPN (virtual private network) is a known solution that has solved the remote access question for decades. You can connect your users using a VPN gateway, and off they go.
Less common is using the VPN to connect workloads. However, “point-to-point” VPNs are increasing in popularity. Point-to-point VPNs allow you to connect any number of workloads using an overlay network. In the past, businesses avoided using VPNs for this because of their slow speed and complexity, but as you’ll see below, this is no longer such a concern.
While there are many available VPNs, WireGuard is one option which has multiple benefits for startups and SMBs looking to securely connect to a network. Some of its benefits include:
It is extremely fast, relative to older VPNs like OpenVPN. If configured correctly, WireGuard has a negligible impact on network performance, making it ideal to use with cloud infrastructure.
It is very simple to configure, allowing users to create complex networks easily.
It uses a modern cryptographic handshake based on the Noise Protocol Framework, which is faster and more secure than traditional SSL/TLS-based handshakes.
It uses modern, secure cryptography (the ChaCha20-Poly1305 encryption algorithm).
Because of its low overhead, WireGuard is deployed on a wide range of devices and platforms, including mobile and embedded systems. In fact, it’s now in the Linux kernel, so it will run on most servers and devices by default.
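To illustrate that simplicity, here is a minimal sketch of a two-node WireGuard configuration (all keys and addresses are placeholders):

```ini
# /etc/wireguard/wg0.conf on the "hub" node
[Interface]
PrivateKey = <hub-private-key>
Address    = 10.10.10.1/24
ListenPort = 51820

[Peer]
# A laptop or Droplet joining the overlay network
PublicKey  = <peer-public-key>
AllowedIPs = 10.10.10.2/32
```

Running wg-quick up wg0 on each node activates the interface; adding another machine to the network is just another [Peer] block.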
By using a WireGuard VPN, businesses can deploy powerful, secure networks as shown in the diagram below.
Some advantages include:
You can create as many virtual networks as needed (development, production, etc.)
You can add any of your compute solutions (Droplets, virtual machines, Kubernetes) into a desired network.
Your resources will continue to work as expected, for example:
SSH to public IP will work, unless you configure otherwise.
Internet connectivity from the VM will work fine.
End user traffic (from the internet to a load balancer to a Droplet) will work fine.
Connections to other resources (e.g., managed databases) will work as-is.
The virtual network adds an additional private IP address to the resource that can be used for secure communications from anywhere.
You will be able to securely connect from end clients (e.g. developer laptops) to your cloud resources.
You will be able to securely connect cloud resources over the internet (e.g. servers, databases).
You will be able to automate the rollout (e.g., via cloud-init) of new Droplets so they join the VPN network automatically, as sketched after this list.
For Kubernetes, you can deploy a VPN gateway and provide access to the cluster’s pod and service networks.
The system works even behind NAT (network address translation) gateways.
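Here is the cloud-init sketch mentioned above: hypothetical user data that installs WireGuard and joins a new Droplet to the overlay network at boot (keys, addresses, and the hub endpoint are placeholders):

```yaml
#cloud-config
package_update: true
packages:
  - wireguard

write_files:
  - path: /etc/wireguard/wg0.conf
    permissions: "0600"
    content: |
      [Interface]
      PrivateKey = <node-private-key>
      Address    = 10.10.10.5/24

      [Peer]
      PublicKey  = <hub-public-key>
      Endpoint   = <hub-public-ip>:51820
      AllowedIPs = 10.10.10.0/24

runcmd:
  # Bring the tunnel up now and on every boot
  - systemctl enable --now wg-quick@wg0
```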
Netmaker is a network management tool built on top of WireGuard. It provides a simple and easy way to set up, configure, and manage WireGuard-based VPNs and overlay networks for SMB users. While managing a small network of devices with WireGuard is easy, it gets complicated at scale, and Netmaker takes away that complexity.
Netmaker is available on the DigitalOcean Marketplace as a 1-click application. It provides the following benefits:
Automated WireGuard networks
Secure remote access for employees.
Secure connections between Droplets and Kubernetes clusters across regions.
Secure connections between inter-cloud workloads.
Gateways to reach external networks.
Netmaker comes in both a community and a licensed edition, and it is fairly easy to get started with. Here is an 8-minute walkthrough video that will help you set up a secure virtual network using Netmaker. DigitalOcean customers can get a 50% discount with promo code DIGITALOCEAN2023 (valid through December 2023), so you can start using it today!
Object storage now executes up to 800 total operations per second!
Newer Spaces buckets now have an improved limit of 800 total operations per second. To check whether a bucket has this new limit, see our Spaces rate limits.
Block storage is up to 50% faster and object storage is up to 100% faster
We introduced a major performance boost to DigitalOcean Storage. DigitalOcean Volumes input-output operations per second (IOPS) and throughput increased up to ~50% supporting rapid block storage operations. Spaces Requests Per Second (RPS) has doubled, expanding up to 1500 requests per IP address per second.
New SYD1 Sydney, Australia data center expands our global footprint
We launched the Sydney, Australia (SYD1) data center region; you can now deploy Droplets, managed databases, and other products as part of our expanding global footprint.
Fedora 37 is now available
The Fedora 37 (fedora-37-x64) base image is now available in the control panel and via the API. This quickstart helps you create snapshots of Droplets and Volumes so you can save and access images on demand.
The 3rd Gen of DigitalOcean Premium Droplets with AMD EPYC processors
The 3rd Gen AMD EPYC processors (code name Milan) are available in our Premium AMD Droplet plans. The new Droplets are powered by 3rd Gen AMD EPYC processors, superfast PCIe Gen 4 storage, and high-speed 100 GbE networking for top performance.
While customers can’t select their specific processors, many Premium AMD Droplets created in the SFO3, SGP1, FRA1, and NYC1 data centers already run on 3rd Gen AMD EPYC™ processors as we continue to deploy them.
DigitalOcean Kubernetes (DOKS)
Migrate any DOKS cluster to the new control plane and add HA
We are excited to announce that you can migrate any DOKS cluster to the new control plane and enable High Availability. If you created a cluster prior to June 2022, you might be on an older control plane. Upgrade now to take advantage of the new features.
Free, Standard, and Premium support lifts startups
At DigitalOcean we are committed to serving every customer. That’s why we offer a choice between Free, Standard, and Premium support plans. The new Standard and Premium plans ensure fast responses by dedicated experts through additional channels like video calls. Customers can get an architecture review, custom onboarding, help with products, and other personalized assistance.
Introducing the Developer Center
The new DigitalOcean Developer Center aims to help developers of all skill levels learn, upskill, scale, and get in front of their audiences faster using DigitalOcean products and popular open-source technology.
Meet PyDO, DigitalOcean’s official Python API client
We are thrilled to introduce PyDO, DigitalOcean’s Python API client library. PyDO allows Python developers to interact with and manage their DigitalOcean account resources through a Python abstraction layer. Fully supported and maintained by DigitalOcean, PyDO is now available to install.
Instant OSS global communication with Mastodon
Mastodon Droplet 1-Click is open-source software that provides a microblogging platform akin to Twitter. However, instead of being centralized, it is a federated network. Learn how to host your own Mastodon on Kubernetes in our Developer Center.
Our Solutions Experts are available to assist you with custom setups, migration, and pricing.
Happy coding!
Ivan Tarin
Sr. Product Marketing Manager
In this article, we step through the technical implementation we employed between GitHub and Vault to support this OIDC flow for secrets consumption. We cover both the programmatic components of this secrets management pattern and the engineering details other organizations may wish to adapt to create their own versions of this program. At the end of the article, we share a Terraform module we have open sourced to help any organization to get up and running with a similar initiative.
A common concern with many secrets management efforts is the “secret zero” problem. An organization stores all of its secrets in some kind of protected enclave, such as HashiCorp Vault or 1Password. The organization needs to restrict access to the secrets and to segment which secrets each group can access. Therefore, roles are created, and authentication credentials for those roles are distributed to the appropriate users, teams, and systems.
But where can those authentication credentials be safely stored? Not in the secret store, as these credentials are a precondition to getting access. Could they be stored in a separate protected enclave? But how is access to that system protected? It’s turtles all the way down. This first set of login credentials used to gain access to the secrets store is often denoted “secret zero.”
One common solution to the “secret zero” problem is to introduce another secret enclave that contains an already trusted entity at the point where access to the secrets store is needed. For example, if a company’s secrets are stored in HashiCorp Vault, static long-lived credentials to Vault (e.g. userpass, AppRole) can be generated and stored in GitHub as encrypted secrets. A repository is granted access to these secrets as a property of the access control settings set on the repository or organization available in GitHub. Depending on the operating environment and the company in question, this may be sufficient to allow a GitHub Action workflow secure access to secrets in Vault.
For many organizations, however, this approach necessitates implementing complex management procedures. An organization should be able to produce the following information about their secrets management program:
A mapping of which authentication roles are used by which repositories
The secrets accessible to each team’s projects
Whether a team’s access is too broad or too restrictive
A demonstration of adherence to specific compliance requirements
GitHub secrets do not currently provide capabilities to enable a company to produce this information. A Vault role with static credentials may be created for a particular use case, but an organization cannot natively confirm it has not been stored in a second repository’s secrets and leveraged for an unintended use case.
Moreover, these credentials are all static and long-lived, posing a risk to the organization if the deployment workflow is compromised. Stolen credentials continue to be a significant factor in breach incidents. All of a sudden an organization must build a sprawling asset management system for its secret store login credentials that is likely perpetually out of date, build a homegrown credential rotation and auditing lifecycle, or entirely give up on understanding these relationships and accept the risk this lack of visibility poses. This approach is certainly better than plaintext exposure! But GitHub OIDC offers a better solution.
GitHub launched OIDC support within GitHub Actions in October 2021 to enable cloud deployment workflows to authenticate to their services without needing to handle credentials inside the GitHub repo. From their roadmap issue: “OpenID token exchange eliminates the need for storing any long-lived cloud secrets in GitHub.” By using OIDC authentication to Vault, we remove the need for engineers to manage a root credential pair and solve the “secret zero” problem for these workloads!
Discussing OAuth2 and OpenID Connect is outside the scope of this article, but this introduction to OAuth and OIDC from Okta serves as a helpful visual explainer. At a high level, OIDC is a way to authenticate a user or service to a third-party identity provider (IdP) using a JSON Web Token (JWT). Instead of managing login credentials, the token exposes parameters (known as claims) which we can bind a Vault role against. When GitHub presents a token containing the necessary combination of claims, Vault will return an auth token for a given Vault role.
Beyond solving the “secret zero” problem, using GitHub OIDC for authentication provides greater flexibility to fine-tune least-privilege access to roles. For example, beyond simply delineating between repositories inside an organization, GitHub OIDC auth allows us to bind specific workflows inside a repository to different Vault roles in an auditable, consistent manner. Suddenly, we can not only definitively answer the question “What Vault roles are used by which repositories?” through native properties of our authentication configuration, but we are capable of asking - and answering - the more granular “in what scenarios can a repository access a Vault role?”
Beyond the question, “what secrets can team X’s project access?” we can enforce what different sets of secrets team X’s project can access during deployments, CI testing, and other use cases as a native property of our authentication scheme. And we can accomplish all of this without requiring developers to handle credentials to Vault themselves, without having to deal with static credential rotation lifecycles or exposure, with credential TTLs in the seconds or minutes, and with complete auditability designed into the configuration-as-code approach.
As we discussed in our previous post on developer-first security, a developer-first security approach integrates into the organization’s existing development workflows. The first step is to document the workflows used by development teams. This is unique for every organization, but there are general patterns we can discuss. For DigitalOcean, we began with the following five use cases:
Testing pull requests - A continuous integration (CI) workflow testing pull requests in a repository needs to access nonproduction secrets.
Continuous deployment (CD) triggers - Pushes to the main branch trigger a continuous deployment workflow that builds a new version of the application and deploys it to production. This workflow needs access to production secrets.
Complex, multi-environment workflows - A single workflow that deploys first to a staging environment, verifies correct functionality, and then deploys the application to production should have access to staging and production secrets at each respective point inside the workflow, but should not be able to access both staging and production secrets at the same time.
Monorepo support - Multiple teams contributing to a monorepo can define individual .github/workflows/ files inside the same repository and get access to their unique credentials that other teams and workflows inside the monorepo cannot access.
Reusable & shareable workflows - An internal reusable workflow, such as a set of tasks encapsulating publishing artifacts to Artifactory, can access its needed secrets when called from any repository across multiple GitHub organizations. Consumers invoking the workflow do not need to configure anything unique for access to secrets.
The following security considerations apply to each developer use case:
Credentials must be short-lived. Compromise of any workflow must present an extremely minimal window of opportunity for a malicious entity to exploit these credentials.
Secrets consumption must be fully auditable - we must be able to determine what repository accessed what Vault role (and therefore consumed what secrets) at some specific time. We must also be able to determine what secrets could be consumed by a repository or workflow at any given time.
Let’s step through how the OIDC configuration can be bound to Vault and how to provide the fine-grained customizability to match these developer and security use cases. These code examples will use Terraform.
Enabling a GitHub OIDC configuration on Vault’s end requires creating a new JWT auth backend pointing to GitHub.com or to a GitHub Enterprise Server instance. GitHub has documentation on how to construct the URL for a GitHub Enterprise Server.
resource "vault_jwt_auth_backend" "github_oidc" {
description = "Accept OIDC authentication from GitHub Action workflows"
path = "gha"
oidc_discovery_url = "https://token.actions.githubusercontent.com"
bound_issuer = "https://token.actions.githubusercontent.com"
}
At this point, Vault and GitHub are configured to talk to each other. What’s left is defining each use case as its own Vault role configuration on this authentication backend. This is the meat of the configuration and what to do depends on the needs of the developers in your organization.
Organizations configuring OIDC authentication from github.com should take an additional configuration step: switch to a unique token URL. Setting the bound_issuer and oidc_discovery_url to https://token.actions.githubusercontent.com grants the entirety of public GitHub the possibility of authenticating to your Vault server. If you accidentally misconfigure the bound claims that we describe below, you could be exposing your Vault server to other users on github.com.
To prevent this, GitHub has recently added an API-only configuration for organizations to customize your enterprise’s token URL to https://token.actions.githubusercontent.com/<enterpriseSlug>, where enterpriseSlug refers to the value that was set when your enterprise cloud account was created. We strongly recommend any enterprise cloud organizations using GitHub OIDC enable this setting. This way, no matter how the bound claims are configured below, it is not possible for other users or enterprises on github.com to get a valid OIDC token to your Vault server. Both the oidc_discovery_url and bound_issuer should use this new token URL.
resource "vault_jwt_auth_backend" "github_oidc" {
description = "Accept OIDC authentication from GitHub Action workflows"
path = "gha"
oidc_discovery_url = "https://token.actions.githubusercontent.com/mycompany"
bound_issuer = "https://token.actions.githubusercontent.com/mycompany"
}
This does not apply to GitHub Enterprise Server accounts, as the self-hosted instance is already unique to your enterprise.
The claims provided in GitHub’s JWT define our authentication configuration capabilities. We can bind any combination of these key-value pairs to a Vault role, thereby requiring all of that data to exist in a GitHub workflow’s JWT before granting access to a Vault role and its underlying policies. The following is an example GitHub JWT displaying the claims contained in a token:
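(Abridged and illustrative; claim values are placeholders, and GitHub's OIDC documentation lists the full claim set.)

{
  "iss": "https://token.actions.githubusercontent.com",
  "aud": "https://github.com/digitalocean",
  "sub": "repo:digitalocean/myrepo:ref:refs/heads/main",
  "repository": "digitalocean/myrepo",
  "repository_owner": "digitalocean",
  "ref": "refs/heads/main",
  "event_name": "push",
  "workflow": "Continuous Deployment",
  "job_workflow_ref": "digitalocean/myrepo/.github/workflows/deploy.yml@refs/heads/main",
  "actor": "octocat"
}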
The primary property we use at DigitalOcean is the bound subject (sub) claim, although simple use cases can use alternative JWT properties. For example, to allow one repository to access a certain Vault role while preventing other repositories from authenticating, we can bind the repository claim to a Vault role instead.
resource "vault_jwt_auth_backend_role" "github_oidc_role" {
role_name = "myrepo-myrole"
bound_claims = { repository = "digitalocean/myrepo" }
# Required configuration attributes
token_policies = ["default", "mypolicy"]
bound_audiences = ["https://github.com/digitalocean"]
role_type = "jwt"
backend = "gha"
user_claim = "actor"
token_type = "batch"
token_ttl = 300 # seconds
}
More commonly, however, we want finer-grained delineation, such as separating pull request workflows from a deployment workflow triggered from the main branch. For this, we can create two separate roles on Vault, each granting access to a respective development or production policy. The subject claim enables us to enforce these separate use cases:
resource "vault_jwt_auth_backend_role" "only_prs" {
role_name = "myrepo-prs"
bound_claims = { sub = "repo:digitalocean/myrepo:pull_request" }
# …
}
resource "vault_jwt_auth_backend_role" "only_main_branch" {
role_name = "myrepo-main"
bound_claims = { sub = "repo:digitalocean/myrepo:ref:refs/heads/main" }
# …
}
Workflows invoked inside of a pull request that attempt to receive an authentication token for the “myrepo-main” Vault role will fail, as the OIDC properties in the JWT will not match the preconfigured expectation in the bound_claims. Workflows from any event trigger that is not a pull_request, such as a push, will fail to authenticate to the “myrepo-prs” Vault role.
There are a number of ways to filter the subject claim. The options boil down to:
pull_request (but no other) workflow triggers
some specific branch on the repository
some specific tag on the repository
some wildcard pattern for multiple branches or multiple tags (e.g. ref:refs/tags/*)
some GitHub Environment (or a wildcard pattern for multiple GitHub Environments, although we have not encountered a use case for this)
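For reference, these options correspond to sub values of the following shapes (org, repo, branch, tag, and environment names are placeholders):

repo:<org>/<repo>:pull_request
repo:<org>/<repo>:ref:refs/heads/<branch-name>
repo:<org>/<repo>:ref:refs/tags/<tag-name>
repo:<org>/<repo>:ref:refs/tags/*   (wildcards require bound_claims_type = "glob" on the Vault role)
repo:<org>/<repo>:environment:<environment-name>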
These five configurations give us almost all of the tools we need to solve the developer use cases we previously identified in this article. Combining other claims in the JWT with the bound subject (sub) gives us everything we need. Crucially, the method developers use to consume secrets remains consistent across all of these use cases. HashiCorp maintains a GitHub Action for consumption of secrets in Action workflows. A developer includes the name of their desired role and what secrets they wish to access:
- uses: hashicorp/vault-action@v2
  with:
    role: "myrepo-prs"
    secrets: |
      secrets/data/their/chosen/secrets mysecret | MY_SECRET ;
    # Necessary configuration parameters
    url: "https://my-vault.company.com:8200"
    caCertificate: "optional yet likely for an enterprise vault configuration"
    method: "jwt"
    path: "gha"
If the expected bound claims match a user’s workflow for the requested Vault role, they will be granted a short-lived token. Because we set the token_ttl on the Vault role configuration to 5 minutes, the Vault token granted to each workflow will expire after that time. This gives a malicious entity an extremely small window of time to exploit a valid auth token while providing plenty of time for a legitimate developer to retrieve the secrets their workflow requires. In 80% of cases we’ve found that a 60 second TTL is plenty of time. We recently bumped our default TTL from 60 seconds to 5 minutes to account for those other edge cases inside our organization. We will grant certain workflows up to a 30 minute TTL, but we have yet to find a use case that requires a Vault token for longer.
Let’s see how an OIDC configuration can enable each of the five developer use cases we listed above.
Example: A continuous integration (CI) workflow testing pull requests in a repository needs to access nonproduction secrets.
This can be enforced via the pull_request bound subject mentioned previously. A complete example is:
resource "vault_jwt_auth_backend_role" "myrepo-nonprod-prs" {
role_name = "myrepo-nonprod-prs"
bound_claims = { sub = "repo:digitalocean/myrepo:pull_request" }
# Required configuration attributes
token_policies = ["default", vault_policy.myrepo-nonprod-prs.name]
bound_audiences = ["https://github.com/digitalocean"]
role_type = "jwt"
backend = "gha"
user_claim = "actor"
token_type = "batch"
token_ttl = 300 # seconds
}
data "vault_policy_document" "myrepo-nonprod-prs" {
rule {
path = "secret/data/myteam/myproject/development"
capabilities = ["read"]
}
}
resource "vault_policy" "myrepo-nonprod-prs" {
name = "myrepo-nonprod-prs-policy"
policy = data.vault_policy_document.myrepo-nonprod-prs.hcl
}
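On the GitHub side, a workflow consuming this role would trigger on pull_request and request the OIDC permissions. A minimal sketch, with illustrative repository and secret names:

name: CI
on:
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
    steps:
      - uses: actions/checkout@v3
      - name: Import Secrets
        id: secrets
        uses: hashicorp/vault-action@v2
        with:
          role: "myrepo-nonprod-prs"
          secrets: |
            secret/data/myteam/myproject/development mysecret | MY_SECRET ;
          url: "https://my-vault.company.com:8200"
          method: "jwt"
          path: "gha"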
Example: Pushes to the main branch trigger a continuous deployment workflow that builds a new version of the application and deploys it to production. This workflow needs access to production secrets.
Similarly, we can use the main branch bound subject construction provided earlier. A complete example of this configuration follows. Note the only material changes are to the role_name, bound_claims, and the contents of the policy this Vault role should be granted. The rest of the examples will focus on those values.
resource "vault_jwt_auth_backend_role" "myrepo-prod-branch-main" {
role_name = "myrepo-prod-branch-main"
bound_claims = { sub = "repo:digitalocean/myrepo:ref:refs/heads/main" }
# Required configuration attributes
token_policies = ["default", vault_policy.myrepo-prod-branch-main.name]
bound_audiences = ["https://github.com/digitalocean"]
role_type = "jwt"
backend = "gha"
user_claim = "actor"
token_ttl = 300 # seconds
token_type = "batch"
}
data "vault_policy_document" "myrepo-prod-branch-main" {
rule {
path = "secret/data/myteam/myproject/production"
capabilities = ["read"]
}
}
resource "vault_policy" "myrepo-prod-branch-main" {
name = "myrepo-prod-branch-main-policy"
policy = data.vault_policy_document.myrepo-prod-branch-main.hcl
}
Example: A single workflow that deploys first to a staging environment, verifies correct functionality, and then deploys the application to production should have access to staging and production secrets at each respective point inside the workflow, but should not be able to access both staging and production secrets at the same time.
This is a more complicated real-world use case. While there’s a bit more to configure on the GitHub side, the authentication to Vault remains consistent. As there are two sets of secrets involved here - staging secrets and production secrets - we want to create two corresponding Vault roles. But, the same workflow file will need both Vault roles. We need to enforce that at no point can an arbitrary task inside the workflow access both sets of secrets.
To accomplish this, we will use one of the other bound subject filtering options: GitHub Environments. Environments are an access control feature on GitHub repositories. For this use case, we don’t need to configure the environments in any way aside from ensuring they exist on our developer’s repository.
First, we need to create staging and production environments, leaving all of the other settings blank.
Second, we need to configure our two Vault roles, using an environment filter on the subject claim.
resource "vault_jwt_auth_backend_role" "myrepo-env-staging" {
role_name = "myrepo-env-staging"
bound_claims = { sub = "repo:digitalocean/myrepo:environment:staging" }
# rest of configuration
}
resource "vault_jwt_auth_backend_role" "myrepo-env-production" {
role_name = "myrepo-env-production"
bound_claims = { sub = "repo:digitalocean/myrepo:environment:production" }
# rest of configuration
}
This configuration means that a workflow job invoked under the staging GitHub Environment can retrieve an auth token for the myrepo-env-staging Vault role, while the production GitHub Environment can retrieve an auth token for the myrepo-env-production Vault role. Workflows not invoked under those environments will fail to authenticate to Vault, and since only one environment can be applied to a workflow job, neither environment can access the other environment’s secrets.
Third, we build our GitHub Actions workflow. To accomplish this use case of a continuous deployment pushing to staging, running some tests, then deploying to production, we can create two workflow jobs in which one job requires the other to have successfully completed. Each job is assigned its respective environment.
name: Continuous Deployment
on:
  push:
    branches:
      - main
jobs:
  deploy-staging:
    name: Deploy and Test App on Staging
    environment: staging
    runs-on: ubuntu-latest
    # These are the minimal permissions required if you want to use GitHub OIDC
    # https://docs.github.com/en/enterprise-server@latest/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#adding-permissions-settings
    permissions:
      contents: read
      id-token: write
    steps:
      - uses: actions/checkout@v3
      - name: Import Secrets
        id: secrets
        uses: hashicorp/vault-action@v2
        with:
          role: "myrepo-env-staging"
          secrets: |
            secrets/data/myteam/myproject/staging mysecret | MY_SECRET ;
          # Rest of the configuration
          url: "https://my-vault.company.com:8200"
          caCertificate: "optional yet likely for an enterprise vault configuration"
          method: "jwt"
          path: "gha"
          exportEnv: false
      - name: Deploy something
        run: # ...
        env:
          my_env_var: "${{ steps.secrets.outputs.MY_SECRET }}"
      - name: Test something
        run: # ...
  deploy-production:
    name: Deploy to Production
    environment: production
    # Production will only run if the staging job succeeds
    needs:
      - deploy-staging
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
    steps:
      - uses: actions/checkout@v3
      - name: Import Secrets
        id: secrets
        uses: hashicorp/vault-action@v2
        with:
          role: "myrepo-env-production"
          secrets: |
            secrets/data/myteam/myproject/production mysecret | PROD_SECRET ;
          # Rest of the configuration
          url: "https://my-vault.company.com:8200"
          caCertificate: "optional yet likely for an enterprise vault configuration"
          method: "jwt"
          path: "gha"
          exportEnv: false
      - name: Deploy something
        run: # ...
        env:
          my_env_var: "${{ steps.secrets.outputs.PROD_SECRET }}"
We can additionally enforce that these environments can only authenticate from a specific workflow file using the technique for our next developer use case. That is, someone cannot create a new file in the repo, add the environment: production line, and access the production environment secrets from that other workflow.
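For example, a sketch of a production environment role pinned to one workflow file (combining claims this way is covered in detail in the next use case; names are illustrative):

resource "vault_jwt_auth_backend_role" "myrepo-env-production-pinned" {
role_name = "myrepo-env-production"
bound_claims = {
sub = "repo:digitalocean/myrepo:environment:production"
job_workflow_ref = "digitalocean/myrepo/.github/workflows/deploy.yml@refs/heads/main"
}
# rest of configuration
}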
Example: Multiple teams contributing to a monorepo can define individual .github/workflows/ files inside the same repository and get access to their unique credentials that other teams and workflows inside the monorepo cannot access.
To accomplish this, we combine two attributes for the bound_claims of this Vault role: sub and job_workflow_ref. As a reminder, we can combine any number of the GitHub JWT claims to a Vault role!
The job_workflow_ref is one of the other supported claims in GitHub’s JWT. Its format is organization/repo/<path to workflow file>@<repo ref>.
"job_workflow_ref": "octo-org/octo-automation/.github/workflows/oidc.yml@refs/heads/main"
The ref at the end of the string signifies the version of the workflow file that is being bound to this configuration and has to match where a workflow run is invoked. For example, if we were constructing a Vault role intended to be used in a production deployment from the main branch of a monorepo, setting @refs/heads/main on the job_workflow_ref means that the specified workflow triggered from the main branch - via most workflow triggers - will succeed, while workflows triggered from something like a pull request will fail, as the job_workflow_ref will end with something like @refs/pull/12345/merge.
resource "vault_jwt_auth_backend_role" "mymonorepo-myteam-myworkflow" {
role_name = "mymonorepo-myteam-myworkflow"
bound_claims = {
sub = "repo:digitalocean/myrepo:ref:refs/heads/main"
# Only accept the version of the workflow on the main branch
job_workflow_ref = "digitalocean/myrepo/.github/workflows/myteam-deployment.yml@refs/heads/main"
}
# rest of configuration
}
For a workflow in a monorepo that should run from any pull request - but should not expose secrets to any other workflow file - we can use a wildcard in our job_workflow_ref.
resource "vault_jwt_auth_backend_role" "mymonorepo-myteam-myworkflow" {
role_name = "mymonorepo-myteam-myworkflow"
bound_claims = {
sub = "repo:digitalocean/myrepo:pull_request"
job_workflow_ref = "digitalocean/myrepo/.github/workflows/myteam-deployment.yml@refs/pull/*"
}
# 'glob' makes Vault treat '*' as a wildcard instead of as a literal string
bound_claims_type = "glob"
# rest of configuration
}
The workflow file itself is constructed similarly to the previous examples. We recommend that teams working in a monorepo make liberal use of GitHub’s paths/paths-ignore filters so their workflows only trigger when necessary.
on:
  push:
    branches:
      - main
    paths:
      - 'src/teams/myteam/**'
Example: An internal reusable workflow, such as a set of tasks encapsulating publishing artifacts to Artifactory, can access its needed secrets when called from any repository across multiple GitHub organizations. Consumers invoking the workflow do not need to configure anything unique for access to secrets.
Reusable workflows should also make use of sub and job_workflow_ref, however in this case we will add a wildcard to the bound subject. How exactly the subject should be constructed will depend on how widely you desire the reusable workflow to be used.
For example, within a GitHub Enterprise Server instance in which every GitHub organization belongs to the company, you could use sub = "repo:*" combined with a specific job_workflow_ref. Or replace the bound subject with a similar wildcard claim like repository = "*". If, however, you want to grant widespread access within only one GitHub organization in your GitHub Enterprise Server (or you are on github.com, in which case you should restrict access to just your company’s organization), you can set a wildcard subject like sub = "repo:digitalocean/*". Don’t forget to set bound_claims_type = "glob"!
Regardless of the bound subject, your job_workflow_ref should point to the reusable workflow you expect the organization to trigger. Certain claims in the JWT, such as workflow and ref, refer to the caller workflow, the repo whose workflow is invoking a reusable workflow. But job_workflow_ref refers to the called workflow, which is the workflow that is actually running (our reusable workflow). GitHub provides further information about how the JWT works with reusable workflows. To understand the distinction between caller and called workflows, we’ll use the following example:
Let’s say a platform engineering team sets up a reusable workflow to help deploy artifacts to Artifactory. They create this reusable workflow in the repository digitalocean/shared-workflows and the path to the reusable workflow file inside that repo is .github/workflows/artifactory.yml. A developer wants to consume this reusable workflow in their repo. They create a digitalocean/myproject repository, and create a .github/workflows/deploy.yml workflow file. The developer’s workflow file might look like:
name: Deploy to Artifactory
on:
  release:
    types:
      - published
jobs:
  deploy:
    name: push to artifactory
    permissions:
      contents: read
      id-token: write
    uses: digitalocean/shared-workflows/.github/workflows/artifactory.yml@main
    with:
      inputs: "..."
Inside the reusable workflow, a hashicorp/vault-action step retrieves secrets using OIDC. Notably, the permissions block must be set on the developer’s workflow, while the secrets will be retrieved inside the reusable workflow.
When the developer’s workflow file is triggered, the caller workflow will be digitalocean/myproject/.github/workflows/deploy.yml@refs/…. The called workflow will be digitalocean/shared-workflows/.github/workflows/artifactory.yml@refs/….
Therefore, the Vault role we want to construct, which the reusable workflow will use to retrieve its secrets, is:
resource "vault_jwt_auth_backend_role" "reusable-workflow" {
role_name = "reusable-workflow"
bound_claims = {
sub = "repo:digitalocean/*"
job_workflow_ref = "digitalocean/shared-workflows/.github/workflows/artifactory.yml@refs/heads/main"
}
bound_claims_type = "glob"
# rest of configuration
}
This allows any repository inside the digitalocean organization to access the Vault role reusable-workflow, but only from the called workflow, the reusable workflow, at digitalocean/shared-workflows/.github/workflows/artifactory.yml@refs/heads/main. We recommend such reusable workflow roles pin the ref of the job_workflow_ref to the reusable workflow’s default branch or to a specific tag (e.g. @refs/heads/main). This determines what version of the workflow file someone else can invoke to successfully retrieve a Vault role; all other versions of the artifactory.yml reusable workflow will fail to authenticate.
As a benefit of this construction, any team in the digitalocean organization can use this reusable workflow to push to Artifactory with credentials, but no team has access to the actual secrets in their workflows. They are retrieved and handled inside the reusable workflow, and the caller workflow cannot influence or extract any information from the called workflow that isn’t pre-configured.
With all of these possibilities, however, come a plethora of opportunities to misconfigure a Vault role, resulting in frustratingly vague 400 errors when trying to authenticate to Vault (although that may be improved in recent hashicorp/vault-action versions). That’s not a great developer experience! Asking all of your developers to learn the intricacies of the GitHub JWT bound subject filtering conditions, or the impacts of combining sub and job_workflow_ref or other claims, will lead to a lot of pain. Our previous article emphasizes the importance of security initiatives solving problems for developers, not introducing them!
This is the point where the security team should, with a developer-first security mindset, invest in providing paved path tooling to solve the security concerns - use these least-privilege Vault roles - while solving the developer concern - let me get secrets and move on with my day! The internals of OIDC claim construction within Vault roles should be encapsulated through tooling that makes it easy for developers to get the right Vault role configuration for their use case. At DigitalOcean, the security team offers a command-line wizard to interactively walk a developer through the steps to create a Vault role for their workflow.
The wizard sets up the necessary configurations on both Vault and, if they need GitHub Environments, applies the necessary changes to their GitHub repository. Experienced users can generate configurations non-interactively as well. This not only provides a paved path for the secrets management that security cares about, but a solution that makes it easier for developers to deploy their engineering pipelines, inside of which secrets consumption is just a small part. Crucially, the wizard does not ask the developer if they want to do “secret task A” or “secret task B.” The developer is not expected to understand the intricacies involved in making that decision. Instead, the developer is asked which engineering task they want to perform, and the tooling guides them through the relevant security steps for that task.
The wizard also offers to create a pull request onto the developer’s repository with a “hello world” deployment workflow leveraging their specific Vault role in whatever authentication pattern they’ve requested.
name: Your Job
on:
  push:
    branches:
      - main
jobs:
  FROM_THE_MAIN_BRANCH:
    runs-on:
      - Linux # self-hosted runner
    permissions:
      contents: read
      id-token: write
    steps:
      - name: Import Secrets
        id: secrets
        uses: do-actions/hashicorp-vault-action@v2
        with:
          jwtGithubAudience: 'digitalocean:secrets:product_security:secrets_example'
          role: 'digitalocean-secrets-product_security-secrets_example-main'
          secrets: |
            secret/data/product_security/secrets_example your_secret | OUTPUT_VALUE_TO_REFERENCE ;
            secret/data/product_security/secrets_example your_secret2 | OUTPUT_VALUE_TO_REFERENCE2 ;
      - name: Your next steps
        run: "echo 'Reference secrets in your steps like this: ${{ steps.secrets.outputs.OUTPUT_VALUE_TO_REFERENCE }}'"
The do-actions/hashicorp-vault-action action is an internal vendored version of hashicorp/vault-action with company defaults pre-configured, so developers do not need to set the Vault URL, CA certificate, authentication backend, and other common defaults for our environment.
In the previous workflow example, we are configuring the optional jwtGithubAudience parameter on HashiCorp’s vault-action action. The reason is the security use case we defined above:
Secrets consumption must be fully auditable - we must be able to determine what repository accessed what Vault role (and therefore consumed what secrets) at some specific time. We must also be able to determine what secrets could be consumed by a repository or workflow at any given time.
There are a few ways to audit the behavior of workflows accessing Vault secrets through GitHub OIDC. The jwtGithubAudience parameter in HashiCorp’s GitHub action lets us customize the value of the aud claim in the OIDC token. By default, the audience is the owner of the given repository on GitHub: the GitHub user or organization to which a repo belongs. For the repo https://github.com/digitalocean/myrepo, the aud claim is https://github.com/digitalocean (replace github.com with your GitHub Enterprise Server domain, if necessary). We have chosen to enforce an org:repo:team:service format for audiences, so we can consistently attribute individual Vault roles and activity to the team and service that own the secrets and usage of a given Vault role.
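Under this scheme, the audience requested by the workflow and the bound_audiences on the Vault role must match. A sketch, with illustrative values:

# Vault role side (Terraform)
bound_audiences = ["digitalocean:myrepo:myteam:myservice"]

# Workflow side (input to the vendored vault-action step)
jwtGithubAudience: 'digitalocean:myrepo:myteam:myservice'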
Additionally, we’ve set the default user_claim to job_workflow_ref. GitHub’s examples typically use the actor claim; however, we believe that job_workflow_ref gives us more actionable information within an audit log entry than the actor claim.
The user claim is how you want Vault to uniquely identify a client and can be set to any claim present in the GitHub JWT. This value is used for the name of the identity entity alias upon a successful login. This essentially means that, in Vault’s audit log, the value of the user claim will be retrievable as auth.display_name.
Using job_workflow_ref allows us to attribute activity on Vault’s end to the specific GitHub repository and workflow file that triggered it, as well as the pull request, tagged release, or other activity tied to that Vault role session. The auth.display_name syntax is [auth backend mount name]-[chosen user claim]. In the above screenshot, our JWT backend is configured at the auth mount github-actions and the rest is the value of the job_workflow_ref for this audit log entry.
We use job_workflow_ref instead of actor because the actor claim reflects the github.actor field in the github context and corresponds to the username who triggered a workflow run. However, re-running a workflow will re-use the original actor that kicked off the initial workflow run, even if that is not the current actor that just triggered the re-run.
In recent months, GitHub has added a new github.triggering_actor field to the github context to differentiate between the initial executing actor and whoever triggered a re-run of a workflow. However, the GitHub JWT’s actor claim currently always refers to the github.actor context attribute. Therefore, we prefer to set the default user claim to job_workflow_ref and correlate workflow runs specified by that ref with GitHub audit logs if we are interested in identifying an actor. With an example like the image above, however, we can also go to the pull request in question and look at the activity. We believe the job_workflow_ref gives us more actionable information on an audit log entry than other user claims.
resource "vault_jwt_auth_backend_role" "only_main_branch" {
# …
user_claim = "job_workflow_ref"
role_name = "myrepo-main"
bound_claims = { sub = "repo:digitalocean/myrepo:ref:refs/heads/main" }
}
And as for our other security use case:
Credentials must be short-lived. Compromise of any workflow must present an extremely minimal window of opportunity for a malicious entity to exploit these credentials.
As mentioned previously, we set the Vault token TTL to five minutes by default:
resource "vault_jwt_auth_backend_role" "only_main_branch" {
# …
token_ttl = 300 # seconds
token_type = "batch"
role_name = "myrepo-main"
bound_claims = { sub = "repo:digitalocean/myrepo:ref:refs/heads/main" }
}
A compromise of our CI/CD environment in which an attacker extracts the local environment or secrets present in a workflow will find they have only 5 minutes to leverage the granted Vault token before it becomes useless to them. If GitHub Actions follows in the path of other CI/CD providers in recent years and our workflows become compromised via their platform, any malicious activity within the 5-minute validity period of a Vault token could be traced in our logs to a specific job_workflow_ref, connecting all Vault activity to that behavior without needing to sift through loads of legitimate traffic.
Since we use batch tokens for this OIDC use case, it is extremely cheap to generate thousands of auth tokens on Vault and these tokens cannot be renewed beyond the initial TTL set on the token. We lose the capability to revoke or list these tokens, but they live for such a short time period that this is not a concern to us.
There are additional token attributes that could be set on the Vault role configuration, but we do not currently use them today. We set the token_ttl default to a short time period and allow teams to customize that if they have a use case. Responses in the Vault audit log include the field auth.token_ttl so we can observe any unexpectedly long TTLs and investigate further.
We have open sourced a Terraform module to assist organizations with configuring GitHub OIDC authentication to Vault with the fine-grained role claims setup discussed in this article. Organizations can define the audience, desired Vault role name, bound subject, and any additional claims they desire and the module will hook everything up on Vault.
module "github-vault-oidc" {
source = "digitalocean/github-oidc/vault"
version = "~> 2.1.0"
oidc_bindings = [
{
audience : "https://github.com/artis3n",
vault_role_name : "oidc-dev-role",
bound_subject : "repo:artis3n/github-oidc-vault-example:pull_request",
vault_policies : [
vault_policy.dev.name,
],
},
{
audience : "https://github.com/artis3n",
vault_role_name : "oidc-deploy-role",
bound_subject : "repo:artis3n/github-oidc-vault-example:ref:refs/heads/main",
vault_policies : [
vault_policy.deployment.name,
],
},
]
}
data "vault_policy_document" "dev" {
rule {
path = "secret/data/dev/foo"
capabilities = ["read"]
}
}
resource "vault_policy" "dev" {
name = "oidc-dev"
policy = data.vault_policy_document.dev.hcl
}
data "vault_policy_document" "deployment" {
rule {
path = "secret/data/prod/bar"
capabilities = ["read"]
}
}
resource "vault_policy" "deployment" {
name = "oidc-deploy"
policy = data.vault_policy_document.deployment.hcl
}
To assist organizations with managing these Vault roles at scale, we support a JSON file-based construction in which individual development teams can be empowered via CODEOWNER-ship of their own Vault roles; the module imports all the files and creates the requested Vault roles. Alternatively, Vault operators can leverage the JSON files for better organization of their GitHub OIDC Vault roles.
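As an illustration only, such a per-team file might mirror the oidc_bindings object shape shown above; consult the module's documentation for the exact schema it expects:

{
  "oidc_bindings": [
    {
      "audience": "digitalocean:myrepo:myteam:myservice",
      "vault_role_name": "myteam-deploy-role",
      "bound_subject": "repo:digitalocean/myrepo:ref:refs/heads/main",
      "vault_policies": ["myteam-deploy-policy"]
    }
  ]
}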
We are working on building a generic version of our wizard to help users create the appropriate oidc_bindings objects for their desired use cases, which we will include in this repository in the future.
In this article, we discussed DigitalOcean’s approach to securing CI/CD through GitHub Actions, OIDC, and HashiCorp Vault. We showed you five examples of real-world developer use cases to help build a mental model you can apply to your organization, and we showed you how we’re living our mission to build developer-first approaches to security and secrets management with our paved path secrets wizard.
If you would like to explore setting up GitHub OIDC Vault roles in a hands-on course following the first three developer use cases from this article, check out this GitHub Skills course.
We hope this was enlightening and will help you more easily apply safe secrets management concepts to your organization. We plan to share more details about our developer-first security tooling and approach in future articles.
Ari Kalfus is the Manager of Product Security at DigitalOcean. The Product Security team are internal advisors focused on enabling the business to safely innovate and experiment with risks. The team guides secure architecture design and reduces risk in the organization by constructing guardrails and paved paths that empower engineers to make informed security decisions.
Kubernetes, in particular, can improve resource utilization and shorten software development cycles, and because of these benefits, Kubernetes adoption is growing within small and medium tech companies. DigitalOcean Kubernetes (DOKS) can remove some of the complexities associated with containerization and Kubernetes management, helping small businesses access technology that was previously out of reach.
While it’s possible for small teams to implement self-hosted Kubernetes clusters, managing them is often too resource-intensive to make sense, costing teams time and energy that could be better spent somewhere else, and requiring a level of expertise that many small to medium-sized businesses (SMBs) lack. This is demonstrated by the fact that nearly 90% of Kubernetes users now utilize managed services for their Kubernetes implementation, up 20% from 2020, according to the Cloud Native Computing Foundation.
With DigitalOcean Kubernetes, teams don’t need to know how to build or maintain an upstream Kubernetes cluster. DOKS performs critical tasks such as monitoring, security, updates, and more, freeing up teams to focus on their applications and realize a faster time to market.
To help small teams get started with DigitalOcean Kubernetes, check out DigitalOcean’s Developer Center, which includes resources on how to get started with a range of DigitalOcean products. For recommendations on implementing Kubernetes for your small business, download our FREE guide, Kubernetes adoption journey for startups and SMBs.
NitroPack is a website performance platform that uses DigitalOcean Kubernetes to help its application handle hundreds of requests per second and seamlessly scale up and down to meet demand. Originally, the NitroPack team was managing servers by hand. NitroPack is a small startup with a lean team and needed to free up time and resources to focus on other initiatives. Using DOKS, they now automate the delivery of over 120,000 sites with over five million pages—a drastic increase from the 15,000 sites they were servicing in 2020.
“We chose DigitalOcean Kubernetes because we like simplicity. In the beginning, we had a small team and didn’t have the resources to manage a Kubernetes cluster. We wanted to spend time developing the product instead of managing infrastructure.”—Ivailo Hristov, CTO
Framey is a social media application focused on travel that allows individuals to upload photos of their adventures while providing location information, attraction recommendations, and more. The team initially built the entire backend on a single DigitalOcean Droplet, but experienced significant performance issues with that setup. At the time, the team didn’t have anyone in-house with DevOps expertise. They didn’t want any tech debt as they scaled, so they built with DigitalOcean Kubernetes. Starting with Kubernetes can be challenging and expensive for a new business, but with this approach, they have a fully flexible event-driven system ready to scale.
“It may be unconventional, but I’d suggest to anyone that starting with a good, reliable setup in the beginning is crucial to accomplishing goals and saving time later. You don’t want to get into a situation where you have a great product but then you have to go back and fix the infrastructure. That just gets harder and harder to do. Plan for scale. It will be more expensive in the beginning but worth it in the end. We are glad we set ourselves up for success with DigitalOcean.” —Robert Preoteasa, Founder and CEO
Shoppermotion is an IoT (Internet of Things) startup that processes millions of visits and serves custom analytics for merchandising, category, and marketing teams by tracking the movements of shoppers through a retail location. In order to provide industry-leading analytics, Shoppermotion has to solve a complex set of problems with a distributed system that’s an ideal fit for DigitalOcean Kubernetes.
"We were amazed at how easy DigitalOcean is to use, and it let us spend more time focused on our product.”—Marco Doncel, CTO, Shoppermotion
Bright Data is an industry-leading web data platform that allows businesses worldwide to collect, index, and structure public web data quickly and at scale. Their application was originally built on DigitalOcean Droplets. In early 2022, the organization ran nearly 6,000 Droplets on DigitalOcean, with a small team maintaining the entire cloud network. Bright Data has periods of high utilization that coincide with working hours, then periods of lower utilization outside typical working hours and on weekends. Because of their dynamic load, they were underutilizing their Droplets, consistently using about 60% of their computing power. They needed the ability to automatically scale up to meet demand so they could efficiently utilize resources instead of reserving Droplets for periods of high traffic. In order to optimize performance, the team switched to DigitalOcean Kubernetes.
“Kubernetes works faster, works better, and we didn’t have to come up with a new system. It allows us to scale and focus on development. With DigitalOcean Kubernetes, we can grow faster and save money by fully utilizing our Droplets.” - Nir Borenshtein, COO
If you’re interested in trying DigitalOcean’s managed Kubernetes for your applications, take DigitalOcean Kubernetes for a spin. If you have questions about migrating from another cloud provider or what your total costs will be on DigitalOcean once you start scaling, you can schedule a meeting with our team of experts who can help answer any questions you have.
SMBs often optimize costs by running in a single region, all while increasing velocity. When things go wrong, they should be able to quickly revert back to a previous state.
Running an application and keeping data in a single cloud region can make it impossible to recover if there is a problem with that region. SMBs must have a disaster recovery (DR) plan that accounts for such a situation.
In case of an unforeseen incident, like ransomware affecting an application and data, businesses need to be able to start a fresh copy of the application with all the data from a prior backup.
SMBs should have a simple but robust backup strategy in order to mitigate against the above scenarios, and prevent data loss or disruptions that could put their reputation at risk. The table below shows two example applications with the corresponding backup strategy that businesses may consider building for them. Note how the backup strategy can change depending on the business need.
| | Application: A NodeJS E-commerce application running on a Droplet + DigitalOcean Volumes Block Storage + Managed Postgres | Application: A Ghost CMS 1-click app running on a Droplet |
|---|---|---|
| Critical data: What should you back up? | Volumes: Snapshots or File backup. Postgres: DB backup. Local volume: File backup. Droplet: Snapshots. | Application: Ghost CMS. MySQL: DB backup. Local volume: File backup. |
| How much data can you afford to lose? | Customer and order data (Postgres): Cannot afford to lose. Volume data: Can afford to lose a few hours. NodeJS and application configuration: Can rebuild from git, if lost. | Blogs (typically posted a few times a day): Can afford to lose a few hours. MySQL: Cannot afford to lose. |
| RPO - Recovery point objective. How often should you back up? | Postgres: Full backup for point-in-time recovery. Volume: Every 4 hours. Application file system: Every 4 hours. Droplet: Weekly. | MySQL: Full backup for point-in-time recovery. Ghost CMS application backup: Every 8 hours. |
| Retention - How long should you store the copies? | Postgres: 3 months. Volume and application file system: 1 month. Droplet: 1 week. | MySQL: 1 month. Ghost CMS backup: 1 week. |
| RTO - Recovery time objective. How do you test restore? | Once a week: test by restoring the backup DB and applications into a staging domain. | Once a month: test by restoring the Ghost CMS app backup into a staging domain. |
| Should there be multiple backup locations? | Yes. | Locally download the backup once in a while. |
| Disaster recovery | Yes. It is a requirement for the site to be operational within 10 min. | Optional. It is not mandatory for the site to be up in case of a disaster. |
| Do the backups need to be encrypted? | Yes. It is a security requirement to ensure that access to sensitive information is not compromised in any way. | No. |
| Summary of backup strategy | 3 months of DB, 1 month of volume/file, 1 week of Droplet backup. Recovery point objective (RPO): No loss of database, 4 hours for files. Recovery time objective (RTO): 10 min to get the site up and running from backup. Secure backup. Replicate backup to 2nd region. | 1 month of DB, 1 week of application backup. Recovery point objective (RPO): 8 hours. Recovery time objective (RTO): 1 day. |
While the above is just an example, note how business needs and the criticality of the data change the backup strategy.
There are several types of backups that SMBs should consider, including:
Application backup: This type of backup is used to back up specific applications such as databases or CMSes. Backups are stored in object stores (Spaces/S3).
File backup: This type of backup is used to back up specific files or folders. It can be full or incremental. Backups are stored in object stores (Spaces/S3).
Virtual Machine (Droplet) backup: This type of backup is used to back up entire Droplets. It uses snapshots to take point-in-time backups. Backups are stored in NAS storage.
Volume backup: This type of backup is used to back up entire volumes of data, such as a DigitalOcean Volume or a partition. It uses snapshots to take point-in-time backups. Backups are stored in block storage.
We recommend a hybrid backup strategy depending on your application. Some scenarios will make it obvious. For example:
Say you upgraded your application, and something went wrong. In this case, reverting back to the previous version of the application backup is a better/faster solution.
You want to migrate your application to another region. You can plan to restore the entire virtual machine from the backup for a simpler migration.
You deleted a specific folder by mistake. In this case, you can restore the folder from the backup in minutes.
File and application backups are low-overhead and can be taken frequently. The restore operation is faster because you are restoring a subset of the entire system, and can be tied to the application version. You can also take advantage of low cost and S3 replication to create multiple copies of the backup. For most SMBs, we recommend adopting a hybrid backup plan, starting with files and applications.
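As a minimal illustration of such a plan for the e-commerce example above, a crontab might schedule file backups every 4 hours and a nightly database dump (a sketch only; paths, schedules, and the database URL are placeholders, and a tool like SnapShooter automates this for you):

# File backup every 4 hours
0 */4 * * * tar -czf /backups/files/app-$(date +\%F-\%H).tar.gz /var/www/app
# Nightly logical dump of the Postgres database
0 2 * * * pg_dump "$DATABASE_URL" | gzip > /backups/db/app-$(date +\%F).sql.gz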
SMBs and startups should have a backup plan or strategy in place, and review and update that document from time to time. Tools such as SnapShooter, a backup and recovery solutions provider recently acquired by DigitalOcean, can help startups easily set up regular backups.
We recommend the following best practices:
Regularly test and verify backups: Regularly test and verify backups to ensure that they can be successfully restored. Automate this solution, or have a step-by-step document.
Store backups in multiple locations: For files and applications backup, this can be done using S3 replication or creating separate jobs for backing up to different destinations.
Encrypt backups: Encrypt backups to protect against data breaches. This can help ensure that your data is protected even if it falls into the wrong hands. With the use of encryption keys in SnapShooter, you get one key to Lock the backup and one to Unlock the backup. Locking keys can be shared without any worries about them being harmful, but Unlock keys should be kept private (e.g. on a USB drive with limited physical access) so only the backup owner can access the backup’s contents.
Have a disaster recovery plan in place: Have a disaster recovery plan in place that details how you will restore your systems and data in the event of a disaster. Disaster recovery is different from simple restore in that you lose access to the entire datacenter/region.
Use an automated backup solution: A multi-product, multi-cloud backup solution can serve as a single pane-of-glass for all your backup needs.
SnapShooter enables users to back up their files, apps, and databases from DigitalOcean and other cloud providers. It also supports popular apps such as Laravel and WordPress, and databases including MongoDB, MySQL, and PostgreSQL. Comprehensive server backups can be taken daily, weekly, and monthly, and file backups as often as every 5 minutes.
Key features of SnapShooter include:
Backup files, servers, apps, and databases from multiple providers
Quickly restore a previous version of a file or multiple files
Granular backups with the ability to choose what folders to back up
Choose your backup frequency, and see when backups were run
Customize your backup settings and retention policies
Replicate the snapshots across another region for disaster recovery
Get real-time logs and monitoring
Stay secure and compliant with 2-factor authentication and secure encryption
Use your own storage, including DigitalOcean Spaces, AWS S3, Filebase, and other systems, or use SnapShooter’s S3-compatible storage
Get email and Slack alerts of backups
SnapShooter is designed to be the go-to solution for backup needs of SMBs and Startups. Start using SnapShooter here, or from DigitalOcean Marketplace.
We currently have five ERGs - Afro Sharks, Women of DO, Emerging Sharks, Shark Tank, and Rainbow DOlphins - and hope to continue adding more in the future.
At DigitalOcean, we believe that our community is bigger than just us. In 2022, our ERGs donated $100,000 to organizations that matter to them and their communities.
Three ERGs donated to organizations whose missions are to educate future technology professionals and leaders. Afro Sharks donated to Black Sisters in STEM, The Hidden Genius Project, and the National Society of Black Engineers, which all upskill students in engineering and technology. Afro Sharks also gave back to Tulsa-based organization Urban Coders Guild, which strives to educate parents about the opportunities STEM education can offer their children.
As one Afro Shark member described, “This is important to me personally because most people think tech only happens on the East and West coasts or in larger cities, but we have to expand to other areas. It gives students a chance to see that they can have a big impact in the tech industry without moving to the coasts.”
Women of DO contributed to nonprofits that advance girls’ and women’s education, including Malala Fund and Foundation to Educate Girls Globally. Emerging Sharks chose to give to Coded by Kids, which focuses on providing young people from underrepresented groups with software development, digital design, computer science, and tech startup-focused entrepreneurship programs.
Not only do our ERGs have overlapping interests, but they also feel strongly about donating to some of the same organizations that are aligned with their ERG goals and missions. Women of DO and Emerging Sharks both gave to Girls Who Code, an organization that prepares girls to thrive and lead in the technology workforce.
One Emerging Sharks member personally connected with this organization, saying, “Girls Who Code was a major contributor to my career. I participated in the Girls Who Code summer program when it was brand new and loved it so much I decided to major in Computer Science in college. I wouldn’t be an engineer without Girls Who Code and I’m so glad we have the opportunity to support them.” In addition, Afro Sharks and Emerging Sharks shared a common interest in helping adults learn technology, and donated to The Last Mile, which prepares incarcerated individuals for successful reentry through business and technology training.
Shark Tank donated their funds to the International Rescue Committee (IRC) to assist the areas most heavily affected by the conflict in Ukraine. An ERG co-lead stated, “The scars of this and previous wars won’t be fully understood for years to come, but we owe it to ourselves and the survivors to do what we can for those whose lives were impacted, including displaced refugees. To that end, we sought out the most effective aid organizations we could find to ensure every dollar was used wisely.” With an 87% efficiency rating by non-profit watchdog groups, Shark Tank felt that the IRC was a great organization to support to help the 5.9M people in Ukraine who have been displaced and affected by the war.
Rainbow DOlphins donated to Lambda Literary, Sage, Encircle, The Trevor Project, and PFLAG, which all benefit different parts of the LGBTQ+ community. Rainbow DOlphins also gifted funds to organizations supporting gender identity and expression including the Transgender Legal Defense & Education Fund, the Sylvia Rivera Law Project, and the National Center for Transgender Equality.
According to a Rainbow DOlphin ERG co-lead, “As a community, LGBTQ+ people are often very involved civically and socio-politically; we’ve had to be in order to fight for the equality and equity we aren’t always granted.” She continued, “There are so many amazing LGBTQ+ focused organizations out there who are addressing a range of issues such as creating positive representation in media, providing free access to shapewear and binders for transgender people, providing mental health services to prevent suicide, and fighting discriminatory legislation nation-wide.”
The Women of DO and Rainbow DOlphins ERGs wanted to make an impact on organizations who support reproductive rights. Women of DO made a contribution to the National Network of Abortion Funds, which aims to remove financial and logistical barriers to abortion access. Rainbow DOlphins contributed to Planned Parenthood, whose mission is to ensure all people have access to the care and resources they need to make informed decisions about their bodies, their lives, and their futures.
Our ERGs continue to have a meaningful impact inside and outside DigitalOcean, and that impact is recognized across the organization. Our Chief Marketing Officer and Executive Sponsor of the Women of DO ERG, Carly Brantz, says, “ERGs are a core way that we live our values. We speak up when we have something to say and listen when others DO, and love is at our core.” She continued, “I’ve learned a multitude of lessons from various ERG events over the past few years and always look forward to hearing from my teammates about what makes their cultures and experiences unique. Creating these safe spaces and opportunities for us Sharks to connect on a human level and share what’s impacting us is a critical piece of the culture we’ve built here at DigitalOcean.”
With all this content floating around, it can be difficult to know where to begin. So today, we are proud to announce the launch of the DigitalOcean Developer Center, a curated experience that brings this disparate content together into a unified home for developers learning and building with DigitalOcean products and services.
The Developer Center helps developers of all skill levels learn, upskill, scale, and get in front of their audiences faster using DigitalOcean products like Droplets, Managed Databases, App Platform, and others. While it will primarily focus on helping you get the most out of DigitalOcean products and services, we’ll continue to publish the amazing content you know and love on the DigitalOcean Community site.
We are kicking off the Developer Center with a brand new Onboarding Experience for developers just getting started on the platform, new guides on how to deploy best-in-class open source projects like NocoDB and Mastodon, and a plethora of existing curated tutorials to help you build and scale on the DigitalOcean platform.
We have a lot more in store in the coming weeks and months and look forward to your feedback. Check out the DigitalOcean Developer Center at https://docs.digitalocean.com/developer-center.
For startups and enterprises alike, team design and collaboration can be challenging, especially as team structures change. Ourspace brings clarity to opaque org structures and the question of “Who owns what?” by building a collaborative team design platform that helps product and tech leaders make smarter team design decisions and keeps every employee in the loop. Building a central source of truth to help technical leaders increase their teams’ cohesion and productivity, Ourspace utilizes DigitalOcean’s Hatch program to help them build, test, and deploy their team-focused product. Ourspace has raised $2.5 million in funding from seven investors, including Hatch partner Seedcamp.
For Ourspace, it was important to choose a cloud provider that would accelerate their growth while giving them the support they need to build and scale their infrastructure. That’s why they chose DigitalOcean’s Hatch program to get started on their company journey.
Choosing DigitalOcean to get their product off the ground was an easy decision for Ourspace. The Hatch program offered access to the products, credits, and support that set them up to succeed. DigitalOcean’s simplicity over its cloud competitors was key to Ourspace’s decision to build on DigitalOcean:
“Choosing to use DigitalOcean rather than a more complex, configurable cloud platform has been one of the best technical decisions that we’ve made. It’s allowed us to focus on building features rather than messing around, configuring permissions, roles, and [other] random stuff that you end up doing in AWS.” – Mark Allen, Ourspace’s Co-founder and CTO
The Hatch program allows startups not only to build their cloud platform, but also to experiment with different DigitalOcean products and plan their future infrastructure. Ourspace is currently using a range of DigitalOcean products to build their platform.
We love listening to what our customers have to say, so we appreciate that Ourspace was eager to share their feedback with us. They constantly provide feedback to our support team to help us improve our products so they can better fit the needs of startups.
As with many startups, Ourspace’s journey from idea to launch wasn’t always easy. DigitalOcean’s intuitive products, cost-effective pricing, and overall simplicity have allowed the company to worry less about their cloud infrastructure and more about their future. With the peace of mind to put their full focus on business strategy, they recently made a strategic pivot, helping companies downsize rather than scale in the current downturn economy. Along the way, the Ourspace team has learned valuable lessons about building a business that other entrepreneurs can learn from.
Our global startup program Hatch is focused on providing founders with the speed, flexibility, and power they need to scale their digital infrastructure. Apply to the Hatch program to grow your startup today.
Currently, “shifting security left” is the hot trend for scaling security activities into engineering practices in modern organizations. Shifting security left attempts to improve security feedback lifecycles by inserting security activities earlier in developer workflows. The goal is to enable developers to catch security problems early and fix them when it is “cheaper” - during development, before the problem reaches the production environment.
However, for many organizations the shift left model is not producing meaningful results. Security teams are attempting to institute a cultural shift within engineering organizations without making requisite cultural changes inside the security program. They are certainly adding a lot of new security work for engineering teams - but these activities are not meaningfully impacting the risk posture of the business.
Shift-left security organizations abdicate ownership of the logistical complexities of integrating security initiatives into development workflows. They throw security signals to developers and make engineering teams accountable for figuring out how to juggle these concerns alongside their product, testing, and infrastructure responsibilities. Nick Liffen recently outlined the challenges this mindset creates for developers in the GitHub Universe 2022 talk, “Shifting left vs developer-first security.” Kelly Shortridge describes this practice as “security obstructionism.” Security practices that simply shift toil work from a security team onto an engineering team are an unfortunate corruption of the shift left mentality. Focusing security initiatives and products around a developer-first approach is essential for security initiatives to match the speed and scale of modern businesses.
A flaw I’ve seen with many well-intended shift left approaches is that they seek culture shifts solely within engineering and do not apply the same mindset shift within security. The security program should focus less on, as Kelly Shortridge describes, “security outputs as a proxy for progress” and more on the work that materially moves the risk posture of the organization. A business risk is any exposure an organization has to factors that will lower its profit or lead it to fail. A security risk is, similarly, a vulnerability that threatens the company’s ability to achieve its objectives, including losing customer trust. Therefore, security practices that lower the risk posture of the organization while helping the business achieve its goals are crucial. A powerful way to help ensure security activities meaningfully improve the business in this capacity is to prioritize the contextual impacts when planning security initiatives.
To understand the distinction between shifting left and developer-first security, let’s look at what a “shift left” approach to secrets management might look like.
Under this approach, developers are responsible for managing the secrets their systems need. Security offers a secrets store; they’ve set up HashiCorp Vault for the org to consume. Changes to secrets policy occur in a GitHub repository in an infrastructure-as-code setup. When developers want a new Vault role or want to change the secret paths accessible to them, they open a pull request against this repo and security approves and merges the changes. To assist with tracking ownership of secrets inside Vault, the security team establishes required patterns for how developers must format roles and policies in their PRs.
So far, the bones of this approach look good. Developers have the ability to make the changes they need, as opposed to filing a ticket on another team and waiting. Security provides general best practices and expectations to streamline consumption in the org. Changes are managed through version control and codified in source code. Security has a bit of a gatekeeping role, but takes action frequently enough that this is more a questionable use of security’s time than a true impediment to engineers. The problems arise when you look at the day-to-day activities of developers interacting with this setup. Here is a made-up conversation derived from many very real threads I’ve seen in the past:
Developer: “Hey I am setting up secrets for a new service for project ImportantToTheBusiness. What am I supposed to do?”
Security: “Here’s the link to our wiki page.”
20 minutes pass
Developer: “Hi, I think I followed those instructions correctly. Here is my PR.”
Security: “No, this is not going to work how you think it will.”
Developer: “Oh, ok. What do I need to change?”
Security: “Here’s our troubleshooting wiki page.”
2 weeks of back and forth ensue
Security: “Here, I have redone your PR into the correct format. You should be good now.”
Developer: “Thank you!”
Security: “Here are the credentials to your secrets role. Don’t lose them. Store them safely when you create your pipelines.”
Developer: “How do I do that?”
Security: “That is your responsibility.”
While pointed, this is a realistic reflection of the interactions I’ve seen in organizations pushing a “shift left” mentality without the crucial contextual components of a developer-first mindset. Security sets up some secrets management “infrastructure-as-a-service” for developers to consume, and expects certain implementation patterns, but does not provide the “platform-as-a-service” resources to help developers implement those patterns.
Again, there are good bones within that conversation! Security has documented instructions engineers can follow. They step in and provide resolutions when developers are struggling. But the overall process creates more problems for developers than it solves. There is high friction to getting a working setup, and engineers often fail due to unclear requirements, requiring security to step in and do the work for them. At the point where the process no longer requires security’s involvement, they are hands-off and unhelpful to the developer.
This “shift left” approach puts the onus of secrets management on the developer, but does not provide the guidance to allow them to be self-service. It solves for the security team’s problems without considering the developers’ issues. It lacks the contextual component of a developer-first security approach.
So what does a developer-first security program look like? The key to developer-first security is contextual injections of security designs into the environment. The security program must integrate into the existing development workflows present in the organization by solving meaningful problems. Instead of creating a list of tasks for developers to follow to comply with a security requirement, security should design a paved path embedded within the normal operating workflows of the organization. Security should make the secure path the easy path for developers to tread. Security should operate as the equivalent of a platform engineering team, providing subject matter expertise and offering foundational secure-by-default practices that engineering teams can consume.
They must solve real business problems. The answer to “why do we have to do this?” should describe a clear path to enabling product objectives. “Because security says so” is the worst response, a legacy artifact of the combative security programs of the past who view their coworkers as the threat from which the business must be protected. Instead, the security team should operate as a collaborative partner, helping the business achieve its initiatives. A developer-first security program requires as large a cultural shift within the security team - perhaps larger - than the one within the engineering department.
So how do we adapt the secrets management program to a developer-first approach? While there should always be documentation, the program should strive to remove the need to use it. The on-ramp should be a paved path about which developers may read further, if they are interested. Turn the common and error-prone instructions into executable scripts that produce a viable solution. Expected structure or patterns within secrets policy should be codified through linters so developers are truly able to self-service their configuration and security can step back from gatekeeping pull requests. At DigitalOcean, our version of a secrets policy repository allows developers to run the equivalent of a ./bin/gen_new_service
script that produces the secrets configuration most commonly employed for systems. Alternative use cases can be configured with flags on the executable, and those capabilities are documented for developers. Validation tests on the secrets PR check whether the desired configuration will break anything or fall outside of acceptable use.
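To make that concrete, here is a hypothetical sketch of what such a generator might do; the repository layout, policy template, and naming convention are illustrative, not DigitalOcean’s actual tooling:

import argparse
import pathlib

# A hypothetical default policy: read-only access to the service's own paths.
POLICY_TEMPLATE = """\
path "secret/data/services/{name}/*" {{
  capabilities = ["read", "list"]
}}
"""

def main() -> None:
    parser = argparse.ArgumentParser(description="Generate a default secrets policy")
    parser.add_argument("service_name")
    args = parser.parse_args()

    policy_dir = pathlib.Path("policies")
    policy_dir.mkdir(exist_ok=True)
    policy_file = policy_dir / f"{args.service_name}.hcl"
    policy_file.write_text(POLICY_TEMPLATE.format(name=args.service_name))
    print(f"Wrote {policy_file}; open a PR so CI can lint and validate it.")

if __name__ == "__main__":
    main()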
Security should not abdicate any piece of the development workflow in which secrets are involved. Their recommended patterns and guidance should fully extend over the software development lifecycle. Within DigitalOcean, an interactive wizard exists to walk developers through the process of establishing a role to attach to their service, configuring multiple fine-grained access paths given common deployment models, and generating the engineering deployment pipeline(s) for the developer with a working integration of their new, specific secrets role built-in. This not only provides a paved path for the secrets management that security cares about, but a solution that makes it easier for developers to deploy their engineering pipelines, inside of which secrets consumption is just a small part.
Notably, the wizard does not ask the developer if they want to do “secret task A” or “secret task B.” The developer is not expected to understand the intricacies involved in making that decision. Instead, the developer is asked which engineering task they want to perform, and the tooling guides them through the relevant security steps for that task. The security team’s time can therefore be spent on adding additional common paths in the organization into the wizard, strengthening the available paved path. This solves engineering concerns alongside security ones. This makes developers’ lives easier. This is contextual, developer-first security. This requires engineering and product delivery capabilities within the security team, which may be a shift for some organizations. It is a necessary shift to transform from “security obstructionists” to developer-first enablers that embed security into the speed and scale of modern development team lifecycles.
One friction point left unanswered from the initial “shift left” example is the requirement for developers to securely store their authentication credentials for the secrets store in their pipelines. At DigitalOcean, we’ve provided documentation outlining patterns that developers can employ to safely handle these across the various CI/CD tools we have used. Currently, developers are migrating to GitHub Actions. The introduction of OIDC support in late 2021 from Action workflows provided us an opportunity to develop a new, contextual paved path to help developers transition pipelines to GitHub Actions while removing the friction of credential management. Read our follow-up article to learn how DigitalOcean provides engineering teams with a developer-first approach to fine-grained RBAC while removing “secret zero” with GitHub OIDC and HashiCorp Vault.
Ari Kalfus is the Manager of Product Security at DigitalOcean. The Product Security team are internal advisors focused on enabling the business to safely innovate and experiment with risks. The team guides secure architecture design and reduces risk in the organization by constructing guardrails and paved paths that empower engineers to make informed security decisions.
DigitalOcean currently has two supported SDK clients: Godo, DigitalOcean’s Go API client, and our new Python client, PyDo. We aim to support more SDKs and are finalizing which languages we’re going to deliver, so if you have a request for a DigitalOcean SDK in a particular language, please let us know at api-engineering@digitalocean.com.
In this blog post, we will dive into our journey of building our new Python Client, and how we used code generation to create the SDK.
Traditionally, building SDKs requires unique boilerplate code to bootstrap each new ecosystem, which may mean coding in several different languages. Recently, a growing number of companies have been using code generation to create their SDKs for a variety of languages. We decided that this approach was the best way for us to deliver and maintain great SDKs.
To support generating the client, we looked into the OpenAPI Specification initiative. The OpenAPI Specification (OAS) defines a standard, language-agnostic interface to RESTful APIs, allowing both humans and computers to discover and understand a service’s capabilities without access to source code or documentation, and without inspecting network traffic. The OAS developer community has matured over the years and has open-sourced powerful OAS toolchains, including tools that automatically generate a Python client, which we discuss in greater detail in this post.
At the start of our journey we laid out requirements we’d like our client to adhere to. The requirements were the following:
Client Generation: The client must be automatically generated from the OpenAPI 3.0 specification.
Optimal End User Experience: PyDo must adhere to Python development best practices and follow “Pythonic” community conventions.
Automated Testing: PyDo’s testing must be automated and part of the CI pipeline.
Automated Documentation: Documentation for PyDo must be automatically generated.
CI/CD Support: An automated process must keep PyDo up to date with the latest DigitalOcean OpenAPI 3.0 specification.
The AutoRest tool generates client libraries for accessing RESTful web services. Input to AutoRest is a spec that describes the REST API using the OpenAPI Specification format. This is the tool we used to generate our new Python SDK. Another tool exists, called openapi-generator, but our OpenAPI 3.0 specification uses advanced features such as inheritance and polymorphism that were not supported in openapi-generator at the time. Using the OpenAPI 3.0 inheritance/polymorphism keywords (allOf, anyOf, oneOf) with openapi-generator caused the generated client to emit “UNKNOWNBASETYPE”, which makes that client endpoint unusable. This is specific to OpenAPI 3.0: in OpenAPI 2.0 (Swagger 2.0) one could get away with using “produces” and “consumes” in POST requests, but OpenAPI 3.0 replaced those with requestBody and the inheritance keywords (allOf, anyOf, oneOf). Since our specification leaned heavily on these unsupported advanced OpenAPI 3.0 features, among other reasons, we decided to move away from openapi-generator. Here is a GitHub thread that describes the issue in more detail.
AutoRest supported our specification’s advanced uses of polymorphism and inheritance, and offered a wealth of features to further enhance the end user experience. Creating a positive user experience when interacting with our Python client was our next objective to tackle.
We found AutoRest had a lot of features we could take advantage of out of the box, including directives. Directives, declared in your configuration file (usually a README file), tweak how the code is generated, ultimately allowing you to further shape the generated client. Below is an example of how we used a directive to have our Python client, PyDo, render clearer error messages:
The directive:
directive:
  - from: openapi-document
    where: '$.components.responses.unauthorized'
    transform: >
      $["x-ms-error-response"] = true;
  - from: openapi-document
    where: '$.components.responses.too_many_requests'
    transform: >
      $["x-ms-error-response"] = true;
  - from: openapi-document
    where: '$.components.responses.server_error'
    transform: >
      $["x-ms-error-response"] = true;
  - from: openapi-document
    where: '$.components.responses.unexpected_error'
    transform: >
      $["x-ms-error-response"] = true;
The code behavior without the directive:
$ python3 examples/poc_droplets_volumes_sshkeys.py
Looking for ssh key named user@odin...
Traceback (most recent call last):
File "/home/user/go/src/github.com/digitalocean/digitalocean-client-python/examples/poc_droplets_volumes_sshkeys.py", line 32, in main
ssh_key = self.find_ssh_key(key_name)
File "/home/user/go/src/github.com/digitalocean/digitalocean-client-python/examples/poc_droplets_volumes_sshkeys.py", line 133, in find_ssh_key
for k in resp["ssh_keys"]:
KeyError: 'ssh_keys'
The code behavior with the directive:
$ python3 examples/poc_droplets_volumes_sshkeys.py
Looking for ssh key named halkeye@odin...
Traceback (most recent call last):
File "/home/user/go/src/github.com/digitalocean/digitalocean-client-python/examples/poc_droplets_volumes_sshkeys.py", line 138, in find_ssh_key
self.throw(
File "/home/user/go/src/github.com/digitalocean/digitalocean-client-python/examples/poc_droplets_volumes_sshkeys.py", line 26, in throw
raise DigitalOceanError(message) from None
__main__.DigitalOceanError: Error: 401 Unauthorized: Unable to authenticate you
With the directive, the client rendered a much clearer error message. AutoRest offers a wealth of directives you can take advantage of to further enhance your client.
Along with directives, AutoRest also offers a customization feature to further enhance the client, implemented by updating the _patch.py files. We customized the _patch.py file to simplify authorizing against the client. This is what the authorization step looked like before the added customization:
import os

from azure.core.credentials import AccessToken
from pydo import DigitalOceanClient

# Wrap the raw API token in an azure-core AccessToken before handing it
# to the generated client.
api_token = os.environ.get("DO_TOKEN")
token_creds = AccessToken(api_token, 0)
client = DigitalOceanClient(credential=token_creds)
The user had to import the azure.core.credentials package and wrap their token just to be properly authenticated to access DO resources through the client. We were able to simplify this process for the user by adding customizations to _patch.py that abstract those unnecessary steps away:
# (imports such as AccessToken and the generated client base are elided
# in this excerpt from _patch.py)
class TokenCredentials:
    """Credential object used for token authentication"""

    def __init__(self, token: str):
        self._token = token
        self._expires_on = 0

    def get_token(self, *args, **kwargs) -> AccessToken:
        return AccessToken(self._token, expires_on=self._expires_on)


class Client(GeneratedClient):  # type: ignore
    """The official DigitalOcean Python client

    :param token: A valid API token.
    :type token: str
    :keyword endpoint: Service URL. Default value is "https://api.digitalocean.com".
    :paramtype endpoint: str
    """

    def __init__(self, token: str, *, timeout: int = 120, **kwargs):
        logger = kwargs.get("logger")
        if logger is not None and kwargs.get("http_logging_policy") == "":
            kwargs["http_logging_policy"] = CustomHttpLoggingPolicy(logger=logger)
        sdk_moniker = f"pydo/{_version.VERSION}"
        super().__init__(
            TokenCredentials(token), timeout=timeout, sdk_moniker=sdk_moniker, **kwargs
        )


__all__ = ["Client"]
With these customizations, the authentication process for the client is now condensed to look like this:
import os

from pydo import Client

client = Client(token=os.getenv("DIGITALOCEAN_TOKEN"))
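With the client from the snippet above, day-to-day usage stays short and, we hope, “Pythonic”; for example, a quick sketch of listing Droplets, which returns a plain dict mirroring the API’s JSON:

resp = client.droplets.list()
for droplet in resp.get("droplets", []):
    print(droplet["id"], droplet["name"])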
The directive and patch customization features that AutoRest offers were a gamechanger for our client generation journey. They gave us more control over the generated client and helped support the optimal end user experience.
Adhering to continuous integration practices, every commit to the repository (in most cases, an automated commit from regenerating the client with AutoRest) is built and tested to make sure it doesn’t introduce errors. We use pytest to define and run the tests, alongside code linters that check style and formatting. Our CI workflow includes two types of test suites: mocked tests and integration tests.
Our mocked tests validate that the generated client has all the expected classes and methods for the respective API resources and operations, exercising individual operations against mocked responses. They are quick and easy to run since they do not require a real token and do not access any real resources.
Integration tests simulate specific scenarios a customer might use the client for to interact with the API. These tests require a valid API token and DO create real resources on the respective DigitalOcean account.
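As a flavor of what a mocked test can look like, here is a minimal pytest sketch. The real suite’s fixtures and assertions differ, and the operation-group names checked here are assumptions based on the public API:

import pytest

from pydo import Client

@pytest.fixture
def mock_client() -> Client:
    # No real API call is made in a mocked test, so a fake token suffices.
    return Client(token="fake-token")

def test_client_has_expected_operation_groups(mock_client):
    # The generated client should expose an operation group per API resource.
    for group in ("droplets", "ssh_keys", "volumes"):
        assert hasattr(mock_client, group)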
The advantage of generating a client from a single source of truth, the OpenAPI 3.0 specification, is that documentation generated from that same source is always up to date with the generated client. This is exactly what we did: we used Sphinx to generate the documentation and host it on Read the Docs.
Generating code is one thing, but automating it is just as important. To do that, we use GitHub Actions. Every time anything changes in our openapi repository, the action kicks in, regenerating the library and pushing it to the pydo repository:
The workflow in the openapi repository that triggers PyDo’s Client Generation workflow:
name: Trigger Python Client Generation

on:
  workflow_run:
    workflows: [Spec Main]
    types:
      - completed

jobs:
  build:
    name: Trigger digitalocean-client-python Workflow
    runs-on: ubuntu-latest
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    steps:
      - name: Check out code
        uses: actions/checkout@v2
      - name: Set outputs
        id: vars
        run: echo "::set-output name=sha_short::$(git rev-parse --short HEAD)"
      - name: Check outputs
        run: echo ${{ steps.vars.outputs.sha_short }}
      - name: trigger-workflow
        run: gh workflow run --repo digitalocean/digitalocean-client-python python-client-gen.yml --ref main -f openapi_short_sha=${{ steps.vars.outputs.sha_short }}
        env:
          GITHUB_TOKEN: ${{ secrets.WORKFLOW_TRIGGER_TOKEN }}
PyDo’s workflow that generates the client and documentation, and creates a PR:
name: Python Client Generation

on:
  workflow_dispatch:
    inputs:
      openapi_short_sha:
        description: 'The short commit sha that triggered the workflow'
        required: true
        type: string

jobs:
  Generate-Python-Client:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Removes all generated code
        run: make clean
      - name: Download spec file and Update DO_OPENAPI_COMMIT_SHA.txt
        run: |
          curl --fail https://api-engineering.nyc3.cdn.digitaloceanspaces.com/spec-ci/DigitalOcean-public-${{ github.event.inputs.openapi_short_sha }}.v2.yaml -o DigitalOcean-public.v2.yaml
          echo ${{ github.event.inputs.openapi_short_sha }} > DO_OPENAPI_COMMIT_SHA.txt
        env:
          GH_TOKEN: ${{ secrets.WORKFLOW_TOKEN }}
      - uses: actions/upload-artifact@v2
        with:
          name: DigitalOcean-public.v2
          path: ./DigitalOcean-public.v2.yaml
      - name: Checkout new Branch
        run: git checkout -b openapi-${{ github.event.inputs.openapi_short_sha }}/clientgen
        env:
          GH_TOKEN: ${{ secrets.WORKFLOW_TOKEN }}
      - name: Install Poetry
        uses: snok/install-poetry@v1.3.1
        with:
          version: 1.1.13
          virtualenvs-path: .venv
          virtualenvs-create: true
          virtualenvs-in-project: true
          installer-parallel: false
      - name: Generate Python client
        run: make generate
      - name: Generate Python client documentation
        run: make generate-docs
      - name: Add and commit changes
        run: |
          git config --global user.email "api-engineering@digitalocean.com"
          git config --global user.name "API Engineering"
          git add .
          git commit -m "[bot] Updated client based on openapi/${{ github.event.inputs.openapi_short_sha }}"
          git push --set-upstream origin ${{ github.event.inputs.openapi_short_sha }}
        env:
          GH_TOKEN: ${{ secrets.WORKFLOW_TOKEN }}
      - name: Create Pull Request
        run: gh pr create --title "[bot] Re-Generate w/ digitalocean/openapi ${{ github.event.inputs.openapi_short_sha }}" --body "Regenerate python client with the commit,${{ github.event.inputs.openapi_short_sha }}, pushed to digitalocean/openapi. Owners must review to confirm if integration/mocked tests need to be added to the client to reflect the changes." --head "openapi_trigger_${{ github.event.inputs.openapi_short_sha }}" -r owners
        env:
          GH_TOKEN: ${{ secrets.WORKFLOW_TOKEN }}
Here’s the general flow: a change merges into the openapi repository, the trigger workflow captures the spec’s short commit SHA and dispatches the python-client-gen workflow, which regenerates the client and documentation and opens a pull request for owners to review.
In general, code generation works quite smoothly, but there were some rough edges here and there. However, it’s nice to have a consistent, repeatable way of generating an API client with each change to the API contract. Keeping it all in sync is a way easier task now. Making it all depend on a single source of truth, which is also an industry standard, makes support far more doable. We really like having a simple toolchain to generate updated versions of the client libraries and are excited for users to try out the new Python Client for themselves!
In April, DigitalOcean launched DO Impact at the NYSE. The event was both a celebration and a commitment to developing our strategy and programming as a Pledge 1% member. We’re just getting started, but feel excited about the meaningful headway we’ve made since then.
Despite an economically challenging year, DigitalOcean remained committed to its philanthropy and Pledge 1% commitment. Between cash grants, ERG-driven donations, employee donation credits, and matching employee contributions, DigitalOcean gave over $1.1M to more than 900 organizations in 2022.
Finally, this year, we surpassed 2,200 nonprofit organizations using DigitalOcean with the support of infrastructure credits via Hollie’s Hub for Good (HH4G).
In 2022, we hosted two extremely successful employee giving campaigns and launched our first skills-based volunteering pilots. These initiatives reinforced how fortunate we are to be the place of work for such generous and thoughtful employees. Below are a few highlights:
We engaged with amazing partners like UNICEF and Fast Forward, helping catalyze and extend our reach to more tech-driven nonprofits worldwide. We held a workshop in Bolivia with Hollie’s Hub for Good recipient Unicodemy, where we helped young girls in La Paz learn how to use cloud technology. We also held our first panel session at our customer conference highlighting two incredible nonprofit partners, Code Berlin and Bonterra.
Hollie’s Hub for Good was recognized at the 2022 DevRel Awards, winning Best DevRel Initiative and Greatest Contribution to Society, and DigitalOcean was honored as the RaisedBy.Us Impact Partner of the year.
Finally, we joined our Pledge 1% community at Nasdaq to celebrate Giving Tuesday, and DO Impact was highlighted on the Nasdaq tower and the Pledge 1% blog.
Reflecting on 2022 has us feeling equally proud of our early accomplishments and motivated to take continued action in 2023. As we move full steam ahead into this new year, we’re excited to focus even more deeply on several of these areas.
There is so much more to come. Stay tuned for more impact updates in the coming months!
Monitoring of web assets is an important part of business operations, as it helps businesses mitigate risks and reduce downtime. So what risks can be addressed by monitoring your web assets and Droplets/virtual machines, and what opportunities are there to counter them? In this guest post from DigitalOcean partner WebPros, makers of 360 Monitoring, we explore this topic.
First of all, downtime is expensive. For example, if you manage or own a website with $1M annual revenue, a basic calculation of revenue per hour ($1M ÷ 8,760 hours ≈ $114) shows you could lose over $100 of turnover every hour your website is down.
Of course, you also need to count lost customers, lost credibility, and the cost of the team working to get the website running again. With monitoring, you can prevent your company from facing these issues.
Focus on scaling your business instead of putting out fires. If you maintain projects for your customers, you can also increase your profit margin by upselling premium uptime reports.
Another essential aspect of customer relations is learning about a problem before the customer notices it. There’s nothing worse than getting a call from angry customers complaining about a website outage that you overlooked. Monitoring is like an insurance policy for the continuity of your business and your relationship with your customers.
Retain your customers by improving their satisfaction level. The happier they are, the bigger chance they will recommend you in their networks.
To ensure that your project works well, you need to be constantly aware of what’s going on at the website or app level and the Droplet/virtual machine level. For example, an outdated SSL certificate can flag your website or webshop as not secure. Also keep in mind that certificate validity periods are getting shorter and shorter, so you will be renewing certificates more often.
If you use Cloudflare, another thing to keep an eye on is your project’s Cloudflare status. Finally, the loading time from different locations is also an important metric, as it can affect both the user experience and the page ranking.
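As an example of automating the certificate check, a small standard-library script can report how many days remain before a certificate expires; the host and the two-week warning window below are illustrative:

import socket
import ssl
from datetime import datetime, timezone

def days_until_cert_expiry(host: str, port: int = 443) -> int:
    # Open a TLS connection and read the certificate's notAfter field.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

if days_until_cert_expiry("example.com") < 14:
    print("Certificate expires in under two weeks - renew it!")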
Keep your good image. Be sure your projects are always secure, and that you’re perceived as a top-quality service provider.
Whatever you host on your virtual machine, be it a website, app, mailbox, or database, performance is highly dependent on the health and status of the instance.
In addition to basic information like CPU load, memory usage, or disk space, you also need to monitor other metrics closely, for example, whether Apache or Nginx is working as expected, or whether MySQL is overloaded. It’s common for a website’s lack of performance to be caused by a query that runs every time the website is visited, overloading MySQL.
If you only monitor availability, all you know is that the website is slow. With a comprehensive tool that monitors both the site and the virtual machine, you’ll also know what’s causing it, especially if the service you’re using notifies you before the problem occurs. For example, instead of warning you that the disk has run out of space, it tells you in advance that it is 80% full, so you can consider upgrading or cleaning it up.
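As a tiny illustration of that early-warning idea, even a standard-library check can alert at a threshold well before the disk is full; the path and the 80% threshold are illustrative:

import shutil

def disk_usage_percent(path: str = "/") -> float:
    # Return how full the filesystem containing `path` is, as a percentage.
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

pct = disk_usage_percent("/")
if pct >= 80:
    print(f"WARNING: disk is {pct:.0f}% full - consider cleaning up or upgrading")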
Uptime and loading speed are not the only factors that affect the performance of your website. With a good monitoring tool, you can also check the quality of your website. Usually, it scans your website and looks for problems such as large files, JavaScript errors, broken links, missing images, and resources. You will receive a notification immediately in case of detecting any of these problems.
Monitoring is not just about websites or virtual machines, but also about an app, Discord servers, or anything else you can imagine. That’s why choosing a tool that offers you many integrations is essential. It’s even better if it allows you to write custom plugins, so you can be sure to integrate them with anything you want.
This way, you’ll be able to precisely monitor the metrics that are most important to you and display them consistently and comprehensively.
If your IP address is listed on Realtime Blackhole Lists (RBL), your emails will end up in SPAM or will not arrive at all. That’s why it’s important that you learn about such an event quickly and react accordingly.
Without automated blocklist monitoring, you are forced to manually check your IPs against every RBL on the web and can never be sure whether they are listed or not.
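A bare-bones automated check is straightforward: reverse the IP’s octets and query the blocklist zone over DNS. The sketch below uses zen.spamhaus.org as one common example (subject to that service’s usage policies):

import socket

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    # DNSBLs answer queries for <reversed-ip>.<zone>; an A record means "listed".
    reversed_ip = ".".join(reversed(ip.split(".")))
    try:
        socket.gethostbyname(f"{reversed_ip}.{zone}")
        return True
    except socket.gaierror:
        return False  # NXDOMAIN: the IP is not on this blocklist

print(is_listed("127.0.0.2"))  # 127.0.0.2 is the conventional always-listed test address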
Sometimes, despite your best efforts, a problem occurs. If the monitoring tool you are using allows you to monitor log files, you can analyze them to determine the cause of the problem.
You want to be quickly informed about any problem in your project or Droplet, so it’s essential that the tool you choose integrates with the most popular messaging services, such as Slack, Discord, Telegram, and good old text messaging.
To learn more about DigitalOcean partner Plesk, read our case study.
To better enable startups and SMBs to protect their cloud data across files, apps, and databases, we’re excited to announce DigitalOcean’s acquisition of SnapShooter, a backup and recovery solutions provider. SnapShooter makes cloud backups simple, fast and flexible, offering one system to consolidate all backups so you can be confident in knowing your cloud data is protected.
Here’s what SnapShooter customer Siebird, a web design and digital marketing agency, has to say about their experience:
“Siebird has trusted SnapShooter to back up and protect our client websites for over 2.5 years. We host all our client websites on DigitalOcean. From the beginning, SnapShooter stood out from the rest with how effortless it has made configuring backups. We always dreaded setting up new backup jobs–now we don’t. We have peace of mind knowing our client websites are protected and can be restored with a click of a button.”
SnapShooter enables users to back up their files, apps, and databases from DigitalOcean and other cloud providers. It also supports popular apps such as Laravel and WordPress, and databases including MongoDB, MySQL, and PostgreSQL. Server backups can be taken daily, weekly, or monthly, while file backups can run as often as every 5 minutes.
SnapShooter consolidates backups into a single job that can be easily configured by the user to meet their specific needs, eliminating the need for manual backups. For example, a user running WordPress can back up their application files and also generate a MySQL backup at once.
Businesses that are innovating quickly also benefit from easily being able to restore previous versions of individual files or full applications–just select the files you want to restore, confirm your choices, and SnapShooter will do the rest. The ability to choose files to back up, customize settings and flexible retention policies will help startups and SMBs build efficient and cost-effective business continuity strategies for their cloud data.
Back up files, servers, apps, and databases from multiple providers
Quickly restore a previous version of a file or multiple files
Granular backups with the ability to choose what files to back up
Choose your backup frequency, and see when backups were run
Customize your backup settings and retention policies
Get real-time logs and monitoring
Stay secure and compliant with 2-factor authentication and secure encryption
Use your own storage, including DigitalOcean Spaces, AWS S3, Filebase, and other systems, or use SnapShooter’s S3-compatible storage
Get email and Slack alerts of backups
To start backing up your applications, files, and databases today, you can get started on the SnapShooter website. DigitalOcean users can benefit from SnapShooter’s 1-Click Application, which makes it simple to add SnapShooter backups to DigitalOcean products including Droplets virtual machines and Volumes block storage. Sign up for a yearly plan, and you’ll get a discount on SnapShooter’s pricing!
Here’s what customer Datacake, a low-code IOT platform, had to say about using SnapShooter with DigitalOcean:
“We’ve been using SnapShooter for our database backups since 2021, after switching from a self-built solution. We use it to back up both our DigitalOcean managed and self-managed Postgres databases to DigitalOcean Spaces. Since using SnapShooter, we’ve had a lot of peace-of-mind knowing that we have reliable and historic backups that just work. The reporting feature is also really helpful for us. Overall, we love SnapShooter because it provides us with the features and reliability we need to ensure our business data is always safe and secure.”
We’re excited for SnapShooter, a Hatch alumnus, to join DigitalOcean and look forward to providing you with future updates!
Data breaches in the services we rely on can be scary. We know third party compromises (e.g. password manager compromise, CI/CD compromises, third-party API integration compromise, public bucket disclosure, etc.) happen regularly, and you may be concerned about the impact to your DigitalOcean account. Check out these 5 ways you can improve the security of your DigitalOcean account, in order of priority.
The most important step to improve the security of your DigitalOcean account is to enable multi-factor authentication. Multi-factor authentication prevents bad actors from logging into your account even if they have obtained or changed your password, and you’ll still be able to initiate the password reset process yourself.
While DigitalOcean supports time-based one-time passwords (TOTP), SMS, and backup codes as second factors, we recommend TOTP codes, as they are the most secure of the three. This article will show you how to enable multi-factor authentication for your account.
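For the curious, a TOTP code is just an HMAC computed over a time counter (RFC 6238). The compact standard-library illustration below shows the mechanics; real systems should rely on a vetted authenticator app or library rather than hand-rolled code:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    # Decode the shared secret and derive the 30-second time counter.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226, then keep the last `digits` digits.
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example base32 secret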
You can also take advantage of our OAuth-based login partnership with Google and GitHub to delegate authentication using those providers.
Note: There is no multi-factor authentication when using these partners, since they will manage authentication. We strongly recommend that you enable two-factor authentication on the Google or GitHub account you use to log in to DigitalOcean.
Prevent immediate access to your account by resetting your password. If you still have access to your account, reset your password in your account settings. If not, use our Forgot Password mechanism to reset your password. Should you have additional issues accessing your account, contact support to regain access to your account.
DigitalOcean personal access tokens should be treated in the same manner as passwords. Regenerate any DigitalOcean personal access tokens that you believe may have been leaked, and ensure they have the minimum permissions needed.
Bad actors won’t always make themselves known right away. Check your security history for any suspicious activity. Pay special attention to any creation of keys like SSH keys, API Tokens, and Spaces API Keys.
Remember to also check your account activity history to see if there has been any suspicious login activity. Pay close attention to IP addresses to see if they’re different from the IP addresses you normally log in from.
Similar to creating unauthorized API tokens, a bad actor may seek to add themselves as a user to Teams you are an Owner of. Review your Teams and check that only the right people are on your account and that they have the right roles. You can review the members of your Team here. Be sure to check all your teams if you own more than one. Learn more about team membership management here.
Direct access to your account will not expose your existing API tokens or spaces keys to bad actors, as the secrets are only shown to the user on creation. DigitalOcean’s tokens have new management features that help protect your account. If you have any older API tokens, you can generate a new key, update your integrations to use the new key, then delete your older tokens to take advantage of these new features like expiration and secret scanning in GitHub public repos.
You can also regenerate your Spaces access key secrets as needed.
These steps are by no means exhaustive, but can help provide increased security for your account.
Happy Safe Coding and Happy New Year.
Swimmingly,
The DigitalOcean Team
We increased our global presence with the launch of our Sydney data center in November, and deepened our social impact work with the formal launch of DO Impact in April 2022. We launched new products and features targeted at the needs of small businesses, including DigitalOcean Functions, a high availability control plane for DigitalOcean Kubernetes, and DigitalOcean Support Plans. We also introduced the Partner Pod, our new partner program which gives small businesses even more ways to work with us.
DigitalOcean’s community continued to show up for each other in 2022. Employees donated both time and money to causes important to them, and our employee resource groups hosted events throughout the year to connect with each other and the community.
Here are some more highlights from 2022:
We made many important updates to products in 2022 to better serve the needs of growing businesses and our global customer base. These included the launch of DigitalOcean Functions, providing new ways for customers to get their applications into the cloud. We also made key updates to DigitalOcean Kubernetes, including launching a high availability control plane and adding an egress gateway for DigitalOcean Kubernetes, giving our customers even more confidence in leveraging our Kubernetes offering to deploy their code in the cloud.
Small businesses often need dedicated support, so we added two new support levels to DigitalOcean’s Support Plans, which enable users to get faster response times and dedicated support from technical managers. We’ve also introduced enhancements to Spaces Object Storage and Volumes Block Storage that better address the increasing needs of our customers’ applications.
In databases, we launched a new Dedicated CPU Managed MongoDB that boosts the performance of MongoDB and enables users to migrate databases from any source to DigitalOcean Managed Databases with minimal downtime. We also made enhancements to the DigitalOcean Marketplace with SaaS Add-Ons, which enable builders to quickly add new capabilities to their Droplets, Kubernetes clusters, apps, or business operations. DigitalOcean Uptime is a new tool that enables businesses to be alerted when assets are down, so they can provide their own customers with an excellent experience.
Following our launch of DO Impact in 2022, we also made donations to organizations including TSI, which works to make STEM education available in all communities, The Bronx Community Foundation’s Digital Equity Initiative, The Edhi Foundation, which has been providing on the ground relief from the floods in Pakistan, and several others doing good in their communities.
To further our goal of providing quality educational content for developers and builders, we acquired learning-focused websites CSS-Tricks and JournalDev, and continue to make their tutorials free and available to anyone looking to learn and grow.
We hosted the 9th annual Hacktoberfest, which saw over 140,000 contributors come together for the open source community, and DigitalOcean’s virtual conference, deploy, brought together builders from around the globe to discuss topics such as reducing burnout while building your business, unlocking synthetic datasets, and scaling in the cloud during uncertain times.
As we look to 2023, I feel the same optimism that SMBs reported to us in our recent Currents report, where 63% stated that they feel positive about next year. It is an honor to serve our customers as they work to achieve their dreams, and in 2023 we look forward to empowering more of you to grow through product enhancements, support, educational content, and more. Thank you for allowing DigitalOcean to be the place where you come to test your ideas, build your businesses and realize your dreams!
Happy New Year!
What does a Customer Success Manager (CSM) do?
A CSM is a strategic partner to the customer and an extension of the customer team within DO. The key to this important role is developing a clear understanding of the customer’s strategic goals, how DigitalOcean fits in their strategy, and their overall business model. The CSM organization works with a number of cross-functional teams, advocating for customers and helping them to achieve their desired outcomes. They also share customer feedback on products and missing features. In short, a customer success manager partners with the customer and our internal teams to ensure the customer’s success with DigitalOcean.
What does a typical day look like?
I start my day with a standup with the team. During the standup we all talk about the things we hope to achieve for the week/day (on Monday we discuss the week). We highlight any issues we might need help with and give updates on any of the projects we are working on. After the standup I action my plan for the day. I prepare customer meetings and presentations for the calls I have for the day and make sure to prep all internal persons who will be joining the calls.
After each conversation I make sure to update the notes and log them in our CRM, Gainsight. In the afternoon we have our global team sync where we discuss the full team’s updates.
How do you take breaks throughout the day?
I have a 2-year-old dachshund that is my full-time work buddy. He definitely makes sure that I take small breaks in between calls. At lunchtime we have lunch together and then we go to the park for him to have a bit of exercise with his friends.
What do you enjoy about being a CSM?
We have really awesome customers. Advocating for them and helping them to succeed on DigitalOcean is really fulfilling. Especially when you help them through a difficult situation.
What do you like to do outside of work?
I volunteer, I help low-income people manage their finances, I help them with their administration and I help them to make sure they have all the assistance the government can offer.
How do you solve problems that arise?
Empathy, understanding, prioritizing, and confirming. Regardless of whether something is a customer issue or an internal problem, my first reaction is to empathize with people and make sure they are heard. I then ask questions so that I can understand what happened, what needs to be done and how urgent it is. Based on that I prioritize and make sure what needs to happen gets done and I confirm once the problem has been resolved.
What keeps the job fun?
The team. Even though our team is geographically spread all over the world we are collaborative and supportive to each other.
Any advice for someone who may want to be a CSM?
Prioritize understanding things. Understanding your products and their value, understanding what your customers need, and understanding how to get things done for your customers and the company. If you care to understand the things your customers need and how to get it done, your customers and the company will be successful. And you will be successful as a CSM.
What can you tell us about your time at DO?
Time flies when you’re having fun, as they say. It feels like I blinked and 2 years have passed. DO is the first company where I feel that our values are not just words on our website. People actually live them.
Anything else we should know?
We have some great customers here at DigitalOcean!
Mastodon is a free, open-source, self-hosted social network. Mastodon was launched in 2016, but has had a massive surge in popularity in the last few months as users explore alternatives to Twitter and traditional social media sites. What makes Mastodon different from a traditional social network like Twitter or Reddit is that it is decentralized and distributed, but also federated, meaning no single organization or user owns it or controls it. Users of the platform can join one server, but also interact with users of other Mastodon instances or servers. Anybody can set up and self-host a Mastodon server. DigitalOcean even has a 1-click Marketplace Mastodon image to get you up and running with a server in no time.
Image of the Fediverse from Axbom
Mastodon is part of the Fediverse, a series of interconnected applications built on top of ActivityPub, a decentralized social networking protocol. From a technology perspective, Mastodon is built as a Ruby on Rails backend, JavaScript frontend, and PostgreSQL as the primary database. Additionally, the Sidekiq Job Queue manages many background jobs such as upload processing and federation. The Mastodon documentation goes into much greater detail on how all these processes come together and how to set them up yourself.
In the past few months, Mastodon server communities have grown at an accelerated rate. Eugen Rochko, founder of Mastodon, shared a post on November 6th, showing the growth of Mastodon communities and users:
“Hey, so, we’ve hit 1,028,362 monthly active users across the network today. 1,124 new Mastodon servers since Oct 27, and 489,003 new users. That’s pretty cool.”
At DigitalOcean, we have also seen a six-fold increase in active Mastodon instances from October 2022 to December 2022. This sudden explosion in the growth of users adopting the platform has presented challenges to Mastodon server operators. One such example is the Hachyderm.io Mastodon community. Hachyderm is a Mastodon server that aims to build a curated network of respectful professionals in the tech industry around the globe. It is a community composed of developers, hackers, industry professionals, and enthusiasts.
In a recent blog post, Kris Nóva, founder of Hachyderm, detailed the growth of her Mastodon instance from 720 users on November 3rd to over 25,000 users as of November 25th, and it continues to grow. The server was getting roughly one new user every 90 seconds throughout November. This influx of users, who joined to share their thoughts, insights, and memes, presented a number of scalability challenges and eventually led to intermittent downtime as the existing infrastructure could not handle all the new activity on the server.
One possible cause of the slowdown that Nóva and her team identified was their use of ZFS to house both the media storage and the Postgres database that held references to all of this data on a local server. Nóva explained it as “…the more we could correlate slow disks to slow database responses, and slow media storage. Eventually our compute servers and web servers would max out our connection pool against the database and timeout. Eventually our web servers would overload the media server and timeout.”
Getting media off of the local disk was a top priority to ensure stability of the platform. Nóva connected with DigitalOcean’s Chief Product Officer Gabe Monroy, and after explaining the challenges and potential solutions, chose DigitalOcean Spaces Object Storage, a highly scalable cloud storage service, to store Hachyderm’s media. There was a major concern, though: Hachyderm was already running in production with close to 1.4TB of data that needed to be migrated, and taking the server down for a prolonged period of time was not an option. The solution?
The brilliant solution came from a Hachyderm infrastructure volunteer, Malte Janduda: use NGINX try_files. Kris wonderfully explained the approach Malte had suggested:
We begin writing data that is cached in our edge nodes directly to the DigitalOcean Spaces object store instead of the local filesystem.
As users access data, we can ensure that it will be taken off of our local server and delivered to the user.
We can then leverage Mastodon’s S3 feature to write the “hot” data directly back to DigitalOcean Spaces using a reverse Nginx proxy.
This meant that when a user requested a particular asset, it would only be served from the local filesystem once, because as soon as it hit the edge node it would be written to DigitalOcean Spaces. Additionally, the more users accessed Hachyderm, the faster the data would be replicated to DigitalOcean Spaces. This had the side effect of migrating the most-accessed data first.
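To make the pattern concrete, here is a minimal sketch of the try_files fallback described above. This is not Hachyderm’s actual configuration; the hostname, paths, and bucket endpoint are hypothetical:

```nginx
server {
    listen 443 ssl;
    server_name media.example.social;   # hypothetical media host

    root /var/lib/mastodon/public;      # hypothetical local media root

    location /system/ {
        # Serve the asset from the local filesystem if it still exists there...
        try_files $uri @spaces;
    }

    location @spaces {
        # ...otherwise reverse-proxy the request to the Spaces bucket.
        proxy_pass https://example-bucket.nyc3.digitaloceanspaces.com;
        proxy_set_header Host example-bucket.nyc3.digitaloceanspaces.com;
    }
}
```

Combined with Mastodon’s S3 support writing new uploads straight to Spaces, the local disk gradually empties itself of “hot” data as users request it.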
The remaining data would be migrated from the local filesystem to DigitalOcean Spaces using Rclone. This would be a slow running process in the background that would take a couple of days to migrate all of the data over. This was an excellent real-world example of how a distributed system enabled better scalability. The more users accessed Hachyderm, the faster the migration would complete.
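The bulk move itself can be a single long-running Rclone command. A sketch, assuming an rclone remote named “spaces” configured with the bucket’s S3-compatible credentials and an illustrative local path:

```bash
# Copy remaining media from the local filesystem to the Spaces bucket.
# "spaces" is an rclone remote pointing at the Spaces S3-compatible endpoint.
rclone copy /var/lib/mastodon/public/system spaces:example-bucket/system \
  --transfers 16 --checkers 32 --progress
```

Because rclone copy skips files that already exist at the destination, anything the try_files path has already replicated is not transferred twice.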
Currently, Nóva and her team are still in the process of migrating all of the media over to DigitalOcean Spaces and are actively working on migrating the rest of the infrastructure to the cloud. Follow the journey on their community hub.
DigitalOcean provides a simple way to host a Mastodon server–the DigitalOcean Marketplace offers a 1-click image to deploy your own Mastodon server, and we recently updated this image to support the latest and greatest version of Mastodon. Additionally, you can set up DigitalOcean Spaces with this image from the get-go and get all the benefits of offloading media assets to a dedicated object storage solution.
We are also working on documentation on how you can leverage DigitalOcean Managed PostgreSQL as the database for your Mastodon server as well as a scalability guide to hosting your Mastodon community as it grows into the thousands, tens of thousands, and hopefully millions of users. Stay tuned for much more to come!
deploy kicked off with a keynote from DigitalOcean’s executive leaders as they looked back on 2022 and discussed the challenges global founders, developers, and builders face. We also heard from the DO Impact team as they announced a new round of grants in the opening keynote and interviewed nonprofits and social enterprises throughout the week.
If you weren’t able to attend, we’ve got you covered. You can find recordings of all the sessions on the deploy website. Check out some of the highlights:
In her session, Reducing Burnout while Building Your Business, Elissa O’Dell, founder of Huxly.co, showed founders how to develop a personalized burnout avoidance strategy so they can build a sustainable business while fostering productivity and creativity.
Later, Mason Egger, Lead Developer Advocate at Gretel.ai, taught us all about synthetic data in his session I Can’t Believe It’s Not Real Data! Unlocking Synthetic Datasets for SMBs. He highlighted how access to reliable data is one of the biggest bottlenecks hindering development across multiple industries and how synthetic data can help developers get accurate, relevant data. He showed real-world situations where synthetic data removes bias, augments data sets, and makes once-private data easily shareable while still protecting the privacy of the initial data set.
Justin Mitchel, Founder, Team CFE, explored different questions and considerations startups must make when deciding how to construct their applications in his session A Startup’s Guide to Application Architecture.
Finally, we heard from a panel of experts tackling Scaling in the Cloud During Uncertain Times. Jen Petraglia, Nonprofit Market Development, DigitalOcean, Tracy Kronzak, Director of Partnerships, Bonterra, and Kavita Kapoor, Executive Director, Code Berlin, discussed what we can do when building up an organization in uncertain times. They explain how to identify potential gaps in IT infrastructure and organization strategy and map out the best course of action for your business.
Battlesnake is a multiplayer programming game where your code is the controller. In their session, Powering Battlesnake Players on DigitalOcean, the team recapped their recent Fall tournament and how DigitalOcean enabled global players to prepare their snakes for battle. And in his session, Snake in the Grass! How Battlesnake’s Game Engine Scales on DigitalOcean’s App Platform, Rob O’Dwyer, Senior Software Developer at Battlesnake, discussed how DigitalOcean’s App Platform made their transition to global game engine regions easy, scalable, and affordable, allowing players everywhere to compete at a high level.
Tech in Schools Initiative (TSI) is a non-profit focusing on tech education. In their session, Making the Cloud a Less Scary Place with DigitalOcean, the team shared insights on how to recover quickly, create appropriate protocols, and continue expanding rapidly.
We hope to see you at the next conference! Visit the deploy website to watch content on demand. To stay in the loop with the latest event information, visit DigitalOcean’s community site.
Block storage is now up to 50% faster and object storage up to 100% faster with this major performance boost to DigitalOcean storage. DigitalOcean Volumes input/output operations per second (IOPS) and throughput have increased by up to ~50%, supporting rapid block storage operations. Spaces Requests Per Second (RPS) has doubled, expanding up to 1,500 requests per IP address per second.
New SYD1 Sydney, Australia data center expands our global footprint
We launched the Sydney, Australia (SYD1) data center region and you can now deploy Droplets, managed databases, and other products in our expanding global reach.
Fedora 37 is now available
The Fedora 37 (fedora-37-x64) base image is now available in the control panel and via the API. This quickstart helps you create snapshots of Droplets and Volumes so you can save and access images on-demand.
The 3rd Gen of DigitalOcean Premium Droplets with AMD EPYC™ processors
The third generation AMD EPYC™ processors (code name Milan) are available in our Premium AMD Droplet plans. The new Droplets are powered by 3rd Gen AMD EPYC™ processors, superfast PCIe Gen 4 storage, and high-speed 100 GbE networking for top performance.
While customers can’t select their specific processors, many Premium AMD Droplets created in the SFO3, SGP1, FRA1, and NYC1 data centers already run on 3rd gen AMD EPYC™ processors as we continue to deploy them.
Migrate any DOKS cluster to the new control plane and add HA
Today, we are excited to announce that you can migrate any DOKS cluster to the new control plane and enable High Availability however you prefer. If you created a cluster prior to June 2022, you might be on an older control plane; upgrade now to take advantage of the new features.
Free, Standard, and Premium support lifts up startups
At DigitalOcean we are committed to serving every customer. That’s why we offer a choice between Free, Standard, and Premium support plans. The new Standard and Premium plans ensure fast responses by dedicated experts through additional channels like video calls, and customers can get an architecture review, custom onboarding, help with products, and other personalized assistance.
Meet PyDO, DigitalOcean’s official Python API client
We are excited to announce PyDO, DigitalOcean’s Python API client library. PyDO allows Python developers to interact with and manage their DigitalOcean account resources through a Python abstraction layer. Fully supported and maintained by DigitalOcean, PyDO is now available to install.
Instant OSS global communication with Mastodon
The Mastodon Droplet 1-Click app is open-source software that provides a microblogging platform akin to Twitter, but instead of being centralized, it is a federated network.
Our Solutions Experts are available to assist you with any custom setups, migration, and pricing.
Happy coding!
Ivan Tarin, Sr. Product Marketing Manager
At DigitalOcean, our mission is to simplify the cloud so that you can spend time on building what matters the most to you. We believe that the cloud should not turn cost-prohibitive as you scale your business. Our pricing model reflects our vision of cloud computing being an enabler of business growth. Over the past few years, many businesses have migrated their workloads from AWS to DigitalOcean primarily due to our simplicity, affordable pricing, and customer-first support services. If you are running a startup or an SMB, here are some reasons why DigitalOcean might be the better choice for you.
AWS is designed to cater to the IT needs of enterprises, which are vastly different from those of startups and SMBs. When you choose AWS for your startup or SMB, you are force-fit into a cloud that was built for large enterprises, resulting in complexity that you might not be equipped to handle. AWS’ cloud platform, with more than 200 services, hundreds of instance types, and numerous workflows, makes your IT environment complex and cumbersome to maintain. The cognitive overload resulting from this complexity necessitates long hours of configuration and management, along with additional training and certifications for your staff. Without a large IT and DevOps team, this scale of technological complexity is a huge distraction for growth-focused businesses.
On the other hand, simplicity is at the heart of everything we do at DigitalOcean. Our simple UI, command line interface (CLI), API, and straightforward pricing are all designed to get you up and running quickly. Whether you’re spinning up a virtual machine, managed Kubernetes, fully-managed databases, effortless apps, or simple-and-scalable storage, we help level the field for you with unparalleled ease of use.
Easy to learn, implement, and manage, DigitalOcean’s cloud platform can bring significant efficiencies and productivity gains. To use DigitalOcean, you don’t need an array of certifications or a massive IT and DevOps team. Our simple workflows, documentation, and tutorials let development teams focus on high-value product functionality and move away from undifferentiated cloud management work. According to a Forrester analysis of tech-focused SMBs, these efficiencies translate to $545,000 in savings on dedicated IT/DevOps expenses and $300,000 in productivity gains over a three-year period.
Customers such as Ghost, a nonprofit, open source platform for content creators, have found it easy to scale fast with a lean team on DigitalOcean’s cloud platform. Hanna Wolfe, co-founder of Ghost, says “Unquestionably, it wouldn’t be possible for us to serve 15,000 customers with only a few full-time people if DigitalOcean wasn’t doing the heavy lifting.”
Marco Donel, CTO of Shoppermotion, an IoT platform for retailers, loves DigitalOcean’s simplicity: “Hyperscalers are more difficult to use, which is fine if you have a huge DevOps team. But when you are starting out, it’s all about building and not wasting time having to tweak instances and do DevOps. We were amazed at how easy DigitalOcean is to use and it let us spend more time focused on our product.”
Cash flow and runway are always top of mind for SMBs and startups. Higher, unpredictable cloud computing costs can often negatively impact these metrics. It’s important to pay close attention to your monthly cloud bill. There is often more going on than meets the eye in terms of cloud pricing.
DigitalOcean is on average considerably less expensive than AWS. For example, the pricing analysis below shows that AWS can be over 40% more expensive when compared to DigitalOcean for a single virtual machine. Whether you’re a startup with a few dozen instances or an established business with many more, these costs can add up quickly. Several AWS EC2 configurations don’t come with enough storage and bandwidth for the average business, which means you have to buy them separately at very high prices. For users with significant storage and bandwidth needs, these expenses could send their cloud bill soaring.
 | AWS (N. Virginia) | DigitalOcean (any data center) | AWS (N. Virginia) | DigitalOcean (any data center) |
---|---|---|---|---|
Instance name | t2.micro | Basic (regular) Droplet | c6i.xlarge | CPU-optimized Droplet |
Type | On-demand | On-demand | On-demand | On-demand |
vCPU | 1 | 1 | 4 | 4 |
Memory | 1 GiB | 1 GiB | 8 GiB | 8 GiB |
Storage | Not included | 25 GiB included | Not included | 50 GiB included |
Bandwidth | Not included | 1000 GiB included | Not included | 5000 GiB included |
Instance price | $8.47 | $6 | $124.10 | $84 |
Instance price with storage | $10.97 (after adding 25 GiB Amazon EBS storage) | $6 | $129.10 (after adding 50 GiB Amazon EBS storage) | $84 |
Instance price with storage and bandwidth | $100.97 (after adding 1000 GiB DT Outbound Internet) | $6 | $579.10 (after adding 5000 GiB DT Outbound Internet) | $84 |
Startups and SMBs are increasingly looking to benefit from global demand for their products and services. With AWS’ region-based pricing, operating costs will vary based on the datacenter in which the product or service is deployed. Additionally, AWS’ outbound data transfer pricing can vary based on the destination, which introduces more variability in costs, especially for businesses with large outbound data needs. DigitalOcean’s uniform pricing across datacenters is designed for today’s global businesses and enables them to take advantage of new markets without worrying about pricing differences.
With more businesses adopting as-a-service models for product or service delivery, having a predictable Cost of Goods Sold (COGS) has become a vital element of business strategy. A stable cloud computing spend that doesn’t fluctuate based on the number of days in a month makes it easier for startups and SMBs to price their products and services right for their market.
On-demand pricing in AWS EC2 is based on the number of hours or seconds in a month; hence, your AWS EC2 bill will vary with the number of days in a month (e.g., 31 days in July vs. 28 days in February). In contrast, DigitalOcean bills you for at most 28 days (672 hours) every month, even in months with 30 or 31 days, effectively giving you a discount every month except February and a more predictable cost model.
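As a worked example of the cap, here is a small sketch using the $6 Basic Droplet from Table 1 (the hourly rate is derived from the monthly price, not a quoted figure):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// DigitalOcean caps Droplet billing at 28 days (672 hours) per month,
	// so the effective hourly rate is the monthly price divided by 672.
	const monthlyPrice = 6.0 // $6 Basic Droplet from Table 1
	hourlyRate := monthlyPrice / (28 * 24)

	// A 31-day month has 744 hours, but you pay for at most 672 of them.
	julyBill := math.Min(31*24, 28*24) * hourlyRate

	fmt.Printf("hourly rate: $%.4f, 31-day month: $%.2f\n", hourlyRate, julyBill)
}
```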
Every DigitalOcean Droplet comes with significant storage and bandwidth capacity included in the monthly Droplet cost, and any additional storage can be added via DigitalOcean Volumes and Spaces. As illustrated in Table 1 above, many equivalent configurations in Amazon EC2 do not include enough storage by default, resulting in additional payments for Amazon Elastic Block Storage (EBS).
“An affordable price doesn’t mean the same for less; it means more for the same price. We can afford to have several staging environments and multiple instances with DigitalOcean.” - Aleksey Kolupaev, CTO, Co-Founder, eduki
Every DigitalOcean Droplet configuration comes with a significant bandwidth allowance, ranging from 500 GiB for the Basic Regular plan up to 11,000 GiB for higher plans. AWS EC2 configurations do not include outbound data transfer to the Internet, which has to be bought separately. AWS’ data transfers to the Internet are also 5-9 times more expensive.
 | AWS Bandwidth Pricing | DigitalOcean Bandwidth Pricing |
---|---|---|
Bandwidth pricing | $0.05-0.09 per GiB | $0.01 per GiB for overages beyond the free allowance |
Another significant advantage of DigitalOcean Droplet plans is the flexibility to pool these generous bandwidth allowances across Droplets within the same account. Pooling increases the amount of bandwidth available for your account and helps keep bandwidth overage costs in check, especially for network-intensive use cases such as video and audio streaming, gaming, real-time communication, IoT, and web crawling. Consequently, it is much simpler for businesses to scale their cloud footprint without bandwidth costs eating away their cloud budget.
DigitalOcean customers love our bandwidth pricing model. Joshua Verdehem of Loot.tv says, “Cloud providers love gouging on bandwidth for seemingly no reason. The only reason that Loot.tv can exist is because of the very cheap overage [bandwidth charges] on DigitalOcean Spaces.”
Simplicity is a core tenet for DigitalOcean and it is reflected in our intuitive and easy to understand monthly bills. Our simple pricing model makes your cloud expenses more predictable and business planning easier. On the other hand, AWS bills are complex, long and hard to decipher, even for long-time cloud customers.
DigitalOcean’s simple and short invoices that don’t require lengthy review and verification can save up to 48 hours of billing management time every year when compared to hyperscale cloud providers. When compared to hyperscale cloud vendors, DigitalOcean’s customer-oriented pricing can deliver up to $1.5 million in cost savings, an ROI of 186%, and a very short payback time of less than 6 months.
Over the years, businesses such as eduki have maximized their cloud ROI via DigitalOcean’s platform, as co-founder Aleksey Kolupaev’s quote above attests.
Startups and SMBs often want to control their technology expenses more closely so that they can pivot when needed or adapt to changes in their operating environment. AWS offers a pricing model where you pay less than on-demand rates if you commit to a 1-year or 3-year contract, which could tie you into long-term contracts with little flexibility. On the other hand, DigitalOcean’s contract-free pricing gives you affordable pricing plans irrespective of the duration of usage. It also translates to more control over your cloud expenses by providing the flexibility to scale configurations up or down monthly based on your needs.
It’s also easy to get locked into AWS due to the hidden costs associated with migration. AWS’ bandwidth prices are several times higher than DigitalOcean’s, making it a costly effort to move data out, especially if you are entrenched in the AWS ecosystem.
Additionally, several AWS services are based on proprietary technology, which makes it harder to move workloads to other cloud providers or pursue a multi-cloud strategy, effectively locking you in to AWS. DigitalOcean loves open source. Many of the libraries and frameworks we use at DigitalOcean are open source, enabling easier and flexible migration. At DigitalOcean, we also strive to support initiatives that help the open source community thrive.
Startups and SMBs are not the primary customers for enterprise-focused hyperscale cloud vendors such as AWS, and they often do not receive the same level of service that large companies do. On the contrary, DigitalOcean’s Premium Support is purpose-built for the unique needs of startups and SMBs, and delivers an exceptional, personalized support experience. Our Premium Support prioritizes convenience and ease of communication via new ways to deliver support, such as Google Meet calls with an expert and Slack channel access to a Technical Account Manager. DigitalOcean’s Premium Support customers also receive access to opportunities for business consultation, co-marketing, and architecture review meetings with our internal product teams.
Support plans from AWS have variable and usage-based pricing with no upper limits. That means AWS Business Support customers could end up paying up to 10% of their AWS monthly spend as support charges. In contrast, DigitalOcean’s flat support pricing doesn’t penalize users for higher cloud usage, while also being simpler and more predictable.
Another consideration when evaluating support services from cloud vendors is the definition of response time. AWS’ response times are based on multiple categories of issue severity. For example, the AWS Business Support plan has different response times for general guidance, system impaired, production system impaired, production system down, and business-critical system down, ranging from < 24 hours to < 30 minutes. This categorization of severity not only makes resolution time unpredictable but also results in a complex support experience. As a DigitalOcean Premium Support customer, you will receive a response to all inquiries within 30 minutes, and our goal is to deliver a support experience that truly feels premium.
Here’s what our customers have to say about our innovative support services:
“The support team is incredible - every time we would have an issue we’d send out a request and somebody really pleasant and nice would get back to you quickly and give you exactly what you need. I’m very pleased with the level and quality of service we get.” - Patrick Wingo, Head of Product, Kea
DigitalOcean is a one-stop shop for everything you need for your apps—compute, storage, networking, managed databases, and more. If you’d like to have a conversation about using DigitalOcean in your business, please feel free to contact our sales team.
Still, startups and SMBs may find themselves seeking to cut costs wherever possible, and when done thoughtfully, reducing cloud spend can have a significant impact on the bottom line. As teams build out their application’s architecture and plan initiatives for 2023, optimizing workload performance and cost can free up money in the budget for growing teams.
Cloud bills are often complex and unpredictable, making it hard for teams to identify areas where costs can be reduced. Each component of an application’s architecture interacts with other parts of the workload, and it’s critical to understand and assess how each piece performs individually and as part of the whole application. Different components will have different pricing models, and teams comparing the costs of multiple cloud providers will find that apples-to-apples comparisons are difficult at best.
Teams will need to balance cost savings with reliability and performance, piecing together a puzzle that best suits the unique needs of the business. Start with understanding and evaluating the four core components of cloud offerings, including compute, storage, bandwidth, and support.
The foundational offering from cloud providers is compute, where servers are partitioned using a hypervisor into smaller virtual machines (VMs). Computing resources can potentially have hundreds of Central Processing Units (CPUs), hundreds or thousands of gigabytes (GBs) of Random-access memory (RAM), and thousands of GBs of storage. Different cloud providers will have different offerings for VMs, with each provider often offering multiple configuration options. Builders can choose the virtual machine configuration that makes the most sense for their workload, prioritizing the amount of RAM and CPUs. DigitalOcean’s Droplets also include storage and bandwidth along with compute, but some cloud providers bill for these separately.
DigitalOcean’s Droplet configurations offer a healthy balance of CPU and RAM to meet the requirements of a variety of applications. Builders can choose a Droplet with a shared vCPU or a dedicated vCPU and choose between a variety of RAM vs. CPU configurations, depending on the needs of the business. This allows teams to save costs by avoiding paying for RAM or CPU that they won’t ultimately need. Our research found that DigitalOcean CPU-Optimized Droplets are 88% cheaper than the largest cloud provider in the market.
Pricing models for global workloads will vary depending on your cloud provider. Some providers charge based on location and in local currency, while others, DigitalOcean included, have consistent pricing across the world. Prices also have the potential to vary drastically as organizations scale. With cloud offerings, developers can easily scale vertically by adding more CPU if their VMs don’t have enough processing power or by scaling horizontally to increase instances and handle more load.
As shown in the below graphs, DigitalOcean is more cost-effective for compute than other providers as companies scale with an increase in instances. The comparison is between an $18/monthly basic Droplet with a similar instance from a competing cloud provider.
Some VMs (compute offerings) include a certain amount of storage. All DigitalOcean Droplets include a fixed amount of SSD storage, and additional storage can be added with DigitalOcean’s Volumes and Spaces. A basic $6 Droplet at DigitalOcean comes with a 25GB SSD, and an $18 Droplet with a 60GB SSD. The same $18/monthly Droplet that we compared in the compute section, which seemed slightly more expensive than a competing cloud provider, now seems very cost-effective when the included storage of 60GB SSD is taken into consideration.
However, businesses often need to add additional storage to the baseline amounts that come with VMs. Cloud providers have developed services to fill the storage needs of businesses, mostly comprised of two categories: object storage and block storage.
Storage offerings typically begin with a base price for an allotment of gibibytes (GiB) and then charge for additional GiB beyond that allotment. Block storage offerings like DigitalOcean Volumes allow users to increase storage capacity without paying for a larger VM. Object storage offerings like DigitalOcean’s Spaces, an S3-compatible object storage option that features a built-in content delivery network (CDN), can make scaling easier and data more accessible and reliable, doing so at an affordable price.
Pricing for DigitalOcean’s Spaces starts at $5 per month, including 250 GiB of data storage and a built-in CDN for no extra cost. Additional storage is charged at only 2 cents per GiB. Pricing for DigitalOcean’s Volumes starts at $10 for 100 GiB. DigitalOcean’s industry-leading bandwidth pricing and flat bandwidth overage of $0.01 per GiB allow builders to easily estimate monthly bills.
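As a quick worked example using those list prices, a hypothetical bucket holding 400 GiB would bill as follows (a sketch of the arithmetic, not a quote):

```go
package main

import "fmt"

func main() {
	// Spaces: $5/month base subscription includes 250 GiB of storage;
	// additional storage is billed at $0.02 per GiB (figures from above).
	const basePrice, includedGiB, perGiB = 5.0, 250.0, 0.02

	storedGiB := 400.0 // hypothetical usage
	bill := basePrice
	if storedGiB > includedGiB {
		bill += (storedGiB - includedGiB) * perGiB
	}
	fmt.Printf("monthly Spaces bill for %.0f GiB: $%.2f\n", storedGiB, bill) // $8.00
}
```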
Bandwidth pricing can be complex and is often overlooked because it’s listed as pennies per GiB, but for many network-intensive applications, bandwidth costs quickly add up, sometimes even making up the majority of their cloud bills. DigitalOcean pricing models are extremely beneficial for bandwidth-intensive organizations because of bandwidth pooling. Data transfer into DigitalOcean and within a builder’s private networks is free of charge. Each Droplet includes a quota for outbound data transfer, and all the Droplets in your account together form a bandwidth pool, so the more Droplets you have, the more free transfer you get. An account can utilize outbound transfer up to the amount of the bandwidth pool with no additional charge, and any excess transfer costs just $0.01 per GiB. For example, an $18/month Droplet includes 3TB of outbound data transfer. Other cloud providers’ rates look like pennies per GiB, but they end up making up a significant part of the bill for bandwidth-intensive organizations.
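To make the pooling math concrete, here is a small sketch for a hypothetical fleet (3TB is treated as 3,000 GiB for simplicity; the $0.01/GiB overage rate is from the text above):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Every Droplet's outbound-transfer allowance joins one account-wide pool.
	const droplets = 10
	const perDropletGiB = 3000.0 // ~3 TB allowance of the $18 Droplet, simplified
	poolGiB := droplets * perDropletGiB

	usedGiB := 32000.0 // hypothetical outbound transfer this month
	overageGiB := math.Max(0, usedGiB-poolGiB)

	// Overage beyond the pool is billed at a flat $0.01 per GiB.
	fmt.Printf("pool: %.0f GiB, overage: %.0f GiB, cost: $%.2f\n",
		poolGiB, overageGiB, overageGiB*0.01)
}
```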
An $18 monthly charge on DigitalOcean could equal $297 or more with other cloud providers in the market. This is one of the biggest differentiators in cost, especially for businesses anticipating rapid scale.
Support can be a critical need for those with production applications, and being able to reach out to cloud providers to resolve any issues can save time and money. At DigitalOcean, we pride ourselves on the support we provide and have a straightforward approach based on business requirements.
DigitalOcean’s free support plan is useful for accessing support channels and tapping into the knowledge of the pooled agents, with an expected response time of 24 hours. Our paid support offerings, Standard and Premium, provide access to dedicated agents and technical account managers based on what you choose.
When compared to other cloud providers, DigitalOcean’s support plans cost significantly less. The maximum DigitalOcean customers will pay is $1000 for Premium support, while other cloud providers charge tiered pricing based on your monthly bill, often amounting to much higher monthly costs.
For example, if your monthly cloud bill is $85,000, with DigitalOcean Premium support you would pay $1000, while other providers charge up to $6,000 or more for dedicated support.
DigitalOcean supports all types of applications, from basic websites to complex Software as a Service solutions.
Contact us to learn more about how you can save with DigitalOcean, or sign up today to get a $100 free credit with DigitalOcean as a new user.
That’s why we use and sponsor Let’s Encrypt, a Certificate Authority built by the Internet Security Research Group (ISRG), providing free, automated certificates for everyone. Certificate Authorities play a crucial role in securing the Internet, and Let’s Encrypt changed the game by providing not only publicly trusted certificates for free but also a standardized API that enables automation, reducing the burden of certificate maintenance operations.
In this post, we’ll walk through where DigitalOcean uses Let’s Encrypt, how our integration works, and some of the enhancements we’ve made along the way.
Four years ago, we added Let’s Encrypt support to our Load Balancers product, which brought free, automatically renewed certificates for all our customers. Prior to that, customers’ only choice for certificates was to bring and manage their own certificates, which is operationally burdensome and can lead to downtime if they’re not monitoring the expiry of all their certificates. By using our Let’s Encrypt integration, customers no longer have to worry about rotating their certificates; just create the certificate once, configure the Load Balancer to use it, and DigitalOcean takes care of the rest.
Since then, our integration with Let’s Encrypt has expanded in the following ways:
The Spaces product’s CDN feature can also use Let’s Encrypt certificates
Support for wildcard domains
Migrated to ACME v2 API
Managed MongoDB now uses Let’s Encrypt certificates, enabling more seamless and secure connections to clusters
As of today, over two-thirds of all customer certificates use the Let’s Encrypt integration.
So how does our integration with Let’s Encrypt actually work under the hood?
As shown in the architecture diagram, there are quite a few services involved. Let’s walk through the creation of a Let’s Encrypt certificate.
First, the customer requests the creation, either through the Cloud UI or the API (doctl, Terraform, etc.). After a few intermediate services that aren’t pictured here, the request lands at the Certificates and lets-encrypt-certs set of backend services. Next, the Certificates service validates the request to ensure the requested certificate is actually usable and not a duplicate of one the customer already has.
Then the request is forwarded to the lets-encrypt-certs service, which does the following:
Stores some metadata in the MySQL database.
Calls the Let’s Encrypt ACME v2 API, using the open source Go x/crypto/acme package to create and fetch the authorization order, passing along the customer’s provided list of domain names. The Let’s Encrypt ACME v2 API responds with a challenge token.
Because we use the DNS-01 challenge type for Domain Validation, lets-encrypt-certs next calls the Domains service to create a TXT record on the customer’s _acme-challenge subdomain, with a value derived from the challenge token.
Generates a new public/private key pair, sends the associated Certificate Signing Request to the ACME v2 API, and parses the resulting certificate from the API response.
If any of the steps fail, we asynchronously retry them up to 3 hours later.
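To make the flow concrete, here is a heavily condensed sketch of DNS-01 issuance using the same x/crypto/acme package. This is illustrative rather than DigitalOcean’s internal code: the createTXTRecord helper stands in for the Domains-service call, and persistence and retries are omitted:

```go
package main

import (
	"context"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"log"

	"golang.org/x/crypto/acme"
)

// createTXTRecord stands in for the internal Domains-service call that
// publishes the _acme-challenge TXT record. Hypothetical helper.
func createTXTRecord(fqdn, value string) { log.Printf("TXT %s -> %s", fqdn, value) }

func main() {
	ctx := context.Background()

	// Account key and ACME client; a real service registers once and reuses it.
	accountKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	client := &acme.Client{Key: accountKey, DirectoryURL: acme.LetsEncryptURL}
	if _, err := client.Register(ctx, &acme.Account{}, acme.AcceptTOS); err != nil {
		log.Fatal(err)
	}

	// Create the order for the customer's domain names.
	order, err := client.AuthorizeOrder(ctx, acme.DomainIDs("example.com"))
	if err != nil {
		log.Fatal(err)
	}

	// Satisfy each authorization with a DNS-01 challenge.
	for _, authzURL := range order.AuthzURLs {
		authz, _ := client.GetAuthorization(ctx, authzURL)
		for _, chal := range authz.Challenges {
			if chal.Type != "dns-01" {
				continue
			}
			// The TXT record value is derived from the challenge token.
			txt, _ := client.DNS01ChallengeRecord(chal.Token)
			createTXTRecord("_acme-challenge.example.com", txt)
			client.Accept(ctx, chal)                 // ask Let's Encrypt to validate
			client.WaitAuthorization(ctx, authz.URI) // block until validated
		}
	}

	// Generate a key pair, send the CSR, and fetch the issued certificate.
	certKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	csr, _ := x509.CreateCertificateRequest(rand.Reader,
		&x509.CertificateRequest{DNSNames: []string{"example.com"}}, certKey)
	order, _ = client.WaitOrder(ctx, order.URI)
	certDER, _, err := client.CreateOrderCert(ctx, order.FinalizeURL, csr, true)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("issued chain with %d certificate(s)", len(certDER))
}
```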
After that is complete, the Certificates service encrypts the sensitive fields, like the generated private key, before storing it and the PEM-encoded certificate in the MySQL database. Finally, the Certificates service sends an email to the customer to notify them that their certificate has successfully been provisioned.
Now that the certificate has successfully been created, the customer can reference it when configuring their Load Balancer or Spaces bucket.
Creating the certificate is just part of the problem, though. Arguably the most important aspect that we handle on behalf of our customers is automated certificate renewal. With Let’s Encrypt issuing certificates with 90-day lifespans by design, we need something in place to proactively renew certificates and distribute them to customer Load Balancers, Spaces buckets, and MongoDB clusters.
For that, we run a job called lets-encrypt-renewer every four hours that checks the MySQL database for any certificates that are between 30 and 90 days old. This means we renew at most 60 days sooner than absolutely necessary; however, because something can always fail, this gives us plenty of time for retry attempts.
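The age check itself is a straightforward query. Our schema isn’t shown in this post, so the table and column names in this sketch are illustrative:

```sql
-- Certificates due for renewal: issued more than 30 days ago but still
-- within the 90-day validity window. Illustrative schema.
SELECT id, domain_names
FROM certificates
WHERE type = 'lets_encrypt'
  AND issued_at <= NOW() - INTERVAL 30 DAY
  AND issued_at >  NOW() - INTERVAL 90 DAY;
```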
The job then largely follows the same steps listed above. Upon successful completion of the renewal, however, it additionally publishes a message to a Kafka topic. Backend services of the three consuming products—Load Balancers, Spaces, and Managed MongoDB—all listen for events on that topic and handle the subsequent distribution of the new certificate accordingly.
Earlier this year, we migrated all DigitalOcean customer Let’s Encrypt certificates from RSA with 2048-bit keys to ECDSA using curve P-256. Computationally, ECDSA is significantly more efficient than RSA—that is, a server has to spend fewer CPU cycles to compute ECDSA signatures than RSA signatures. Additionally, P-256 actually provides a higher security level, equivalent to RSA-3072, and has become the industry standard for ECDSA certificates in the past few years.
Over 70% of the certificates attached to Load Balancers were using the Let’s Encrypt integration; this provided us with a big opportunity to improve our product for a lot of our customers. Because we absorb the computational cost of running Load Balancers, every bit of reduction in CPU usage means less overhead for us and better overall performance of our Load Balancers. It also frees up CPU for customers’ regular Droplets in the rest of the fleet, and reducing the time taken for TLS handshakes generally yields faster page loads for end users.
Over the course of 30 days, we upgraded all existing customer Let’s Encrypt certificates to ECDSA during the regular renewal process. On our heaviest-usage Load Balancers, CPU usage came down by over 30% with this migration.
With our Managed MongoDB product, customers connect to their database cluster over TLS. When we first released Managed MongoDB, we created certificates for each node, signed by a self-signed certificate authority, and provided them to customers. This led to an inconvenient and confusing “out of the box” user experience: customers had to download and install this root certificate on all their clients before being able to connect to their DB.
In May, we replaced the self-signed certificates with Let’s Encrypt certificates instead. Because Let’s Encrypt’s root CA—ISRG Root X1—is already trusted by nearly every client out there, customer connections to their databases will just work—no extra configuration needed! And, since these certificates are ECDSA, database clusters can spend fewer CPU cycles on TLS handshakes and more time processing queries.
One of the biggest requests we receive from customers of our Certificates product is the ability to create Let’s Encrypt certificates without having to manage their domain with us. At the moment, this prerequisite exists so that we can create the TXT record for DNS-01 challenges on the appropriate sub-domains on behalf of the customer. We’re looking into potential solutions around this limitation, like leveraging CNAME or NS records or using HTTP challenges instead.
In the meantime, check out some of our Let’s Encrypt community tutorials! Or, if you need Let’s Encrypt certificates in your DOKS cluster, consider using our Marketplace app for the popular open-source project, cert-manager.
We want to give a big thank you to our partners at Let’s Encrypt for continuing to work with us on improving the certificate experience for everyone. As a nonprofit, 100% of their funding comes from charitable contributions—if you or your organization want to get more involved in supporting their mission, visit this website.
Interested in building the cloud at DigitalOcean? Check out our careers page for openings on our teams!
During the opening keynote, we shared the exciting news that in 2022, DigitalOcean committed a total of $1M in cash grants to support nonprofits and social enterprises around the world. This milestone is the first of many we’ll share on our journey toward our Pledge 1% commitment.
As part of this year’s giving, we announced five new grantees at deploy. DO Impact will award a total of $300,000 to five organizations spread across three communities that are important to us—each representing places where most of our employees are concentrated. We’re so inspired by—and thrilled to partner with—these incredible organizations dedicated to improving their communities.
New York: As the location of DigitalOcean’s original headquarters, the city of New York is home to many of our employees. And we know that across NYC, the Bronx is the borough with the highest percentage of residents without broadband internet access. We’ve partnered with The Bronx Community Foundation’s Digital Equity Initiative to expand access to digital resources for children, families, and seniors in the Bronx—the most underserved community in NYC. In addition to the cash grant, our Data Center team is working with the Foundation on a broader collaboration to leverage equipment, like retired servers, and technical capacity with our employee volunteers.
Dr. Meisha Porter, President and CEO of the Bronx Community Foundation, said: “We’re so excited to work alongside DigitalOcean as we build sustainable futures in the Bronx. Our organizations are incredibly mission-aligned, and this partnership will allow us to continue to enable the next generation of diverse innovators.”
Denver: As another large employee hub, the greater Denver area is a key community for DigitalOcean. We are delighted to partner with the GreenLight Fund, an organization focused on advancing equity across 11 U.S. cities and growing. Our grant will enable the Fund to open opportunities for families experiencing poverty in Denver and break down systematic barriers to inclusive prosperity. Since our first meeting with the GreenLight team, we’ve felt deeply aligned with the organization’s overall approach of applying social innovation to community needs.
“We are so grateful for this grant from DigitalOcean that will enable us to bring bold and innovative solutions that are needed to address seemingly intractable issues like food insecurity, financial stability, homelessness, and education success to Denver,” said co-founder John Simon. “Partnering with a company like DigitalOcean, which has a personal stake in the future of the Denver community, will help kickstart our efforts in the area.”
Pakistan: Finally, with our recent acquisition of Cloudways, we gained a new community of colleagues based in Pakistan, many of whom have been impacted by the devastating floods earlier this year. We worked with our colleagues in the region to identify three organizations that are working relentlessly on flood relief and community rehabilitation efforts across Pakistan:
The Edhi Foundation: One of the most trusted organizations in the region, the Edhi Foundation has been on the ground since day one in all flood-affected areas, rescuing families and livelihoods and providing relief assistance.
HANDS Flood Relief Campaign (via I-Care Fund America and I-Care Foundation): One of the leading nonprofit organizations in Pakistan, HANDS flood relief teams are working tirelessly to provide life-saving health services to flood-affected families.
Akhuwat USA: Akhuwat transforms lives by providing collateral and interest-free loans to people in need. The Flood Relief Fund is helping to rehabilitate and support flood victims across the country.
Jen Petraglia, who leads DigitalOcean’s Nonprofit Market Development, led an engaging panel discussion with two nonprofit leaders: Tracy Kronzak, Director of Partnerships at Bonterra, and Kavita Kapoor, Executive Director of the Federation of Humanitarian Technologists. Jen, Tracy, and Kavita discussed the importance of reducing complexity, documenting your failures, and driving impact through your values, particularly during times of global economic uncertainty.
“The data clearly demonstrates that if you broaden who is at your table when you’re making decisions about your technology, you will accelerate adoption, sales, and impact.”—Tracy Kronzak
We heard from DigitalOcean customer TSI on how developers, small businesses, and nonprofits can set themselves up to grow faster and become more resilient in the cloud. Co-founders Ross Cohen and Alisher Farhadi shared three key learnings to ease the uncertainty: 1) come in knowing what you want from your hosting and find a provider that will grow (or even downsize!) with you, 2) seek better control of your infrastructure through software so you can stay consistent in your tech, and 3) supercharge your infrastructure through software.
We’re eager to continue our partnership with MyTSI on an initiative that allows students to manage their own hosting, build infrastructure, and utilize Droplets for free.
Conference participants actively engaged in our #social-impact Discord channel throughout deploy. We heard from Hollie’s Hub for Good members Casa Hacker, Fork Facts, coLegend, Unicodemy, and MyTSI. We shared our love for the SDGs and more information on DO Impact and Hollie’s Hub for Good. We learned some trivia (fun fact: Nov. 15, the first day of deploy, was National Philanthropy Day). And we connected to share our ideas, challenges, and plans for 2023 and beyond.
We’re grateful to be wrapping up 2022 on a high note after the incredible engagement at deploy. We’re also looking forward to our upcoming end-of-year employee giving campaign, which kicks off the week of Giving Tuesday. We still have lots of ground to cover as we fulfill our Pledge 1% commitment, and we’re looking forward to even more progress in 2023. We hope you’ll continue to join us on the journey!
At the core of Hacktoberfest is a commitment to supporting open source projects and a desire to foster meaningful connections within the community, educating newcomers to open source and encouraging experienced contributors alike. We’re thrilled that this year, 146,891 people from 194 different countries registered for Hacktoberfest and completed a total of 335,000 contributions during the month of October. In fact, Hacktoberfest has contributed 2.35 million accepted pull/merge requests to open source projects in its nine years.
A focus on low- and no-code contributions. This year, we wanted to raise awareness that there are many ways to contribute to open source. Open source projects need all kinds of talent, both technical and non-technical. We highlighted areas where contributors could use their professional skills in support of open source through low- or no-code contributions, and 54% of Hacktoberfest registrants this year indicated they were interested in contributing to projects this way.
Participants can show off accomplishments with shareable digital badges. New in 2022 were shareable digital badges (not NFTs!) created in conjunction with Holopin. Participants earned badges for registering and completing each of their PR/MRs up to the goal of four for Hacktoberfest. They could then show off their badges on Twitter, GitHub, GitLab, LinkedIn, and more.
On-theme Hacktoberfest swag. Hacktoberfest swag changes to align with each year’s new theme. If you were one of the 40,000 winners who received a reward kit with nods to sci-fi and manga, we’d love to see pictures of you in your t-shirt (tag us with @hacktoberfest and use #hacktoberfest)! If you completed Hacktoberfest but didn’t get a shirt, you can still claim a tree, which we’ll plant on your behalf through our friends at Tree-Nation.
Community-focused workshops and events. DigitalOcean held a weekly livestream during the month of October. We shared best practices, showed how to make low- and no-code contributions, explored more advanced technical topics together, shared your stories of Hacktoberfest, and much more.
Check out the events here:
Creating a community online. The Hacktoberfest community is lively and active, and they shared their enthusiasm over social media. Through their posts we learned that Hacktoberfest helped advance careers, develop new passions, level up skills, and inspire mentorship. Open source projects belonging to startups and young companies that participated received help improving their open source projects, which helped grow their businesses. Keep in touch with our community by joining the over 59,000 developers on the Hacktoberfest community on Discord.
Historically, Hacktoberfest has allowed us to build a deeper relationship with the open source community, enabling far more collaboration and improving access to our projects for the ever-growing developer community. Hacktoberfest 2022 was no different. We saw various impactful contributions, like new function runtimes, external adapters for various services, function examples, blog posts, demo applications, and much more.
This year, in keeping with 2021 and the updates made during 2020, we stringently enforced our rules to prevent spam. Maintainers were able to flag any spammy PR/MRs with a “spam” label, and participants with two or more PR/MRs identified as spam would be disqualified permanently. The Hacktoberfest community was also able to help with repositories that were designed to cheat Hacktoberfest, reporting them on our website so that we could review and exclude them.
Most importantly, Hacktoberfest remained opt-in this year. Maintainers that wanted to participate in Hacktoberfest added the “hacktoberfest” topic to their repository on GitHub or GitLab, allowing participants to quickly find those that were looking for contributions. Over 125k repositories opted in on GitHub for Hacktoberfest 2022, with another 800 joining from GitLab.
“For the first time ever, we collaborated with designers in the open source community to assist our team with usability testing of Appwrite’s new console. These contributions allowed us to increase our range of offerings and decrease friction for developers joining our ecosystem. Hacktoberfest has been substantially impactful in progressing Appwrite’s journey both as an open source project and a company, and we hope to continue participating in the years to come!”—Eldad Fux, Founder and CEO, Appwrite
“A few months back, we introduced Docker Extensions. Developers liked the personalization possibilities the new tool brought on and kept on suggesting ideas for new Extensions. As a result of their interest, we decided to join Hacktoberfest 2022 and offer our community a collaborative space to co-create their Extensions. Now that we are sending out swag to our best contributors, it’s time to reflect back on Hacktoberfest. We end up October with new users, great new extensions, fixed bugs, and several hands-on evenings worldwide sourced by our fantastic community. Thank you for making it possible!” —Alba Roza, Senior Community Relations Manager, Docker
“At Holopin, as engineers we’ve always been big fans of Hacktoberfest, so we were all looking forward to partnering with DigitalOcean. So much happens behind the scenes! I’m convinced that the DigitalOcean team behind Hacktoberfest are superhumans. Thousands of developers manifested their love for Hacktoberfest and Open Source in the form of sharing their Holopin badges on socials. In total, Holopin distributed over 500,000 badges as part of the event. An equivalent stack of physical stickers would be taller than the Seattle Space Needle. Isn’t that wild?!”—Elena Lape, Founder and CEO, Holopin
There’s much more open source love to come from us at DigitalOcean—we look forward to more community building in the future!
Phoebe Quincy, Senior Community Relations Manager | Open Source Programs
A new generation of DigitalOcean Premium AMD Droplets with AMD EPYC™ processors
We’re excited to introduce the latest generation of AMD EPYC™ processors (code name Milan) for our Premium AMD Droplet plans. These Droplets are powered by 3rd Gen AMD EPYC™ processors, superfast PCIe Gen 4 storage, and high-speed 100 GbE networking to improve performance. Many new Droplets created in SFO3, SGP1, FRA1, and NYC1 data centers will run on 3rd gen AMD EPYC™ processors, and we will see more Droplets with these specifications as we continue with the deployment.
AlmaLinux OS now available on DigitalOcean Droplets
DigitalOcean Droplets now support AlmaLinux OS images. AlmaLinux is an open-source, community-owned and governed, free enterprise Linux distribution that focuses on long-term stability. It provides a production-grade platform that is binary compatible with Red Hat Enterprise Linux and pre-Stream CentOS.
Spaces Requests Per Second (RPS) is 100% faster
Spaces RPS has doubled to support the growing needs of small businesses. Now, enjoy up to 1500 requests per IP address per second to all Spaces on an account. Our limits page has more information as well as advice on concurrency and using the free Spaces CDN to increase performance.
IOPS and Throughput have increased by 50%
To support the needs of SMBs on our platform, Volumes has become significantly more performant. Most importantly, IOPS and throughput have increased by ~50% to support rapid block storage operations. For more information, look at the limits of each Droplet type.
Free, Standard, and Premium support lifts up startups
At DigitalOcean we are committed to serving startups and SMBs no matter where they are on their journeys. That’s why we offer a choice between Free, Standard, and Premium support plans. Many startups have custom solutions for their specific use cases and may need extra help integrating those solutions with a cloud. The new Standard and Premium plans ensure a faster response while giving access to our experts through additional channels like video calls and dedicated agents. Also, startups now get access to architecture reviews, custom onboarding, help with trying out new products, and other customized assistance to allow teams to focus more on business goals and customer support.
Scheduled functions now in Beta
DigitalOcean Functions now enables developers to schedule functions at a desired time via cron expressions. Business applications often include tasks that are periodic or need to run at a particular time, such as cleanup, data manipulation, timed emails, etc. With just a few clicks and no infrastructure to manage, scheduled functions are the simplest way to run these on-demand functions on a schedule. Scheduled functions are now in Beta*, and you can access them via the cloud console and the doctl command line interface (CLI). Want to try out scheduled functions? Read this article to learn more.
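For a sense of what scheduling looks like in practice, a trigger can be declared next to the function in the project configuration. The sketch below follows the shape of the Functions trigger docs, but treat the field names as illustrative and confirm them against the current documentation:

```yaml
# project.yml (sketch): run the "cleanup" function every day at 03:00 UTC.
packages:
  - name: maintenance
    functions:
      - name: cleanup
        runtime: nodejs:18
        triggers:
          - name: nightly-cleanup
            sourceType: scheduler
            sourceDetails:
              cron: "0 3 * * *"  # standard cron: minute hour day month weekday
```

Deploying the project with doctl serverless deploy then registers the trigger; no separate scheduler infrastructure is needed.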
Long-running functions
We are increasing the maximum possible timeout for DigitalOcean Functions from 30 seconds to 15 minutes. This timeout extension makes it easier for users to run batch processing or complex background tasks like video/image processing, ETL, report generation, etc. The new timeout can be set both via the Cloud console and the doctl CLI. Learn more about setting function timeouts on this docs page.
This KubeCon, we announced our new operators that use Kubernetes automation and orchestration to manage DigitalOcean resources and a solution blueprint to manage egress traffic. You can still book an appointment with sales and may even qualify for two free months of DigitalOcean. The promotion is for a limited time, so get in contact with our experts today.
Add authentication and authorization in 1-Click
FusionAuth is an API-first Customer Identity and Access Management (CIAM) platform with support for OAuth2, OIDC, SAML v2, social login, federated login, MFA, full-text search, password policies, WebAuthn, and other passwordless options.
Do you have an idea for improving our products? Submit your feedback and vote on other user suggestions on our ideas page. If you have questions, ask them here.
Happy coding!
Ivan Tarin, Sr. Product Marketing Manager
*Beta releases may not be appropriate for production-level workloads. We encourage users to use simulated test data and avoid running sensitive workloads in Beta products.
We can’t wait for you to try the much more redundant, fault-tolerant, and resilient new control plane. Among many other enhancements, the new control plane offers automatic healing and recovery, dynamic resource allocation, and quick feature updates.
The new control plane automatically heals and recovers any of its unhealthy components, making it more resilient to unexpected failures. On-demand CPU and memory allocation lets DigitalOcean Kubernetes dynamically adapt to variable usage patterns. The new control plane architecture makes it easy to ship updates to the control plane continuously.
Eliminate the single point of failure (SPOF) that exists in Kubernetes by adding High Availability to your DigitalOcean Kubernetes cluster. High Availability replicates control plane components to ensure resilient clusters, increase fault tolerance, and protect against control plane outages. We offer a 99.95% uptime SLA for all HA-enabled clusters. Check out the DigitalOcean Kubernetes HA SLA for more details and the credit-claim process.
After upgrading your Kubernetes clusters to v1.22.x or higher, you can enable High Availability in the UI or programmatically. Add HA to your existing clusters programmatically with doctl v1.86.0 or later using the flag --ha=true. To turn on High Availability in the UI, navigate to the control plane card in the Cluster Overview tab and follow the instructions. Be aware that you can’t disable HA once it’s enabled on a cluster. For more details, see the enable HA docs.
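For example, enabling HA on an existing, eligible cluster from the command line would look something like this (the cluster name is a placeholder):

```bash
# Requires a recent doctl (see version notes in this post) and a cluster
# already on the new control plane running v1.22.x or later.
doctl kubernetes cluster update my-cluster --ha=true
```

Remember that this is a one-way switch: HA cannot be disabled afterward.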
You might already be running DigitalOcean Kubernetes clusters on the new control plane and can enable HA anytime. If you need to migrate, follow our guide to eliminate any disruptions. To make sure, check your Kubernetes clusters in your cloud control panel. You’ll see a message like the following if your clusters can upgrade:
To check whether your cluster is running on an HA control plane, see your Cluster Overview tab: it has a Control Plane card showing the HA status, along with guidance to enable it.
Only clusters running DigitalOcean Kubernetes v1.22.x or greater can enable High Availability and unlock future developments. Kubernetes clusters older than v1.21.x can upgrade as usual; to jump to the next minor version, you must be running the latest patch version, 1.21.14-do.0. The first doctl version to officially support HA enablement will be v1.87.0.
When migrating your clusters to the upgraded control plane, you may experience up to two minutes of disruption to your data plane network connection, during which you can’t contact the Cluster API. You can add HA any time once you’ve migrated to the new control plane. Follow this migration guide to minimize any impact to your workloads.
After the initial phase of voluntary migrations, migrations to the new control plane will also be carried out through the regular required upgrades. Please look out for an extra note about the migration capability in the regular email notifications about required upgrades.
The new control plane is available in all DigitalOcean regions and is completely free because we don’t charge for the control plane. We only charge for the underlying usage of products by the cluster like nodes and Load Balancers. High Availability pricing remains the same at $40 per month for each enabled cluster. Our pricing page has more information.
At DigitalOcean, we’re committed to partnering in your Kubernetes journey, accelerating your growth, and scaling your business. We look forward to helping grow your business on DigitalOcean Kubernetes. Reach out to our Solutions Experts to get more information or for help migrating to DigitalOcean Kubernetes.
Happy Coding,
Udhay Ravindran, Senior Product Manager, Kubernetes
At DigitalOcean, we are committed to addressing the growing storage needs of businesses and are constantly working to make our storage products better so they can address the ever-increasing needs of our customers’ applications. Today we are happy to announce that the performance of our Volumes Block Storage has been increased by 50% and the performance of our Spaces Object Storage has been increased by 100%. In this blog, we’ll dive into the specifics and provide you with some details on how this change can help address the growing needs of businesses that thrive on our platform.
Block storage can be thought of as individual hard drives added to a server. Block storage solutions are provided over the network, are flexible, and can be useful for many applications. DigitalOcean’s Volumes is a high-performance block storage service that allows users to easily increase the storage capacity of Droplets independently of CPU and memory. Volumes is attached storage that’s separate from the Droplet-native storage, allowing users to increase storage capacity without paying for a larger Droplet. Volumes is great for demanding applications such as distributed web applications, hosting databases, and storing web and log files, all of which need low latency, high throughput, and high IOPS. You can add Volumes to new or existing Droplets, and they can be detached and reattached to other Droplets at any time.
Object storage is the storage and retrieval of unstructured blobs of data and metadata using an HTTP API. Instead of breaking files down into blocks to store them on disk using a filesystem, object storage deals with whole objects stored over the network. These objects could be an image file, logs, HTML files, or any self-contained blob of bytes. They are unstructured because there is no specific schema or format they need to follow. Object storage is useful for hosting static assets, saving user-generated content such as sound files, images, and movies, and storing log and backup files. DigitalOcean’s Spaces is a high-performance, S3-compatible object storage service with a built-in Content Delivery Network (CDN) that makes data storage and delivery easy, reliable, and affordable.
Although there are a variety of metrics to evaluate the performance of block storage devices, two metrics serve as a good starting point: Input/Output Operations per Second (IOPS) and throughput. A single read or write operation counts as one I/O operation, and higher IOPS correlates with higher performance of the storage device. Throughput measures the amount of data transferred to and from the storage device per second. Over the past few months, our teams at DigitalOcean have worked on improving the performance of Volumes, and we are excited to announce that Volumes Block Storage is now capable of supporting a max IOPS of 10,000 (burst 15,000) and a max throughput of 450 MBps (burst 525 MBps). The performance of Volumes depends on the Droplet it is attached to, and the table below provides more details:
Volumes Block Storage is designed to increase IOPS and Throughput to absorb temporary spikes in traffic (burst mode). To learn more about the performance of Volumes in burst mode, check out the product docs.
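If you want to measure these numbers against your own workload, a synthetic benchmark is a reasonable starting point. A minimal sketch using the standard fio tool, assuming a Volume mounted at /mnt/my_volume:

```bash
# Random-read IOPS benchmark against an attached Volume using fio.
# /mnt/my_volume is a placeholder for your Volume's mount point.
fio --name=randread-test \
    --directory=/mnt/my_volume \
    --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k \
    --size=1G --numjobs=4 --iodepth=64 \
    --runtime=60 --time_based --group_reporting
```

The summary fio prints includes the achieved IOPS and bandwidth, which you can compare against the limits above.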
All new Volumes will be provisioned on NVMe drives so users can experience superb performance*. The latency of NVMe drives is low compared to SSD and traditional HDD. Latency is defined as the time it takes for an I/O request to be completed, so the lower the latency, the better the performance. Based on internal performance testing, we found the latency for 99% of requests (P99) to Volumes (as measured from a Droplet) is ~1.1 ms.
*Except in the BLR and TOR regions, which will have NVMe hardware before the end of 2022.
Data stored in an Object Storage system (Spaces) is accessed via HTTP-based APIs from anywhere on the Internet. These APIs use HTTP commands such as PUT, GET, and DELETE. Each command is called a “Request,” and the Requests Per Second (RPS) metric tells you how many such requests the storage system can handle in a second. Some providers use the terms Requests, Queries, or Transactions interchangeably, but they all mean the same thing. For Spaces Object Storage, we are pleased to announce that the RPS performance metric has been doubled, allowing Spaces to handle twice as many concurrent requests. The table below shows the new performance numbers:
There are many use cases for Volumes, as it is a great option for storing files such as website files, log files, and backups, and for hosting databases that need to quickly serve the demanding needs of high-traffic applications. The high performance of Volumes ensures that block storage will no longer be a performance bottleneck for your applications, avoiding issues that cause websites to run slowly or applications to fail due to lagging storage speeds. If you are training machine learning models or working with big data, these performance improvements should reduce the time required for developing models or analyzing large volumes of data. In some cases, the performance provided by Volumes is also adequate for distributed web applications, such as running “Proof of Stake” nodes in blockchain applications.
There are multiple use cases for Spaces Object Storage, and it is typically a great match for applications with “read many, write few” request patterns. The common ones include storing static files, file sharing applications, and storing and delivering media assets for video streaming applications. The new performance updates for Spaces reduce the overall time required to serve streaming media or large files and help avoid buffering issues caused by slow storage devices.
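Because Spaces is S3-compatible, the PUT, GET, and DELETE requests described earlier map directly onto standard S3 tooling. A minimal sketch using the AWS CLI, where the space name and region endpoint are placeholders and the CLI is assumed to be configured with your Spaces access keys:

```bash
# PUT: upload an asset to a Space.
aws s3 cp ./video.mp4 s3://my-space/media/video.mp4 \
    --endpoint-url https://nyc3.digitaloceanspaces.com

# GET: download the same object.
aws s3 cp s3://my-space/media/video.mp4 ./video.mp4 \
    --endpoint-url https://nyc3.digitaloceanspaces.com

# DELETE: remove the object.
aws s3 rm s3://my-space/media/video.mp4 \
    --endpoint-url https://nyc3.digitaloceanspaces.com
```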
DigitalOcean provides Volumes block storage and Spaces object storage that are simple to use, with predictable pricing. Pricing for Spaces remains the same, starting at $5 per month, which includes 250 GiB of data storage and a built-in CDN at no extra cost. Additional storage is charged at only 2 cents per GiB. Pricing for Volumes also remains the same, starting at $0.10 per GiB per month, or $10 for 100 GiB. With DigitalOcean’s industry-leading bandwidth pricing and flat bandwidth overage of $0.01 per GiB, you can easily estimate your monthly bills. With DigitalOcean, you get flat and transparent pricing that does not vary with location, no layered pricing models, no contracts, and no hidden surprises.
Sign up for a DigitalOcean account today to get started.
Update: Newer Spaces buckets now have an improved limit of 800 total operations per second. Click here for more information.
The cloud computing market in Australia is growing rapidly and SYD1 makes it easier for startups, SMBs, and developers in and around Australia and New Zealand to get the best performance from DigitalOcean. Our Sydney data center also makes it easier for businesses to provide superior experiences to their end customers in this region: Performance tests have shown an average of 6x reduction in round trip time latency for customers using the Sydney data center services from major cities in Australia, when compared to using DigitalOcean’s Singapore data center.
In addition to being DigitalOcean’s first data center in the region, SYD1 has several unique qualities that will enable best-in-class performance:
Excellent network connectivity: The Sydney data center is connected to DigitalOcean’s private internet edge and backbone network. This reduces DigitalOcean’s dependency on the public internet and provides you with access to Asia, North America, and Europe via direct, diverse connections to California and Singapore. Since requests travel mostly on our network, users of SYD1 will experience exceptional performance while mitigating the effects of jitter, latency, and packet loss that are usually associated with sending data over the public internet.
High network throughput/capacity: In addition to connecting SYD1 to our backbone network, we have significantly increased its capacity. SYD1 provides 400 Gbps of connectivity to the internet backbone network and is connected to California and Singapore via the lowest-latency links available today. Apart from global connectivity, we also focused heavily on SYD1’s domestic connectivity. SYD1 provides 400 Gbps of domestic connectivity to key local transit providers such as Telstra and Vocus, and another 400 Gbps of connectivity to domestic internet peering exchanges such as EdgeIX. Think of these investments in network capacity as pipes: we have not only used the shortest routes to connect the pipes but also significantly increased their diameter, resulting in vastly improved performance.
Peering with hyperscalers: Startups and SMBs commonly use a multi-cloud strategy to increase redundancy and decrease vendor lock-in. DigitalOcean customers are no different and often use hyperscalers as part of their multi-cloud strategy. SYD1 provides seamless peering with hyperscalers, which makes it easier to adopt a multi-cloud strategy for your business.
Quick Failovers: In the event of a network glitch, the SYD1 infrastructure ensures that failovers happen quickly - often in seconds instead of minutes. We re-route traffic automatically and minimize the impact on your customers.
Security and Privacy: SYD1’s infrastructure equipment is deployed in a secure cage with floor-to-ceiling panels. No one other than a select few DigitalOcean employees has access to it.
The trust of our customers is important to us, and we comply with The Australian Privacy Principles (APPs), as well as provide SYD1-specific security certifications (ISO 27001, PCI-DSS, and SOC). For more information, please see the following resources:
Sydney was a natural choice for our newest data center for multiple reasons, including the growing cloud computing market, a healthy startup and developer ecosystem, and its location in the Southern Hemisphere. According to the Deloitte Access Economics report, Australia is still at the beginning of its cloud journey, indicating tremendous growth potential. The growing software developer landscape in Sydney, coupled with rich telecommunications connectivity options, including a vast array of submarine communications cables that connect directly to the USA and Asia, made Sydney an ideal choice for DigitalOcean to set up a new data center location.
In addition, having a data center in Australia has been one of the most common requests from our customers over the past years. Here’s what our customers have to say about SYD1.
“I was excited to hear that we’ll have DigitalOcean’s reliability coming onshore to Australia. That will make a big impact for local businesses.”—Scott Purcell, Co-Founder, Director, Man of Many
DigitalOcean is focused on making it easier for businesses to deploy and scale their applications and we are extremely excited to see what we can build together in Australia. If you’d like to have a conversation about using DigitalOcean in your business, click here to contact our sales team.
Spin up a Droplet in SYD1 today and let the fun begin!
Watch below as host George Davison and Tomorrow’s World Today’s field reporters speak with Gabe Monroy, DigitalOcean Chief Product Officer, on how cloud technology is helping small and medium-sized businesses scale. You’ll also hear about Hatch, DigitalOcean’s global startup program, and the role that our Customer Success and Solutions teams play in accelerating the success and growth of businesses on DigitalOcean.
Q: What does a Technical Writer do?
Alex: Technical writers, in a general sense, take technical or new and exciting tools and write about them in a way that makes them usable by a broader audience. Broader can mean a lot of things. It can mean a very wide audience, all developers everywhere. At DigitalOcean, we get to go out and write about just about anything. We are always looking for new concepts, new fun things, or new cool cloud technologies to write about.
Q: How did you get into this field of work?
Alex: I started as a digital archivist. I worked with cultural heritage materials, which sometimes would be older software or analog media—audio and video. There is a great community of people who work with audio/visual preservation and do a lot of work with open source tooling. They try to use tools that everyone can access, and I got into this line of work that way. You’re always looking for ways to make a small improvement to existing open source tools for working on a server with all different kinds of media. It was a fun career path.
Q: What does a typical day look like?
Alex: I’ll try and make this interesting. I start my day with some tea. There’s a bakery down the road from me that will sell frozen croissants, and I like to keep those at home and make them in the mornings. I might exercise a bit and then settle in for my day of work. I don’t do my best writing in the morning, so I try to spend that time doing research, providing feedback to my teammates, having meetings, or doing other task-oriented things. Around lunchtime, I will try to get outside to clear my head, usually going for a bike ride. I do my best writing in the afternoon, so I keep that time free to focus on writing. When I want to focus and crank out a piece, I’ll take my laptop to the couch and sit down with an afternoon coffee.
Q: How do you stay productive throughout the day?
Alex: I try to get up and move around frequently. A change of scenery helps me think differently, so even in my apartment, I will move around throughout the day. I come up with my best ideas sometimes when I’m away from my desk when I’m not trying to think about work, so I write down those thoughts in the notes app on my phone. The right amount of caffeine also helps.
Q: Is there anything special you do to take breaks throughout the day?
Alex: I like to take bike rides or take my motorcycle for a lap around downtown. I also like to play with my cats, Leo and Sophie.
Q: What do you enjoy about being a Technical Writer?
Alex: I like being able to cover a lot of popular, useful, modern, and technical content. It’s really hard to find a job where staying on top of what’s popular and interesting is the job. Usually, you have to do your job and then try to keep up with new ideas on top of it. I like making content more accessible to more people and writing about complex topics in a way that more people can understand and use.
Q: What keeps the job fun?
Alex: The team. The culture at DigitalOcean is amazing, and I love the team. It’s great to work at a company that knows what it wants to do. DigitalOcean’s commitment to simplicity for developers comes through in everything we do, and I enjoy that mission.
Q: Any advice for someone who may be interested in becoming a technical writer?
Alex: I have to plug the Recurse Center. They are a New York-based organization that I went to when I was considering a career transition. They pitch it as a career retreat for programmers. Besides that, focus on clarity and breadth in your topics. You can also try out the Write for DO program!
Q: You recently made a 100-tutorial milestone. What drives you to learn and write about new technologies?
Alex: I didn’t write 100 pieces from scratch; some of that is updating and expanding on existing content. I think the hobbyist element is really helpful for me. I have several Raspberry Pis here, and I just enjoy thinking about this stuff.
Q: What do you like to do outside of work?
Alex: I like to travel. I’ve visited 40 of the 50 states in the United States and want to make it to all 50. I like film and photography, and I recently started shooting some drone photography which is pretty fun. I also train in Krav Maga.
Q: I also hear you’re a fan of kitsch. Do you have any recent kitschy anecdotes or possessions that you’d like to share with us?
Alex: Yes. I just got this Amtrak bumper sticker from the 1970s oil crisis. It says, “Amtrak Relieves Gas Pains,” and I just can’t believe someone thought this would be something people would want on their cars. I had to have this. I like kitschy old advertising and have a lot of things like this around my house.
Interested in learning more about what it’s like to work at DigitalOcean? Check out a list of benefits and open positions on our careers page.
Today we are excited to announce the launch of DigitalOcean Partner Pod, our brand-new partner program. Partner Pod expands on DigitalOcean’s commitment to partners and offers a new approach to partnerships, enabling partners to choose the benefits they receive as members, from discounts and credits to Market Development Funds.
At DigitalOcean, we’re focused on the success of small and medium businesses, and we believe that by working closely with our partners we can both give more SMBs the benefits of DigitalOcean’s simple, cost-effective cloud computing solutions, and also enable our partners to grow their own businesses. DigitalOcean partners today include digital agencies, consultants, DevOps providers, hosting providers, managed service providers, and platform builders, and we are excited to add more partners in the coming months.
By joining Partner Pod, partners gain access to unparalleled support, tailored to the needs of their businesses and SMBs. We also offer partners free training and enablement on sales, products, and other topics, co-marketing opportunities, and Market Development Funds so they can get help running their campaigns and bring in new customers. All partners will benefit from DigitalOcean’s simple and reliable products, which are built to enable businesses around the globe.
"DigitalOcean is a reliable global infrastructure partner with great security and speed – they offer a true opportunity to help partners go from a generalist to a specialist and become a trusted partner of the SMB world.” - Plesk/WebPros
The Partner Pod aims to empower partners to forge their own path with DigitalOcean, with the flexibility to choose from three business tracks based on the partner’s specific goals and business model. The program is also structured with three different levels – Registered, Engaged, and Strategic – that reflect different stages of partnership maturity and investment.
The tracks include:
Partners will have their choice of sales rewards, from discount structures to referral fees to DigitalOcean credits. In addition, the new, easy-to-use Partner Experience Platform will serve as the digital portal for useful information and simplified business processes, and will help partners stay current on leads, opportunities, and accounts. The Partner Experience Platform has the added benefit of providing a way for Pod members to connect with each other.
To apply to become a partner, please visit the Partner Pod webpage. Team up with DigitalOcean today to grow revenue, tamp down operational costs, and help your company thrive. Come make waves with us!
The DigitalOcean Database operator uses Kubernetes to automate deploying and managing DigitalOcean Managed Databases. You can use the operator with DigitalOcean Kubernetes v1.23 or greater, and it supports all of our managed database engines: PostgreSQL, MongoDB, MySQL, and Redis. The operator automatically connects your workloads to your databases and can manage the lifecycle and configuration of your database and users. It deploys a custom controller in the control plane that matches the desired state of the database with the current state.
We want developers to focus more on their apps and less on the complexities like configuring, managing, and connecting to DigitalOcean Managed Databases. The DigitalOcean Database operator extends the Kubernetes API and helps you to declaratively automate common database tasks. The database operator is currently in Beta and can be deployed easily by checking a box when creating a new DigitalOcean Kubernetes cluster.
The operator supports two architectures depending on your use case: the controlled database and the referenced database architectures. In the controlled database architecture, Kubernetes can help manage the lifecycle and configuration of your database. The referenced database architecture is preferred where apps run in several Kubernetes clusters and the database should persist when a cluster is deleted. In the referenced architecture the operator will manage the database users, but not the lifecycle of your database.
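As a rough sketch of the controlled-database flow, a manifest might look like the following. The apiVersion, kind, and spec fields here are assumptions based on the do-operator repo rather than a verified schema, so treat the GitHub documentation as authoritative:

```bash
# Hypothetical controlled-database manifest; kind and field names are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: databases.digitalocean.com/v1alpha1
kind: DatabaseCluster          # controlled: the operator manages the lifecycle
metadata:
  name: my-app-db
spec:
  engine: pg                   # PostgreSQL engine slug
  version: "14"
  region: nyc1
  numNodes: 1
  size: db-s-1vcpu-1gb         # managed-database size slug
EOF
```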
Check out the documentation in the GitHub repo to learn more about the operator, and its limitations.
DigitalOcean is a big proponent of open source, and we built the Database operator on the do-operator. We’re donating the do-operator to the open source community. It’s a Kubernetes operator that lets you manage DigitalOcean resources from any CNCF-conformant Kubernetes cluster. If you’re interested in contributing to the do-operator, please see the contribution guidelines for more details.
Today we are also excited to announce the egress gateway solution blueprint for DigitalOcean Kubernetes, which lets devs route their pods’ outbound traffic through a NAT Gateway in two steps. Kubernetes manages incoming traffic via the built-in ingress resource, but it doesn’t have an egress resource for outgoing traffic. While many egress solutions are available, moving them to production is time-consuming. The egress gateway solution blueprint makes it easier to communicate with external resources so you can enjoy having a static IP address for all your pod’s egress traffic.
You can simplify your network by routing outbound traffic to a small set of egress gateways and scale your Kubernetes cluster without having to update external allow lists. To configure the egress gateway in your environment, and for more details, check out the GitHub guide.
The Egress gateway solution blueprint and Database operator are completely free. DigitalOcean customers are only billed for Droplets created as NAT Gateways (for the Egress gateway), and any resources consumed by their DigitalOcean Kubernetes cluster.
CTO.ai is partnering with DigitalOcean to simplify and accelerate Kubernetes adoption on DigitalOcean. CTO.ai is a SaaS built to compose an internal developer platform using measurable CI/CD workflows and flexible ChatOps to provide a rich developer experience. Find us at Kubecon for exclusive offers from both DigitalOcean and CTO.ai.
Will you be attending KubeCon? We’d love to meet you! Find us at booth G19 and stop by for some fun demos, swag, and exciting announcements! If you’re attending virtually, check out our virtual booth!
Happy Coding,
Udhay Ravindran, Senior Product Manager I
When evaluating replacements for Heroku, you should look at solutions that offer the ease of use of Heroku and also address its pain points. One strong Heroku alternative is DigitalOcean App Platform, our fully managed PaaS solution. App Platform offers many of the benefits of Heroku, at a cost-effective price point and with additional flexibility that allows developers to have more control over their deployments. As builders scale their applications, they can also easily migrate to more advanced infrastructure solutions including our Infrastructure as a Service offerings and DigitalOcean Kubernetes. Below are some of the top reasons to consider DigitalOcean App Platform as an alternative to Heroku.
Cost is one of the most important elements of any Platform as a Service offering, especially for early-stage companies who need predictable, transparent pricing. App Platform is run on DigitalOcean’s own infrastructure. This allows us to have much more control over our pricing. DigitalOcean’s pricing model keeps your costs low, not only when you start, but also when you scale your apps.
Heroku does not have its own infrastructure offerings, and it’s likely that one of the reasons behind the high costs of Heroku is that it’s built on Amazon Web Services. This means when you run Dynos, Heroku’s name for their application building blocks, they run on AWS infrastructure under the hood. This results in pricing where you have to pay $50 per month for a single Dyno with 1 GB of RAM. Costs can skyrocket as you add more Dynos or get more powerful Dynos to scale your applications. For startups and SMBs, every dollar matters, and businesses may question the ROI of Heroku, especially when there are alternative solutions that offer a similar developer experience.
Here are some data points that show the significant savings you can get on App Platform:
| | DigitalOcean | Heroku |
|---|---|---|
| Small hobby projects | $5 per month | $7 per month |
| Prototype web service (512 MB RAM, dev DB) | $12 per month | $16 per month |
| Small production app (2 web service containers with 1 GB RAM, managed database with 4 GB RAM) | $84 per month | $150 per month |
| Production app with dedicated CPU (2 web service containers with dedicated CPU, managed database with 4 GB RAM and high availability) | $270 per month | $700 per month |
App Platform is built to provide builders with the utmost flexibility. We have two tiers for compute, Basic and Professional, and offer 10 plans ranging from 512 MB to 16 GB of RAM, with a healthy mix of shared and dedicated CPUs. There are fewer restrictions and you can power your app using any of the plans. This provides more granularity and flexibility when scaling your applications.
Heroku is inflexible and offers very few configurations out of the box. For example, the professional plan provides just two SKUs for Dynos - one with 2.5 GB and another with 14 GB, and nothing in between. There are also limitations around the number of Dynos you can have on an application and around combining Dynos from various tiers. This often makes it difficult to find the right configuration, leading to either under-powering your app or overpaying for infrastructure. The limited configurations also make scaling your apps harder as your business grows.
As your business matures, you might find that PaaS products no longer meet your needs, as your application may require you to have more control of your infrastructure. Because Heroku only offers a PaaS solution, you will have to move out of Heroku to utilize Infrastructure as a Service (IaaS) solutions. This migration can be time-consuming and difficult.
Using a PaaS provider, like DigitalOcean, that also offers other IaaS products can make the transition to a more complex setup much simpler. In addition to App Platform, DigitalOcean offers a comprehensive portfolio of products including Droplet virtual machines, DigitalOcean Managed Kubernetes, managed databases, serverless compute through DigitalOcean Functions, block and object storage, and networking products like load balancers and virtual private clouds. If you reach a stage where you need more control over your infrastructure, we offer an easy migration path from App Platform to other compute options like Droplets and DigitalOcean Kubernetes. We also provide migration support from our technical solution engineers and partners to reduce the migration effort.
Developers value Heroku for its simplicity, and at DigitalOcean, simplicity is at our core. App Platform provides an intuitive, visually rich experience to rapidly build, deploy, manage, and scale apps. You can deploy code directly from GitHub or GitLab repos, and get out-of-the-box support for popular languages and frameworks like Node.js, Python, Django, Go, PHP, and static sites. Check out this video to see App Platform in action:
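If you prefer the CLI to the control panel, deploying from a GitHub repo can be sketched with an app spec like the one below; the app name, repo, branch, and port are placeholders:

```bash
# Minimal App Platform app spec for a web service deployed from GitHub.
cat > app.yaml <<'EOF'
name: sample-web-app
services:
  - name: web
    github:
      repo: your-org/your-repo   # placeholder repository
      branch: main
      deploy_on_push: true       # redeploy automatically on every push
    http_port: 8080
EOF
doctl apps create --spec app.yaml
```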
In addition to App Platform, everything at DigitalOcean is built around simplicity and scalability. Simplicity is infused into our products, docs, pricing, billing, and support. This simplicity enables you to spend more time on your business and less on managing infrastructure. It makes it easy to onboard new developers, decreases training costs, and reduces the need for a large team of engineers to manage infrastructure.
Here’s what MyCast had to say about the benefits of App Platform:
“I don’t know much about maintaining servers, so the App Platform made it a lot easier to deploy and manage, and it saves me time from having to monitor my own Droplets and deploy to multiple places. It’s nice to push changes to Github and know it’s building and deploying in the background. It’s easy to scale up and down and great not to worry about the server-side maintenance.” - Billy Swift, Founder, myCast.io
A Total Economic Impact study for DigitalOcean by Forrester found that an organization experiences 50% time savings on infrastructure management, saving engineers thousands of hours. It also calculated savings of $300,000 due to increased productivity.
DigitalOcean’s sole focus is to empower entrepreneurs, startups, and SMBs by making cloud computing simple, so that they can create world-changing apps. Our products, pricing, and support are all designed to super serve the needs of startups and SMBs around the world. DigitalOcean is a big proponent of open source, and we encourage you to use your favorite open-source projects for building apps. There are no multi-year contracts or lock-ins, making it simple and risk-free to adopt the platform.
As you explore alternatives to Heroku, you may evaluate hyperscalers such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure. However, these hyperscalers are expensive, complex, and cater to the needs of large enterprises. You are force-fitted into a cloud that’s designed for someone else.
DigitalOcean is dedicated to providing the best PaaS solution in the market and continually updates App Platform to add requested features. Some recent updates include significantly improved build performance, the ability to securely connect your apps to Managed Databases using Trusted Sources, log forwarding to external providers such as Papertrail, Datadog, and Logtail for better analysis and troubleshooting, alerts and monitoring for events such as successful deployments and domain configuration, and the ability to easily add functions as components of your apps. You can check out the docs and take App Platform for a spin today. If you’d like to have a conversation about using DigitalOcean and App Platform in your business, please feel free to contact our sales team.
Hatch is an online founder program designed to help startups grow to scale. Cloud infrastructure can be one of the largest expenses facing these companies as they begin to grow. With Hatch, startups receive access to both DigitalOcean credit and a range of other resources like technical advice.
DigitalOcean Hatch has supported over 8,000 startups since the program was established in 2016. The program offers members up to $50,000 of DigitalOcean credit (the actual amount varies by partner organization), special events and swag, and opportunities for co-marketing with DigitalOcean. The program also offers various support services such as technical sessions, access to mentorship opportunities, solutions engineering, and founder-focused events. Members also receive special discounts on critical business services from leading SaaS providers, like Cloudflare, SendGrid/Twilio, Notion, Hubspot, and more.
Our goal with Hatch is to give back to the startup ecosystem and provide support to founders around the world so they can focus on building their businesses and not worry about managing tech infrastructure. Having come through the Techstars program ourselves, we know just how valuable this support network can be.
Our team sent a survey to Hatch participants in order to learn more about the Hatch community and how they use DigitalOcean. We received 688 responses from participants who were eager to share their experiences with the Hatch program and the DigitalOcean platform. We were excited to learn that the vast majority of the Hatch community is satisfied with their experiences and want to continue building on DigitalOcean.
Early-stage startups benefit from Hatch
We’ve made it our mission to help startups grow in the cloud at an early stage of their lifecycles. In fact, almost half of the startups that Hatch supports have five or fewer employees, and over three-fourths have 20 or fewer.
Looking at funding rounds raised by Hatch startups tells a similar story: two-thirds of our startups have raised a seed round or less (though we’re excited to see some of our Hatch alumni starting to raise series A, B, or even bigger rounds).
By providing startups with platform usage credits, partner discounts, and technical help, Hatch has been tailored to support earlier-stage businesses when they need support the most. Across our entire program, startups are so satisfied with Hatch that the vast majority of our participants choose to stay with DigitalOcean even after the program ends, with approximately 65% of Hatch users maintaining their infrastructure on our cloud.
Startups have embraced multi-cloud
We were curious about how startups used DigitalOcean, so we asked our Hatch community about their cloud infrastructure setups. Startups were asked if they used only DigitalOcean, hybrid infrastructure (DigitalOcean plus on-prem infrastructure), multi-cloud infrastructure (mirrored setups on DigitalOcean and elsewhere), or multiple infrastructure providers, including DigitalOcean. A third of Hatch startups use only DigitalOcean for their cloud infrastructure needs—more than any other cloud setup.
Hatch program participants plan to increase their cloud spending
Over half of our startups also planned on growing their cloud budget on DigitalOcean next year. We believe that this success reflects the power of our program and our partner ecosystem in allowing startups to scale up quickly and easily.
Finally, we were excited to see that of the startups who plan on reducing their cloud budget next year, 41%—a large plurality—are doing so because cost optimization on DigitalOcean has helped them cut costs.
Remote + Hybrid is here to stay
As our community continues to grow, we’re looking to provide content, events, and support around specific and relevant topics in the Hatch network. One important discovery we made is that most of our community is not ready to return to only in-person events—our startups indicated that they would prefer hybrid and virtual events to in-person events by a large margin.
There was also a mix of topics that startups wanted to learn more about, with startup fundraising and container/Docker technologies leading the pack, followed by DevOps, security, and newer technologies like serverless.
Our startups have been vocal about what they need to be successful. They are clear about what a partnership means in an intimate program. We are delighted to share the changes we made in response to their input. In 2022, we adapted our startup onboarding program to help founders quickly orient themselves to their cloud computing options. While many of our founders are ready from the start, a few founders need some assistance accelerating to production-level environments. Our technical assistance includes monthly office hours just for startup founders, direct engagement with our product teams, and solution architects to help founders overcome technical impediments to their business. We added assistance with live sessions on market intelligence, sales, and technical development. Additionally, we realized that the founders in our community love the help they get from other founders. The center of our community is the help they can offer each other. We want to reward this sense of community. We added a referral option to award credit to alumni that refer new founders to DigitalOcean.
We added support for bootstrapped startup companies that have funding from investors new to DigitalOcean’s startup partner community. Hatch is a fast-moving program: most startups are admitted within 10 business days, and they have the flexibility to start when it makes sense for them.
Because we’re always looking for ways to improve our program and our support for startups, we also asked users what they wanted to see in a startup program beyond what Hatch already provides. Here’s what we discovered our startups wanted to see:
Over the next few months, we’ll brainstorm how to improve on these program components. Stay tuned for what’s coming next to DigitalOcean Hatch!
Hatch is available for early-stage startups who are affiliated with an approved accelerator, incubator, or venture capital firm (full list here). Eligible startups must not have received previous promotional credits and must be new DigitalOcean customers.
Learn more about Hatch and fill out our quick application here.
Here’s what development teams can do now to prepare infrastructure for the holiday season and give their growing businesses the best chance of success.
Not everyone who comes to the site is going to view the homepage. In order to best simulate possible scenarios, you need to understand what your peak workloads are at any given time and where the traffic is distributed at those times. Look at web logs (from your edge services) for certain times of the day to determine when your peak commerce hours are and compare that to what the load is during other parts of the day. Figure out what the traffic mix looks like at peak periods and identify patterns and conversion rates along with the traffic distribution breakdown of the site.
For example, assume your website has the following pages:
/homepage
/dealpage
/about
/support
If you identify that peak hours for you are between 9-10 a.m., meaning that’s when the most page views happen for you in a given day, then you’ll want to map out the % of page views per page:
70% → /homepage
20% → /dealpage
2% → /about
8% → /support
You can identify similar patterns for API calls as well. The process is no different, but your sources may be different depending on how you log web vs. API traffic. When you’re ready to test, you’ll want to leverage this data to help you implement a synthetic load test that can exercise your website with a pattern that’s representative of your production traffic.
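One lightweight way to approximate that traffic mix is to weight concurrent load across endpoints. A sketch using the open source hey load generator, with the example distribution above and a placeholder hostname:

```bash
# Weighted synthetic load test with hey (github.com/rakyll/hey).
# Concurrency per endpoint roughly mirrors the observed page-view distribution.
BASE=https://www.example.com
hey -z 10m -c 70 "$BASE/homepage" &
hey -z 10m -c 20 "$BASE/dealpage" &
hey -z 10m -c 2  "$BASE/about"    &
hey -z 10m -c 8  "$BASE/support"  &
wait   # let all four runs finish, then compare the latency reports
```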
When you have a good idea of traffic patterns and distribution, run some tests. Take snapshots throughout the day and analyze the performance of the systems powering your runtime. Simulate what a customer is likely going to do with the traffic distribution you’ve collected, and ratchet up the volume of requests steadily. Teams should use these load tests to find out where the breakpoints and bottlenecks are within the full stack. When your team finds bottlenecks or breakpoints, prioritize fixes based on the effect they have on the end user. If you have timeouts for 1% of traffic, that’s a lower priority than something that will impact a more significant number of customers. Repeat this step until you’re at a point where you’re OK with the known limitations at a scale that’s going to satisfy your holiday season needs. Don’t forget to let your ops teams know you’re about to flood your systems too. They’ll appreciate the heads-up!
It’s one thing to know you’ll have increased traffic during the holiday season, but it’s another thing entirely to know how much. If you have a marketing team, find out what campaigns they’re running for the holiday season and what they anticipate traffic to be based on those campaigns. If there aren’t specific campaigns to prepare for, pull data from the previous year as a benchmark. Size up based on potential demand, factoring in normal daily traffic as well. If you aren’t sure what the anticipated load will be, it’s best to be prepared for 5-6x normal peak times. Bring that new capacity plan to your leadership teams to review and be on the same page regarding what works for you (whether it should be more or less).
From there, it’s a matter of scale. Ensure you’re modeling as close to production as possible, and then continue to scale while making sure nothing breaks. Don’t wait until the last minute to add new servers. Set up your infrastructure to handle peak loads and let it sit. Keep testing it throughout the week, running tests every few days until about a week before the first anticipated traffic increase. One week out, monitor changes in production. Be wary of any last-minute changes that will change the capacity profile.
Identify the pain points in your architecture and build redundancy into your stack. Find out where you can implement circuit breakers that prevent failures from taking down your entire application or site. Build monitors for the high-traffic season (or the day of a special deal) to keep track of trigger points that were recognized as potential problems through the previous testing.
For details on how to prepare your infrastructure for large traffic spikes and how to balance speed, security, and scalability to increase profits, check out this tech talk with Austin Black, Solutions Engineer at DigitalOcean, and Roxana Ciobanu, Chief Technical Officer at Bunnyshell.
DigitalOcean has many solutions for growing eCommerce businesses. Tools like DigitalOcean Uptime can give your team added confidence during this holiday season, and our support team is always ready to help. From Droplet virtual machines to App Platform, our Platform as a Service offering, Managed Kubernetes, and Load Balancers, we provide the tools you need to build and grow your applications. To sign up for a DigitalOcean account, contact sales.
Newsweek’s Most Loved Workplaces List surveys employees across the country on select criteria such as the level of trust, respect, collaboration, care, and recognition that they experience at their companies. These scores culminate in an overall “Love Index” score for each company, as measured in collaboration with the Best Practice Institute (BPI). This year, more than 1.4 million employees across the U.S., from small, medium, and large companies alike, participated.
“The companies on this list represent the best at placing love at the center of their employee’s experience,” said Louis Carter, CEO of the Best Practice Institute and Most Loved Workplace Founder. “The number of applications this year and analysis of survey data reinforces our original findings that love is the strongest predictor of the strength of a company’s culture, employee engagement, and satisfaction.”
DigitalOcean received particularly high sentiment scores in categories such as trust, inclusivity, and our employees’ overall vision of the future.
Trust (Score: 4.7/5)
DigitalOcean cultivates trust among employees through a commitment to transparency and open communication at all levels. DigitalOcean employees are encouraged to ask questions of executives at company-wide All-Hands, and individual departments will also host regular All-Hands and AUAs (Ask-Us-Anythings) to ensure teams are aligned with overall business and organizational goals.
Inclusivity (Score: 4.6/5)
At DigitalOcean, one of our core values is that “our community is bigger than us”. This value ensures that our teams work toward inclusivity not only among employees, but across our larger developer, entrepreneurial, and open source communities as well. Internal events such as those put on by DigitalOcean’s ERGs help build inclusive communities internally, while externally, employees are encouraged to get involved with the organizations and nonprofits of DO Impact via volunteer opportunities, or contribute to open source events such as our yearly Hacktoberfest.
Positive Vision of the Future (Score: 4.6/5)
Employees also reported having a strong “positive vision of the future” at DigitalOcean. Since helping builders chase their dreams is at the core of our work at DigitalOcean, our global employee base thrives on fostering innovation and inspiration. The spirit of helping entrepreneurs and small businesses achieve their goals resonates with our employee experience internally as well.
According to the Newsweek survey, 95.8% of DigitalOcean employees would recommend their workplace to friends and family. If you’re interested in exploring a career at DigitalOcean, check out our current openings here.
On November 15th and 16th, you’re invited to join us as experts from DigitalOcean’s ecosystem share practical takeaways and personal tales on achieving scale with simplicity.
Building bridges between business goals and developer happiness
Our speakers have first-hand experience building and scaling complex applications. Through keynotes and technical workshops, they’ll provide the tools you need to simplify your infrastructure, smooth out processes, and create a happier development team.
deploy will cover a variety of topics, including architecture for different stages of growth, implementing Kubernetes, migrating legacy software, and building for scale. In addition to technical topics, DigitalOcean’s leaders will discuss the challenges global founders, developers, and builders face on the journey to create helpful tools and systems, scaling during uncertain times, and reducing burnout. Finally, we’ll share real stories from other builders, discussing their startup journeys and how scrappy developer teams use DigitalOcean to meet their business goals.
Connect with the builder community
Event attendees will have an opportunity to connect and collaborate with builders around the world on the DO Community Discord server. Chat in real-time with attendees, speakers, and DigitalOcean executives to dive deeper into business challenges and participate in workshops and discussion panels.
We can’t wait to connect with everyone at the event!
Supercharge your app builds
The DigitalOcean App Platform now builds apps in nearly half the time and also supports local builds that are even faster than building remotely. Local builds often take seconds and easily integrate with your CI/CD pipeline. For further automation, App Platform can redeploy your app when you push a new image to the DigitalOcean Container Registry.
By using trusted sources for connection pools, you can now further secure your PostgreSQL databases and easily connect from a specific app. Learn more about PostgreSQL connection pools in the App Platform.
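For instance, restricting a database to a single app might look like this sketch; the rule syntax is an assumption based on the doctl databases docs, and the IDs are placeholders:

```bash
# Sketch: allow only a specific App Platform app to reach the database.
doctl databases firewalls append <database-id> --rule app:<app-id>
```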
Introducing DigitalOcean Uptime
Empower yourself with DigitalOcean Uptime, which enables you to determine if your services are available and responsive from an external user’s perspective. DigitalOcean Uptime alerts you when your assets are slow, down, or vulnerable to SSL attacks. One free check is available per month for any DigitalOcean account.
DigitalOcean Functions now supports multiple namespaces so you can isolate and organize your functions by project, environment, region, or any other grouping you need to develop.
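A quick sketch of one namespace per environment with doctl; the subcommand and flag names are assumptions based on the serverless docs, and the labels are placeholders:

```bash
# Sketch: create separate namespaces for staging and production.
doctl serverless namespaces create --label staging --region nyc1
doctl serverless namespaces create --label production --region nyc1
doctl serverless connect staging   # point subsequent deploys at staging
```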
We added a CPU-Optimized Droplet plan with 48 vCPUs of dedicated computing power and 96 GB of memory. The new 48 vCPU Droplets are also available for DigitalOcean Kubernetes nodes. You can also resize your existing Droplets to this node plan.
KubeCon is quickly approaching, and we have exciting announcements coming. Don’t miss it! Find our booth here or book an appointment now—you may even qualify for two free months of DigitalOcean.
Do you have an idea for improving our products? Submit your feedback and vote on other user suggestions on our ideas page. If you have questions, ask them here.
Happy coding!
Ivan Tarin
Sr. Product Marketing Manager
You don’t have to be able to write code to participate in Hacktoberfest. Professionals from all backgrounds are integral to the advancement of open source projects, and non-technical skills can be used in a variety of ways. This year, we’re making it easier to contribute in areas that require some technical experience or none at all.
Some of the ways non-technical individuals can contribute to open source projects are:
In 2020, we discovered that the popularity of Hacktoberfest over the years and the excitement of earning a limited-edition Hacktoberfest t-shirt resulted in low-quality pull/merge requests (PR/MRs) as participants chased the t-shirt without thinking about their actions. These actions resulted in more work for maintainers and a poorer experience for the open source community. In response, we immediately tightened our rules for submissions during the 2020 event and were more consistent with enforcement, successfully ensuring higher-quality PR/MRs and reducing the number of spam complaints. We kept those same rules in 2021 and will be doing the same this year, continually refining them based on feedback from the community to ensure Hacktoberfest is a positive experience.
As part of our continued dedication to providing a high-quality, helpful experience that brings the open source community together, we have the following rules for this year:
Hacktoberfest is open to anyone who wants to participate, whether you’ve joined us for all nine years or you’re new to contributing to open source. To complete Hacktoberfest, you must contribute four accepted pull requests or merge requests to opted-in repos on GitHub or GitLab, but we encourage participation at all levels: you can also take part by completing a single PR/MR, making a donation to your favorite open source project, or organizing or attending a virtual event. Participants (maintainers and contributors) who complete Hacktoberfest can choose one of two prizes: a tree planted in their name or the Hacktoberfest 2022 t-shirt, while supplies last. See this page for complete rules.
Hacktoberfest couldn’t happen without the support and participation of the open source community, especially our sponsoring partners. This year we are joined by returning partner Appwrite and are welcoming Docker, Novu, RapidAPI and Devtron. Our partners have some great Hacktoberfest activities you can participate in. Come meet them and learn more! Join our Hacktoberfest livestream events.
We can’t wait to see all the ways you choose to participate this year!
Happy hacking!
Phoebe Quincy, Senior Community Relations Manager
At DigitalOcean, we are always looking for ways to enhance the developer experience. We are excited to announce that we have made major updates to DigitalOcean App Platform, our fully managed Platform as a Service (PaaS) solution, that will significantly increase the speed and flexibility of deploying apps.
DigitalOcean loves open source. Many of the libraries and frameworks we use at DigitalOcean are open source, and we strive to support initiatives that help the open source community thrive. So when it came to improving build performance of App Platform, we decided to leverage Kata Containers, an open source container runtime, building lightweight virtual machines that seamlessly plug into the containers ecosystem.
As you may know, App Platform builds and runs apps in containers. Traditional Docker-style containers offer considerable security capabilities but do not include the virtual-machine-like isolation used in products such as DigitalOcean Droplets. To enhance container security, App Platform has in the past used gVisor to isolate the shared Linux kernel during system calls. However, this implementation came with a build-performance cost due to system call latencies, especially when performing file I/O.
Our latest implementation of secure containers for App Platform uses Kata Containers, which results in a more secure container runtime with lightweight virtual machines that feel and perform like containers but provide stronger workload isolation using hardware virtualization technology as a second layer of defense. With Kata Containers, applications on App Platform now build significantly faster in the cloud. Based on our internal benchmark tests, some apps have seen over a 50% reduction in build time for remote builds on App Platform.
Another new feature in our latest update is local builds, which is the first of several initiatives under development to create a holistic local development experience within App Platform. Local builds give you the flexibility of building your code on your local machine or server. They deliver a multitude of benefits to developers:
Curious about how local builds work? Check out this video.
We have made significant improvements to App Platform in the past year. The most popular updates include the ability to securely connect your apps to Managed Databases using Trusted Sources, log forwarding to external providers such as Papertrail, Datadog, and Logtail for better analysis and troubleshooting, alerts and monitoring for events such as successful deployments and domain configuration, the ability to easily add functions as components of your apps, and rollbacks to previous deployments. Developers can now enable automatic deployment when using DigitalOcean Container Registry (DOCR). This allows them to deploy apps from a tag (e.g., latest, production) and makes it easier to ensure the app is always up to date when new images are pushed to the same tag, without having to update the AppSpec for every new version. With significant improvements to build performance, there has never been a better time to try App Platform.
If you’d like to have a conversation about using DigitalOcean and App Platform in your business, please feel free to contact our sales team.
The first day of my internship started with a couple of onboarding sessions where I met the People team along with the other summer interns. Following the sessions, I was welcomed by my manager, Mavis, my mentor, Greg, and the rest of the App Platform team. I was invited to all of the team meetings, where I was able to ask questions about the different tasks my team was working on. This helped me get a better understanding of the work my team does and made me feel more comfortable speaking up. Throughout the onboarding process, I noticed DigitalOcean’s values were apparent everywhere.
DigitalOcean’s mission is to simplify cloud computing so builders can spend more time creating software that changes the world. As DigitalOcean rolls out a serverless cloud offering, it’s important for developers to have a useful standard library of off-the-shelf serverless functions so that developers can focus on quickly building applications. That’s where my project came in! Over the summer, I worked on building three serverless functions as well as an app composed of those functions to showcase the serverless functions.
DigitalOcean Functions lets you deploy code that performs the same tasks as a traditional application without the requirement of setting up a server to manage the requests. These functions run on demand without the need to manage any infrastructure. Users are able to test on their local machine with their own tools, deploy the code using doctl or the DigitalOcean API, and deploy to App Platform with no servers required.
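A typical doctl workflow is sketched below; the project and function names are placeholders:

```bash
# Install serverless support once, then connect, deploy, and invoke.
doctl serverless install
doctl serverless connect
doctl serverless deploy my-project                  # reads my-project/project.yml
doctl serverless functions invoke my-project/hello -p name:world
```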
Off-the-shelf functions
My summer project was building three off-the-shelf functions: a Twilio function that sends an SMS between Twilio-verified phone numbers, a SendGrid function that sends an email to addresses with or without DMARC, and a presigned URL function that generates a presigned URL to upload a file to, or download a file from, a DigitalOcean Space. To showcase the serverless functions, I created a Slack bot that ties the three functions together. The Slack bot takes the status code and body of the response returned by each function and sends a Slack attachment back to the channel the bot is in.
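To illustrate the core idea behind the third function, here is a hedged Go sketch of generating a presigned URL for a Space. Spaces is S3-compatible, so this uses the AWS SDK; the endpoint, Space name, object key, and credentials are all placeholders, and this is not the actual function from the project.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Spaces speaks the S3 API, so point the SDK at a Spaces endpoint.
	sess := session.Must(session.NewSession(&aws.Config{
		Endpoint:    aws.String("https://nyc3.digitaloceanspaces.com"),
		Region:      aws.String("us-east-1"), // required by the SDK; Spaces ignores it
		Credentials: credentials.NewStaticCredentials("SPACES_KEY", "SPACES_SECRET", ""),
	}))
	svc := s3.New(sess)

	// Build a GET request for the object, then presign it for 15 minutes.
	req, _ := svc.GetObjectRequest(&s3.GetObjectInput{
		Bucket: aws.String("my-space"),
		Key:    aws.String("reports/summary.pdf"),
	})
	url, err := req.Presign(15 * time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(url) // anyone with this URL can download the object until it expires
}
```

A presigned PUT request for uploads works the same way, starting from PutObjectRequest instead.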
The three serverless functions can be used to create apps in the App Platform Control Panel shown below. You can also find a step-by-step tutorial on how to create an app in the App Platform Quickstart.
The same functions can also be found on the Functions Control Panel as shown below. The Functions Quickstart can provide more information on how to deploy a function.
Here are two other locations where you can learn more about the three serverless functions:
Slackbot
My favorite part of the project was creating an app on App Platform that consisted of the Slack bot as a worker and the three functions as function components. The Slack bot was built to see whether users are able to use the serverless functions together. An app manifest for Slack is included in the GitHub repo for users who would like to create a Slack app that works with the functions. More information about how to create a Slack app, the requirements, and the deployment instructions can be found in the GitHub repo. Below, I included a preview of the Slack bot returning a success status code and body after sending a Twilio SMS and a SendGrid email.
One of DigitalOcean’s values that I love is “Love is at our core”. Under that value, a key point is to express appreciation and gratitude to others. During my weekly meetings with the App Platform team, we take the first ten minutes to give kudos to people that have helped us during that week. I’d like to take this time to give kudos to a few people and teams that have made a powerful impact on me during my internship. I’d like to give kudos to Mavis, Greg, the App Platform team, the Functions team, the Serverless BU, the People team, and DigitalOcean for giving me this wonderful opportunity to continue growing as an engineer and for teaching me that Love really is at our Core.
Interested in learning more about what it’s like to work at DigitalOcean? Check out a list of benefits and open positions on our careers page.
Our 2021 workforce data demonstrates progress against some of these goals and identifies areas for continued improvement. Our new hires in the United States are more diverse than employees hired prior to 2021, and importantly, a higher percentage of employees self-identified in 2021 than in 2020, demonstrating that employees are comfortable sharing information about their identities. However, attrition among female employees was higher than among male employees in 2021, and as we see an increasingly competitive job market for female talent, we will look to create even more opportunities for women.
In the past year, we have also expanded our internal efforts to meet our DEIB goals, ensuring employees at all levels of the company are accountable to deliver on our commitments. We have hired a diversity program manager, expanded our recruiting efforts, and updated our interviewing guidelines and training. All executives including myself have specific goals around DEIB for 2022. Finally, I have been pleased to see the progress of our employee resource groups (ERGs). These groups have hosted multiple events in the past year, including programs surrounding Pride, salary equity discussions, volunteering events, and more.
To reach our full potential as a company, we must have a transparent workplace that values diverse people and viewpoints. We recognize that our DEIB goals are aggressive and know we have a long way to go to achieve them, but strongly believe that progress isn’t made without ambitious goals. I look forward to updating you again next year.
You can read the full DEIB report on this page.
It’s hard to find personalized music recommendations on major music streaming platforms. Listeners often find themselves swiping through randomized playlists hoping to find their new favorite song, only to be disappointed by the tedious experience. That’s why Discz Music has been such a hit. Hailing from Y Combinator, the app allows listeners to quickly and easily find new music by swiping through 30-second snippets of songs. If you like a song, swipe right, and it will be saved to a playlist for you to enjoy. The approach has won over sought-after demographics like Gen Z; since its launch, Discz Music has gone viral on TikTok and shot to the top of Apple’s App Store charts.
As a startup, Discz Music needs to move fast, be able to pivot quickly, test new ideas, and ultimately become profitable. They partnered with DigitalOcean’s Hatch program to help them achieve their goals.
DigitalOcean’s Hatch program helps startups get up and running quickly. Hatch provides infrastructure credits that helped Discz build their product, experiment with new features, iterate using customer feedback, and more. Using credits helped Discz find product-market fit and then gave them the opportunity to scale as they found success. The team at Discz also appreciates DigitalOcean’s transparent approach to pricing and the additional support they receive by being a part of the Hatch community:
“Quickly after launch, we scaled and maxed out our limits… so someone on the Hatch [team] reached out just to make sure we were accommodated before we even had to find the right contact. It was really nice to feel taken care of, especially at such a critical time when things are on fire, services are melting, because there’s so much demand.” —Michelle Yin, Discz Music’s co-founder and CTO
Discz Music is currently utilizing three of DigitalOcean’s core product lines to build their product.
First, we love the great product idea. As a team full of music lovers, we understand how hard it can be to find new music. We are huge fans of being able to listen to snippets of songs to find songs that we actually enjoy.
We also love their partnership. Discz Music makes the most of Hatch’s resources, finding what they need in DigitalOcean’s product documentation and 7,000+ tutorials, or utilizing DigitalOcean’s support when they need a little extra help.
Finally, we appreciate their commitment to growth and community. As a participant in Y Combinator’s W22 cohort, Discz Music has gained a lot of wisdom about building a successful startup. Michelle Yin, Discz Music’s co-founder and CTO, shared some key takeaways that she learned while building the growing business:
Our global startup program Hatch is focused on providing founders with the speed, flexibility, and power they need to scale their digital infrastructure. Apply to the Hatch program to grow your startup today.
Powerful DigitalOcean Droplets with 48 vCPUs
We added a CPU-Optimized Droplet plan with 48 vCPUs of dedicated computing power and 96 GB of memory. The new 48 vCPU Droplets are also available as DigitalOcean Kubernetes nodes, and you can resize your existing Droplets to this plan.
Empower yourself with DigitalOcean Uptime to determine if resources are available and responsive from an external user’s perspective. DigitalOcean Uptime alerts you when your assets are slow, down, or vulnerable to SSL attacks, and it’s free for new and existing DigitalOcean accounts.
Watch as Chris Sevilleja, Senior Developer Advocate II creates an uptime check in a few seconds and monitors an endpoint.
Access the latest Buildpacks automatically
You can now easily upgrade your App Platform Buildpacks to their latest version to make sure all of your apps and components are up to date.
Bypass PostgreSQL’s built-in performance limits
Mitigate potential performance issues from PostgreSQL’s built-in connection limits and memory requirements by using connection pools. For detailed instructions see our reference page.
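As a rough sketch of what connecting through a pool looks like from application code, the Go snippet below assumes a pool that exposes its own host and port and uses the pool name as the database name; every value in the DSN is a placeholder, and the real connection details come from your database’s control panel.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // PostgreSQL driver
)

func main() {
	// Hypothetical DSN: the pool is addressed like a database of its own,
	// so the application code is unchanged apart from the connection string.
	dsn := "postgres://doadmin:password@db-pool-host:25061/mypool?sslmode=require"
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Verify the pool accepts connections before serving traffic.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	log.Println("connected through the connection pool")
}
```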
Debian 9 and Ubuntu 21.10 end of life
Debian 9 and Ubuntu 21.10 reached end of life and are deprecated as of August 10, 2022. These images will remain accessible for Droplet creation via the API for 30 days after the initial deprecation. After full deprecation, you can create images from a snapshot of a Droplet with that version or from a custom image.
Reminder: Updated API Tokens with new management features
Here’s a reminder to take advantage of the updated DigitalOcean API tokens. The tokens have many new features, including GitHub secret scanning, prefixed tokens, “last used at” timestamps, and expiring tokens. Generate a new API token and revoke old tokens to ensure you can use the new features.
Do you have an idea for improving our products? Submit your feedback and vote on other user suggestions on our ideas page. If you have questions, ask them here.
Happy coding!
Ivan Tarin
Sr. Product Marketing Manager
Website latency is the amount of time it takes for data to travel from one point on a network to another, and it is a critical metric for production applications. The longer your website or application takes to load, the more opportunities your users have to go somewhere else. Empower yourself with DigitalOcean Uptime to determine if resources are available and responsive from an external user’s perspective.
DigitalOcean Uptime allows you to check your services no matter the cloud provider: you can monitor almost any IP or endpoint after a quick and easy setup. Simply paste your endpoint to start checking.
Watch as Chris Sevilleja, Senior Developer Advocate II creates an uptime check in a few seconds and monitors an endpoint.
Slow websites can be due to a range of causes, from needing more computing power, to sites being under attack, or dependencies crashing. By setting an uptime alert you can protect your business during incidents that are common during your busiest times.
You can create alerts for up to four regions, with latency thresholds down to 1 millisecond (ms), and set how long before you’re alerted. With the default configuration, you’ll receive an alert after two minutes of elevated latency, by email or in a chosen Slack channel, with relevant metrics so you can find the cause.
DigitalOcean Uptime also includes a regional latency graph, which visualizes any slowdown from the last hour up to the previous 90 days. This can reveal unexpected insights into your app over time and is easy to read: a baseline shows normal use, and spikes appear when latency is at its worst. The regional latency graph can show you latency patterns at specific intervals that tell you a lot about your infrastructure, product, and users.
Additionally, with DigitalOcean Uptime, you can track the validity of your SSL certificate and create an alert that tells you before certificates lapse, so you can update them and avoid being vulnerable to attack.
"As a solo-developer of a popular photo platform it’s crucial for me to focus on the users and their needs while Uptime helps me to react extremely fast when services don’t work as expected.”—Manuel, Founder of Locationscout.net
“We use DigitalOcean Uptime to monitor our services and make sure they’re all operational. DigitalOcean Uptime is effective, reliable and costs only a fraction compared to similar services. As a long-time DigitalOcean user, all we can say is thank you! It’s always great to see DigitalOcean developing new helpful tools for their clients. This review is an honest one. We love DigitalOcean Uptime!” —René Hermenau, Founder of wp-staging.com
New and existing DigitalOcean users get one uptime check for free every month! Any additional uptime checks are $1 each per month, keeping the price simple and competitive.
All uptime checks monitor performance at 1-minute intervals. For detailed information on DigitalOcean Uptime, such as editing regions and setting up alerts, refer to our documentation.
Happy Coding,
Daniel Levy
Senior Product Manager II, Insights/Experimentation
The issues we addressed comprised three different areas:
The Reserved IP logic was scattered throughout our architecture, from the product level down to the hypervisors where events are scheduled. The scattered logic, coupled with a multitude of microservices in between, resulted in a very distributed workflow. This caused many bugs and customer reports, adding to operational issues for our customers, and these continued problems left our team consistently putting out fires.
The legacy Reserved IP tech stack (Rails apps, a MySQL cluster, Perl running on the hypervisors) made feature development and improvements slow. There was a lack of fine-grained observability and independent scalability, as well as friction for integrations, which caused internal product teams to make external calls for Reserved IP operations through the customer-facing Public API rather than through our internal system.
Reserved IPs (FLIPs) allow customers to have a dynamic IP address that they can easily reserve to their account, assign to a Droplet, reassign to a different Droplet in the same data center, and ultimately release back into our pool of available Reserved IPs. This enables our customers to create a more highly available system architecture and minimize downtime.
As an example, imagine a Reserved IP assigned to a Droplet that is running a load balancer which is fielding all requests to your backend system. With some scripting and config, you could arrange for a secondary load balancer to run in a passive setting while sending health check requests back and forth with the primary load balancer. If the primary load balancer ever fails its health check, you could easily failover to the secondary and reroute traffic by assigning the Reserved IP to it.
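The failover script itself can be very small. Here is an illustrative Go sketch that reassigns a Reserved IP to the standby Droplet through the public API’s reserved_ips actions endpoint; the IP address, Droplet ID, and token handling are placeholders, and a real setup would wrap this call in the health-check loop described above.

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
	"os"
)

// failover reassigns a Reserved IP to the standby Droplet via the public API.
// The request shape follows the documented /v2/reserved_ips actions endpoint.
func failover(reservedIP string, standbyDropletID int) error {
	body := []byte(fmt.Sprintf(`{"type":"assign","droplet_id":%d}`, standbyDropletID))
	url := fmt.Sprintf("https://api.digitalocean.com/v2/reserved_ips/%s/actions", reservedIP)
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+os.Getenv("DIGITALOCEAN_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("assign failed: %s", resp.Status)
	}
	return nil
}

func main() {
	// Hypothetical values; call this when the primary fails its health check.
	if err := failover("203.0.113.10", 12345678); err != nil {
		log.Fatal(err)
	}
}
```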
In DigitalOcean’s internal system, the logic for managing Reserved IPs was located in a pair of legacy Rails applications: one for our web UI, called Cloud (a user’s account page lives at cloud.digitalocean.com), and another for our Public API, called API. Both of these apps contained some shared logic for handling customer requests for Reserved IPs, along with some nuanced differences. The shared logic included updating the state of a Reserved IP and inserting events in the shared MySQL cluster used by the majority of our internal services, as well as emitting events to a Kafka cluster to update the billing state of a user’s Reserved IP.
This architecture worked well for many years, but as time went on it became apparent that there were several issues we needed to solve:
With these problems to solve, our team scoped and designed a migration project to build a new set of microservices able to handle all of the Reserved IP logic that lived in the Rails apps.
After several iterations of our proposed design for the new architecture, we settled on introducing two Go microservices that would have a clear separation of concerns between logic needed to manage the state changes of Reserved IPs and logic needed to handle the user request, gather information from other internal services, and craft the response back to the user.
The first microservice is an orchestrator service that manages the Reserved IP state whenever a user reserves an IP, assigns it to one of their Droplets, unassigns it from a Droplet, or releases it from their account. This state management involves three key components.
The second microservice is an aggregator. Its responsibility in the stack is to receive the incoming HTTP request from a user, parse the request and data, make gRPC requests to any other internal services needed to retrieve information on the Reserved IP, and then package the data from these responses into an HTTP response. Thus, it “aggregates” all the data needed in the response to the user.
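To make the aggregator pattern concrete, here is an illustrative Go sketch, not our actual service code: stub functions stand in for the gRPC calls to internal services, and a single HTTP handler composes their results into one response. It assumes Go 1.22+ for the route patterns.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// reservedIP is a hypothetical response shape assembled by the aggregator.
type reservedIP struct {
	IP      string `json:"ip"`
	Droplet any    `json:"droplet,omitempty"`
	Region  string `json:"region"`
}

// fetchIPState and fetchDroplet stand in for gRPC calls to the internal
// services that own each piece of data.
func fetchIPState(ip string) reservedIP { return reservedIP{IP: ip, Region: "nyc3"} }
func fetchDroplet(ip string) any        { return map[string]any{"id": 12345678} }

func handleGetReservedIP(w http.ResponseWriter, r *http.Request) {
	ip := r.PathValue("ip")
	// Aggregate: gather each fragment from its owning service, then package
	// everything into a single HTTP response for the user.
	out := fetchIPState(ip)
	out.Droplet = fetchDroplet(ip)
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]reservedIP{"reserved_ip": out})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("GET /v2/reserved_ips/{ip}", handleGetReservedIP)
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```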
At a high level, these two microservices handle the same responsibilities as the legacy Rails applications, but with some noticeable improvements.
Here’s a high-level diagram of how these two microservices fit into our larger system:
The reward was truly worth the effort, and it required a lot of care in planning, design, development, and rollout to production.
Given the scale of this project, our team took a step-by-step approach to minimize any impact to our customers. The initial phase was largely spent on understanding the legacy Rails applications, including its design, API, common failure modes, and integrations with other services in our system. The time we spent here was crucial to establishing a firm foundation for the rest of the project, given that the legacy applications hadn’t been actively maintained by a team for a lengthy period of time and existing documentation was minimal. Taking the time to explore and document our learnings before considering the design of our new architecture ensured that we would take into account various edge cases, user behavior/expectations, current performance metrics, and quirks when we began development.
Once we had mapped out the existing code paths for the various Reserved IP operations, we began the development of our new architecture using a cyclical process that we followed for each code path: design, development, testing, and rollout.
In total, we had 18 code paths that needed to migrate to our new architecture. While it may look time-consuming, we extracted many benefits from designing, developing, testing, and rolling out each code path individually because it was then easier to uncover and address any bugs we found. This process also ensured customer impact would be minimal with each migration.
An important part of our process was the use of a “feature flipper” to control the amount of traffic that was directed to our services. A feature flipper can be thought of as a gate or filter for requests entering certain code paths in a system. You can use a feature flipper to completely block any requests from exercising a code path and then, with a small config change, remove the block incrementally or altogether.
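As a concrete illustration of the concept (not the Edge Gateway’s actual implementation), the sketch below gates users onto a new code path via an explicit allow-list plus a percentage rollout, hashing each user ID so the same user is always bucketed consistently.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// flipper decides whether a request takes the new code path.
type flipper struct {
	allowedUserIDs map[int]bool // explicit allow-list (e.g. team members)
	percent        uint32       // share of all users let through, 0-100
}

func (f flipper) useNewPath(userID int) bool {
	if f.allowedUserIDs[userID] {
		return true
	}
	// Hash the user ID so each user lands in a stable bucket from 0-99;
	// raising percent moves more buckets onto the new path.
	h := fnv.New32a()
	fmt.Fprintf(h, "%d", userID)
	return h.Sum32()%100 < f.percent
}

func main() {
	f := flipper{allowedUserIDs: map[int]bool{42: true}, percent: 10}
	for _, id := range []int{42, 7, 1001} {
		fmt.Printf("user %d -> new path: %v\n", id, f.useNewPath(id))
	}
}
```

Consistent bucketing matters: if a user flipped between code paths on every request, intermittent bugs would be much harder to attribute to either stack.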
At DigitalOcean, we have feature flippers built into our Edge Gateway, a service that receives all external traffic sent to our system, routes it to the correct internal services, and then returns the response to the user that sent each request. It’s similar in concept to an API gateway.
With a little configuration, it’s easy for us to define a feature flipper in the Edge Gateway that allows us to dynamically change the amount of user traffic redirected away from the legacy Rails apps and toward our new microservices stack. We can turn a flipper off entirely, enable it only for specific user IDs, or ramp it up to a growing share of all traffic.
Whenever we were ready to test a new code path in production, we simply enabled just the user IDs of our team members, went through our test plan, and then gradually enabled an increasing number of users each day to exercise our new stack while also moving them off of using our legacy Rails apps.
These feature flippers also enabled us to have a faster response in the case of any problems discovered with our new code paths. Instead of needing to perform a “rollback” by deploying an older version of our code, we could simply turn the feature flipper off, and then all users would go back to using the established legacy apps. This significantly reduced any downtime that our customers experienced and provided a fast mitigation strategy that our operations team could perform without needing to page our on-call team member.
After several months of work, our team completed the migration project and routed 100% of user traffic for Reserved IP operations to the new architecture. The immediate impact on our metrics was a dramatic 4-10x decrease in our response times, which directly resulted in a faster user experience. We also noticed a decrease in our internal error rates, as our new architecture handled errors more gracefully and allowed for retrying internal operations that could transiently fail.
Aside from improvements to our metrics, our new architecture also improved the overall performance of other products that use Reserved IPs in their underlying architecture. The internal services managing these products used to make requests that traveled out of our internal system and through our Public API, which added to the overall latency of their operations. With our new architecture that provided a gRPC API for internal services, these other services could switch to calling this API directly, which cut their response times in half.
The new architecture improved the reliability and scalability of the Reserved IP stack in our system as a consequence of decoupling the legacy logic into two microservices that could be scaled independently. We also implemented techniques to gracefully handle internal errors that might be transient and retry them using exponential backoff. This made our system more robust in the face of any hiccups that the system might experience day-to-day.
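Retry with exponential backoff is a standard pattern; a minimal Go sketch (with made-up timings, not our production code) looks like this:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs op up to maxAttempts times, doubling the wait between attempts
// and adding jitter so many retrying clients don't all fire at once.
func retry(maxAttempts int, base time.Duration, op func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = op(); err == nil {
			return nil
		}
		wait := base << attempt                         // 100ms, 200ms, 400ms, ...
		wait += time.Duration(rand.Int63n(int64(base))) // jitter
		time.Sleep(wait)
	}
	return fmt.Errorf("after %d attempts: %w", maxAttempts, err)
}

func main() {
	calls := 0
	err := retry(5, 100*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("transient failure") // e.g. a flaky internal call
		}
		return nil
	})
	fmt.Println("calls:", calls, "err:", err)
}
```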
Finally, the migration from Rails to Go led to a boost in our team’s developer productivity. Most members were more experienced with Go and were able to leverage existing tools and patterns that weren’t applicable to the Rails apps. This meant that we were able to address customer issues, bugs, and performance fixes more quickly and efficiently than before.
This is one of the larger projects our team has taken on and took several months to complete. Along the way, we encountered several challenges that provided valuable learnings for future projects.
Put more time upfront in discovery and documentation
One of the biggest challenges we faced with this project was the lack of internal documentation on Reserved IP operations, their dependencies on other internal services and products, edge cases in user requests to the API, and more. This lack of documentation, combined with institutional knowledge lost as engineers left the company over the years, led us to spend a lot of time upfront on discovery and writing documentation. We needed to know all of these details in order to properly design, develop, and test the new code paths without any regressions in existing operations or user experience. While discovery and documentation might not be the most “fun” part of the process, it ended up being one of the most valuable: the knowledge gathered during this time informed our system requirements and final designs, and it saved time and development pain in the later stages of the project.
Strive to have an exhaustive test plan
When it comes to migrating an existing product, it’s vital to ensure that you have a test plan to cover all of the known “happy path” cases, common failure cases, and edge cases to gain confidence that your new architecture supports the same features and use cases as before. The “happy path” cases cover successful executions of the code and ensure that existing functionality still works between the legacy and new architectures. The common failure cases cover any requests/inputs that will result in the system returning a known error. It’s important to preserve these common failure cases as users of the system will depend on them just as much as your happy path cases. Lastly, the edge cases are tests that cover any possible requests that might appear strange or unlikely but could still occur and negatively impact your system. While it may seem tedious and unnecessary to create such a rigorous test plan, the payoff is immense in the amount of time and customer impact saved from catching bugs ahead of time before they land in production.
Capture important metrics before and after
It’s important to know what the vital metrics for your service/project are and which ones you’re expecting will improve by the completion of your work. This will allow you to make data-driven decisions so that you can spend design and development time on the areas that will make the biggest impact. Additionally, these metrics will provide valuable feedback on what improvements were made to the system. Unless you have datasets to compare against, you can’t state with confidence what improvements were a result of which efforts in your work, i.e. “Completing X led to Y results”. Lastly, having this data on hand also enables you to share your achievements across the wider organization to show that the project was a success and worth the resources spent on it. You can also use this data in conference talks or online articles that detail your work and the improvements it made to your company’s system and customers, which can have an incredibly positive impact on your career.
Preserve the existing API
When working on a migration project that exposes an API used by internal services and/or customers, it’s of the utmost importance that you preserve the existing API as much as possible. Performing breaking changes without properly thinking them through, communicating them to your users ahead of time, or maintaining backwards API compatibility will lead to a poor customer experience that should be avoided. There is certainly a time and place for public/external API changes, but coupling them with a migration between architectures is very risky. That said, internal API changes, i.e., endpoints in your service that are only used by other services in your internal system, are more feasible, as it’s much easier and faster to get other teams to update their code to use a modified or new API than it is to get customers to do so.
Our team successfully migrated the tech stack for our Reserved IP product from our legacy Rails applications to a new set of Golang microservices. By using a rigorous cycle of design, development, testing, and rollout steps we were able to complete this migration with minimal impact to our customers. At the same time, we gained large improvements to the system’s key performance metrics and our team’s productivity.
Interested in building the cloud at DigitalOcean? Check out our careers page for openings on our teams!
However, identifying those foundational metrics can be difficult. Too many guides list upwards of 20 potential key performance indicators that may or may not be relevant to startup businesses. So let’s keep it simple and focus on five high-impact metrics relevant to every startup from the moment it begins operations.
As its name suggests, customer acquisition cost (CAC) is the total budget you need to spend to acquire the average new customer. It’s a key growth metric because your customers are the revenue lifeblood of your startup. The lower it gets, the more sustainable your efforts to grow your customer base become.
Most startups have a relatively high CAC to begin with, thanks to the amount of money they have to spend on brand awareness. The key to this metric is a positive trend, in which your CAC slowly sinks in each period you measure it.
You can calculate your CAC by taking your total marketing and sales spend over the last quarter, then dividing that cost by the number of customers gained during the same quarter. For example, if you spend $10,000 on marketing and sales-related costs in Q1, and gain 50 new customers as a result, your CAC is $10,000/50 = $200. Costs should include both direct customer acquisition tactics, like ad campaigns, and indirect sales-related costs, like your CRM platform or the salaries of the sales team.
Churn and retention rate are closely related. For subscription-based businesses, churn calculates exactly what percentage of subscribers leave your platform every month. Meanwhile, retention matters regardless of the business, and measures what percentage of customers or subscribers like your product enough to come back to it.
Calculate your monthly churn by dividing the number of customers or subscribers lost during a given month by the total number of subscribers you had at the beginning of that month. For example, if you started the month of January with 500 subscribers but lost 50 of them by January 31, your churn that month was 50/500 = 0.1 or 10%.
When calculating your retention rate, pick a time period, then look at how many customers already in your database from a previous period made a new purchase. Now, divide that number by your total customers at the beginning of the period. For example, if you started Q1 with 500 customers and 100 of them were returning customers, your retention rate for Q1 is 100/500 = 0.20 or 20%.
Monthly recurring revenue (MRR) is a projection metric that helps you understand how much revenue you can anticipate on a monthly basis. It’s key not only to help you plan future budgets but also to show potential investors and other stakeholders the sustainability of your revenue flow. MRR is especially relevant for subscription-based businesses, such as SaaS products, which expect to keep customers month-to-month, but even non-subscription-based businesses may have some contracts that recur or continue for multiple months.
There are two basic ways to project your MRR:
You can also use more complex calculations for a more accurate MRR forecast. In some cases, like customers with yearly contracts, you might need to expand this metric to yearly recurring revenue instead of the typical monthly period.
Keeping track of how your business spends money is just as important as understanding how you gain revenue. Your burn rate indicates how fast the business is spending money. It helps you determine how far your revenue and any other sources of cash can get you in running and growing your business.
Investors are particularly interested in burn rate because it can also tell them how wisely you will spend their investment. Burning through cash and revenue can be a red flag, whereas judicious spending of that same investment while still steadily growing the business is generally a positive sign. It also helps you uncover unimportant or unexpected expenses that your business might be able to do without.
Calculating your burn rate is simple. Take the total amount of cash available at the beginning of the month, and subtract the total amount of cash available at the end of the month. You can also use a moving average of the last three months to determine burn rate trends in a positive or negative direction.
Finally, cash runway can help you understand how much longer your business has before it runs out of money, should no other revenue enter the equation. It takes the concept of your burn rate, and applies it in a forward-looking sense to better predict your startup’s financial future and help you make strategic and budgetary decisions accordingly.
You’ll need two variables for a reliable cash runway projection: your cash balance and your monthly rate. The cash balance is simply the amount of liquid cash your business has available to spend. Your monthly rate describes how much your business spends in an average month.
Using these variables, the formula for cash runway is cash balance / monthly rate. For example, if you have $100,000 in liquid funds for your startup and need about $2,000 monthly to keep it running, your cash runway is $100,000 / $2,000 = 50 months. Your cash runway shortens, of course, if major investments (like new hires) increase costs in a given month or across every month.
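To tie the five formulas together, here is a small Go program that reproduces the worked examples from this article; the burn-rate starting and ending balances are hypothetical figures chosen to match the $2,000 monthly rate above.

```go
package main

import "fmt"

func main() {
	// CAC = sales & marketing spend / new customers acquired
	cac := 10000.0 / 50 // $200

	// Churn = customers lost in a month / customers at the start of the month
	churn := 50.0 / 500 // 10%

	// Retention = returning customers / customers at the start of the period
	retention := 100.0 / 500 // 20%

	// Burn rate = cash at the start of the month - cash at the end
	burn := 102000.0 - 100000.0 // hypothetical balances: $2,000/month

	// Cash runway = cash balance / monthly rate
	runway := 100000.0 / 2000 // 50 months

	fmt.Printf("CAC: $%.0f  churn: %.0f%%  retention: %.0f%%  burn: $%.0f/mo  runway: %.0f months\n",
		cac, churn*100, retention*100, burn, runway)
}
```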
Understanding which metrics to track is only the beginning. An understanding of these five foundational startup metrics can help you focus your efforts, but you still can’t walk the growth path on your own.
DigitalOcean’s simple, cost-effective cloud computing solutions can help to power your startup, and enable you to focus on growing your business. Learn how customers are using DigitalOcean to scale their businesses, then dive in yourself to start building towards long-term, sustainable success.