Category: General Help

RackCorp Kubernetes Service (RKS) for Partners


As a RackCorp partner offering Kubernetes services, establishing your credibility begins with understanding the platform’s evolution. Since Kubernetes was created in 2014 by Google (based on their internal Borg system), it has transformed how applications are deployed and managed.

The nickname K8s, formed by replacing the eight letters between ‘K’ and ‘s’ with the number 8, has become an industry shorthand. Mentioning this in your client conversations demonstrates your familiarity with the technology and builds trust. Here are some common questions we often come across:

1)    Why Kubernetes Over Docker: Addressing Client Questions

When prospective clients ask about “why Kubernetes over Docker,” they’re often conflating distinct technologies. This common misconception presents an opportunity to demonstrate your expertise.

Docker primarily focuses on creating and running containers on a single host, while Kubernetes orchestrates containers across multiple hosts. The more accurate comparison would be Kubernetes versus Docker Swarm (Docker’s native orchestration tool).

Educating clients on this distinction positions you as a knowledgeable advisor rather than just a reseller, building trust that leads to longer-term partnerships.

2)    Overcoming Client Adoption Concerns

With technology evolving rapidly, risk-averse clients may worry whether Kubernetes is here to stay or will soon be replaced. This concern often stalls purchase decisions.

Address this by explaining that Kubernetes has established itself as the industry standard with massive ecosystem investment from every major technology vendor. Its architecture allows for continuous evolution without requiring wholesale replacement, making it a safe long-term investment.

This reassurance helps overcome adoption hesitation, accelerating your sales cycle.

3)    Simplifying Technical Decisions

When implementing Kubernetes, technical stakeholders often obsess over “which Kubernetes version” to deploy. This focus on technical details can delay purchase decisions.

Position RKS as eliminating this concern entirely—RackCorp’s experts handle version selection, testing, and upgrades. Clients always run on thoroughly tested, security-hardened versions appropriate for their workloads without managing the upgrade process themselves.

This removes a significant technical obstacle to client adoption, streamlining your sales process.

What Kubernetes Cluster Architecture Means for Partner Success

Kubernetes cluster architecture consists of control plane components managing the overall system and worker nodes running applications.
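
A quick way to see this split on any cluster you can reach with kubectl (purely a sketch; both are standard kubectl commands):

kubectl get nodes             # control-plane nodes carry the control-plane role, workers run application pods
kubectl get pods -A -o wide   # shows which worker node each pod has been scheduled onto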

This distributed architecture delivers several crucial benefits that support your value proposition:

  1. High Availability: Applications continue running even if individual nodes fail
  2. Scalability: Resources can expand or contract based on demand
  3. Resource Efficiency: Workloads are packed efficiently across infrastructure
  4. Declarative Configuration: Systems maintain desired state automatically

Each of these benefits addresses specific business pain points, giving you multiple angles to position RKS.

How RackCorp Kubernetes Services Maximises Partner Profitability

RKS transforms the Kubernetes experience while creating sustainable partner revenue:

1)    Minimising Technical Investment

For partners wondering how to enter the Kubernetes market without significant technical investment, RKS provides:

  1. Pre-built infrastructure eliminating setup complexity
  2. Technical support handling client questions
  3. Automated operations reducing ongoing management
  4. Comprehensive documentation supporting your team

These features allow you to offer Kubernetes services without expanding your technical team, maximising profit margins.

2)    Accelerating Sales Cycles

RKS reduces the time between identifying an opportunity and generating revenue:

  1. Clear packaging and pricing simplifying proposal development
  2. Pre-built infrastructure eliminating lengthy implementation phases
  3. Technical demonstrations showcasing immediate value
  4. Fast deployment getting clients operational quickly

Shorter sales cycles mean more efficient use of your sales resources and faster revenue recognition.

3)    What RackCorp Experts Handle vs. What Partners Handle

Understanding the division of responsibilities helps set appropriate expectations with your clients:

RackCorp handles:

  1. Cluster provisioning and configuration
  2. Security hardening and compliance alignment
  3. Monitoring and alerting setup
  4. Automated healing whenever a Kubernetes pod restart is necessary
  5. Backup and disaster recovery implementation
  6. Performance optimisation and troubleshooting

Partners focus on:

  1. Client relationship management
  2. Business requirements gathering
  3. Application migration guidance
  4. Growth opportunity identification
  5. Additional service opportunities

This clear delineation ensures you can confidently sell RKS without worrying about delivery capabilities.

4)    Where to Start Your Kubernetes Journey?

For channel partners wondering where to start with Kubernetes, RKS offers multiple entry points:

  1. Partner Onboarding: Technical and sales training to build your team’s capabilities
  2. Co-Selling Support: Joint client engagements to build confidence
  3. Marketing Resources: Ready-to-use content accelerating your go-to-market
  4. Trial Environments: Demonstration platforms showcasing RKS capabilities

These resources ensure you can begin generating revenue quickly while building long-term expertise.

Identifying Your Best Prospects

Understanding who uses Kubernetes helps target your sales efforts. Organisations across virtually every industry are adopting Kubernetes, but your most promising prospects include:

  1. Mid-sized enterprises with development teams but limited infrastructure expertise
  2. Regulated industries concerned with data sovereignty and compliance
  3. Digital-first businesses seeking to accelerate deployment cycles
  4. Cost-conscious organisations looking to optimise infrastructure spending

Focusing on these segments will yield higher conversion rates and larger deal sizes.

Join Our Exclusive Partner Webinar: Fully Managed Kubernetes, Fully Realised Potential with RKS

Ready to add RackCorp Kubernetes Services to your portfolio? Join our exclusive partner webinar on June 11th, where we’ll explore:

  • Partner program details including margin structure and incentives
  • Sales qualification methodology for identifying prime RKS opportunities
  • Go-to-market support and co-marketing opportunities
  • Technical demonstration showcasing key selling points

Register for our exclusive partner webinar now and discover how RackCorp Kubernetes Services can create new high-margin revenue streams while solving genuine client challenges.


This concludes our 3-part series on RackCorp Kubernetes Services for partners. For more information or to schedule a partnership discussion, contact our team at sales@rackcorp.com.

Kubernetes Security: The RKS Differentiator


Security concerns represent one of the most significant barriers to Kubernetes adoption—and one of your greatest value propositions as a RackCorp partner. According to the Cloud Native Computing Foundation’s 2024 survey, 37% of organisations identified security as their top Kubernetes challenge, creating an opening for partners who can offer secure, managed solutions.

Are Kubernetes Secrets Encrypted? Addressing Client Security Concerns

When your prospects research Kubernetes, they’ll discover that by default, Kubernetes secrets are merely base64-encoded but not encrypted at rest in the etcd database. This often leads to the question: “are Kubernetes secrets secure?”
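
If you want to demonstrate this point during a technical conversation, a short sketch (assuming kubectl access to a test cluster; the secret name db-creds and its value are purely illustrative) makes it obvious that base64 is encoding, not encryption:

kubectl create secret generic db-creds --from-literal=password=S3cretPass
kubectl get secret db-creds -o jsonpath='{.data.password}' | base64 -d
# prints S3cretPass - anyone who can read the Secret object can recover the value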

This security limitation creates an excellent opportunity to position RackCorp Kubernetes Services (RKS) as a superior solution that addresses these fundamental security concerns.

How RKS Security Features Create Partner Differentiation

RKS implements a comprehensive security approach that goes beyond basic Kubernetes capabilities, providing partners with powerful differentiation points:

1)    Enhanced Secret Management

When clients ask about security, highlight how RKS implements additional encryption layers for Kubernetes secrets:

  1. Encryption of etcd content using industry-standard algorithms
  2. Key rotation policies to maintain cryptographic security
  3. Integration with external key management systems for enhanced protection

These features address the common concern “are Kubernetes secrets encrypted?” with a definitive yes when using RKS.

2)    Where Kubernetes Stores Secrets and How RKS Protects Them

Understanding where Kubernetes stores secrets allows you to articulate RKS’s security advantages during sales conversations. By default, Kubernetes stores secrets in etcd alongside other cluster data, but RKS enhances this with:

  • Network-level encryption for all etcd communications
  • Access controls limiting who can retrieve secret data
  • Audit logging of all secret access attempts

This comprehensive approach gives partners a compelling security story for prospects in regulated industries.

3)    Comprehensive Security Beyond Secrets: Multi-Cluster Architecture

RKS’s multi-cluster architecture provides partners with a unique selling proposition by separating:

  1. Workloads Cluster – Where customer applications run
  2. Services Cluster – For logging, metrics, and app delivery
  3. Management Cluster – Handling identity and admin functions

This separation creates security boundaries that traditional Kubernetes deployments lack, giving you a powerful differentiator against less secure alternatives.

Understanding Container Security: Addressing Client Questions

When clients ask “are Kubernetes containers inherently secure?” you can confidently explain that containers provide isolation, but require additional security measures that RKS delivers as standard:

  1. Pod security policies enforcing security best practices
  2. Container image scanning for vulnerabilities
  3. Runtime security monitoring
  4. Regular security patches for container runtimes

Data Sovereignty Advantages

Understanding where Kubernetes stores images allows you to position RKS’s sovereign cloud advantage. While Kubernetes itself pulls images from registries, RKS ensures this process happens within the country’s borders for local businesses through:

  1. Private image registries with access controls
  2. Image scanning for vulnerabilities before deployment
  3. Enforced signed image policies
  4. Data residency compliance with local regulations

For clients in regulated industries, this sovereignty presents a compelling advantage over multinational providers.

Why Kubernetes Is Used in DevOps and How Partners Can Leverage This Trend

Kubernetes in DevOps extends beyond technical benefits—it establishes consistent, declarative approaches to application deployment that enhance security and reliability. Position RKS as enhancing these DevOps practices through:

  1. Security guardrails preventing unsafe deployments
  2. Automated compliance checking
  3. Consistent security policies across environments
  4. Reduced attack surface through infrastructure standardisation

These benefits address both security and operational concerns, broadening your potential customer base.

Kubernetes Resources and Compliance Tracking

The ability to track Kubernetes resources provides crucial audit capabilities for security incidents and compliance. Highlight how RKS enhances attribution through:

  1. Detailed audit logging of all resource creation
  2. Integration with identity management systems
  3. Non-repudiation mechanisms ensuring accountability
  4. Historical tracking of resource modifications

These capabilities are particularly valuable for partners targeting financial services, healthcare, and government clients with strict compliance requirements.

Join Our Upcoming Partner Webinar: Fully Managed Kubernetes, Fully Realised Potential with RKS

Ready to leverage RKS’s security advantages to win new business? Join our exclusive partner webinar on June 11th, where we’ll explore:

  • How to position RKS security features against competing solutions
  • Identifying security-conscious prospects most likely to convert
  • Compliance frameworks supported by RKS’s sovereign architecture
  • Partner resources for security-focused sales conversations

Register for our exclusive partner webinar now and discover how RackCorp Kubernetes Services can help you win security-conscious clients while delivering genuine value.


This is part 2 of our 3-part series on RackCorp Kubernetes Services for partners. In our final installment, we’ll explore how to position RKS for maximum partner profitability and long-term client success.

Understanding Kubernetes and How RackCorp Kubernetes Services Simplifies Container Orchestration


Container orchestration has become a critical component of modern application deployment, and Kubernetes has emerged as the standard platform for it. However, its complexity creates a perfect opportunity for partners who can deliver its benefits without the technical burden.

But what is Kubernetes exactly, and why has it garnered such widespread adoption?

What Kubernetes Is: The Foundation of Modern Container Orchestration

Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerised applications. Originally developed by Google and later donated to the Cloud Native Computing Foundation, Kubernetes has revolutionised how organisations deploy and manage applications.

Since Kubernetes was introduced in 2014 and officially launched in 2015, it has grown to become the de facto standard for container orchestration. But what does Kubernetes do that creates partner opportunities?

How Kubernetes Works: The Technical Foundation of Your New Service Offering

Kubernetes works through a distributed system architecture consisting of:

  1. Control Plane – The brain of the operation, managing the overall state of the cluster
  2. Nodes – Worker machines that run containerised applications
  3. Pods – The smallest deployable units in Kubernetes, containing one or more containers

This architecture delivers numerous business benefits, but introduces significant technical complexity that many organisations struggle to manage internally. As a channel partner, this gap between promise and reality represents your opportunity.
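
As a minimal illustration of these moving parts (assuming kubectl access to any test cluster; the pod name web is arbitrary), a single command submits a pod to the control plane, which then schedules it onto a worker node:

kubectl run web --image=nginx
kubectl get pods -o wide    # the NODE column shows where the scheduler placed the pod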

Are Kubernetes and Docker the Same?

Docker is a containerisation platform that creates and runs containers, while Kubernetes orchestrates those containers across multiple hosts. While Kubernetes can run without Docker (using alternative container runtimes like containerd or CRI-O), explaining this relationship helps establish your credibility during sales conversations.

What Is RKS and How Does It Create Partner Growth Opportunities?

RackCorp Kubernetes Service (RKS) is a fully-managed Kubernetes-as-a-Service offering built on a secure, resilient, sovereign cloud platform. It’s designed specifically for channel partners seeking to add high-value container services without expanding their technical teams.

When evaluating which Kubernetes distribution to recommend to your clients, RKS offers compelling partner advantages:

  • Recurring Revenue Stream: High-margin subscription model with minimal technical overhead
  • Sovereign Cloud Infrastructure: Data remains onshore, addressing compliance and data sovereignty concerns
  • Complete Management: RackCorp handles infrastructure management, allowing you to focus on client relationships
  • Comprehensive Security: Advanced protocols protect applications and data throughout the stack
  • Cost Efficiency: Eliminate the need for clients to hire specialist Kubernetes engineers (currently averaging $150K+ per annum)

When your clients ask about how Kubernetes deployment works, RKS allows you to offer a complete solution:

  1. Fully-managed environment with 99.9% uptime guarantee
  2. Expert support for container strategy
  3. Automated resource adjustment based on real-time workload demands
  4. Choice between dedicated Kubernetes clusters or shared clusters based on client needs

How to Identify Prospect Opportunities

Kubernetes is used in virtually every industry—from finance and healthcare to retail and manufacturing. As a partner, look for these opportunity indicators:

  1. Organisations with active development teams building new applications
  2. Businesses expressing concerns about application scaling and deployment speed
  3. Companies struggling with infrastructure costs or performance issues
  4. Enterprises mentioning “digital transformation” or “application modernisation” initiatives

What Kubernetes is used for typically includes microservices architecture, continuous integration/delivery pipelines, scalable web applications, big data analytics, and machine learning workloads. Each of these use cases represents a potential RKS opportunity.

Join Our Upcoming Partner Webinar: Fully Managed Kubernetes, Fully Realised Potential with RKS

Ready to add this high-margin, high-demand service to your portfolio? Join our exclusive partner webinar on June 11th, 2025, where we’ll explore:

  • The RKS partner program and competitive margins
  • How to identify and qualify Kubernetes opportunities
  • Technical demonstration of our 15-minute deployment process
  • Partner resources to help close deals faster

Register for our exclusive partner webinar now and discover how RackCorp Kubernetes Services can create new revenue streams while solving real challenges for your clients.


This is part 1 of our 3-part series on RackCorp Kubernetes Services for partners. Stay tuned for our next installment focusing on security differentiation and compliance advantages.

CentOS 7 grub virtio error migrating to KVM

When migrating CentOS 7 from physical servers or from VMware / Hyper-V, the image typically does not have the virtio drivers built into its initramfs. This will often produce grub or initramfs errors when booting under KVM.

Once the image has been migrated, you have three choices:

  1. Boot using Rescue mode from a recent CentOS or Rocky Linux (v8+) image, and choose to mount the existing machine (option 1). Version 8+ has the virtio drivers built in, so it will see the drive without any problem.
  2. Boot using recovery mode. This is usually a grub menu option, and we find it typically has virtio built in.
  3. Change the emulation for the drive to IDE and boot as normal.

Once booted into the OS (or mounted via the rescue CD and chroot /mnt/sysimage), change to the root user if you aren’t already.

Run the following command to rebuild the initramfs:

mkinitrd -f --allow-missing --with=virtio_blk --preload=virtio_blk --with=virtio_net --preload=virtio_net --with=virtio_console --preload=virtio_console /boot/initramfs-$(uname -r).img $(uname -r)
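
Note: if you ran this from a rescue environment after chroot /mnt/sysimage, $(uname -r) will still report the rescue media’s kernel rather than the installed CentOS 7 kernel. In that case, list the installed kernels and pass the version explicitly (the version below is only an example):

ls /boot | grep vmlinuz
mkinitrd -f --allow-missing --with=virtio_blk --preload=virtio_blk --with=virtio_net --preload=virtio_net --with=virtio_console --preload=virtio_console /boot/initramfs-3.10.0-1160.el7.x86_64.img 3.10.0-1160.el7.x86_64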

Then just shut down the VM and boot it back up. The new initramfs will now have the virtio drivers and be able to see the disk.

chronyd NTP server for local network

Configuration on Redhat / CentOS / Rocky Linux / Almalinux

yum install chrony

These are the important bits in your /etc/chrony.conf file:

local stratum 10
manual
allow 192.168.0.0/16
allow 10.10.0.0/16
ratelimit interval 3 burst 16

local stratum is a bit like a trust score; lower is more trusted. 10 is high enough that you won’t affect much if your particular server goes horribly wrong.

The manual keyword allows the time to be set manually from the command line with chronyc. I always leave this enabled, but you can choose not to include it if you prefer.

The allow directive specifies the networks that should be allowed; specify it multiple times to allow multiple networks. Alternatively you can use a bare allow (no address) to permit any client, but please do read about NTP reflection DDoS attacks first.

ratelimit rate-limits replies on a per-IP-address basis. I always specify this just in case some client software goes haywire. interval is not in seconds but 2 to the power of X seconds, so an interval of 3 actually means 8 seconds. burst is how many responses are allowed above the threshold before the interval is enforced.

Don’t forget to restart chronyd:

systemctl restart chronyd
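
A few quick sanity checks after the restart (standard chronyc commands; run chronyc clients as root):

chronyc sources -v    # upstream servers and whether we are synchronised
chronyc tracking      # this server's current stratum, offset and skew
chronyc clients       # which local machines have been querying us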

Example chrony.conf configuration file

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (https://www.pool.ntp.org/join.html).
pool 2.rocky.pool.ntp.org iburst

# Use NTP servers from DHCP.
sourcedir /run/chrony-dhcp

# Record the rate at which the system clock gains/loses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Rate limit responses
ratelimit interval 3 burst 6

# Allow NTP client access from local network.
allow 10.0.0.0/8

# Serve time even if not synchronized to a time source.
local stratum 10
manual
# Require authentication (nts or key option) for all NTP sources.
#authselectmode require

# Specify file containing keys for NTP authentication.
keyfile /etc/chrony.keys

# Save NTS keys and cookies.
ntsdumpdir /var/lib/chrony

# Insert/delete leap seconds by slewing instead of stepping.
#leapsecmode slew

# Get TAI-UTC offset and leap seconds from the system tz database.
leapsectz right/UTC

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking
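
On the client machines, point chrony at this server with a minimal /etc/chrony.conf entry (10.0.0.1 is just a placeholder for your server’s address) and restart chronyd on the client:

server 10.0.0.1 iburst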

nftables installation

You can choose your own firewall policy implementation, but we use nftables:

yum install nftables

I usually edit this file:

[root@XXXXXXXXXXXXXXXXX admin]# cat /etc/sysconfig/nftables.conf
# Uncomment the include statement here to load the default config sample
# in /etc/nftables for nftables service.

#include "/etc/nftables/main.nft"

I swap out the commented-out include line for the following:

include "/etc/nftables/nftables.nft"

And then inside that config file I put all my rules:

[root@XXXXXXXXXXXXXXXXX admin]# cat /etc/nftables/nftables.nft
table inet filter {
    chain INPUT {
        type filter hook input priority 0; policy accept;
        iif "lo" accept
        ct state established,related accept
        ip protocol icmp icmp type echo-request accept
        ip6 nexthdr ipv6-icmp icmpv6 type 1 counter accept comment "accept ICMPv6 dest unreachable"
        ip6 nexthdr ipv6-icmp icmpv6 type 2 counter accept comment "accept ICMPv6 packet too big"
        ip6 nexthdr ipv6-icmp icmpv6 type 3 counter accept comment "accept ICMPv6 time exceeded"
        ip6 nexthdr ipv6-icmp icmpv6 type 4 counter accept comment "accept ICMPv6 parameter problem"
        ip6 nexthdr ipv6-icmp icmpv6 type 128 icmpv6 code 0 counter accept comment "accept ICMPv6 echo request"
        ip6 nexthdr ipv6-icmp icmpv6 type 129 icmpv6 code 0 counter accept comment "accept ICMPv6 echo reply"
        ip6 nexthdr ipv6-icmp icmpv6 type 133 icmpv6 code 0 counter accept comment "accept ICMPv6 router solicitation"
        ip6 nexthdr ipv6-icmp icmpv6 type 134 icmpv6 code 0 counter accept comment "accept ICMPv6 router advertisement"
        ip6 nexthdr ipv6-icmp icmpv6 type 135 icmpv6 code 0 counter accept comment "accept ICMPv6 neighbor solicitation"
        ip6 nexthdr ipv6-icmp icmpv6 type 136 icmpv6 code 0 counter accept comment "accept ICMPv6 neighbor advertisement"

        tcp dport 22 ip saddr X.X.X.X accept
        udp dport 123 accept
        drop
    }

    chain OUTPUT {
        type filter hook output priority 0; policy accept;
    }
}

This will allow SSH port 22 access to your system from a predefined X.X.X.X IP, and open NTP access to everyone. That could be dangerous on a public network, so either restrict NTP to your local networks by adding a source match to the NTP rule (e.g. udp dport 123 ip saddr X.X.X.X/X accept), or at least understand what you’re opening yourself up to by reading up on NTP software compromises and NTP reflection DDoS attacks.
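
Once the rules file is in place, enable the nftables service so the ruleset loads at boot, then check what is active (standard commands on RHEL-family systems):

systemctl enable --now nftables
nft list ruleset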

Testing using ntpdate

And of course we need to do some testing.

Testing NTP server using ntpdate:

ntpdate -q 103.43.119.204
server 103.43.119.204, stratum 3, offset 0.000072, delay 0.02623
29 Oct 08:03:48 ntpdate[14770]: adjust time server 103.43.119.204 offset 0.000072 sec

As long as the offset is tiny it should be good to go.
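
On newer RHEL-family releases ntpdate may not be packaged any more; a similar one-shot query can be made with chronyd itself, which prints the measured offset without adjusting the clock (address reused from the example above):

chronyd -Q 'server 103.43.119.204 iburst'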