50+ Google Cloud Platform (GCP) Jobs in Bangalore (Bengaluru)
Apply to 50+ Google Cloud Platform (GCP) Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Google Cloud Platform (GCP) Job opportunities across top companies like Google, Amazon & Adobe.
Client located in Bangalore.
Job Description:
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Job Title: Solution Architect (ML, Cloud)
Experience: 5-10 years
Client Location: Bangalore
Work Location: Tokyo, Japan (Onsite)
Key Responsibilities:
Collaborate with stakeholders to understand business needs and develop scalable, efficient technical solutions.
Architect and implement complex systems integrating Machine Learning, Cloud platforms (AWS, Azure, Google Cloud), and Full Stack Development.
Lead the development and deployment of cloud-native applications using NoSQL databases, Python, and Kubernetes.
Design and optimize algorithms to improve performance, scalability, and reliability of solutions.
Review, validate, and refine architecture to ensure flexibility, scalability, and cost-efficiency.
Mentor development teams and ensure adherence to best practices for coding, testing, and deployment.
Contribute to the development of technical documentation and solution roadmaps.
Stay up-to-date with emerging technologies and continuously improve solution design processes.
Required Skills & Qualifications:
5-10 years of experience as a Solution Architect or similar role with expertise in ML, Cloud, and Full Stack Development.
Proficiency in at least two major cloud platforms (AWS, Azure, Google Cloud).
Solid experience with Kubernetes for container orchestration and deployment.
Hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB).
Expertise in Python and ML frameworks like TensorFlow, PyTorch, etc.
Practical experience implementing at least two real-world algorithms (e.g., classification, clustering, recommendation systems).
Strong knowledge of scalable architecture design and cloud-native application development.
Familiarity with CI/CD tools and DevOps practices.
Excellent problem-solving abilities and the ability to thrive in a fast-paced environment.
Strong communication and collaboration skills with cross-functional teams.
Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
Preferred Qualifications:
Experience with microservices and containerization.
Knowledge of distributed systems and high-performance computing.
Cloud certifications (AWS Certified Solutions Architect, Google Cloud Professional Architect, etc.).
Familiarity with Agile methodologies and Scrum.
Japanese language proficiency is an added advantage (but not mandatory).
Skills: ML, Cloud (any two major clouds), algorithms (at least two implemented in real-world applications), Full Stack, Kubernetes, NoSQL, Python
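To illustrate the requirement that candidates have implemented at least two algorithms (classification, clustering, recommendation systems) in real applications, here is a minimal k-means clustering sketch in pure Python. The function, data, and parameter names are illustrative, not part of the posting:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster goes empty
                centroids[j] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return centroids, clusters

# Two obvious groups, around (0, 0) and (10, 10)
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
cents, clusters = kmeans(pts, k=2)
```

In interviews for roles like this, a from-scratch sketch of this kind is usually enough to demonstrate the algorithmic grounding behind day-to-day framework use.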
Responsibilities:
- Collaborate with stakeholders to understand business needs and translate them into scalable and efficient technical solutions.
- Design and implement complex systems involving Machine Learning, Cloud Computing (at least two major clouds such as AWS, Azure, or Google Cloud), and Full Stack Development.
- Lead the design, development, and deployment of cloud-native applications with a focus on NoSQL databases, Python, and Kubernetes.
- Implement algorithms and provide scalable solutions, with a focus on performance optimization and system reliability.
- Review, validate, and improve architectures to ensure high scalability, flexibility, and cost-efficiency in cloud environments.
- Guide and mentor development teams, ensuring best practices are followed in coding, testing, and deployment.
- Contribute to the development of technical documentation and roadmaps.
- Stay up-to-date with emerging technologies and propose enhancements to the solution design process.
Key Skills & Requirements:
- Proven experience (5-10 years) as a Solution Architect or similar role, with deep expertise in Machine Learning, Cloud Architecture, and Full Stack Development.
- Expertise in at least two major cloud platforms (AWS, Azure, Google Cloud).
- Solid experience with Kubernetes for container orchestration and deployment.
- Strong hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB, etc.).
- Proficiency in Python, including experience with ML frameworks (such as TensorFlow, PyTorch, etc.) and libraries for algorithm development.
- Must have implemented at least two algorithms (e.g., classification, clustering, recommendation systems, etc.) in real-world applications.
- Strong experience in designing scalable architectures and applications from the ground up.
- Experience with DevOps and automation tools for CI/CD pipelines.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Strong communication skills and ability to collaborate with cross-functional teams.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
Preferred Skills:
- Experience with microservices architecture and containerization.
- Knowledge of distributed systems and high-performance computing.
- Certifications in cloud platforms (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.).
- Familiarity with Agile methodologies and Scrum.
- Japanese language proficiency is an additional advantage, but not mandatory.
The candidate should have a background in development/programming with experience in at least one of the following: .NET, Java (Spring Boot), ReactJS, or AngularJS.
Primary Skills:
- AWS or GCP Cloud
- DevOps CI/CD pipelines (e.g., Azure DevOps, Jenkins)
- Python/Bash/PowerShell scripting
Secondary Skills:
- Docker or Kubernetes
Client based in Bangalore.
Job Title: Solution Architect
Work Location: Tokyo
Experience: 7-10 years
Number of Positions: 3
Job Description:
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Responsibilities:
- Collaborate with stakeholders to understand business needs and translate them into scalable and efficient technical solutions.
- Design and implement complex systems involving Machine Learning, Cloud Computing (at least two major clouds such as AWS, Azure, or Google Cloud), and Full Stack Development.
- Lead the design, development, and deployment of cloud-native applications with a focus on NoSQL databases, Python, and Kubernetes.
- Implement algorithms and provide scalable solutions, with a focus on performance optimization and system reliability.
- Review, validate, and improve architectures to ensure high scalability, flexibility, and cost-efficiency in cloud environments.
- Guide and mentor development teams, ensuring best practices are followed in coding, testing, and deployment.
- Contribute to the development of technical documentation and roadmaps.
- Stay up-to-date with emerging technologies and propose enhancements to the solution design process.
Key Skills & Requirements:
- Proven experience (7-10 years) as a Solution Architect or similar role, with deep expertise in Machine Learning, Cloud Architecture, and Full Stack Development.
- Expertise in at least two major cloud platforms (AWS, Azure, Google Cloud).
- Solid experience with Kubernetes for container orchestration and deployment.
- Strong hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB, etc.).
- Proficiency in Python, including experience with ML frameworks (such as TensorFlow, PyTorch, etc.) and libraries for algorithm development.
- Must have implemented at least two algorithms (e.g., classification, clustering, recommendation systems, etc.) in real-world applications.
- Strong experience in designing scalable architectures and applications from the ground up.
- Experience with DevOps and automation tools for CI/CD pipelines.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Strong communication skills and ability to collaborate with cross-functional teams.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
Preferred Skills:
- Experience with microservices architecture and containerization.
- Knowledge of distributed systems and high-performance computing.
- Certifications in cloud platforms (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.).
- Familiarity with Agile methodologies and Scrum.
- Japanese language proficiency is an additional advantage, but not mandatory.
Senior Backend Developer
Job Overview: We are looking for a highly skilled and experienced Backend Developer who excels in building robust, scalable backend systems using multiple frameworks and languages. The ideal candidate will have 4+ years of experience working with at least two backend frameworks and be proficient in at least two programming languages such as Python, Node.js, or Go. As a Senior Backend Developer, you will play a critical role in designing, developing, and maintaining backend services, ensuring seamless real-time communication with WebSockets, and optimizing system performance with tools like Redis, Celery, and Docker.
Key Responsibilities:
- Design, develop, and maintain backend systems using multiple frameworks and languages (Python, Node.js, Go).
- Build and integrate APIs, microservices, and other backend components.
- Implement real-time features using WebSockets and ensure efficient server-client communication.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Optimize backend systems for performance, scalability, and reliability.
- Troubleshoot and debug complex issues, providing efficient and scalable solutions.
- Work with caching systems like Redis to enhance performance and manage data.
- Utilize task queues and background job processing tools like Celery.
- Develop and deploy applications using containerization tools like Docker.
- Participate in code reviews and provide constructive feedback to ensure code quality.
- Mentor junior developers, sharing best practices and promoting a culture of continuous learning.
- Stay updated with the latest backend development trends and technologies to keep our solutions cutting-edge.
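The Redis responsibility in the list above usually amounts to a read-through cache in front of an expensive call. A sketch of that pattern using an in-memory dict as a stand-in for Redis (a production version would swap in a Redis client with GET/SETEX); all names here are illustrative:

```python
import functools
import time

def cached(ttl_seconds, store=None):
    """Read-through cache: serve a stored value while it is fresh,
    otherwise call the function and remember the result."""
    store = {} if store is None else store  # dict as a stand-in for Redis
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            key = (fn.__name__, args)
            hit = store.get(key)
            if hit is not None and time.monotonic() - hit[0] < ttl_seconds:
                return hit[1]                       # cache hit
            value = fn(*args)
            store[key] = (time.monotonic(), value)  # cache miss: compute and store
            return value
        return wrapper
    return decorator

calls = []

@cached(ttl_seconds=60)
def slow_square(n):
    calls.append(n)  # tracks how often the real computation runs
    return n * n

slow_square(4)  # miss: computes and stores
slow_square(4)  # hit: served from the cache
```

The TTL is the key design decision: too short and the cache stops paying for itself, too long and clients see stale data.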
Required Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent work experience.
- 4+ years of professional experience as a Backend Developer.
- Proficiency in at least two programming languages: Python, Node.js, or Go.
- Experience working with multiple backend frameworks (e.g., Express, Flask, Gin, Fiber, FastAPI).
- Strong understanding of WebSockets and real-time communication.
- Hands-on experience with Redis for caching and data management.
- Familiarity with task queues like Celery for background job processing.
- Experience with Docker for containerizing applications and services.
- Strong knowledge of RESTful API design and implementation.
- Understanding of microservices architecture and distributed systems.
- Solid understanding of database technologies (SQL and NoSQL).
- Excellent problem-solving skills and attention to detail.
- Strong communication skills, both written and verbal.
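Celery's core idea referenced in the qualifications above, background job processing through a broker, can be sketched with the standard library alone. This is a conceptual stand-in, not Celery's API:

```python
import queue
import threading

def run_workers(jobs, handler, n_workers=4):
    """Toy task queue: jobs go onto a broker-like queue, worker
    threads consume them and collect the results."""
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            job = q.get()
            if job is None:        # poison pill: shut this worker down
                q.task_done()
                return
            out = handler(job)
            with lock:             # results list is shared across threads
                results.append(out)
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for job in jobs:
        q.put(job)
    for _ in threads:
        q.put(None)                # one poison pill per worker
    q.join()
    for t in threads:
        t.join()
    return results

squares = run_workers(range(10), handler=lambda n: n * n)
```

Celery adds what this sketch lacks: a durable broker (Redis/RabbitMQ), retries, scheduling, and result backends, which is why it appears in the requirements rather than hand-rolled threads.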
Preferred Qualifications:
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Familiarity with CI/CD pipelines and DevOps practices.
- Experience with GraphQL and other modern API paradigms.
- Familiarity with task queues, caching, or message brokers (e.g., Celery, Redis, RabbitMQ).
- Understanding of security best practices in backend development.
- Knowledge of automated testing frameworks for backend services.
- Familiarity with version control systems, particularly Git.
at appscrip
Key Responsibilities
AI Model Development
- Design and implement advanced Generative AI models (e.g., GPT-based, LLaMA, etc.) to support applications across various domains, including text generation, summarization, and conversational agents.
- Utilize tools like LangChain and LlamaIndex to build robust AI-powered systems, ensuring seamless integration with data sources, APIs, and databases.
Backend Development with FastAPI
- Develop and maintain fast, efficient, and scalable FastAPI services to expose AI models and algorithms via RESTful APIs.
- Ensure optimal performance and low-latency for API endpoints, focusing on real-time data processing.
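Keeping endpoints low-latency while serving model inference mostly means never blocking the event loop on CPU-bound work. A standard-library sketch of the idea using `asyncio.to_thread`; in a FastAPI service the handler body would follow the same shape, but everything named here is an illustrative assumption:

```python
import asyncio
import time

def run_model(prompt: str) -> str:
    """Stand-in for blocking, CPU-bound model inference."""
    time.sleep(0.2)
    return prompt.upper()

async def handle_request(prompt: str) -> str:
    # Offload the blocking call to a worker thread so the
    # event loop stays free to accept other requests
    return await asyncio.to_thread(run_model, prompt)

async def main():
    start = time.monotonic()
    # Two concurrent "requests": they overlap instead of serializing
    out = await asyncio.gather(handle_request("hi"), handle_request("ok"))
    return out, time.monotonic() - start

outputs, elapsed = asyncio.run(main())
```

With the offload, two 0.2-second inferences finish in roughly 0.2 seconds total; a naive `def` handler doing the same work inline would serialize them.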
Pipeline and Integration
- Build and optimize data processing pipelines for AI models, including ingestion, transformation, and indexing of large datasets using tools like LangChain and LlamaIndex.
- Integrate AI models with external services, databases, and other backend systems to create end-to-end solutions.
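The ingestion, transformation, and indexing pipeline described above can be sketched as composable generator stages. LangChain and LlamaIndex provide this at scale with chunkers and vector stores; the stages and the inverted-index shape below are simplified illustrations:

```python
def ingest(docs):
    """Ingestion: yield raw (doc_id, text) records one at a time."""
    yield from docs

def transform(records):
    """Transformation: normalize text and split into word-level chunks."""
    for doc_id, text in records:
        for token in text.lower().split():
            yield doc_id, token

def index(pairs):
    """Indexing: build an inverted index, token -> sorted doc ids."""
    idx = {}
    for doc_id, token in pairs:
        idx.setdefault(token, set()).add(doc_id)
    return {tok: sorted(ids) for tok, ids in idx.items()}

docs = [(1, "Generative AI models"), (2, "AI pipelines at scale")]
inverted = index(transform(ingest(docs)))
```

Because each stage is a generator, documents stream through without materializing the whole dataset, which is the property that matters once the corpus no longer fits in memory.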
Collaboration with Cross-Functional Teams
- Collaborate with data scientists, machine learning engineers, and product teams to define project requirements, technical feasibility, and timelines.
- Work with front-end developers to integrate AI-powered functionalities into web applications.
Model Optimization and Fine-Tuning
- Fine-tune and optimize pre-trained Generative AI models to improve accuracy, performance, and scalability for specific business use cases.
- Ensure efficient deployment of models in production environments, addressing issues related to memory, latency, and resource management.
Documentation and Code Quality
- Maintain high standards of code quality, write clear, maintainable code, and conduct thorough unit and integration tests.
- Document AI model architectures, APIs, and workflows for future reference and onboarding of team members.
Research and Innovation
- Stay updated with the latest advancements in Generative AI, LangChain, and LlamaIndex, and actively contribute to the adoption of new techniques and technologies.
- Propose and explore innovative ways to leverage cutting-edge AI technologies to solve complex problems.
Required Skills and Experience
Expertise in Generative AI
Strong experience working with Generative AI models, including but not limited to GPT-3/4, LLaMA, or other large language models (LLMs).
LangChain & LlamaIndex
Hands-on experience with LangChain for building language model-driven applications, and LlamaIndex for efficient data indexing and querying.
Python Programming
Proficiency in Python for building AI applications, working with frameworks such as TensorFlow, PyTorch, Hugging Face, and others.
API Development with FastAPI
Strong experience developing RESTful APIs using FastAPI, with a focus on high-performance, scalable web services.
NLP & Machine Learning
Solid foundation in Natural Language Processing (NLP) and machine learning techniques, including data preprocessing, feature engineering, model evaluation, and fine-tuning.
Database & Storage Systems
Familiarity with relational and NoSQL databases, data storage, and management strategies for large-scale AI datasets.
Version Control & CI/CD
Experience with Git, GitHub, and implementing CI/CD pipelines for seamless deployment.
Preferred Skills
Containerization & Cloud Deployment
Familiarity with Docker, Kubernetes, and cloud platforms (e.g., AWS, GCP, Azure) for deploying scalable AI applications.
Data Engineering
Experience in working with data pipelines and frameworks such as Apache Spark, Airflow, or Dask.
Knowledge of Front-End Technologies
Familiarity with front-end frameworks (React, Vue.js, etc.) for integrating AI APIs with user-facing applications.
a leading data & analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage
Roles & Responsibilities:
- Develop and maintain mobile-responsive web applications using React.
- Collaborate with our UI/UX designers to translate design wireframes into responsive web applications.
- Ensure web applications function flawlessly on various web browsers and platforms.
- Implement performance optimizations to enhance the mobile user experience.
- Proven experience as a Mobile Responsive Web Developer or a similar role is a must.
- Knowledge of web performance optimization and browser compatibility.
- Excellent problem-solving skills and attention to detail.
What are we looking for?
- 4+ years’ experience as a Front-End developer with hands on experience in React.js & Redux
- Experience as a UI/UX designer.
- Familiar with cloud infrastructure (Azure, AWS, or Google Cloud Services).
- Expert knowledge of CSS, CSS extension languages (Less, Sass), and CSS preprocessor tools.
- Expert knowledge of HTML5 and its best practices.
- Proficiency in designing interfaces and building clickable prototypes.
- Experience with Test Driven Development and Acceptance Test Driven Development.
- Proficiency using version control tools
- Effective communication and teamwork skills.
a leading Data & Analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage. We are the partner of choice for enterprises on their digital transformation journey. Our teams offer solutions and services at the intersection of Advanced Data, Analytics, and AI.
Skills: Python, FastAPI, AWS/GCP/Azure
Location - Bangalore / Mangalore (Hybrid)
Notice Period: Immediate to 20 days
• Experience in building Python-based, utility-scale enterprise APIs with QoS/SLA-based specs, building upon cloud APIs from GCP, AWS, and Azure.
• Exposure to multi-modal (text, audio, video) development in synchronous and batch mode in high-volume use cases, leveraging queuing, pooling, and enterprise scaling patterns.
• Solid understanding of the API life cycle, including versioning (e.g., parallel deployment of multiple versions) and exception management.
• Working experience (Development and/or troubleshooting skills) of Enterprise scale AWS, CI/CD leveraging GitHub actions-based workflow.
• Solid knowledge of developing/updating enterprise Cloud formation templates for python centric code-assets along with quality/security tooling
• Design/support tracing and monitoring capability (X-Ray, AWS Distro for OpenTelemetry) for Fargate services.
• Responsible and able to communicate requirements.
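Parallel deployment of multiple API versions, as called out above, often reduces to explicit version routing at the service edge. A toy sketch of that dispatch; the handlers, fields, and version strings are invented for illustration:

```python
def get_user_v1(user_id: int) -> dict:
    return {"id": user_id, "name": "Ada"}

def get_user_v2(user_id: int) -> dict:
    # v2 splits the name field; both versions stay deployed in parallel
    # so existing v1 clients keep working unchanged
    return {"id": user_id, "first_name": "Ada", "last_name": "Lovelace"}

HANDLERS = {"v1": get_user_v1, "v2": get_user_v2}

def route(version: str, user_id: int) -> dict:
    try:
        handler = HANDLERS[version]
    except KeyError:
        # exception management: unknown versions fail explicitly
        raise ValueError(f"unsupported API version: {version}")
    return handler(user_id)
```

In practice the same dispatch lives in a gateway or URL prefix (`/v1/users/...`, `/v2/users/...`), but the contract is identical: every supported version remains callable until it is formally retired.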
Position Overview: We are seeking a talented and experienced Cloud Engineer specialized in AWS cloud services to join our dynamic team. The ideal candidate will have a strong background in AWS infrastructure and services, including EC2, Elastic Load Balancing (ELB), Auto Scaling, S3, VPC, RDS, CloudFormation, CloudFront, Route 53, AWS Certificate Manager (ACM), and Terraform for Infrastructure as Code (IaC). Experience with other AWS services is a plus.
Responsibilities:
• Design, deploy, and maintain AWS infrastructure solutions, ensuring scalability, reliability, and security.
• Configure and manage EC2 instances to meet application requirements.
• Implement and manage Elastic Load Balancers (ELB) to distribute incoming traffic across multiple instances.
• Set up and manage AWS Auto Scaling to dynamically adjust resources based on demand.
• Configure and maintain VPCs, including subnets, route tables, and security groups, to control network traffic.
• Deploy and manage AWS CloudFormation and Terraform templates to automate infrastructure provisioning using Infrastructure as Code (IaC) principles.
• Implement and monitor S3 storage solutions for secure and scalable data storage.
• Set up and manage CloudFront distributions for content delivery with low latency and high transfer speeds.
• Configure Route 53 for domain management, DNS routing, and failover configurations.
• Manage AWS Certificate Manager (ACM) for provisioning, managing, and deploying SSL/TLS certificates.
• Collaborate with cross-functional teams to understand business requirements and provide effective cloud solutions.
• Stay updated with the latest AWS technologies and best practices to drive continuous improvement.
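The Auto Scaling responsibility above can be made concrete with the target-tracking arithmetic: desired capacity scales the current instance count by the ratio of the observed metric to its target. A hedged pure-Python sketch of that calculation (AWS layers cooldowns and per-policy details on top; the numbers are illustrative):

```python
import math

def desired_capacity(current, metric_value, target_value, min_size=1, max_size=20):
    """Target tracking: grow or shrink the fleet in proportion to how far
    the observed metric is from its target, clamped to the group bounds."""
    raw = math.ceil(current * metric_value / target_value)
    return max(min_size, min(max_size, raw))

# A fleet of 4 at 80% CPU with a 50% target should grow to 7 instances
print(desired_capacity(4, metric_value=80, target_value=50))
```

The ceiling rounds scale-out up rather than down, which biases the policy toward availability over cost, the same trade-off the managed service makes.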
Qualifications:
• Bachelor's degree in computer science, Information Technology, or a related field.
• Minimum of 2 years of relevant experience in designing, deploying, and managing AWS cloud solutions.
• Strong proficiency in AWS services such as EC2, ELB, Auto Scaling, VPC, S3, RDS, and CloudFormation.
• Experience with other AWS services such as Lambda, ECS, EKS, and DynamoDB is a plus.
• Solid understanding of cloud computing principles, including IaaS, PaaS, and SaaS.
• Excellent problem-solving skills and the ability to troubleshoot complex issues in a cloud environment.
• Strong communication skills with the ability to collaborate effectively with cross-functional teams.
• Relevant AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.) are highly desirable.
Additional Information:
• We value creativity, innovation, and a proactive approach to problem-solving.
• We offer a collaborative and supportive work environment where your ideas and contributions are valued.
• Opportunities for professional growth and development. Someshwara Software Pvt Ltd is an equal opportunity employer.
We celebrate diversity and are dedicated to creating an inclusive environment for all employees.
NASDAQ listed, Service Provider IT Company
Job Summary:
As a Cloud Architect at organization, you will play a pivotal role in designing, implementing, and maintaining our multi-cloud infrastructure. You will work closely with various teams to ensure our cloud solutions are scalable, secure, and efficient across different cloud providers. Your expertise in multi-cloud strategies, database management, and microservices architecture will be essential to our success.
Key Responsibilities:
- Design and implement scalable, secure, and high-performance cloud architectures across multiple cloud platforms (AWS, Azure, Google Cloud Platform).
- Lead and manage cloud migration projects, ensuring seamless transitions between on-premises and cloud environments.
- Develop and maintain cloud-native solutions leveraging services from various cloud providers.
- Architect and deploy microservices using REST, GraphQL to support our application development needs.
- Collaborate with DevOps and development teams to ensure best practices in continuous integration and deployment (CI/CD).
- Provide guidance on database architecture, including relational and NoSQL databases, ensuring optimal performance and security.
- Implement robust security practices and policies to protect cloud environments and data.
- Design and implement data management strategies, including data governance, data integration, and data security.
- Stay up-to-date with the latest industry trends and emerging technologies to drive continuous improvement and innovation.
- Troubleshoot and resolve cloud infrastructure issues, ensuring high availability and reliability.
- Optimize cost and performance across different cloud environments.
Qualifications/ Experience & Skills Required:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: 10 - 15 Years
- Proven experience as a Cloud Architect or in a similar role, with a strong focus on multi-cloud environments.
- Expertise in cloud migration projects, both lift-and-shift and greenfield implementations.
- Strong knowledge of cloud-native solutions and microservices architecture.
- Proficiency in using GraphQL for designing and implementing APIs.
- Solid understanding of database technologies, including SQL, NoSQL, and cloud-based database solutions.
- Experience with DevOps practices and tools, including CI/CD pipelines.
- Excellent problem-solving skills and ability to troubleshoot complex issues.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
- Deep understanding of cloud security practices and data protection regulations (e.g., GDPR, HIPAA).
- Experience with data management, including data governance, data integration, and data security.
Preferred Skills:
- Certifications in multiple cloud platforms (e.g., AWS Certified Solutions Architect, Google Certified Professional Cloud Architect, Microsoft Certified: Azure Solutions Architect).
- Experience with containerization technologies (Docker, Kubernetes).
- Familiarity with cloud cost management and optimization tools.
the forefront of innovation in the digital video industry
Responsibilities:
- Work with development teams and product managers to ideate software solutions
- Design client-side and server-side architecture
- Creating a well-informed cloud strategy and managing the adaptation process
- Evaluating cloud applications, hardware, and software
- Develop and manage well-functioning databases and applications Write effective APIs
- Participate in the entire application lifecycle, focusing on coding and debugging
- Write clean code to develop, maintain and manage functional web applications
- Get feedback from, and build solutions for, users and customers
- Participate in requirements, design, and code reviews
- Engage with customers to understand and solve their issues
- Collaborate with remote team on implementing new requirements and solving customer problems
- Focus on quality of deliverables with high accountability and commitment to program objectives
Required Skills:
- 7-10 years of software development experience
- Experience using Amazon Web Services (AWS), Microsoft Azure, Google Cloud, or other major cloud computing services.
- Strong skills in Containers, Kubernetes, Helm
- Proficiency in C#, .NET, PHP /Java technologies with an acumen for code analysis, debugging and problem solving
- Strong skills in database design (PostgreSQL or MySQL)
- Experience with caching and message queues
- Experience in REST API framework design
- Strong focus on high-quality and maintainable code
- Understanding of multithreading, memory management, object-oriented programming
Preferred skills:
- Experience in working with Linux OS
- Experience in Core Java programming
- Experience in working with JSP/Servlets, Struts, Spring / Spring Boot, Hibernate
- Experience in working with web technologies HTML, CSS
- Knowledge of development and source versioning tools, particularly Git, Stash, JIRA, and Jenkins.
- Domain Knowledge of Video, Audio Codecs
Job Purpose and Impact
The DevOps Engineer is a key position to strengthen the security automation capabilities which have been identified as a critical area for growth and specialization within Global IT’s scope. As part of the Cyber Intelligence Operation’s DevOps Team, you will be helping shape our automation efforts by building, maintaining and supporting our security infrastructure.
Key Accountabilities
- Collaborate with internal and external partners to understand and evaluate business requirements.
- Implement modern engineering practices to ensure product quality.
- Provide designs, prototypes and implementations incorporating software engineering best practices, tools and monitoring according to industry standards.
- Write well-designed, testable and efficient code using full-stack engineering capability.
- Integrate software components into a fully functional software system.
- Independently solve moderately complex issues with minimal supervision, while escalating more complex issues to appropriate staff.
- Proficiency in at least one configuration management or orchestration tool, such as Ansible.
- Experience with cloud monitoring and logging services.
Qualifications
Minimum Qualifications
- Bachelor's degree in a related field or equivalent experience
- Knowledge of public cloud services & application programming interfaces
- Working experience with continuous integration and delivery practices
Preferred Qualifications
- 3-5 years of relevant experience, whether in IT, IS, or software development
- Experience in:
- Code repositories such as Git
- Scripting languages (Python & PowerShell)
- Using Windows, Linux, Unix, and mobile platforms within cloud services such as AWS
- Cloud infrastructure as a service (IaaS) / platform as a service (PaaS), microservices, Docker containers, Kubernetes, Terraform, Jenkins
- Databases such as Postgres, SQL, Elastic
Job Description - Manager Sales
Minimum 15 years of experience.
Should have experience in sales of the Cloud IT SaaS product portfolio that Savex deals with.
Team management experience, leading the cloud business including teams.
Sales manager - Cloud Solutions
Reporting to Sr Management
Good personality
Distribution background
Keen on Channel partners
Good database of OEMs and channel partners.
Age group - 35 to 45 years
Male Candidate
Good communication
B2B Channel Sales
Location - Bangalore
If interested, reply with your CV and the details below:
Total experience -
Current CTC -
Expected CTC -
Notice period -
Current location -
Qualification -
Total experience in channel sales -
What Cloud IT products have you done sales for?
What is the annual revenue generated through sales?
Experience: 5+ Years
• Experience in Core Java, Spring Boot
• Experience in microservices and Angular
• Extensive experience in developing enterprise-scale systems for global organization. Should possess good architectural knowledge and be aware of enterprise application design patterns.
• Should be able to analyze, design, develop and test complex, low-latency client-facing applications.
• Good development experience with RDBMS in SQL Server, Postgres, Oracle or DB2
• Good knowledge of multi-threading
• Basic working knowledge of Unix/Linux
• Excellent problem solving and coding skills in Java
• Strong interpersonal, communication and analytical skills.
• Should be able to express their design ideas and thoughts
Responsibilities:
- Design, implement, and maintain robust CI/CD pipelines using Azure DevOps for continuous integration and continuous delivery (CI/CD) of software applications.
- Provision and manage infrastructure resources on Microsoft Azure, including virtual machines, containers, storage, and networking components.
- Implement and manage Kubernetes clusters for containerized application deployments and orchestration.
- Configure and utilize Azure Container Registry (ACR) for secure container image storage and management.
- Automate infrastructure provisioning and configuration management using tools like Azure Resource Manager (ARM) templates.
- Monitor application performance and identify potential bottlenecks using Azure monitoring tools.
- Collaborate with developers and operations teams to identify and implement continuous improvement opportunities for the DevOps process.
- Troubleshoot and resolve DevOps-related issues, ensuring smooth and efficient software delivery.
- Stay up-to-date with the latest advancements in cloud technologies, DevOps tools, and best practices.
- Maintain a strong focus on security throughout the software delivery lifecycle.
- Participate in code reviews to identify potential infrastructure and deployment issues.
- Effectively communicate with technical and non-technical audiences on DevOps processes and initiatives.
Qualifications:
- Proven experience in designing and implementing CI/CD pipelines using Azure DevOps.
- In-depth knowledge of Microsoft Azure cloud platform services (IaaS, PaaS, SaaS).
- Expertise in deploying and managing containerized applications using Kubernetes.
- Experience with Infrastructure as Code (IaC) tools like ARM templates.
- Familiarity with Azure monitoring tools and troubleshooting techniques.
- A strong understanding of DevOps principles and methodologies (Agile, Lean).
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
- Strong written and verbal communication skills.
- A minimum of one relevant Microsoft certification (e.g., Azure Administrator Associate, DevOps Engineer Expert) is highly preferred.
GCP Cloud Engineer:
- Proficiency in infrastructure as code (Terraform).
- Scripting and automation skills (e.g., Python, shell); proficiency in Python is a must.
- Collaborate with teams across the company (i.e., network, security, operations) to build complete cloud offerings.
- Design Disaster Recovery and backup strategies to meet application objectives.
- Working knowledge of Google Cloud
- Working knowledge of various tools, open-source technologies, and cloud services
- Experience working on Linux based infrastructure.
- Excellent problem-solving and troubleshooting skills
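The scripting-and-automation skills above often come down to making cloud calls resilient. A minimal sketch of the retry-with-exponential-backoff pattern common in cloud automation scripts (the `flaky_api_call` function and its failure mode are purely illustrative, not a real GCP API):

```python
import time

def retry(max_attempts=3, base_delay=0.01):
    """Retry a flaky call with exponential backoff between attempts."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except OSError:
                    if attempt == max_attempts:
                        raise  # give up after the final attempt
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator

calls = {"n": 0}

@retry(max_attempts=4)
def flaky_api_call():
    # Simulates a cloud API that fails transiently twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("transient network error")
    return "ok"

print(flaky_api_call())  # ok (succeeds on the third attempt)
```

In a real script the decorated function would wrap an SDK or CLI call; the backoff keeps retries from hammering a struggling endpoint.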
at Wissen Technology
Job Title: .NET Developer with Cloud Migration Experience
Job Description:
We are seeking a skilled .NET Developer with experience in C#, MVC, and ASP.NET to join our team. The ideal candidate will also have hands-on experience with cloud migration projects, particularly in migrating on-premise applications to cloud platforms such as Azure or AWS.
Responsibilities:
- Develop, test, and maintain .NET applications using C#, MVC, and ASP.NET
- Collaborate with cross-functional teams to define, design, and ship new features
- Participate in code reviews and ensure coding best practices are followed
- Work closely with the infrastructure team to migrate on-premise applications to the cloud
- Troubleshoot and debug issues that arise during migration and post-migration phases
- Stay updated with the latest trends and technologies in .NET development and cloud computing
Requirements:
- Bachelor's degree in Computer Science or related field
- X+ years of experience in .NET development using C#, MVC, and ASP.NET
- Hands-on experience with cloud migration projects, preferably with Azure or AWS
- Strong understanding of cloud computing concepts and principles
- Experience with database technologies such as SQL Server
- Excellent problem-solving and communication skills
Preferred Qualifications:
- Microsoft Azure or AWS certification
- Experience with other cloud platforms such as Google Cloud Platform (GCP)
- Familiarity with DevOps practices and tools
Publicis Sapient Overview:
As a Senior Associate in Data Engineering at Publicis Sapient, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and will independently drive design discussions to ensure the overall health of the solution.
Job Summary:
As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and will independently drive design discussions to ensure the overall health of the solution.
The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is also required.
Role & Responsibilities:
Your role is focused on Design, Development and delivery of solutions involving:
• Data Integration, Processing & Governance
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Implement scalable architectural models for data processing and storage
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode
• Build functionality for data analytics, search and aggregation
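The ingestion-and-aggregation responsibilities above can be sketched as a tiny batch pipeline in plain Python. A real implementation would use Spark, Flink, or similar; the source names (`crm`, `clickstream`) and field names here are made up for illustration:

```python
from collections import defaultdict

def ingest(*sources):
    """Merge records from multiple heterogeneous sources into one stream."""
    for source in sources:
        for record in source:
            yield record

def transform(records):
    """Normalise field names and types across sources."""
    for r in records:
        yield {"user": str(r.get("user") or r.get("uid")),
               "amount": float(r.get("amount", 0))}

def aggregate(records):
    """Total amount per user."""
    totals = defaultdict(float)
    for r in records:
        totals[r["user"]] += r["amount"]
    return dict(totals)

# Two mock sources with different schemas ("user" vs "uid").
crm = [{"user": "alice", "amount": 10}, {"user": "bob", "amount": 5}]
clickstream = [{"uid": "alice", "amount": 2.5}]

print(aggregate(transform(ingest(crm, clickstream))))
# {'alice': 12.5, 'bob': 5.0}
```

The generator chaining mirrors how pipeline stages compose in real frameworks: each stage consumes the previous one lazily, so nothing is materialised until the aggregation.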
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1. Overall 5+ years of IT experience, with 3+ years in data-related technologies
2. Minimum 2.5 years of experience in Big Data technologies and working exposure to related data services on at least one cloud platform (AWS / Azure / GCP)
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build end-to-end data pipelines
4. Strong experience in at least one of Java, Scala, or Python (Java preferred)
5. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
6. Well-versed, working knowledge of data-platform services, IAM, and data security on at least one cloud platform
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good hands-on knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres)
2. Knowledge of data governance processes (security, lineage, catalog) and tools such as Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks (ActiveMQ / RabbitMQ / Solace), search and indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infrastructure provisioning on cloud, automated build and deployment pipelines, code quality
6. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Publicis Sapient Overview:
As a Senior Associate in Data Engineering at Publicis Sapient, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and will independently drive design discussions to ensure the overall health of the solution.
Job Summary:
As a Senior Associate L1 in Data Engineering, you will produce technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and will independently drive design discussions to ensure the overall health of the solution.
The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferable.
Role & Responsibilities:
Job Title: Senior Associate L1 – Data Engineering
Your role is focused on Design, Development and delivery of solutions involving:
• Data Ingestion, Integration and Transformation
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1. Overall 3.5+ years of IT experience, with 1.5+ years in data-related technologies
2. Minimum 1.5 years of experience in Big Data technologies
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.
4. Strong experience in at least one of Java, Scala, or Python (Java preferred)
5. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good hands-on knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres)
2. Knowledge of data governance processes (security, lineage, catalog) and tools such as Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks (ActiveMQ / RabbitMQ / Solace), search and indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infrastructure provisioning on cloud, automated build and deployment pipelines, code quality
6. Working knowledge of data-platform services, IAM, and data security on at least one cloud platform
7. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
A LEADING US BASED MNC
Data Engineering : Senior Engineer / Manager
As a Senior Engineer / Manager in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and will independently drive design discussions to ensure the overall health of the solution.
Must Have skills :
1. GCP
2. Spark Streaming: live data-streaming experience is desired.
3. Any one coding language: Java / Python / Scala
Skills & Experience :
- Overall experience of at least 5 years, with a minimum of 4 years of relevant experience in Big Data technologies
- Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.
- Strong experience in at least one of Java, Scala, or Python (Java preferred)
- Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
- Well-versed, working knowledge of data-platform services on GCP
- Bachelor's degree and 6 to 12 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position
Your Impact :
- Data Ingestion, Integration and Transformation
- Data Storage and Computation Frameworks, Performance Optimizations
- Analytics & Visualizations
- Infrastructure & Cloud Computing
- Data Management Platforms
- Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
- Build functionality for data analytics, search and aggregation
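Since live data streaming is a must-have above, here is a toy sliding-window aggregation, the core idea behind windowed counts in Spark Streaming or Flink. Framework APIs differ substantially; this pure-Python sketch only illustrates the eviction logic:

```python
from collections import deque

class SlidingWindowCounter:
    """Counts events per key within a trailing time window."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.events = deque()  # (timestamp, key), ordered by arrival

    def add(self, timestamp, key):
        self.events.append((timestamp, key))
        self._evict(timestamp)

    def _evict(self, now):
        # Drop events that have aged out of the trailing window.
        while self.events and self.events[0][0] < now - self.window:
            self.events.popleft()

    def count(self, key):
        return sum(1 for _, k in self.events if k == key)

w = SlidingWindowCounter(window_seconds=60)
w.add(0, "login")
w.add(30, "login")
w.add(90, "login")
print(w.count("login"))  # 2 -- the event at t=0 has aged out of the 60 s window
```

Real stream processors keep this state partitioned by key and checkpointed for fault tolerance, but the window-and-evict mechanic is the same.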
Job Title: Senior Full Stack Engineer
Location: Bangalore
About threedots:
At threedots, we are committed to helping our customers navigate the complex world of secured credit financing. Our mission is to facilitate financial empowerment through innovative secured credit solutions such as loans against property, securities, FDs & more. Founded by early members of Groww, we are a well-funded startup with over $4M in funding from India’s top investors.
Role Overview:
The Senior Full Stack Engineer will be responsible for developing and managing our web infrastructure and leading a team of talented engineers. With a solid background in both front and back-end technologies, and a proven track record of developing scalable web applications, the ideal candidate will have a hands-on approach and a leader's mindset.
Key Responsibilities:
- Lead the design, development, and deployment of our Node and ReactJS-based applications.
- Architect scalable and maintainable web applications that can handle the needs of a rapidly growing user base.
- Ensure the technical feasibility and smooth integration of UI/UX designs.
- Optimize applications for maximum speed and scalability.
- Implement comprehensive security and data protection.
- Manage and review code contributed by the team and maintain high standards of software quality.
- Deploy applications on AWS/GCP and manage server infrastructure.
- Work collaboratively with cross-functional teams to define, design, and ship new features.
- Provide technical leadership and mentorship to other team members.
- Keep abreast of the latest technological advancements to leverage new tech and tools.
Minimum Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Minimum 3 years of experience as a full-stack developer.
- Proficient in Node.js and ReactJS.
- Experience with cloud services (AWS/GCP).
- Solid understanding of web technologies, including HTML5, CSS3, JavaScript, and responsive design.
- Experience with databases, web servers, and UI/UX design.
- Strong problem-solving skills and the ability to make sound architectural decisions.
- Proven ability to lead and mentor a tech team.
Preferred Qualifications:
- Experience in fintech
- Strong knowledge of software development methodologies and best practices.
- Experience with CI/CD pipelines and automated testing.
- Familiarity with microservices architecture.
- Excellent communication and leadership skills.
What We Offer:
- The opportunity to be part of a founding team and shape the company's future.
- Competitive salary with equity options.
- A creative and collaborative work environment.
- Professional growth opportunities as the company expands.
- Additional Startup Perks
How You'll Contribute:
● Redefine fintech architecture standards by building easy-to-use, highly scalable, robust, and flexible APIs
● Analyze systems and architectures in depth, predict potential future breakdowns, and proactively propose solutions
● Partner with internal stakeholders to identify potential feature implementations that could cater to our growing business needs
● Drive the team towards writing high-quality code, tackling abstractions and flaws in system design to improve API performance, code reusability, and readability
● Think through the complex fintech infrastructure and propose an easy-to-deploy, modular infrastructure that can adapt and adjust to the specific requirements of a growing client base
● Design and build for scale, optimized memory usage, and high-throughput performance
Skills Required:
● 5+ years of experience in the development of complex distributed systems
● Prior experience building sustainable, reliable, and secure microservice-based scalable architectures in Python
● In-depth understanding of Python and its associated libraries and frameworks
● Strong involvement in managing and maintaining production-level code with high-volume API hits and low-latency APIs
● Strong knowledge of data structures, algorithms, design patterns, multithreading concepts, etc.
● Ability to design and implement technical roadmaps for the system and its components
● Bring in new software development practices and design/architecture innovations to make our tech stack more robust
● Hands-on experience with cloud technologies (AWS/GCP/Azure), as well as relational databases (MySQL/PostgreSQL) or a NoSQL database such as DynamoDB
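A common low-latency tactic implied by the requirements above is memoising hot, read-mostly lookups. A minimal sketch using Python's standard `functools.lru_cache`; `load_fx_rate` and its data are invented stand-ins for an expensive database or network call:

```python
from functools import lru_cache

CALLS = {"n": 0}  # instrumentation to show how often real work happens

@lru_cache(maxsize=1024)
def load_fx_rate(pair):
    # Stand-in for a slow DB/network fetch; values are illustrative.
    CALLS["n"] += 1
    return {"USDINR": 83.2, "EURUSD": 1.09}[pair]

for _ in range(1000):
    load_fx_rate("USDINR")  # only the first call does real work

print(CALLS["n"])  # 1
```

The trade-off to discuss in a design review: caches shift load off the backend at the cost of staleness, so a production version would add a TTL or explicit invalidation.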
Description:
As a Data Engineering Lead at Company, you will be at the forefront of shaping and managing our data infrastructure with a primary focus on Google Cloud Platform (GCP). You will lead a team of data engineers to design, develop, and maintain our data pipelines, ensuring data quality, scalability, and availability for critical business insights.
Key Responsibilities:
1. Team Leadership:
a. Lead and mentor a team of data engineers, providing guidance, coaching, and performance management.
b. Foster a culture of innovation, collaboration, and continuous learning within the team.
2. Data Pipeline Development (Google Cloud Focus):
a. Design, develop, and maintain scalable data pipelines on Google Cloud Platform (GCP) using services such as BigQuery, Dataflow, and Dataprep.
b. Implement best practices for data extraction, transformation, and loading (ETL) processes on GCP.
3. Data Architecture and Optimization:
a. Define and enforce data architecture standards, ensuring data is structured and organized efficiently.
b. Optimize data storage, processing, and retrieval for maximum performance and cost-effectiveness on GCP.
4. Data Governance and Quality:
a. Establish data governance frameworks and policies to maintain data quality, consistency, and compliance with regulatory requirements.
b. Implement data monitoring and alerting systems to proactively address data quality issues.
5. Cross-functional Collaboration:
a. Collaborate with data scientists, analysts, and other cross-functional teams to understand data requirements and deliver data solutions that drive business insights.
b. Participate in discussions regarding data strategy and provide technical expertise.
6. Documentation and Best Practices:
a. Create and maintain documentation for data engineering processes, standards, and best practices.
b. Stay up-to-date with industry trends and emerging technologies, making recommendations for improvements as needed.
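The data-quality monitoring described under Data Governance and Quality can be sketched as a simple gate run before loading a batch. Thresholds, field names, and the sample records below are illustrative only:

```python
def null_rate(rows, field):
    """Fraction of rows where `field` is missing or empty."""
    missing = sum(1 for r in rows if r.get(field) in (None, ""))
    return missing / len(rows) if rows else 0.0

def quality_gate(rows, field, max_null_rate=0.1):
    """Pass/fail check a pipeline could run before loading a batch."""
    rate = null_rate(rows, field)
    return {"field": field, "null_rate": rate, "passed": rate <= max_null_rate}

batch = [{"id": 1, "email": "a@x.com"},
         {"id": 2, "email": None},
         {"id": 3, "email": "c@x.com"},
         {"id": 4, "email": "d@x.com"}]

print(quality_gate(batch, "email", max_null_rate=0.3))
# {'field': 'email', 'null_rate': 0.25, 'passed': True}
```

In a GCP pipeline the same check would typically run as a Dataflow step or a scheduled BigQuery assertion, with failures routed to alerting rather than printed.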
Qualifications
● Bachelor's or Master's degree in Computer Science, Data Engineering, or related field.
● 5+ years of experience in data engineering, with a strong emphasis on Google Cloud Platform.
● Proficiency in Google Cloud services, including BigQuery, Dataflow, Dataprep, and Cloud Storage.
● Experience with data modeling, ETL processes, and data integration.
● Strong programming skills in languages like Python or Java.
● Excellent problem-solving and communication skills.
● Leadership experience and the ability to manage and mentor a team.
Lead Data Engineer
Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.
Job responsibilities
· You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems
· You will partner with teammates to create complex data processing pipelines in order to solve our clients' most ambitious challenges
· You will collaborate with Data Scientists in order to design scalable implementations of their models
· You will pair to write clean and iterative code based on TDD
· Leverage various continuous delivery practices to deploy, support and operate data pipelines
· Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
· Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
· Create data models and speak to the tradeoffs of different modeling approaches
· On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product
· Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process
· Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes
Job qualifications
Technical skills
· You are equally happy coding and leading a team to implement a solution
· You have a track record of innovation and expertise in Data Engineering
· You're passionate about craftsmanship and have applied your expertise across a range of industries and organizations
· You have a deep understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop
· You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting
· Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)
· You are comfortable taking data-driven approaches and applying data security strategy to solve business problems
· You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments
· Working with data excites you: you have created Big data architecture, you can build and operate data pipelines, and maintain data storage, all within distributed systems
Professional skills
· Advocate your data engineering expertise to the broader tech community outside of Thoughtworks, speaking at conferences and acting as a mentor for more junior-level data engineers
· You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives
· An interest in coaching others, sharing your experience and knowledge with teammates
· You enjoy influencing others and always advocate for technical excellence while being open to change when needed
We are looking for a "Sr. Software Engineer (DevOps)" for a reputed client in Bangalore; this is a permanent role.
Experience: 4+ Yrs
Responsibilities:
• As part of a team, you will design, develop, and maintain scalable multi-cloud DevOps blueprints.
• Understand the overall virtualization platform architecture in cloud environments and design best-in-class solutions that fit the SaaS offering and legacy application modernization
• Continuously improve CI/CD pipeline, tools, processes and procedures and systems relating to Developer Productivity
• Collaborate continuously with the product development teams to implement CI/CD pipeline.
• Contribute to the subject matter on Developer Productivity, DevOps, Infrastructure Automation best practices.
Mandatory Skills:
• 1+ years of commercial server-side software development experience & 3+ years of commercial DevOps experience.
• Strong scripting skills (Java or Python) are a must.
• Experience with automation tools such as Ansible, Chef, Puppet etc.
• Hands-on experience with CI/CD tools such as GitLab, Jenkins, Nexus, Artifactory, Maven, Gradle
• Hands-on working experience in developing or deploying microservices is a must.
• Hands-on working experience with at least one of the popular cloud infrastructures (AWS / Azure / GCP / Red Hat OpenStack) is a must.
• Knowledge about microservices hosted in leading cloud environments
• Experience with containerizing applications (Docker preferred) is a must
• Hands-on working experience of automating deployment, scaling, and management of containerized applications (Kubernetes) is a must.
• Strong problem-solving, analytical skills and good understanding of the best practices for building, testing, deploying and monitoring software
Desirable Skills:
• Experience working with Secret management services such as HashiCorp Vault is desirable.
• Experience working with Identity and access management services such as Okta, Cognito is desirable.
• Experience with monitoring systems such as Prometheus, Grafana is desirable.
Educational Qualifications and Experience:
• B.E/B.Tech/MCA/M.Tech (Computer science/Information science/Information Technology is a Plus)
• 4 to 6 years of hands-on experience in server-side application development & DevOps
FINTECH CANDIDATES ONLY
About the job:
Emint is a fintech startup with the mission to ‘Make the best investing product that Indian consumers love to use, with simplicity & intelligence at the core’. We are creating a platform that gives a holistic view of market dynamics and helps our users make smart, disciplined investment decisions. Emint is founded by a stellar team of individuals with decades of experience investing in Indian and global markets. We are building a team of highly skilled and disciplined professionals, and are looking for equally motivated individuals to be part of Emint. We are currently looking to hire a DevOps engineer to join our team in Bangalore.
Job Description :
Must Have:
• Hands on experience on AWS DEVOPS
• Experience in Unix with BASH scripting is must
• Experience working with Kubernetes, Docker.
• Experience with GitLab, GitHub, or Bitbucket, and artifact repositories such as Artifactory
• Packaging, deployment
• CI/CD pipeline experience (Jenkins is preferable)
• CI/CD best practices
Good to Have:
• Startup Experience
• Knowledge of source code management guidelines
• Experience with deployment tools like Ansible/puppet/chef is preferable
• IAM knowledge
• Coding knowledge of Python adds value
• Test automation setup experience
Qualifications:
• Bachelor's degree or equivalent experience in Computer Science or related field
• Graduates from IIT / NIT/ BITS / IIIT preferred
• Professionals with fintech (stock broking / banking) experience preferred
• Experience in building & scaling B2C apps preferred
Overview
Apiwiz (Itorix Inc) is looking for software engineers to join our team, grow with us, introduce us to new ideas and develop products that empower our users. Every day, you’ll work with team members across disciplines developing products for Apiwiz (Itorix Inc). You’ll interact daily with our product managers to understand our domain and create technical solutions that push us forward. We want to work with other engineers who bring knowledge and excitement about our opportunities.
You will impact major features and new product decisions as part of our remarkably high-performing, collaborative team of engineers who thrive on the business impact of their work. With strong team support and significant freedom and self-direction, you will experience the wealth of interesting, challenging problems that only a high-growth startup can provide.
Roles & Responsibilities
- Build, configure, and manage cloud compute and data storage infrastructure for multiple instances of AWS and Google Cloud Platform.
- Manage VPCs, security groups, and user access to our various public cloud systems and services.
- Develop processes and procedures for using cloud-based infrastructures, including, access key rotation, disaster recovery, and building new services.
- Help the business control costs by categorizing and tagging assets running in the cloud.
- Develop scripts and workflows to manage cloud computing systems
- Provide oversight on log aggregation and application performance monitoring surrounding our production environments.
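The cost-control responsibility above (categorizing and tagging cloud assets) reduces to grouping resources by tag and summing spend. A minimal sketch with mocked resource data; in practice the records would come from a billing export or a cloud inventory API, and the tag keys and costs here are invented:

```python
from collections import defaultdict

# Mocked inventory records -- real data would come from a billing export.
resources = [
    {"id": "i-1", "tags": {"team": "web"},  "monthly_cost": 120.0},
    {"id": "i-2", "tags": {"team": "data"}, "monthly_cost": 340.0},
    {"id": "i-3", "tags": {},               "monthly_cost": 55.0},
]

def cost_by_tag(resources, tag_key, untagged="(untagged)"):
    """Sum monthly cost per value of `tag_key`, bucketing untagged assets."""
    totals = defaultdict(float)
    for r in resources:
        totals[r["tags"].get(tag_key, untagged)] += r["monthly_cost"]
    return dict(totals)

print(cost_by_tag(resources, "team"))
# {'web': 120.0, 'data': 340.0, '(untagged)': 55.0}
```

The "(untagged)" bucket is the useful part operationally: it surfaces assets that escaped the tagging policy and therefore can't be charged back to a team.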
What we’re looking for
- 2-3 years of experience in provisioning, configuring, administering, automating, monitoring, and supporting enterprise cloud services
- Strong experience in designing, building, maintaining and securing AWS resources for high-availability and production level systems and services
- Familiar with Cloud concepts with practical hands-on experience on any Cloud Platform.
- Hands-on experience with AWS services like Elastic Compute Cloud (EC2), Elastic Load-balancers, S3, Elastic File system, VPC, Route53, and IAM.
- Providing 24/7 support for applications and infrastructure
- Prior experience using an infrastructure-as-code tool such as Terraform.
- Knowledge in software provisioning, configuration management, and application-deployment tools like Ansible.
- Working knowledge of container technologies like Docker & Kubernetes cluster operations.
- Familiarity with software automation tools Git, Jenkins, Code Pipeline, SonarQube
Skills and Qualifications:
Strong experience with continuous integration/continuous deployment (CI/CD) pipeline tools such as Jenkins, TravisCI, or GitLab CI.
Proficiency in scripting languages such as Python, Bash, or Ruby.
Knowledge of infrastructure automation tools such as Ansible, Puppet, or Terraform.
Experience with cloud platforms such as AWS, Azure, or GCP.
Knowledge of container orchestration tools such as Docker, Kubernetes, or OpenShift.
Experience with version control systems such as Git.
Familiarity with Agile methodologies and practices.
Understanding of networking concepts and principles.
Knowledge of database technologies such as MySQL, MongoDB, or PostgreSQL.
Good understanding of security and data protection principles.
Roles and responsibilities:
● Building and setting up new development tools and infrastructure
● Working on ways to automate and improve development and release processes
● Deploy updates and fixes
● Helping to ensure information security best practices
● Provide Level 2 technical support
● Perform root cause analysis for production errors
● Investigate and resolve technical issues
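Automating release processes, as the responsibilities above describe, often starts with small helpers a CI pipeline can call. A sketch of a semantic-version bump function (the version scheme and function name are illustrative, not tied to any specific tool):

```python
def bump(version, part):
    """Bump one component of a MAJOR.MINOR.PATCH version string."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"      # breaking change resets minor/patch
    if part == "minor":
        return f"{major}.{minor + 1}.0"  # new feature resets patch
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")

print(bump("1.4.9", "patch"))  # 1.4.10
print(bump("1.4.9", "minor"))  # 1.5.0
```

A release pipeline would typically call something like this, write the new version to a file or tag, and let the rest of the CD stages pick it up.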
Objectives :
- Building and setting up new development tools and infrastructure
- Working on ways to automate and improve development and release processes
- Testing code written by others and analyzing results
- Ensuring that systems are safe and secure against cybersecurity threats
- Identifying technical problems and developing software updates and ‘fixes’
- Working with software developers and software engineers to ensure that development follows established processes and works as intended
- Planning out projects and being involved in project management decisions
Daily and Monthly Responsibilities :
- Deploy updates and fixes
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Skills and Qualifications :
- Degree (e.g., BSc) in Computer Science, Software Engineering, or a relevant field
- 3+ years of experience as a DevOps Engineer or similar software engineering role
- Proficient with git and git workflows
- Good logical skills and knowledge of programming concepts(OOPS,Data Structures)
- Working knowledge of databases and SQL
- Problem-solving attitude
- Collaborative team spirit
We have the below active job vacancies open with a global aerospace brand in Bangalore (Hebbal & Devanahalli).
Java Full Stack Developer (Product) | 5-17 Y | Bangalore (Hebbal) | WFO | Locals Only | F2F Must |
Role: Java Full Stack Developer
Work Model: Hybrid (2-3 days in office)
Mode of Interview: F2F @ CV Raman Nagar & Hebbal - Office Site
Work Sites: Hebbal until March 2024, thereafter Devanahalli
Key Skills: Java, Full Stack, Microservices, Spring Boot, Spring, JavaScript, HTML/CSS, Angular, Cloud (Azure or AWS), DevOps, Database
Levels (all roles demand high technical expertise and individual contribution):
Associate Software Engineer: 5 - 8 Yrs
Senior Java Full Stack Developer: 8 - 12 Yrs
Lead Java Full Stack Developer: 12 - 17 Yrs
Prefer applicants from Aerospace, Consumer Tech Products & Electronics, Automotive, Unicorn, or D2C brands who can join on short notice (30 days).
If this role excites you, please apply here.
Now, more than ever, the Toast team is committed to our customers. We’re taking steps to help restaurants navigate these unprecedented times with technology, resources, and community. Our focus is on building a restaurant platform that helps restaurants adapt, take control, and get back to what they do best: building the businesses they love. And because our technology is purpose-built for restaurants by restaurant people, restaurants can trust that we’ll deliver on their needs for today while investing in experiences that will power their restaurant of the future.
At Toast, our Site Reliability Engineers (SREs) are responsible for keeping all customer-facing services and other Toast production systems running smoothly. SREs are a blend of pragmatic operators and software craftspeople who apply sound software engineering principles, operational discipline, and mature automation to our environments and our codebase. Our decisions are based on instrumentation and continuous observability, as well as predictions and capacity planning.
About this roll* (Responsibilities)
- Gather and analyze metrics from both operating systems and applications to assist in performance tuning and fault finding
- Partner with development teams to improve services through rigorous testing and release procedures
- Participate in system design consulting, platform management, and capacity planning
- Create sustainable systems and services through automation and uplift
- Balance feature development speed and reliability with well-defined service level objectives
Troubleshooting and Supporting Escalations:
- Gather and analyze metrics from both operating systems and applications to assist in performance tuning and fault finding
- Diagnose performance bottlenecks and implement optimizations across infrastructure, databases, web, and mobile applications
- Implement strategies to increase system reliability and performance through on-call rotation and process optimization
- Run blameless RCAs on incidents and outages, aggressively looking for answers that will prevent the incident from ever happening again
Do you have the right ingredients? (Requirements)
- Extensive industry experience with at least 7+ years in SRE and/or DevOps roles
- Polyglot technologist/generalist with a thirst for learning
- Deep understanding of cloud and microservice architecture and the JVM
- Experience with tools such as APM, Terraform, Ansible, GitHub, Jenkins, and Docker
- Experience developing software or software projects in at least four languages, ideally including two of Go, Python, and Java
- Experience with cloud computing technologies (AWS preferred)
Bread puns are encouraged but not required
Golang Developer
Location: Chennai/ Hyderabad/Pune/Noida/Bangalore
Experience: 4+ years
Notice Period: Immediate/ 15 days
Job Description:
- At least 3 years of hands-on experience working with Golang.
- Strong cloud experience is required for day-to-day work.
- Good communication skills are a plus.
- Skills: AWS, GCP, Azure, Golang
We are building a consumer-first rewards platform that brings personalised offers and rewards for every consumer.
This is a very early opportunity, you will be working with the Founding team to build Seek's first product and business from the ground up.
What will I do in this role?
- develop designs into high-performance Flutter apps for Android and iOS
- own and be responsible for performance, security and experience on the Seek mobile apps
What's in it for me?
- You'll be one of the earliest members in the team
- Experience how a startup is built in its early days
- Explore and acquire new skills along with building depth in your desired field of work
Required Skills and Interests
- Flutter
- Firebase
- Android/iOS Development
- AWS
- Full-stack preferred
- 5+ years of experience in DevOps including automated system configuration, application deployment, and infrastructure-as-code.
- Advanced Linux system administration abilities.
- Real-world experience managing large-scale AWS or GCP environments. Multi-account management a plus.
- Experience with managing production environments on AWS or GCP.
- Solid understanding of CI/CD pipelines using GitHub, CircleCI/Jenkins, JFrog Artifactory/Nexus.
- Experience with configuration management tools such as Ansible, Puppet, or Chef is a must.
- Experience in any one of the scripting languages: Shell, Python, etc.
- Experience in containerization using Docker and orchestration using Kubernetes/EKS/GKE is a must.
- Solid understanding of SSL and DNS.
- Experience deploying and running any open-source monitoring/graphing solution such as Prometheus, Grafana, etc.
- Basic understanding of networking concepts.
- Always adhere to security best practices.
- Knowledge of Big Data (Hadoop/Druid) systems administration will be a plus.
- Knowledge of managing and running DBs (MySQL/MariaDB/Postgres) will be an added advantage.
What you get to do
- Work with development teams to build and maintain cloud environments to specifications developed closely with multiple teams. Support and automate the deployment of applications into those environments
- Diagnose and resolve occurring, latent and systemic reliability issues across entire stack: hardware, software, application and network. Work closely with development teams to troubleshoot and resolve application and service issues
- Continuously improve Conviva SaaS services and infrastructure for availability, performance and security
- Implement security best practices – primarily patching of operating systems and applications
- Automate everything. Build proactive monitoring and alerting tools. Provide standards, documentation, and coaching to developers.
- Participate in 12x7 on-call rotations
- Work with third party service/support providers for installations, support related calls, problem resolutions etc.
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, visit https://www.happyfox.com/
Responsibilities:
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research and build/implement systems, services and tooling to improve the uptime, reliability and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts or building infrastructure tools using Python/Ruby/Bash/Golang
- Implement consistent observability, deployment and IaC setups
- Patch production systems to fix security/performance issues
- Actively respond to escalations/incidents in the production environment from customers or the support team
- Mentor other Infrastructure engineers, review their work and continuously ship improvements to production infrastructure.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Participate in infrastructure security audits
Requirements:
- At least 5 years of experience in handling/building Production environments in AWS.
- At least 2 years of programming experience in building API/backend services for customer-facing applications in production.
- Demonstrable knowledge of TCP/IP, HTTP and DNS fundamentals.
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts using any scripting language such as Python, Ruby, Bash, etc.
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points – if you have experience with Nginx, Postgres, Redis, and Mongo systems in production.
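The requirements above call for automation scripts in a language such as Python. As a hedged illustration only (the threshold, function names, and alerting hook are invented for this sketch, not HappyFox specifics), a minimal disk-usage check of the kind that feeds an alerting pipeline could look like:

```python
import shutil

ALERT_THRESHOLD = 90.0  # percent used; illustrative value, not from the posting

def percent_used(total, used):
    """Return disk usage as a percentage of capacity, rounded to one decimal."""
    return round(used / total * 100, 1)

def check_disk(path="/"):
    """Return (percent_used, should_alert) for a mount point -- the kind of
    small automation script the role asks for; in practice the boolean would
    be wired into monitoring/alerting rather than printed."""
    usage = shutil.disk_usage(path)
    pct = percent_used(usage.total, usage.used)
    return pct, pct >= ALERT_THRESHOLD
```

A cron job or systemd timer would run `check_disk` per mount point and forward alerts to the on-call channel.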
About us:
HappyFox is a software-as-a-service (SaaS) support platform. To know more, visit https://www.happyfox.com/
Responsibilities
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research and build/implement systems, services and tooling to improve the uptime, reliability and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
- Implement consistent observability, deployment and IaC setups
- Lead incident management and actively respond to escalations/incidents in the production environment from customers and the support team.
- Hire/Mentor other Infrastructure engineers and review their work to continuously ship improvements to production infrastructure and its tooling.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Lead infrastructure security audits
Requirements
- At least 7 years of experience in handling/building Production environments in AWS.
- At least 3 years of programming experience in building API/backend services for customer-facing applications in production.
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts or building infrastructure tools using Python/Ruby/Bash/Golang
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Experience in security hardening of infrastructure, systems and services.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Exposure/Experience in setting up or managing Cloudflare, Qualys and other related tools
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points – Hands-on experience with Nginx, Postgres, Postfix, Redis or Mongo systems.
Main tasks
- Supervise the CI/CD process for automated builds and deployments of web services, web applications, and desktop tools in cloud and container environments
- Take responsibility for the operations side of a DevOps organization, especially for development with container technology and orchestration, e.g. Kubernetes
- Install, operate, and monitor web applications in cloud data centers, both for development/test purposes and for running our own production cloud
- Implement installations of the solution, especially in the container context
- Introduce, maintain, and improve installation solutions for development in desktop and server environments as well as in the cloud and with on-premise Kubernetes
- Maintain the system installation documentation and deliver training sessions
- Execute internal software tests and support the involved teams and stakeholders
- Hands-on experience with Azure DevOps.
Qualification profile
- Bachelor’s or master’s degree in communications engineering, electrical engineering, physics or comparable qualification
- Experience in software
- Installation and administration of Linux and Windows systems including network and firewalling aspects
- Experience with build and deployment automation using tools like Jenkins, Gradle, Argo, ArangoDB or similar, as well as system scripting (Bash, PowerShell, etc.)
- Interest in operation and monitoring of applications in virtualized and containerized environments in cloud and on-premise
- Server environments, especially application, web, and database servers
- Knowledge of VMware/K3d/Rancher is an advantage
- Good spoken and written knowledge of English
Job Description:
• Drive end-to-end automation from GitHub/GitLab/BitBucket to deployment, observability, and enabling SRE activities
• Guide operations support (setup, configuration, management, troubleshooting) of digital platforms and applications
• Solid understanding of DevSecOps workflows that support CI, CS, CD, CM, CT
• Deploy, configure, and manage SaaS and PaaS cloud platforms and applications
• Provide Level 1 (OS, patching) and Level 2 (app server instance troubleshooting) support
• DevOps programming: writing scripts, building operations/server instance/app/DB monitoring tools
• Set up and manage the continuous build and dev project management environment: Jenkins X/GitHub Actions/Tekton, Git, Jira
• Design secure networks, systems, and application architectures
• Collaborate with cross-functional teams to ensure secure product development
• Disaster recovery, network forensics analysis, and pen-testing solutions
• Plan, research, and develop security policies, standards, and procedures
• Awareness training of the workforce on information security standards, policies, and best practices
• Installation and use of firewalls, data encryption, and other security products and procedures
• Maturity in understanding compliance, policy, and cloud governance, and the ability to identify and execute automation
• At Wesco, we discuss solutions more than problems. We celebrate innovation and creativity.
Role Introduction
• This role involves guiding the DevOps team towards successful delivery of governance and toolchain initiatives by removing manual tasks.
• Operate toolchain applications to empower engineering teams by providing reliable, governed self-service tools and supporting their adoption.
• Drive good practice for consumption and utilisation of the engineering toolchain, with a focus on DevOps practices.
• Drive good governance for cloud service consumption.
• Involves working in a collaborative environment, with a focus on leading the team and providing technical leadership to team members.
• Involves setting up processes and improvements for teams supporting and governing the various DevOps tooling.
• Coordinate with multiple teams within the organization.
• Lead on handovers from architecture teams to support major project rollouts which require the Toolchain Governance DevOps team to operationally support tooling.
What you will do
• Identify and implement best practices, process improvements and automation initiatives that speed up delivery by removing manual tasks.
• Ensure best practices and processes are documented for reusability, and keep up to date on good practices and standards.
• Build re-usable automation and compliance services, tools and processes.
• Support and manage the toolchain, toolchain changes and selection.
• Identify and implement risk mitigation plans, avoid escalations, and resolve blockers for teams. Toolchain governance will involve operating and responding to alerts, enforcing good tooling governance by driving automation, remediating technical debt and ensuring the latest tools are utilised and on the latest versions.
• Triage product pipelines, performance issues, SLA/SLO breaches and service unavailability, along with ancillary actions such as providing access to logs, tools and environments.
• Be involved in initial/detailed estimates during roadmap planning or feature estimation/planning of any automation identified for a given toolset.
• Develop, refine, and tune integrations between various tools.
• Discuss implementation and deployment challenges with the Product Owner/team, assist in arriving at probable solutions, and escalate any risks w.r.t. the DevOps toolchain so they get resolved.
• In consultation with the Head of DevOps and other stakeholders, prioritise items and break items into tasks; be accountable for squad deliverables for the sprint.
• Review current components, plan for upgrades, and ensure they are communicated to the wider audience within the organization.
• Review access/roles, and enhance and automate provisioning.
• Identify and encourage areas for growth and improvement within the team, e.g. conduct regular 1-2-1s with squad members to provide support, mentoring and goal setting.
• Be involved in performance management, rewards and recognition of team members, and in the hiring process.
• Plan for upskilling the team on the tools and tasks, and ensure quick onboarding of new joiners/freshers so they become productive.
• Review ticket metrics, including SLAs, to measure the health of the project and plan for improvement.
• Be on call for critical incidents that happen out of hours, based on the tooling SLA. This may include planning a standby schedule for the squad, carrying out a retrospective for every callout, and reviewing SLIs/SLOs.
• Own the tech/repair debt, risk and compliance for the tooling with respect to infrastructure, pipelines, access, etc.
• Track optimum utilisation of resources and monitor/track the delivery schedule.
• Review solution designs with the Architects/Principal DevOps Engineers as required.
• Provide monthly reporting aligned to DevOps Tooling KPIs.
What you will have
• 8+ years of experience, hands-on DevOps experience, and experience in team management.
• Strong communication and interpersonal skills; a team player.
• Good working experience with CI/CD tools like Jenkins, SonarQube, FOSSA, Harness, Jira, JSM, ServiceNow, etc.
• Good hands-on knowledge of AWS services like EC2, ECS, S3, IAM, SNS, SQS, VPC, Lambda, API Gateway, CloudWatch, CloudFormation, etc.
• Experience in operating and governing a DevOps toolchain.
• Experience in operational monitoring and alerting, and in identifying and delivering on both repair and technical debt.
• Experience and background in ITIL/ITSM processes. The candidate will ensure development of the appropriate (ITSM) model and processes, based on the ITIL Service Management framework. This includes the strategy, design, transition, and operation services and continual service improvement.
• ITSM leadership experience and coaching of processes.
• Experience with various tools like Jenkins, Harness, FOSSA.
• Experience hosting and managing applications on AWS/Azure.
• Experience with CI/CD pipelines (Jenkins build pipelines).
• Experience in containerization (Docker/Kubernetes).
• Experience in any programming language (Node.js or Python preferred).
• Experience architecting and supporting cloud-based products is a plus.
• Experience with PowerShell and Bash is a plus.
• Able to self-manage multiple concurrent small projects, including managing priorities between projects.
• Able to quickly learn new tools.
• Able to mentor/drive junior team members to achieve the desired outcomes of the roadmap.
• Ability to analyse information to identify problems and issues, and make effective decisions within a short span.
• Excellent problem solving and critical thinking.
• Experience integrating various components, including unit testing and CI/CD configuration.
• Experience reviewing the current toolset and planning for upgrades.
• Experience with the Agile framework and Jira/JSM tooling.
• Good communication skills and the ability to communicate/work independently with external teams.
• Highly motivated; able to work proficiently both independently and in a team environment.
• Good knowledge and experience with security constructs –
About Kloud9:
Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.
Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. E-commerce in any industry is limiting and poses a huge challenge in terms of the money spent on physical data infrastructure.
At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.
Our sole focus is to provide cloud expertise to retail industry giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers and developers have been designing, building and implementing solutions for retailers for an average of more than 20 years.
We are a cloud vendor that is both platform and technology independent. Our vendor independence not only gives us a unique perspective into the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.
● Overall 8+ Years of Experience in Web Application development.
● 5+ years of development experience with Java 8, Spring Boot, Microservices and middleware
● 3+ years of designing middleware using the Node.js platform.
● Good to have: 2+ years of experience using Node.js along with the AWS Serverless platform.
● Good experience with JavaScript/TypeScript, event loops, ExpressJS, GraphQL, SQL DB (MySQL), NoSQL DB (MongoDB) and YAML templates.
● Good experience with TDD-driven development and automated unit testing.
● Good experience with exposing and consuming REST APIs on the Java 8 / Spring Boot platform, and with Swagger API contracts.
● Good experience in building Node.js middleware performing transformations, routing, aggregation, orchestration and authentication (JWT/OAuth).
● Experience supporting and working with cross-functional teams in a dynamic environment.
● Experience working in Agile Scrum Methodology.
● Very good problem-solving skills.
● A quick learner with a passion for technology.
● Excellent verbal and written communication skills in English
● Ability to communicate effectively with team members and business stakeholders
Secondary Skill Requirements:
● Experience working with any of LoopBack, NestJS, Hapi.js, Sails.js, Passport.js
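The middleware bullet above mentions JWT/OAuth authentication. As a rough illustrative sketch of the mechanism only (in Python rather than Node.js, with invented helper names and no claim about the actual middleware implementation), HS256 JWT signing and verification can be done with the standard library:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    """Produce an HS256 JWT (header.payload.signature) -- the bearer-token
    shape a middleware issues to authenticated clients."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"

def verify_jwt(token: str, secret: str) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

Production code would also validate registered claims such as `exp` and use a maintained library; this sketch only shows the signing scheme itself.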
Why Explore a Career at Kloud9:
With job opportunities in prime locations in the US, London, Poland and Bengaluru, we help build your career path in the cutting-edge technologies of AI, Machine Learning and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.
German Based IT Start-up
6 - 12 years of professional experience with any of the below stacks:
∙MERN stack: JavaScript - MongoDB - Express - ReactJS - Node.js
∙MEAN stack: JavaScript - MongoDB - Express - AngularJS - Node.js
Requirements:
∙Professional experience with JavaScript and associated web technologies (CSS, semantic HTML).
∙Proficiency in the English language, both written and verbal, sufficient for success in a remote and largely asynchronous work environment.
∙Demonstrated capacity to clearly and concisely communicate about complex technical, architectural, and/or organizational problems and propose thorough iterative solutions.
∙Experience with performance and optimization problems and a demonstrated ability to both diagnose and prevent these problems.
∙Comfort working in a highly agile software development process.
∙Positive and solution-oriented mindset.
∙Experience owning a project from concept to production, including proposal, discussion, and execution.
∙Strong sense of ownership with the eagerness to design and deliver significant and impactful technology solutions.
∙Demonstrated ability to work closely with other parts of the organization.
Why LiftOff?
We at LiftOff specialize in product creation; our main forte lies in helping entrepreneurs realize their dreams. We have helped businesses and entrepreneurs launch more than 70 products.
Many on the team are serial entrepreneurs with a history of successful exits.
As a Devops Engineer, you will work directly with our founders and alongside our engineers on a variety of software projects covering various languages, frameworks, and application architectures.
Must Have
*At least 2 years of hands-on experience working with Kubernetes, preferably on Azure Cloud.
*Well-versed with Kubectl
*Experience in using Azure Monitor, setting up analytics and reports for Azure containers and services.
*Monitoring and observability
*Setting Alerts and auto-scaling
Nice to have
*Scripting and automation
*Experience with Jenkins or any sort of CI/CD pipelines
*Past experience in setting up cloud infrastructure, configurations and database backups
*Experience with Azure App Service
*Experience setting up web socket-based applications.
*Working knowledge of Azure APIM
We are a group of passionate people driven by core values. We strive to make every process transparent and have flexible work timings along with excellent startup culture and vibe.
About Seek
We are building a consumer-first rewards platform that brings personalised offers and rewards for every consumer.
This is a very early opportunity, you will be working with the Founding team to build Seek's first product and business from the ground up.
What will I do in this role?
- develop designs into high-performance Flutter apps for Android and iOS
- own and be responsible for performance, security and experience on the Seek mobile apps
What's in it for me?
- You'll be one of the earliest members in the team
- Experience how a startup is built in its early days
- Explore and acquire new skills along with building depth in your desired field of work
Required Skills and Interests
- Flutter
- Firebase
- Android/iOS Development
- AWS
- Full-stack experience preferred
Responsibilities
● Be a hands-on engineer; ensure the frameworks/infrastructure built are well designed, scalable and of high quality.
● Build and/or operate platforms that are highly available, elastic, scalable, operable and observable.
● Build/adapt and implement tools that empower the TI AI engineering teams to self-manage the infrastructure and services owned by them.
● Identify, articulate, and lead various long-term tech visions, strategies, cross-cutting initiatives and architecture redesigns.
● Design systems and make decisions that will keep pace with the rapid growth of TI AI.
● Document your work and decision-making processes, and lead presentations and discussions in a way that is easy for others to understand.
● Be available on call during emergencies to handle and resolve problems quickly and efficiently.
Requirements
● 2+ years of hands-on experience as a DevOps/Infrastructure engineer with AWS and Kubernetes or similar infrastructure platforms (preferably AWS).
● Hands-on with DevOps principles and practices (everything-as-code, CI/CD, test everything, proactive monitoring, etc.).
● Experience in building and operating distributed systems.
● Understanding of operating systems, virtualization, containerization and networks preferable.
● Hands-on coding in any of the languages like Python or Golang.
● Familiarity with software engineering practices including unit testing, code reviews, and design documentation.
● Strong debugging and problem-solving skills; curiosity about how things work and a love of sharing that knowledge with others.
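The requirements above mention proactive monitoring and hands-on Python coding. A minimal sketch of one common pattern in such tooling, retrying a health probe with exponential backoff (the function and parameter names here are illustrative assumptions, not from the posting):

```python
import time

def check_with_backoff(probe, attempts=4, base_delay=0.5, sleep=time.sleep):
    """Retry a health probe with exponential backoff. `probe` is any callable
    returning True when the service is healthy; `sleep` is injectable so the
    backoff schedule can be tested without real waiting."""
    for attempt in range(attempts):
        if probe():
            return True
        if attempt < attempts - 1:
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    return False
```

In real monitoring code the probe would be an HTTP or TCP check, and a final `False` would page on-call rather than just returning.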
Benefits :
● Work with a world-class team on a very forward-looking problem
● Competitive Pay
● Flat hierarchy
● Health insurance for the family
Please find the JD below:
- Candidate should have good platform experience on Azure with Terraform.
- The DevOps engineer needs to help developers create the pipelines and K8s deployment manifests.
- Good to have experience migrating data from AWS to Azure.
- Manage/automate infrastructure using Terraform. Jenkins is the key CI/CD tool we use, and it will be used to run these Terraform scripts.
- VMs to be provisioned and managed on Azure Cloud.
- Good hands-on experience with networking on the cloud is required.
- Ability to set up databases on VMs as well as managed DBs, and to properly set up cloud-hosted microservices to communicate with the DB services.
- Kubernetes, Storage, Key Vault, Networking (load balancing and routing) and VMs are the key areas of infrastructure expertise which are essential.
- The requirement is to administer Kubernetes clusters end to end (application deployment, managing namespaces, load balancing, policy setup, using blue-green/canary deployment models, etc.).
- Experience in AWS is desirable.
- Python experience is optional; however, PowerShell is mandatory.
- Know-how on the use of GitHub.
- Administration of Azure Kubernetes Service.
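The description above asks for help creating K8s deployment manifests. As an illustrative sketch only (the name, image, and values are placeholders, and the posting's actual toolchain is Jenkins plus Terraform), a small Python helper that builds a minimal `apps/v1` Deployment manifest might look like:

```python
def deployment_manifest(name, image, replicas=2, port=80):
    """Build a minimal Kubernetes Deployment manifest as a Python dict; in
    practice this would be dumped to YAML and applied with kubectl or a
    pipeline step. All names/values here are illustrative placeholders."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},  # must match the pod labels
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                    }],
                },
            },
        },
    }
```

Generating manifests programmatically like this keeps labels and selectors consistent, which is a frequent source of deployment errors when manifests are written by hand.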
Sr Enterprise Software Architect with Cloud skills and preferably having either a GCP Associate or Professional Certification.
The requirement is to understand existing Enterprise Applications and help design a solution to enable Load balancing & Auto Scaling the application to meet certain KPIs.
Should be well versed with
- Designing and Deploying Large enterprise software in Cloud
- Understands Cloud fundamentals
- DevOps & Kubernetes
- Experience deploying cloud applications and monitoring operations
- Preferably a Google Cloud Associate or Professional Certification
Role Description:
● Own, deploy, configure, and manage infrastructure environments and/or applications in both private and public cloud through cross-technology administration (OS, databases, virtual networks), scripting, and monitoring automation execution.
● Manage incidents with a focus on service restoration.
● Act as the primary point of contact for all compute, network, storage, security, or automation incidents/requests.
● Manage the rollout of patches and the release management schedule and implementation.
Technical experience:
● Strong knowledge of scripting languages such as Bash, Python, and Golang.
● Expertise in using command line tools and shells
● Strong working knowledge of Linux/UNIX and related applications
● Knowledge of DevOps practices and an inclination towards automation.
● Sound knowledge of infrastructure-as-code approaches with Puppet, Chef, Ansible, or
Terraform, and Helm (preference for Terraform, Ansible, and Helm).
● Must have strong experience in technologies such as Docker, Kubernetes, OpenShift,
etc.
● Working with REST/gRPC/GraphQL APIs
● Knowledge of networking, firewalls, and network automation
● Experience with Continuous Delivery pipelines - Jenkins/JenkinsX/ArgoCD/Tekton.
● Experience with Git, GitHub, and related tools
● Experience in at least one public cloud provider
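Day-to-day work against REST/gRPC/GraphQL APIs usually needs transient-failure handling. A minimal stdlib-only retry sketch with exponential backoff; the flaky call below is simulated rather than a real endpoint:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def with_retries(call: Callable[[], T], attempts: int = 3, base_delay: float = 0.01) -> T:
    """Retry a callable with exponential backoff; re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Simulated flaky API call: fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky_get():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return {"status": 200}

print(with_retries(flaky_get))  # → {'status': 200}
```

In production one would typically retry only idempotent requests and only on retryable status codes, which this sketch glosses over.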
Skills/Competencies
● Foundation: OS (Linux/Unix) and networking concepts and troubleshooting
● Automation: Bash, Python, or Golang
● CI/CD & Config Management: Jenkins, Ansible, ArgoCD, Helm, Chef/Puppet, Git/GitHub
● Infra as Code: Terraform
● Platform: Docker, K8s, VMs
● Databases: MySQL, PostgreSQL; data stores (MongoDB, Redis, Aerospike) good to have
● Security: Vulnerability Management and Golden Image
● Cloud: Deep working knowledge on any public cloud (GCP preferable)
● Monitoring Tools: Prometheus, Grafana, NewRelic
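For the monitoring stack above, Prometheus scrapes metrics in a plain-text exposition format of the shape `metric_name{label="value",...} 123`. A simplified parser sketch for that line format; it deliberately ignores HELP/TYPE comments, label escaping, and timestamps that real scrapes contain:

```python
import re

# Matches a simplified Prometheus exposition line: name, optional labels, value.
LINE_RE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)(?:\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)$'
)

def parse_metric(line: str):
    """Parse one exposition line into (name, labels_dict, float value), or None."""
    m = LINE_RE.match(line.strip())
    if not m:
        return None
    labels = {}
    if m.group("labels"):
        for pair in m.group("labels").split(","):
            key, _, val = pair.partition("=")
            labels[key.strip()] = val.strip().strip('"')
    return m.group("name"), labels, float(m.group("value"))

sample = 'http_requests_total{method="GET",code="200"} 1027'
print(parse_metric(sample))
# → ('http_requests_total', {'method': 'GET', 'code': '200'}, 1027.0)
```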
Responsibilities:
- Install, configure, and maintain Kubernetes clusters.
- Develop Kubernetes-based solutions.
- Improve Kubernetes infrastructure.
- Work with other engineers to troubleshoot Kubernetes issues.
Kubernetes Engineer Requirements & Skills
- Kubernetes administration experience, including installation, configuration, and troubleshooting
- Kubernetes development experience
- Linux/Unix experience
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal skills
- Ability to work independently and as part of a team
Data Engineer- Senior
Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.
What are you going to do?
Design and develop high-performance, scalable solutions that meet the needs of our customers.
Work closely with Product Management, Architects, and cross-functional teams.
Build and deploy large-scale systems in Java/Python.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Create data tools that help analytics and data science team members build and optimize their algorithms.
Follow best practices for the big data stack.
Use your engineering experience and technical skills to drive features and mentor engineers.
What are we looking for (Competencies):
Bachelor’s degree in computer science, computer engineering, or related technical discipline.
5 to 8 years of overall programming experience in Java and/or Python, including object-oriented design.
Data handling frameworks: working knowledge of one or more frameworks such as Hive, Spark, Storm, Flink, Beam, Airflow, NiFi, etc.
Data infrastructure: experience building, deploying, and maintaining applications on popular cloud infrastructure such as AWS, GCP, etc.
Data stores: expertise in at least one general-purpose NoSQL data store such as Elasticsearch, MongoDB, Redis, Redshift, etc.
Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.
Ability to work with distributed teams in a collaborative and productive manner.
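As a toy illustration of the data-tooling work described above, here is a pure-Python stand-in for the kind of filter-and-aggregate step one would normally express in Spark or Flink; the field names and sample events are hypothetical:

```python
from collections import defaultdict

# Hypothetical event stream: count clicks per campaign, dropping invalid rows.
events = [
    {"campaign": "a", "type": "click"},
    {"campaign": "a", "type": "impression"},
    {"campaign": "b", "type": "click"},
    {"campaign": "a", "type": "click"},
    {"campaign": None, "type": "click"},  # invalid: no campaign, dropped
]

def clicks_per_campaign(rows):
    """Filter to valid click events, then aggregate counts by campaign."""
    counts = defaultdict(int)
    for row in rows:
        if row["campaign"] and row["type"] == "click":
            counts[row["campaign"]] += 1
    return dict(counts)

print(clicks_per_campaign(events))  # → {'a': 2, 'b': 1}
```

In a real pipeline the same filter/aggregate shape maps directly onto a Spark `filter` + `groupBy().count()` or a Flink keyed window aggregation.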
Benefits:
Competitive Salary Packages and benefits.
A collaborative, lively, and upbeat work environment with young professionals.
Job Category: Development
Job Type: Full Time
Job Location: Bangalore
- 3-6 years of software development and operations experience, deploying and maintaining multi-tiered infrastructure and applications at scale.
- Design cloud infrastructure that is secure, scalable, and highly available on AWS
- Experience managing any distributed NoSQL system (Kafka/Cassandra/etc.)
- Experience with Containers, Microservices, deployment and service orchestration using Kubernetes, EKS (preferred), AKS or GKE.
- Strong knowledge of scripting languages such as Python and shell
- Experience and a deep understanding of Kubernetes.
- Experience in Continuous Integration and Delivery.
- Work collaboratively with software engineers to define infrastructure and deployment requirements
- Provision, configure and maintain AWS cloud infrastructure
- Ensure configuration and compliance with configuration management tools
- Administer and troubleshoot Linux-based systems
- Troubleshoot problems across a wide array of services and functional areas
- Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
- Perform infrastructure cost analysis and optimization
- AWS
- Docker
- Kubernetes
- Envoy
- Istio
- Jenkins
- Cloud Security & SIEM stacks
- Terraform
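For the infrastructure cost analysis duty above, AWS exposes Cost Explorer via `boto3.client("ce").get_cost_and_usage(...)`. The sketch below only builds the request parameters, so it runs without AWS credentials; the dates and grouping are illustrative:

```python
# Parameters one would pass to boto3.client("ce").get_cost_and_usage(**params)
# to break down one month of unblended cost by AWS service. Built here as a
# plain dict so the example is inspectable without an AWS account.
params = {
    "TimePeriod": {"Start": "2024-01-01", "End": "2024-02-01"},
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
    "GroupBy": [{"Type": "DIMENSION", "Key": "SERVICE"}],
}
print(sorted(params))  # → ['Granularity', 'GroupBy', 'Metrics', 'TimePeriod']
```

The response groups costs per service, which is a natural input to the "cost analysis and optimization" reports this role calls for.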