
About Swissclear
Tappp (www.tappp.com), the prepaid consumer marketplace of Singapore-based Swissclear Global, is changing the way people consume premium digital content by making it accessible without the need for credit cards.
Offering immediate access to a wide range of prepaid entertainment, gaming and recharge services, Tappp meets the digital consumption needs of a large segment of the world's population, who either do not own credit cards or are averse to making credit card transactions online.
Profile: Sr. DevOps Engineer
Location: Gurugram
Experience: 5+ Years
Company: Watsoo
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
 - 5+ years of proven hands-on DevOps experience.
 - Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
 - Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
 - Hands-on experience with cloud platforms (AWS, Azure, or GCP).
 - Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
 - Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
 - Proficiency in scripting languages (Python, Bash, or other shells); see the sketch after this list.
 - Knowledge of networking, security, and system administration.
 - Strong problem-solving skills and ability to work in fast-paced environments.
 - Troubleshoot production issues, perform root cause analysis, and implement preventive measures.
 - Advocate DevOps best practices, automation, and continuous improvement.
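As a hedged illustration of the scripting called out above (not part of the Watsoo posting), the sketch below uses Python with boto3 to report EC2 instances whose status checks are failing; the region name is a placeholder and the script assumes AWS credentials are already configured in the environment.

    import boto3  # assumes AWS credentials are available in the environment

    def report_unhealthy_instances(region="ap-south-1"):
        """Return IDs of EC2 instances whose status checks are not 'ok'."""
        ec2 = boto3.client("ec2", region_name=region)
        unhealthy = []
        paginator = ec2.get_paginator("describe_instance_status")
        for page in paginator.paginate(IncludeAllInstances=True):
            for status in page["InstanceStatuses"]:
                if status["InstanceStatus"]["Status"] != "ok":
                    unhealthy.append(status["InstanceId"])
        return unhealthy

    if __name__ == "__main__":
        bad = report_unhealthy_instances()
        print("Unhealthy instances:", ", ".join(bad) if bad else "none")

A script of this shape can run on a schedule (cron or a CI job) and feed alerting, which is the flavour of day-to-day automation such a role typically involves.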
 
Key Skills Required:
· You will be part of the DevOps engineering team, configuring project environments and troubleshooting integration issues across different systems, and will also help build new features for the next generation of cloud recovery and managed services.
· You will directly guide the technical strategy for our clients and build out a new DevOps capability within the company to improve our business relevance for customers.
· You will coordinate with the Cloud and Data teams on their requirements, verify the configurations required for each production server, and come up with scalable solutions.
· You will be responsible for reviewing the infrastructure and configuration of microservices, as well as the packaging and deployment of applications.
To be the right fit, you'll need:
· Expertise in cloud services such as AWS.
· Experience in Terraform scripting (see the sketch after this list).
· Experience with container technology such as Docker and orchestration with Kubernetes.
· Good knowledge of CI/CD tooling such as Jenkins, Bamboo, etc.
· Experience with version control systems such as Git, build tools (Maven, Ant, Gradle), and automation/configuration tools (Chef, Puppet, Ansible).
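For the Terraform scripting referenced above, here is a minimal, hypothetical Python sketch (not from the posting) that wraps the Terraform CLI to plan and apply a configuration non-interactively; it assumes the terraform binary (0.14+ for -chdir) is on PATH, and the working directory path is a placeholder.

    import subprocess
    import sys

    # Hypothetical Terraform working directory; point this at the real module.
    WORKDIR = "infra/envs/staging"

    def run(*args):
        """Run a terraform subcommand and abort the script on failure."""
        cmd = ["terraform", f"-chdir={WORKDIR}", *args]
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        try:
            run("init", "-input=false")
            run("plan", "-input=false", "-out=tfplan")
            # Apply the saved plan so exactly what was reviewed is what is applied.
            run("apply", "-input=false", "tfplan")
        except subprocess.CalledProcessError as exc:
            sys.exit(exc.returncode)

Wrapping the CLI this way keeps pipeline logic in one reviewable script rather than scattered shell steps.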
Wissen Technology is hiring for a DevOps Engineer
Required:
- 4 to 10 years of relevant experience in DevOps
- Must have hands-on experience with AWS, Kubernetes, and CI/CD pipelines
- Good to have exposure to GitHub or GitLab
- Open to working from Chennai
- Work mode will be hybrid
Company profile:
Company Name: Wissen Technology
Group of companies in India: Wissen Technology & Wissen Infotech
Work Location: Chennai
Website: www.wissen.com
Wissen Thought Leadership: https://lnkd.in/gvH6VBaU
LinkedIn: https://lnkd.in/gnK-vXjF
About the Role:
We are seeking a talented and passionate DevOps Engineer to join our dynamic team. You will be responsible for designing, implementing, and managing scalable and secure infrastructure across multiple cloud platforms. The ideal candidate will have a deep understanding of DevOps best practices and a proven track record in automating and optimizing complex workflows.
Key Responsibilities:
Cloud Management:
- Design, implement, and manage cloud infrastructure on AWS, Azure, and GCP.
 - Ensure high availability, scalability, and security of cloud resources.
 
Containerization & Orchestration:
- Develop and manage containerized applications using Docker.
 - Deploy, scale, and manage Kubernetes clusters.
 
CI/CD Pipelines:
- Build and maintain robust CI/CD pipelines to automate the software delivery process.
 - Implement monitoring and alerting to ensure pipeline efficiency.
 
Version Control & Collaboration:
- Manage code repositories and workflows using Git.
 - Collaborate with development teams to optimize branching strategies and code reviews.
 
Automation & Scripting:
- Automate infrastructure provisioning and configuration using tools like Terraform, Ansible, or similar.
 - Write scripts to optimize and maintain workflows.
 
Monitoring & Logging:
- Implement and maintain monitoring solutions to ensure system health and performance (see the sketch after this list).
 - Analyze logs and metrics to troubleshoot and resolve issues.
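As an illustrative sketch only (not part of the posting), the monitoring work above might look like exporting a custom health metric for Prometheus to scrape; this assumes the third-party prometheus_client package and uses placeholder metric, port, and service names.

    import time
    import random  # stands in for a real health probe

    from prometheus_client import Gauge, start_http_server

    # Hypothetical metric; Prometheus would scrape it from :9100/metrics.
    SERVICE_UP = Gauge("demo_service_up", "1 if the service responds, else 0", ["service"])

    def probe(service):
        """Placeholder probe; replace with a real HTTP/TCP health check."""
        return int(random.random() > 0.1)

    if __name__ == "__main__":
        start_http_server(9100)          # expose /metrics for Prometheus
        while True:
            SERVICE_UP.labels(service="checkout-api").set(probe("checkout-api"))
            time.sleep(15)               # roughly align with the scrape interval

Alerting rules in Prometheus or Grafana would then fire whenever the gauge stays at 0 for longer than a chosen window.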
 
Required Skills & Qualifications:
- 3-5 years of experience with AWS, Azure, and Google Cloud Platform (GCP).
 - Proficiency in containerization tools like Docker and orchestration tools like Kubernetes.
 - Hands-on experience building and managing CI/CD pipelines.
 - Proficient in using Git for version control.
 - Experience with scripting languages such as Bash, Python, or PowerShell.
 - Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
 - Solid understanding of networking, security, and system administration.
 - Excellent problem-solving and troubleshooting skills.
 - Strong communication and teamwork skills.
 
Preferred Qualifications:
- Certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, or Google Professional DevOps Engineer.
 - Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
 - Familiarity with serverless architectures and microservices.
 
Job description
Problem Statement-Solution
Only 10% of India speaks English and 90% speak over 25 languages and 1000s of dialects. The internet has largely been in English. A good part of India is now getting internet connectivity thanks to cheap smartphones and Jio. The non-English speaking internet users will balloon to about 600 million users out of the total 750 million internet users in India by 2020. This will make the vernacular segment one of the largest segments in the world - almost 2x the size of the US population. The vernacular segment has very few products that they can use on the internet.
One large human need is that of sharing thoughts and connecting with people of the same community on the basis of language and common interests. Twitter serves this need globally but the experience is mostly in English. There’s a large unaddressed need for these vernacular users to express themselves in their mother tongue and connect with others from their community. Koo is a solution to this problem.
About Koo
Koo was founded in March 2020, as a micro-blogging platform in both Indian languages and English, which gives a voice to the millions of Indians who communicate in Indian languages.
Currently available in Assamese, Bengali, English, Hindi, Kannada, Marathi, Tamil and Telugu, Koo enables people from across India to express themselves online in their mother tongues. In a country where under 10% of the population speaks English as a native language, Koo meets the need for a social media platform that can deliver an immersive language experience to an Indian user, thereby enabling them to connect and interact with each other. The recently introduced ‘Talk to Type’ enables users to leverage the voice assistant to share their thoughts without having to type. In August 2021, Koo crossed 10 million downloads, in just 16 months of launch.
Since June 2021, Koo has also been available in Nigeria.
Founding Team
Koo is founded by veteran internet entrepreneurs - Aprameya Radhakrishna (CEO, Taxiforsure) and Mayank Bidawatka (Co-founder, Goodbox & Coreteam, redBus).
Technology Team & Culture
The technology team comprises sharp coders, technology geeks, and people who have been entrepreneurs or are entrepreneurial and extremely passionate about technology. Talent comes from the likes of Google, Walmart, redBus, and Dailyhunt. Anyone joining the technology team will have a lot to learn from their peers and mentors. Download our Android app and take a look at what we've built. The technology stack comprises a wide variety of cutting-edge technologies like Kotlin, Java 15, reactive programming, MongoDB, Cassandra, Kubernetes, AWS, NodeJS, Python, ReactJS, Redis, Aerospike, ML, deep learning, etc. We believe in giving a lot of independence and autonomy to ownership-driven individuals.
Technology skill sets required for a matching profile
- 3 to 7 years of experience in a DevOps role, with at least one stint at a fast-paced startup being mandatory.
 - Mandatory experience with containers, Kubernetes (EKS, set up from scratch), Istio, and microservices.
 - Sound knowledge of technologies like Terraform, automation scripts, cron jobs, etc. Must have worked toward infrastructure as code, e.g. setting up new environments entirely from code.
 - Knowledge of industry standards around monitoring, alerting, self-healing, high availability, auto-scaling, etc.
 - Exhaustive experience with various cloud technologies (especially on AWS) like SQS, SNS, Elasticsearch, ElastiCache, Elastic Transcoder, VPCs, subnets, security groups, etc.
 - Must have set up stable CI/CD pipelines capable of zero-downtime deployments via rolling updates, blue-green, or canary strategies (see the sketch after this list).
 - Experience with VPN and LDAP solutions for securely logging in to infrastructure and providing SSO.
 - Mastery of deploying and troubleshooting all layers of an application: network, frontend, backend, and databases (MongoDB, Redis, Postgres, Cassandra, Elasticsearch, Aerospike, etc.).
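Purely as a hedged illustration (not part of Koo's posting), a zero-downtime rolling update of the kind described above could be driven from Python with the official kubernetes client; the namespace, deployment name, container name, and image tag below are placeholders, and the script assumes a working kubeconfig.

    from kubernetes import client, config

    # Placeholder identifiers; substitute the real deployment details.
    NAMESPACE = "prod"
    DEPLOYMENT = "web-api"
    NEW_IMAGE = "registry.example.com/web-api:1.2.3"

    def rolling_update():
        """Patch the container image; the Deployment's RollingUpdate strategy
        replaces pods gradually, keeping the service available throughout."""
        config.load_kube_config()          # or config.load_incluster_config()
        apps = client.AppsV1Api()
        patch = {"spec": {"template": {"spec": {"containers": [
            {"name": DEPLOYMENT, "image": NEW_IMAGE}]}}}}
        apps.patch_namespaced_deployment(DEPLOYMENT, NAMESPACE, patch)
        print(f"Requested rollout of {NEW_IMAGE} in {NAMESPACE}/{DEPLOYMENT}")

    if __name__ == "__main__":
        rolling_update()

Blue-green or canary strategies would instead shift traffic between two deployments (for example via Istio routing weights) rather than patching one in place.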
 
Description
DevOps Engineer / SRE
- Understanding of maintaining existing systems (virtual machines) and the Linux stack
 - Experience running, operating, and maintaining Kubernetes pods
 - Strong scripting skills
 - Experience in AWS
 - Knowledge of configuring and optimizing open-source tools like Kafka, etc.
 - Strong automation mindset: ability to identify opportunities to speed up the build and deploy process with solid validation and automation
 - Experience optimizing and standardizing monitoring and alerting
 - Experience in Google Cloud Platform
 - Experience or knowledge of Python is an added advantage
 - Experience with monitoring and related tooling such as Jenkins, Kubernetes, Nagios, Terraform, etc.
 
Implementation Engineer
Implementation Engineer Duties and Responsibilities
- Understand requirements from internal consumers about program functionality.
 - Perform UAT on applications using test cases, prepare the corresponding documentation, coordinate with the team to resolve all issues within the required timeframe, and inform management of any delays.
 - Collaborate with the development team to design new programs for client implementation activities, manage communication with the department to resolve issues, and assist the implementation analyst in managing production data.
 - Research client issues, document findings, and track technical activities in JIRA.
 - Assist internal teams in monitoring the software implementation lifecycle and in tracking appropriate software customizations for clients.
 - Train technical staff on OS and software issues, identify issues in processes, and provide solutions. Train other team members on processes, procedures, API functionality, and development specifications.
 - Supervise and support cross-functional teams through design, testing, and deployment to achieve on-time project completion.
 - Implement, configure, and debug MySQL, Java, Redis, PHP, Node, and ActiveMQ setups.
 - Monitor and troubleshoot infrastructure using syslog, SNMP, and other monitoring software.
 - Install, configure, monitor, and upgrade applications during installation and upgrade activities.
 - Assist the team in identifying network issues and help with their resolution.
 - Use JIRA for issue reporting, status, activity planning, and tracking and updating project defects and tasks.
 - Manage JIRA, track tickets to closure, and follow up with team members.
 - Troubleshoot software issues.
 - Provide on-call support as necessary.
 
Implementation Engineer Requirements and Qualifications
- Bachelor’s degree in computer science, software engineering, or a related field
 - Experience working with:
   - Linux and Windows operating systems
   - Shell and .bat scripts
   - SIP/ISUP-based solutions
   - Deploying and debugging Java and C++ based solutions
   - MySQL (install, back up, update, and retrieve data)
   - Front-end or back-end software development for Linux
   - Database management and security (a plus)
 - Very good debugging and analytical skills
 - Good communication skills
 
Profile: DevOps Engineer
Experience: 5-8 Yrs
Notice Period: Immediate to 30 Days
Job Description:
Technical Experience (Must Have):
Cloud: Azure
DevOps Tools: Terraform, Ansible, GitHub, CI/CD pipelines, Docker, Kubernetes
Network: Cloud Networking
Scripting Language: Any/All - Shell Script, PowerShell, Python
OS: Linux (Ubuntu, RHEL etc)
Database: MongoDB
Professional Attributes: Excellent communication, written, presentation, and problem-solving skills.
Experience: Minimum of 5-8 years of experience in Cloud Automation and Application
Additional Information (Good to have):
Microsoft Azure Fundamentals AZ-900
Terraform Associate
Docker
Certified Kubernetes Administrator
Role:
- Build and maintain tools to automate application and infrastructure deployment, and to monitor operations.
- Design and implement cloud solutions that are secure, scalable, resilient, monitored, auditable, and cost-optimized.
- Implement the transformation from the as-is state to the future state.
- Coordinate with other members of the DevOps team, Development, Test, and other teams to enhance and optimize existing processes.
- Provide systems support and implement monitoring, logging, and alerting solutions that enable the production systems to be monitored.
- Write Infrastructure as Code (IaC) using industry-standard tools and services.
- Write application deployment automation using industry-standard deployment and configuration tools (see the sketch after this list).
- Design and implement continuous delivery pipelines that provision and operate client test and production environments.
- Implement and stay abreast of Cloud and DevOps industry best practices and tooling.
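As a hedged sketch of the deployment automation described in the role above (not part of the posting), the script below polls a newly deployed environment's health endpoint and fails the pipeline stage if it never becomes healthy; the URL and timeout are placeholders.

    import sys
    import time
    import urllib.request
    import urllib.error

    # Placeholder endpoint of a newly deployed environment; not from the posting.
    HEALTH_URL = "https://staging.example.com/healthz"
    TIMEOUT_SECONDS = 300

    def wait_until_healthy(url, timeout):
        """Poll the endpoint until it returns 2xx or the timeout expires."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    if 200 <= resp.status < 300:
                        return True
            except (urllib.error.URLError, OSError):
                pass
            time.sleep(10)
        return False

    if __name__ == "__main__":
        ok = wait_until_healthy(HEALTH_URL, TIMEOUT_SECONDS)
        print("deployment healthy" if ok else "deployment failed health check")
        sys.exit(0 if ok else 1)

A non-zero exit code lets the continuous delivery pipeline stop the rollout (or trigger a rollback) automatically instead of promoting a broken build.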
At Karza Technologies, we take pride in building one of the most comprehensive digital onboarding and due-diligence platforms, profiling millions of entities and trillions of associations amongst them using data collated from more than 700 publicly available government sources. Operating primarily in the B2B fintech enterprise space, we are headquartered in Lower Parel, Mumbai, with a 100+ strong workforce. We are truly furthering the cause of Digital India by providing the entire BFSI ecosystem with tech products and services that aid customer onboarding, automate processes, and mitigate risks seamlessly, in real time and at a fraction of the current cost.
A few recognitions:
- Recognized by LinkedIn as one of the Top 25 startups in India to work with, 2019
 - Winner of HDFC Bank's Digital Innovation Summit 2020
 - Super Winners (Won every category) at Tecnoviti 2020 by Banking Frontiers
 - Winner of Amazon AI Award 2019 for Fintech
 - Winner of FinTech Spot Pitches at Fintegrate Zone 2018 held at BSE
 - Winner of FinShare 2018 challenge held by ShareKhan
 - Only startup in Yes Bank Global Fintech Accelerator to win the account during the Cohort
 - 2nd place Citi India FinTech Challenge 2018 by Citibank
 - Top 3 in Viacom18's Startup Engagement Programme VStEP
 
What your average day would look like:
- Deploy and maintain mission-critical information extraction, analysis, and management systems
 - Manage low-cost, scalable streaming data pipelines (see the sketch after this list)
 - Provide direct and responsive support for urgent production issues
 - Contribute ideas towards secure and reliable Cloud architecture
 - Use open source technologies and tools to accomplish specific use cases encountered within the project
 - Use coding languages or scripting methodologies to solve automation problems
 - Collaborate with others on the project to brainstorm about the best way to tackle a complex infrastructure, security, or deployment problem
 - Identify processes and practices to streamline development & deployment to minimize downtime and maximize turnaround time
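As a purely illustrative sketch of the streaming pipelines mentioned above (not from Karza's posting), the snippet below consumes records from a Kafka topic using the third-party kafka-python package; the topic name, broker address, and consumer group are placeholders.

    from kafka import KafkaConsumer  # assumes the kafka-python package is installed

    # Placeholder topic, brokers, and group; substitute the real pipeline settings.
    consumer = KafkaConsumer(
        "document-events",
        bootstrap_servers=["localhost:9092"],
        group_id="extraction-workers",
        auto_offset_reset="earliest",
        enable_auto_commit=True,
    )

    # Consume records and hand each payload to a (hypothetical) extraction step.
    for record in consumer:
        payload = record.value.decode("utf-8")
        print(f"partition={record.partition} offset={record.offset} payload={payload[:80]}")

Running several such consumers in one consumer group is how Kafka spreads partitions across workers, which is what keeps a pipeline like this cheap to scale.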
 
What you need to work with us:
- Proficiency in at least one of the general-purpose programming languages like Python, Java, etc.
 - Experience in managing IaaS and PaaS components on popular public cloud service providers like AWS, Azure, GCP, etc.
 - Proficiency in Unix Operating systems and comfortable with Networking concepts
 - Experience with developing/deploying a scalable system
 - Experience with the Distributed Database & Message Queues (like Cassandra, ElasticSearch, MongoDB, Kafka, etc.)
 - Experience in managing Hadoop clusters
 - Understanding of containers and have managed them in production using container orchestration services.
 - Solid understanding of data structures and algorithms.
 - Applied exposure to continuous delivery pipelines (CI/CD).
 - Keen interest and proven track record in automation and cost optimization.
 
Experience:
- 1-4 years of relevant experience
 - BE in Computer Science / Information Technology
 