Job Title: Data Architect - Azure DevOps
Job Location: Mumbai (Andheri East)
About the company:
MIRACLE HUB CLIENT is a predictive analytics and artificial intelligence company headquartered in Boston, US, with offices across the globe. We build prediction models and algorithms to solve high-priority business problems. Working across multiple industries, we have designed and developed breakthrough analytic products and decision-making tools by leveraging predictive analytics, AI, machine learning, and deep domain expertise.
Skill-sets Required:
- Design Enterprise Data Models
- Azure Data Specialist
- Security and Risk
- GDPR and other compliance knowledge
- Scrum/Agile
Job Role:
- Design and implement effective database solutions and models to store and retrieve company data
- Examine and identify database structural necessities by evaluating client operations, applications, and programming.
- Assess database implementation procedures to ensure they comply with internal and external regulations
- Install and organize information systems to guarantee company functionality.
- Prepare accurate database design and architecture reports for management and executive teams.
Desired Candidate Profile:
- Bachelor’s degree in computer science, computer engineering, or relevant field.
- A minimum of 3 years’ experience in a similar role.
- Strong knowledge of database structure systems and data mining.
- Excellent organizational and analytical abilities.
- Outstanding problem solver.
- Immediate joining preferred (a notice period of up to one month is also acceptable)
- Excellent English communication and presentation skills, both verbal and written
- Charismatic, competitive and enthusiastic personality with negotiation skills
Compensation: No bar.
What we look for:
As a DevOps Developer, you will contribute to a thriving and growing AI Governance Engineering team. You will work in a Kubernetes-based microservices environment to support our bleeding-edge cloud services. This includes custom solutions as well as open-source DevOps tools (build and deploy automation, monitoring, and data gathering for our software delivery pipeline). You will also contribute to our continuous improvement and continuous delivery while increasing the maturity of our DevOps and agile adoption practices.
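For illustration only, here is a minimal Python sketch of the kind of data-gathering helper such an environment calls for, assuming the official `kubernetes` Python client and an available kubeconfig; the `ai-governance` namespace is a hypothetical placeholder.

```python
# Minimal sketch (assumptions: the official `kubernetes` Python client is installed,
# a kubeconfig is available, and the "ai-governance" namespace is hypothetical).
from kubernetes import client, config

def report_deployment_health(namespace: str = "ai-governance") -> None:
    """Print ready vs. desired replica counts for each deployment in a namespace."""
    config.load_kube_config()          # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        status = "OK" if ready == desired else "DEGRADED"
        print(f"{dep.metadata.name}: {ready}/{desired} replicas ready [{status}]")

if __name__ == "__main__":
    report_deployment_health()
```

In practice, a script like this might run as a post-deploy check or feed its output into the monitoring stack.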
Responsibilities:
- Ability to deploy software using orchestrators, scripts, and automation on hybrid and public clouds such as AWS
- Ability to write shell, Python, or other Unix scripts
- Working knowledge of Docker and Kubernetes
- Ability to create pipelines using Jenkins or any CI/CD tool, and a GitOps tool such as Argo CD
- Working knowledge of Git as a source control system and of a defect-tracking system
- Ability to debug and troubleshoot deployment issues
- Ability to use diagnostic tools for faster resolution of issues
- Excellent communication and soft skills
- Passionate, with the ability to work and deliver in a multi-team environment
- Good team player
- Flexible and quick learner
- Ability to write Dockerfiles, Kubernetes YAML manifests, and Helm charts
- Experience with monitoring tools such as Nagios and Prometheus, and visualisation tools such as Grafana (see the sketch after this list)
- Ability to write Ansible playbooks and Terraform scripts
- Linux system experience and administration
- Effective cross-functional leadership skills: working with engineering and operational teams to ensure systems are secure, scalable, and reliable.
- Ability to review deployment and operational environments, i.e., execute initiatives to reduce failure, troubleshoot issues across the entire infrastructure stack, expand monitoring capabilities, and manage technical operations.
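As a hedged example of the monitoring item above, here is a minimal Python sketch that queries the Prometheus HTTP API for down targets; the Prometheus URL is a hypothetical placeholder and `requests` is assumed to be installed.

```python
# Minimal sketch (assumptions: `requests` is installed and Prometheus is reachable
# at the URL below; the URL itself is a hypothetical in-cluster address).
import requests

PROMETHEUS_URL = "http://prometheus.monitoring.svc:9090"  # hypothetical

def find_down_targets() -> list[str]:
    """Query Prometheus for the `up` metric and return instances that are down."""
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": "up == 0"},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    return [r["metric"].get("instance", "unknown") for r in results]

if __name__ == "__main__":
    down = find_down_targets()
    print("All targets up" if not down else f"Down targets: {', '.join(down)}")
```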
Please find the JD below:
- The candidate should have good platform experience on Azure with Terraform.
- The DevOps engineer needs to help developers create the pipelines and Kubernetes deployment manifests.
- Experience migrating data from AWS to Azure is good to have.
- Manage and automate infrastructure using Terraform. Jenkins is the key CI/CD tool we use, and it will be used to run these Terraform configurations (a minimal wrapper sketch follows this list).
- Provision and manage VMs on Azure Cloud.
- Good hands-on experience with cloud networking is required.
- Ability to set up databases on VMs as well as managed databases, and to configure cloud-hosted microservices properly so they communicate with those database services.
- Kubernetes, Storage, Key Vault, Networking (load balancing and routing), and VMs are the key areas of infrastructure expertise that are essential.
- The requirement is to administer the Kubernetes cluster end to end (application deployment, managing namespaces, load balancing, policy setup, blue-green/canary deployment models, etc.).
- Experience with AWS is desirable.
- Python experience is optional; however, PowerShell is mandatory.
- Know-how in the use of GitHub.
- Administration of Azure Kubernetes Service.
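The Terraform item above could look roughly like the following minimal Python wrapper, which a Jenkins stage might invoke; the `infra/azure` working directory is hypothetical and the Terraform CLI is assumed to be on PATH.

```python
# Minimal sketch (assumptions: the Terraform CLI is on PATH; the "infra/azure"
# working directory is hypothetical; a Jenkins pipeline step would call this script).
import subprocess
import sys

def run_terraform(workdir: str = "infra/azure", auto_approve: bool = True) -> None:
    """Run terraform init, plan, and (optionally) apply in the given directory."""
    commands = [
        ["terraform", "init", "-input=false"],
        ["terraform", "plan", "-input=false", "-out=tfplan"],
        ["terraform", "apply", "-input=false", "tfplan"] if auto_approve else None,
    ]
    for cmd in filter(None, commands):
        print(f"Running: {' '.join(cmd)}")
        subprocess.run(cmd, cwd=workdir, check=True)  # raises CalledProcessError on failure

if __name__ == "__main__":
    try:
        run_terraform()
    except subprocess.CalledProcessError as exc:
        sys.exit(exc.returncode)
```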
Implementation Engineer
Implementation Engineer Duties and Responsibilities
- Understanding requirements from internal consumers about program functionality.
- Perform UAT on applications with the help of test cases, prepare documents for the same, coordinate with the team to resolve all issues within the required timeframe, and inform management of any delays.
- Collaborate with the development team to design new programs for all client implementation activities, manage all communication with the department to resolve issues, and assist the implementation analyst in managing production data.
- Research all client issues, document all findings, and track all technical activities with the help of JIRA.
- Assist internal teams in monitoring the software implementation lifecycle and in tracking appropriate customization of software for clients.
- Train technical staff on OS and software issues, identify issues in processes, and provide solutions for the same. Train other team members on processes, procedures, API functionality, and development specifications.
- Supervise and support cross-functional teams to design, test, and deploy to achieve on-time project completion.
- Implement, configure, and debug MySQL, Java, Redis, PHP, Node, and ActiveMQ setups.
- Monitor and troubleshoot infrastructure utilizing syslog, SNMP, and other monitoring software.
- Install, configure, monitor, and upgrade applications during installation/upgrade activities.
- Assist the team in identifying network issues and help them with the respective resolutions.
- Utilize JIRA for issue reporting, status, activity planning, tracking, and updating project defects and tasks.
- Manage JIRA, track tickets to closure, and follow up with team members.
- Troubleshoot software issues
- Provide on-call support as necessary
Implementation Engineer Requirements and Qualifications
- Bachelor’s degree in computer science, software engineering, or a related field
- Experience working with:
  - Linux and Windows operating systems
  - Shell and BAT scripts
  - SIP/ISUP-based solutions
  - Deploying and debugging Java and C++ based solutions
  - MySQL (installing, backing up, updating, and retrieving data)
  - Front-end or back-end software development for Linux
  - Database management and security (a plus)
- Very good debugging and analytical skills
- Good Communication skills
• DevOps/Build and Release Engineer with the maturity to help define and automate processes.
• Work with, configure, install, and manage source control tools such as AWS CodeCommit, GitHub, and Bitbucket.
• Automate the implementation/deployment of code in cloud-based infrastructure (AWS preferred).
• Set up monitoring of infrastructure and applications with alerting frameworks.
Requirements:
• Able to code in Python.
• Extensive experience with building and supporting Docker and Kubernetes in production.
• Understand AWS (Amazon Web Services) and be able to jump right into our environment.
• Security clearance will be required.
• Lambda used in conjunction with S3, CloudTrail, and EC2 (a minimal handler sketch follows this list)
• CloudFormation (infrastructure as code)
• CloudWatch and CloudTrail
• Version control (SVN, Git, Artifactory, Bitbucket)
• CI/CD (Jenkins or similar)
• Docker Compose or other orchestration tools
• REST API
• Databases (Postgres/Oracle/SQL Server, NoSQL, or graph DB)
• Bachelor's degree in Computer Science, Computer Engineering, or a closely related field.
• Server orchestration using tools like Puppet, Chef, Ansible, etc.
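To illustrate the Lambda/S3 bullet above, here is a minimal hedged sketch of a Lambda handler in Python using boto3; the bucket and key values come from the triggering S3 event, and none of the names refer to real resources.

```python
# Minimal sketch (assumptions: the function is wired to S3 event notifications;
# bucket and key names come from the event; nothing here is a real resource).
import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Log each object referenced in an S3 event and return its size."""
    sizes = {}
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        head = s3.head_object(Bucket=bucket, Key=key)  # metadata only, no download
        sizes[f"{bucket}/{key}"] = head["ContentLength"]
        print(json.dumps({"bucket": bucket, "key": key, "size": head["ContentLength"]}))
    return {"statusCode": 200, "body": json.dumps(sizes)}
```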
Please send your CV to priyanka.sharma@neotas.com
Neotas.com
CI/CD tools (Jenkins/Bamboo/TeamCity/CircleCI), DevSecOps pipeline, cloud services (AWS/Azure/GCP), Ansible, Terraform, Docker, Helm, CloudFormation templates, web server deployment & config, database (SQL/NoSQL) deployment & config, Git, Artifactory, monitoring tools (Nagios, Grafana, Prometheus, etc.), application logs (ELK/EFK, Splunk, etc.), API gateways, security tools, Vault.
Azure DevOps
On-premises to Azure migration
Docker, Kubernetes
Terraform, CI/CD pipeline
Experience – 9+ years
Location – BG (Bangalore), Hyderabad; Remote/Hybrid
Budget – up to 30 LPA
About Us
We have grown over 1400% in revenues in the last year.
Interface.ai provides an Intelligent Virtual Assistant (IVA) to financial institutions (FIs) to automate calls and customer inquiries across multiple channels and to engage their customers with financial insights and upsell/cross-sell.
Our IVA is transforming financial institutions' call centers from a cost center into a revenue center.
Our core technology is built 100% in-house, with several breakthroughs in Natural Language Understanding. Our parser is built on zero-shot learning, which helps us launch industry-specific IVAs that can achieve over 90% accuracy on day one.
We are 45 people strong, with employees spread across India and US locations. Many come from ML teams at Apple, Microsoft, and Salesforce in the US, along with enterprise architects with over 20 years of experience building large-scale systems. Our India team consists of people from ISB, IIMs, and many who have previously been part of early-stage startups.
We are a fully remote team.
Founders come from Banking and Enterprise Technology backgrounds with previous experience scaling companies from scratch to $50M+ in revenues.
As a Site Reliability Engineer you will be in charge of:
- Designing, analyzing and troubleshooting large-scale distributed systems
- Engaging in cross-functional team discussions on design, deployment, operation, and maintenance, in a fast-moving, collaborative set up
- Building automation scripts to validate the stability, scalability, and reliability of interface.ai's products and services, as well as enhance interface.ai's employees' productivity (see the sketch after this list)
- Debugging and optimizing code and automating routine tasks
- Troubleshooting and diagnosing issues (hardware or software), and proposing and implementing solutions to reduce how often they recur
- Performing periodic on-call duty to handle the security, availability, and reliability of interface.ai's products
- Following solid engineering practices and writing good code
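A minimal sketch of the kind of validation script mentioned above, written in Python and assuming `requests` is installed; the endpoint URLs are hypothetical placeholders rather than real interface.ai services.

```python
# Minimal sketch (assumptions: `requests` is installed; the endpoint URLs below
# are hypothetical placeholders).
import time
import requests

ENDPOINTS = ["https://example.com/healthz", "https://example.com/api/status"]  # placeholders

def probe(url: str, timeout: float = 5.0) -> tuple[bool, float]:
    """Return (is_healthy, latency_seconds) for a single endpoint."""
    start = time.monotonic()
    try:
        ok = requests.get(url, timeout=timeout).status_code == 200
    except requests.RequestException:
        ok = False
    return ok, time.monotonic() - start

if __name__ == "__main__":
    for url in ENDPOINTS:
        healthy, latency = probe(url)
        print(f"{url}: {'UP' if healthy else 'DOWN'} ({latency * 1000:.0f} ms)")
```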
Requirements
You can be a great fit if you are:
- Extremely self-motivated
- Able to learn quickly
- Growth-minded (read this if you don't know what it means: https://www.amazon.com/Mindset-Psychology-Carol-S-Dweck/dp/0345472322)
- Emotionally mature (read this if you don't know what it means: https://medium.com/@krisgage/15-signs-of-emotional-maturity-38b1a2ab9766)
- Passionate about the possibilities at the intersection of AI + Banking
- Have worked in a startup of 5 to 30 employees
- A developer with a strong interest in systems design. You will be building, maintaining, and scaling our cloud infrastructure through software tooling and automation.
- Have 4-8 years of industry experience developing and troubleshooting large-scale infrastructure on the cloud
- Have a solid understanding of system availability, latency, and performance
- Strong programming skills in at least one major programming language and the ability to learn new languages as needed
- Strong system/network debugging skills
- Experience with management/automation tools such as Terraform, Puppet, Chef, or Salt
- Experience with setting up production-level monitoring and telemetry
- Expertise in container management and AWS
- Experience with Kubernetes is a plus
- Experience building CI/CD pipelines
- Experience working with WebSockets, Redis, Postgres, Elasticsearch, and Logstash
- Experience working in an agile team environment and proficient understanding of code versioning tools, such as Git.
- Ability to effectively articulate technical challenges and solutions.
- Proactive outlook for ways to make our systems more reliable
Job Brief:
We are looking for candidates who have development experience and have delivered CI/CD-based projects. Candidates should have good hands-on experience with the Jenkins master-slave architecture and should have used AWS-native services such as CodeCommit, CodeBuild, CodeDeploy, and CodePipeline. They should also have experience setting up cross-platform CI/CD pipelines that span different cloud platforms, or on-premises and cloud platforms.
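As a hedged illustration of the AWS-native side of such a pipeline, here is a minimal Python sketch that starts a CodePipeline execution via boto3 and waits for it to finish; the pipeline name `demo-pipeline` is hypothetical and AWS credentials are assumed to be configured.

```python
# Minimal sketch (assumptions: boto3 is configured with AWS credentials; the
# pipeline name "demo-pipeline" is hypothetical).
import time
import boto3

codepipeline = boto3.client("codepipeline")

def trigger_and_wait(pipeline_name: str = "demo-pipeline", poll_seconds: int = 15) -> str:
    """Start a CodePipeline execution and poll until it finishes; return the final status."""
    execution_id = codepipeline.start_pipeline_execution(name=pipeline_name)["pipelineExecutionId"]
    while True:
        execution = codepipeline.get_pipeline_execution(
            pipelineName=pipeline_name, pipelineExecutionId=execution_id
        )["pipelineExecution"]
        if execution["status"] != "InProgress":
            return execution["status"]  # e.g. Succeeded, Failed, Stopped
        time.sleep(poll_seconds)

if __name__ == "__main__":
    print(trigger_and_wait())
```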
Job Location:
Pune.
Job Description:
- Hands-on with AWS (Amazon Web Services) Cloud, DevOps services, and CloudFormation.
- Experience interacting with customers.
- Excellent communication skills.
- Hands-on experience creating and managing Jenkins jobs and Groovy scripting.
- Experience in setting up cloud-agnostic and cloud-native CI/CD pipelines.
- Experience with Maven.
- Experience with scripting languages such as Bash, PowerShell, and Python.
- Experience with automation tools such as Terraform, Ansible, Chef, and Puppet.
- Excellent troubleshooting skills.
- Experience with Docker and Kubernetes, including creating Dockerfiles.
- Hands-on with version control systems such as GitHub, GitLab, TFS, Bitbucket, etc.
Role: DevOps Engineer
Experience: 1 to 2 years and 2 to 5 years as a DevOps Engineer (2 positions)
Location: Bangalore. 5 days working.
Education Qualification: B.Tech/B.E. from Tier-1/Tier-2/Tier-3 colleges or equivalent institutes
Skills: DevOps engineering; Ruby on Rails or Python and Bash/shell skills; Docker, rkt, or a similar container engine; Kubernetes or similar clustering solutions
As DevOps Engineer, you'll be part of the team building the stage for our Software Engineers to work on, helping to enhance our product performance and reliability.
Responsibilities:
- Build and operate infrastructure to support the website, backend cluster, and ML projects in the organization.
- Helping teams become more autonomous and allowing the Operation team to focus on improving the infrastructure and optimizing processes.
- Delivering system management tooling to the engineering teams.
- Working on your own applications which will be used internally.
- Contributing to open source projects that we are using (or that we may start).
- Be an advocate for engineering best practices in and out of the company.
- Organizing tech talks and participating in meetups and representing Box8 at industry events.
- Sharing pager duty for the rare instances of something serious happening.
- Collaborating with other developers to understand and set up the tooling needed for Continuous Integration/Delivery/Deployment (CI/CD) practices.
Requirements:
- 1+ years of industry experience.
- Scale existing back-end systems to handle ever-increasing amounts of traffic and new product requirements.
- Ruby on Rails or Python and Bash/shell skills.
- Experience managing complex systems at scale.
- Experience with Docker, rkt or similar container engine.
- Experience with Kubernetes or similar clustering solutions.
- Experience with tools such as Ansible or Chef.
- Understanding of the importance of smart metrics and alerting.
- Hands-on experience with cloud infrastructure provisioning, deployment, and monitoring (we are on AWS and use ECS, ELB, EC2, ElastiCache, Elasticsearch, S3, CloudWatch).
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Knowledge of data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience working on Linux-based servers.
- Managing large-scale, production-grade infrastructure on AWS Cloud.
- Good knowledge of scripting languages such as Ruby, Python, or Bash.
- Experience in creating a deployment pipeline from scratch.
- Expertise in any of the CI tools, preferably Jenkins.
- Good knowledge of Docker containers and their usage.
- Using infra/app monitoring tools such as CloudWatch, New Relic, or Sensu (see the sketch after this list).
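As a hedged sketch of the CloudWatch item above, here is a minimal Python example that publishes a custom metric with boto3; the namespace, metric, and service names are hypothetical.

```python
# Minimal sketch (assumptions: boto3 is configured with AWS credentials; the
# namespace, metric, and service names below are hypothetical).
import boto3

cloudwatch = boto3.client("cloudwatch")

def record_deploy_duration(service: str, seconds: float) -> None:
    """Publish a custom deployment-duration metric so it can be graphed and alerted on."""
    cloudwatch.put_metric_data(
        Namespace="Custom/Deployments",          # hypothetical namespace
        MetricData=[{
            "MetricName": "DeployDurationSeconds",
            "Dimensions": [{"Name": "Service", "Value": service}],
            "Value": seconds,
            "Unit": "Seconds",
        }],
    )

if __name__ == "__main__":
    record_deploy_duration("checkout-api", 42.5)  # example values
```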
Good to have:
- Knowledge of Ruby on Rails based applications and their deployment methodologies.
- Experience working with container orchestration tools such as Kubernetes, ECS, or Mesos.
- Extra points for experience with front-end development, New Relic, GCP, Kafka, and Elasticsearch.