50+ Windows Azure Jobs in India
We are looking for a seasoned DevOps Engineer with a strong background in solution architecture, ideally from the Banking or BFSI (Banking, Financial Services, and Insurance) domain. This role is crucial for implementing scalable, secure infrastructure and CI/CD practices tailored to the needs of high-compliance, high-availability environments. The ideal candidate will have deep expertise in Docker, Kubernetes, cloud platforms, and solution architecture, with knowledge of ML/AI and database management as a plus.
Key Responsibilities:
● Infrastructure & Solution Architecture: Design secure, compliant, and high-performance cloud infrastructures (AWS, Azure, or GCP) optimized for BFSI-specific applications.
● Containerization & Orchestration: Lead Docker and Kubernetes initiatives, deploying applications with a focus on security, compliance, and resilience.
● CI/CD Pipelines: Build and maintain CI/CD pipelines suited to BFSI workflows, incorporating automated testing, security checks, and rollback mechanisms.
● Cloud Infrastructure & Database Management: Manage cloud resources and automate provisioning using Terraform, ensuring security standards. Optimize relational and NoSQL databases for BFSI application needs.
● Monitoring & Incident Response: Implement monitoring and alerting (e.g., Prometheus, Grafana) for rapid incident response, ensuring uptime and reliability.
● Collaboration: Work closely with compliance, security, and development teams, aligning infrastructure with BFSI standards and regulations.
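The monitoring-and-alerting responsibility above ultimately reduces to evaluating metrics against thresholds and firing alerts. A minimal, tool-agnostic Python sketch (metric names and thresholds are illustrative; in practice Prometheus alerting rules or Grafana alerts do this declaratively):

```python
# Illustrative threshold-based alerting check; not any specific product's API.
THRESHOLDS = {"cpu_percent": 85.0, "error_rate": 0.01, "p99_latency_ms": 500.0}

def evaluate(metrics: dict[str, float]) -> list[str]:
    """Return an alert message for every metric exceeding its threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"ALERT {name}={value} exceeds {limit}")
    return alerts

print(evaluate({"cpu_percent": 92.5, "error_rate": 0.002}))
```

A real deployment would scrape these values from exporters and route the alerts through an on-call system rather than printing them.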
Qualifications:
● Education: Bachelor’s or Master’s degree in Computer Science, Engineering, Information Technology, or a related field.
● Experience: 5+ years of experience in DevOps with cloud infrastructure and solution architecture expertise, ideally in ML/AI environments.
● Technical Skills:
○ Cloud Platforms: Proficient in AWS, Azure, or GCP; certifications (e.g., AWS Solutions Architect, Azure Solutions Architect) are a plus.
○ Containerization & Orchestration: Expertise with Docker and Kubernetes, including experience deploying and managing clusters at scale.
○ CI/CD Pipelines: Hands-on experience with CI/CD tools like Jenkins, GitLab CI, or GitHub Actions, with automation and integration for ML/AI workflows preferred.
○ Infrastructure as Code: Strong knowledge of Terraform and/or CloudFormation for infrastructure provisioning.
○ Database Management: Proficiency in relational databases (PostgreSQL, MySQL) and NoSQL databases (MongoDB, DynamoDB), with a focus on optimization and scalability.
○ ML/AI Infrastructure: Experience supporting ML/AI pipelines, model serving, and data processing within cloud or hybrid environments.
○ Monitoring and Logging: Proficient in monitoring tools like Prometheus and Grafana, and log management solutions like ELK Stack or Splunk.
○ Scripting and Automation: Strong skills in Python, Bash, or PowerShell for scripting and automating processes.
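As an illustration of the scripting-and-automation skill above, a small Python sketch that flags nearly-full filesystems (paths and the threshold are arbitrary examples):

```python
import shutil

def disk_usage_report(paths, warn_percent=80.0):
    """Return (path, used_percent) for mount points above warn_percent."""
    warnings = []
    for path in paths:
        usage = shutil.disk_usage(path)  # named tuple: total, used, free
        used_pct = 100.0 * usage.used / usage.total
        if used_pct >= warn_percent:
            warnings.append((path, round(used_pct, 1)))
    return warnings

if __name__ == "__main__":
    print(disk_usage_report(["/"]))
```

In practice a script like this would run on a schedule (cron, systemd timer) and feed an alerting channel instead of printing.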
Key Responsibilities:
- Cloud Infrastructure Management: Oversee the deployment, scaling, and management of cloud infrastructure across platforms like AWS, GCP, and Azure. Ensure optimal configuration, security, and cost-effectiveness.
- Application Deployment and Maintenance: Responsible for deploying and maintaining web applications, particularly those built on Django and the MERN stack (MongoDB, Express.js, React, Node.js). This includes setting up CI/CD pipelines, monitoring performance, and troubleshooting.
- Automation and Optimization: Develop scripts and automation tools to streamline operations. Continuously seek ways to improve system efficiency and reduce downtime.
- Security Compliance: Ensure that all cloud deployments comply with relevant security standards and practices. Regularly conduct security audits and coordinate with security teams to address vulnerabilities.
- Collaboration and Support: Work closely with development teams to understand their needs and provide technical support. Act as a liaison between developers, IT staff, and management to ensure smooth operation and implementation of cloud solutions.
- Disaster Recovery and Backup: Implement and manage disaster recovery plans and backup strategies to ensure data integrity and availability.
- Performance Monitoring: Regularly monitor and report on the performance of cloud services and applications. Use data to make informed decisions about upgrades, scaling, and other changes.
Required Skills and Experience:
- Proven experience in managing cloud infrastructure on AWS, GCP, and Azure.
- Strong background in deploying and maintaining Django-based and MERN stack web applications.
- Expertise in automation tools and scripting languages.
- Solid understanding of network architecture and security protocols.
- Experience with continuous integration and deployment (CI/CD) methodologies.
- Excellent problem-solving abilities and a proactive approach to system optimization.
- Good communication skills for effective collaboration with various teams.
Desired Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in AWS, GCP, or Azure are highly desirable.
- Minimum 5 years of experience in a DevOps or similar role, with a focus on cloud computing and web application deployment.
Job Title: Solution Architect (ML, Cloud)
Client Location: Bangalore
Work Location: Tokyo, Japan (Onsite)
Experience: 5-10 years
Job Description:
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Key Responsibilities:
Collaborate with stakeholders to understand business needs and develop scalable, efficient technical solutions.
Architect and implement complex systems integrating Machine Learning, Cloud platforms (AWS, Azure, Google Cloud), and Full Stack Development.
Lead the development and deployment of cloud-native applications using NoSQL databases, Python, and Kubernetes.
Design and optimize algorithms to improve performance, scalability, and reliability of solutions.
Review, validate, and refine architecture to ensure flexibility, scalability, and cost-efficiency.
Mentor development teams and ensure adherence to best practices for coding, testing, and deployment.
Contribute to the development of technical documentation and solution roadmaps.
Stay up-to-date with emerging technologies and continuously improve solution design processes.
Required Skills & Qualifications:
5-10 years of experience as a Solution Architect or similar role with expertise in ML, Cloud, and Full Stack Development.
Proficiency in at least two major cloud platforms (AWS, Azure, Google Cloud).
Solid experience with Kubernetes for container orchestration and deployment.
Hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB).
Expertise in Python and ML frameworks like TensorFlow, PyTorch, etc.
Practical experience implementing at least two real-world algorithms (e.g., classification, clustering, recommendation systems).
Strong knowledge of scalable architecture design and cloud-native application development.
Familiarity with CI/CD tools and DevOps practices.
Excellent problem-solving abilities and the ability to thrive in a fast-paced environment.
Strong communication and collaboration skills with cross-functional teams.
Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
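The "at least two real-world algorithms" requirement above (e.g., classification, clustering) can be illustrated with a nearest-centroid sketch: computing per-class centroids is the core step of k-means clustering, and assigning a point to the nearest centroid is a minimal classifier. All names and data below are illustrative:

```python
import math

def fit_centroids(points, labels):
    """Per-class mean (centroid) of 2-D training points."""
    sums, counts = {}, {}
    for (x, y), lab in zip(points, labels):
        sx, sy = sums.get(lab, (0.0, 0.0))
        sums[lab] = (sx + x, sy + y)
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def classify(point, centroids):
    """Assign the point to the label of the nearest centroid."""
    return min(centroids, key=lambda lab: math.dist(point, centroids[lab]))

cents = fit_centroids([(0, 0), (1, 1), (9, 9), (10, 10)], ["a", "a", "b", "b"])
print(classify((8, 8), cents))  # "b": nearest centroid is (9.5, 9.5)
```

A production system would use scikit-learn or an equivalent library, but an interviewer asking about implemented algorithms is probing exactly this level of understanding.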
Preferred Qualifications:
Experience with microservices and containerization.
Knowledge of distributed systems and high-performance computing.
Cloud certifications (AWS Certified Solutions Architect, Google Cloud Professional Architect, etc.).
Familiarity with Agile methodologies and Scrum.
Japanese language proficiency is an added advantage (but not mandatory).
Skills: ML, Cloud (any two major clouds), algorithms (at least two implemented in real-world applications), full stack, Kubernetes, NoSQL, Python
Job Title: QA Automation Engineer
Job Type: Full Time
Location: Indore
Summary/Objective:
We are seeking an experienced Automation QA (Quality Assurance) professional to join our team in Indore. As an Automation QA, you will be responsible for designing, developing, and executing automated test scripts to ensure the quality and reliability of our software products. You will work closely with the development team to identify areas for automation and implement efficient testing strategies.
Responsibilities/Duties:
1. Design, develop, and maintain automated test scripts using Selenium WebDriver and other automation tools.
2. Execute automated test suites to validate software functionality, performance, and reliability across different platforms and environments.
3. Work closely with cross-functional teams to understand project requirements, identify test scenarios, and develop comprehensive test plans.
4. Collaborate with developers to ensure that test cases are integrated into the continuous integration/continuous deployment (CI/CD) pipeline.
5. Analyze test results and report defects in a clear and concise manner, providing detailed information to facilitate debugging and resolution.
6. Participate in agile ceremonies such as sprint planning, daily stand-ups, and retrospectives to provide QA input and feedback.
7. Continuously research and evaluate new testing tools, technologies, and methodologies to improve efficiency and effectiveness.
8. Contribute to the development and maintenance of QA documentation, including test cases, test scripts, and test reports.
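The Selenium specifics vary by product, but automated test scripts as described above generally follow the same arrange-act-assert shape. A hypothetical sketch using Python's `unittest`, with a stub driver standing in for a real `selenium.webdriver` instance:

```python
import unittest

class FakeDriver:
    """Stand-in for a Selenium WebDriver; a real suite drives a browser."""
    def __init__(self):
        self.page = {"title": "Login", "fields": {}}

    def type(self, field, value):
        self.page["fields"][field] = value

    def submit(self):
        fields = self.page["fields"]
        ok = fields.get("user") == "qa" and fields.get("pw") == "s3cret"
        self.page["title"] = "Dashboard" if ok else "Login"

class LoginTest(unittest.TestCase):
    def setUp(self):
        self.driver = FakeDriver()

    def test_valid_login_reaches_dashboard(self):
        self.driver.type("user", "qa")
        self.driver.type("pw", "s3cret")
        self.driver.submit()
        self.assertEqual(self.driver.page["title"], "Dashboard")
```

In CI the suite would run via `python -m unittest` against a real browser grid; the structure (setup, action, assertion) is what carries over.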
Qualifications/Requirements:
Education:
- Bachelor's degree in Computer Science, Engineering, or related field.
Experience:
- 4 to 5 years of experience in automation testing, preferably in a software development environment.
- Experience with automation testing tools such as Selenium, Appium, or similar.
- Strong understanding of software testing methodologies, tools, and processes.
- Experience with cloud-based testing environments (e.g., AWS, Azure).
- ISTQB or similar certification in software testing.
Skills:
- Excellent analytical and problem-solving skills.
- Strong communication and collaboration skills.
- Excellent attention to detail and ability to work independently or as part of a team.
- Ability to multitask and prioritize tasks in a fast-paced environment.
Position - Senior Full Stack Developer
Location - Mumbai
Experience - 3-10 Years
Who are we
Based out of IIT Bombay, HaystackAnalytics is a HealthTech company creating clinical genomics products, which enable diagnostic labs and hospitals to offer accurate and personalized diagnostics. Supported by India's most respected science agencies (DST, BIRAC, DBT), we created and launched a portfolio of products to offer genomics in infectious diseases. Our genomics based diagnostic solution for Tuberculosis was recognized as one of top innovations supported by BIRAC in the past 10 years, and was launched by the Prime Minister of India in the BIRAC Showcase event in Delhi, 2022.
Objectives of this Role:
- Work across the full stack, building highly scalable distributed solutions that enable positive user experiences and measurable business growth
- Ideate and develop new product features in collaboration with domain experts in healthcare and genomics
- Develop state-of-the-art, enterprise-standard front-end and backend services
- Develop cloud platform services based on container orchestration platform
- Continuously embrace automation for repetitive tasks
- Ensure application performance, uptime, and scale, maintaining high standards of code quality by using clean coding principles and solid design patterns
- Build robust, unit-testable tech modules, automating recurring tasks and processes
- Engage effectively with team members and collaborate to upskill and unblock each other
Frontend Skills
- HTML5
- CSS frameworks (LESS / SASS / Tailwind)
- ES6 / TypeScript
- Desktop app frameworks (Electron / Tauri)
- Component libraries (Bootstrap, Material UI, Lit)
- Responsive web layout (Flex layout, Grid layout)
- Package managers (yarn / npm / turbo)
- Build tools (Vite / Webpack / Parcel)
- Frameworks: React with Redux or MobX / Next.js
- Design patterns
- Testing (Jest / Mocha / Jasmine / Cypress)
- Functional programming concepts (good to have)
- Scripting (PowerShell, Bash, Python)
Backend Skills
- Node.js (Express / NestJS)
- Python / Rust
- REST APIs
- SOLID design principles
- Databases (PostgreSQL / MySQL / Redis / Cassandra / MongoDB)
- Caching (Redis)
- Container technology (Docker / Kubernetes)
- Cloud (Azure, AWS, OpenShift, Google Cloud)
- Version control (Git)
- GitOps
- Automation (Terraform, Ansible)
Cloud Skills
- Object storage
- VPC concepts
- Containerized deployment
- Serverless architecture
Other Skills
- Innovation and thought leadership
- UI/UX design skills
- Interest in learning new tools, languages, workflows, and philosophies to grow
- Communication
To know more about us- https://haystackanalytics.in/
The candidate should have a background in development/programming with experience in at least one of the following: .NET, Java (Spring Boot), ReactJS, or AngularJS.
Primary Skills:
- AWS or GCP Cloud
- DevOps CI/CD pipelines (e.g., Azure DevOps, Jenkins)
- Python/Bash/PowerShell scripting
Secondary Skills:
- Docker or Kubernetes
This requirement is for a Data Engineer in Gurugram for a Data Analytics project.
Responsibilities:
- Building ETL/ELT pipelines of data from various sources using SQL/Python/Spark
- Ensuring that data are modelled and processed according to architecture and requirements, both functional and non-functional
- Understanding and implementing required development guidelines, design standards, and best practices
- Delivering the right solution architecture, automation, and technology choices
- Working cross-functionally with enterprise architects, information security teams, and platform teams
- Suggesting and implementing architecture improvements
Requirements:
- Experience with programming languages such as Python or Scala
- Knowledge of Data Warehouse, Business Intelligence, and ETL/ELT data processing issues
- Ability to create and orchestrate ETL/ELT processes in different tools (ADF, Databricks Workflows)
- Experience working with the Databricks platform: workspace, Delta Lake, workflows, jobs, Unity Catalog
- Understanding of SQL and relational databases
- Practical knowledge of various relational and non-relational database engines in the cloud (Azure SQL Database, Azure Cosmos DB, Microsoft Fabric, Databricks)
- Hands-on experience with data services offered by Azure cloud
- Knowledge of Apache Spark (Databricks, Azure Synapse Spark Pools)
- Experience in performing code review of ETL/ELT pipelines and SQL queries
- Analytical approach to problem solving
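At its core, the ETL/ELT work described above is extract, transform, load. A toy stdlib sketch of that shape (in production these would be Spark/Databricks jobs reading from real sources, not in-memory Python with SQLite):

```python
import sqlite3

def extract(rows):
    """Extract: a static source here; in practice a JDBC or Delta read."""
    return rows

def transform(rows):
    """Transform: drop bad records and normalise names."""
    return [(rid, name.strip().lower(), amt)
            for rid, name, amt in rows
            if amt is not None and amt >= 0]

def load(rows, conn):
    """Load into the target table; returns the resulting row count."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER PRIMARY KEY, name TEXT, amount REAL)"
    )
    conn.executemany("INSERT OR REPLACE INTO sales VALUES (?, ?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]

conn = sqlite3.connect(":memory:")
raw = [(1, "  Alice ", 10.0), (2, "Bob", None), (3, "Carol", 5.5)]
print(load(transform(extract(raw)), conn))  # 2 rows survive the transform
```

The same three-stage decomposition carries over directly to PySpark DataFrames or Databricks Workflows tasks.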
We are looking for a Full Stack Software Engineer with 3-5 years of hands-on experience who is proficient in Node.js. The ideal candidate will also have knowledge of MongoDB and React, and experience with AWS; the latter two are not mandatory but highly preferred.
You will be responsible for developing and maintaining robust web applications, ensuring seamless integration between the front-end and back-end, and working with cross-functional teams to deliver innovative solutions. If you’re passionate about developing high-quality software, working in a collaborative environment, and continuously learning, this is the perfect role for you.
Key Responsibilities:
● Develop and maintain scalable server-side applications using Node.js.
● Design and implement APIs, ensuring high performance and responsiveness.
● Collaborate with front-end developers to integrate React-based user interfaces with
the backend.
● Manage databases, primarily working with MongoDB, for optimized data storage and
retrieval.
● Deploy and manage services on AWS (or other cloud platforms).
● Write clean, maintainable, and well-documented code.
● Conduct unit and integration testing to ensure product quality.
● Troubleshoot, debug and upgrade existing applications as required.
● Work closely with product managers, designers, and other engineers to ensure timely delivery of features.
Skills and Qualifications:
● 3-5 years of hands-on experience with Node.js (compulsory).
● Experience working with MongoDB.
● Proficiency with React.js for front-end development (nice to have).
● Familiarity with AWS cloud services for deployment and management (nice to have).
● Strong understanding of RESTful services and API integration.
● Good understanding of server-side templating languages and front-end technologies.
● Familiarity with version control systems like Git.
● Problem-solving attitude with attention to detail.
● Ability to work both independently and in a team.
● Excellent communication skills to collaborate effectively with cross-functional teams.
● Pragmatic approach to problem-solving and solution architecture.
at Wissen Technology
Requirements:
• Bachelor’s degree in computer science, Engineering, or a related field.
• Strong understanding of distributed data processing platforms like Databricks and BigQuery.
• Proficiency in Python, PySpark, and SQL programming languages.
• Experience with performance optimization for large datasets.
• Strong debugging and problem-solving skills.
• Fundamental knowledge of cloud services, preferably Azure or GCP.
• Excellent communication and teamwork skills.
Nice to Have:
• Experience in data migration projects.
• Understanding of technologies like Delta Lake/warehouse.
Job Title: DevOps Engineer
Location: Remote
Type: Full-time
About Us:
At Tese, we are committed to advancing sustainability through innovative technology solutions. Our platform empowers SMEs, financial institutions, and enterprises to achieve their Environmental, Social, and Governance (ESG) goals. We are looking for a skilled and passionate DevOps Engineer to join our team and help us build and maintain scalable, reliable, and efficient infrastructure.
Role Overview:
As a DevOps Engineer, you will be responsible for designing, implementing, and managing the infrastructure that supports our applications and services. You will work closely with our development, QA, and data science teams to ensure smooth deployment, continuous integration, and continuous delivery of our products. Your role will be critical in automating processes, enhancing system performance, and maintaining high availability.
Key Responsibilities:
- Infrastructure Management:
- Design, implement, and maintain scalable cloud infrastructure on platforms such as AWS, Google Cloud, or Azure.
- Manage server environments, including provisioning, monitoring, and maintenance.
- CI/CD Pipeline Development:
- Develop and maintain continuous integration and continuous deployment pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI.
- Automate deployment processes to ensure quick and reliable releases.
- Configuration Management and Automation:
- Implement infrastructure as code (IaC) using tools like Terraform, Ansible, or CloudFormation.
- Automate system configurations and deployments to improve efficiency and reduce manual errors.
- Monitoring and Logging:
- Set up and manage monitoring tools (e.g., Prometheus, Grafana, ELK Stack) to track system performance and troubleshoot issues.
- Implement logging solutions to ensure effective incident response and system analysis.
- Security and Compliance:
- Ensure systems are secure and compliant with industry standards and regulations.
- Implement security best practices, including identity and access management, network security, and vulnerability assessments.
- Collaboration and Support:
- Work closely with development and QA teams to support application deployments and troubleshoot issues.
- Provide support for infrastructure-related inquiries and incidents.
Qualifications:
- Education:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- Experience:
- 3-5 years of experience in DevOps, system administration, or related roles.
- Hands-on experience with cloud platforms such as AWS, Google Cloud Platform, or Azure.
- Technical Skills:
- Proficiency in scripting languages like Bash, Python, or Ruby.
- Strong experience with containerization technologies like Docker and orchestration tools like Kubernetes.
- Knowledge of configuration management tools (Ansible, Puppet, Chef).
- Experience with CI/CD tools (Jenkins, GitLab CI/CD, CircleCI).
- Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK Stack).
- Understanding of networking concepts and security best practices.
- Soft Skills:
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration abilities.
- Ability to work in a fast-paced environment and manage multiple tasks.
Preferred Qualifications:
- Experience with infrastructure as code (IaC) tools like Terraform or CloudFormation.
- Knowledge of microservices architecture and serverless computing.
- Familiarity with database administration (SQL and NoSQL databases).
- Experience with Agile methodologies and working in a Scrum or Kanban environment.
- Passion for sustainability and interest in ESG initiatives.
Benefits:
- Competitive salary, benefits package, and performance bonuses.
- Flexible working hours and remote work options.
- Opportunity to work on impactful projects that promote sustainability.
- Professional development opportunities, including access to training and conferences.
A System Administrator (Sys Admin) is a crucial IT professional responsible for maintaining, configuring, and ensuring the smooth operation of an organization’s computer systems, servers, and networks. Their role encompasses managing both software and hardware resources to support reliable, high-performing, and secure IT environments. Sys Admins are involved in installing and upgrading operating systems, troubleshooting technical issues, managing user access and permissions, and ensuring data security through regular updates, patches, and backup systems.
They often work with automation tools and scripts to streamline repetitive tasks and enhance efficiency. Additionally, Sys Admins handle network configurations, firewall management, and VPN setups to maintain secure connectivity. They also monitor system health and performance using various monitoring tools, enabling quick response to potential issues.
In modern environments, Sys Admins may work with cloud services, virtualization technologies, and configuration management platforms to support scalability and remote access. Their job requires strong problem-solving abilities, attention to detail, and the capacity to communicate technical information clearly to non-technical team members. In short, Sys Admins are essential for maintaining the backbone of IT infrastructure, ensuring systems remain reliable, responsive, and secure.
About The Role:
The products/services of Eclat Engineering Pvt. Ltd. are used by some of the leading institutions in India and abroad, and demand for them is growing rapidly. We are looking for a capable and dynamic Senior DevOps Engineer to help set up, maintain, and scale our infrastructure operations. This individual will have the challenging responsibility of running our IT infrastructure and delivering customer services to stringent international standards of service quality, leveraging the latest IT tools to automate and streamline service delivery while implementing industry-standard processes and knowledge management.
Roles & Responsibilities:
- Infrastructure and Deployment Automation: Design, implement, and maintain automation for infrastructure provisioning and application deployment. Own the CI/CD pipelines and ensure they are efficient, reliable, and scalable.
- System Monitoring and Performance: Take ownership of monitoring systems and ensure the health and performance of the infrastructure. Proactively identify and address performance bottlenecks and system issues.
- Cloud Infrastructure Management: Manage cloud infrastructure (e.g., AWS, Azure, GCP) and optimize resource usage. Implement cost-saving measures while maintaining scalability and reliability.
- Configuration Management: Manage configuration management tools (e.g., Ansible, Puppet, Chef) to ensure consistency across environments. Automate configuration changes and updates.
- Security and Compliance: Own security policies, implement best practices, and ensure compliance with industry standards. Lead efforts to secure infrastructure and applications, including patch management and access controls.
- Collaboration with Development and Operations Teams: Foster collaboration between development and operations teams, promoting a DevOps culture. Be the go-to person for resolving cross-functional infrastructure issues and improving the development process.
- Disaster Recovery and Business Continuity: Develop and maintain disaster recovery plans and procedures. Ensure business continuity in the event of system failures or other disruptions.
- Documentation and Knowledge Sharing: Create and maintain comprehensive documentation for configurations, processes, and best practices. Share knowledge and mentor junior team members.
- Technical Leadership and Innovation: Stay up-to-date with industry trends and emerging technologies. Lead efforts to introduce new tools and technologies that enhance DevOps practices.
- Problem Resolution and Troubleshooting: Be responsible for diagnosing and resolving complex issues related to infrastructure and deployments. Implement preventive measures to reduce recurring problems.
Requirements:
● B.E / B.Tech / M.E / M.Tech / MCA / M.Sc. IT (if not, the candidate should be able to demonstrate the required skills)
● Overall 3+ years of experience in DevOps and Cloud operations specifically in AWS.
● Experience with Linux administration
● Experience with microservice architecture, containers, Kubernetes, and Helm is a must
● Experience in Configuration Management preferably Ansible
● Experience in Shell Scripting is a must
● Experience in developing and maintaining CI/CD processes using tools like Gitlab, Jenkins
● Experience in logging, monitoring and analytics
● An understanding of writing Infrastructure as Code using tools like Terraform
● Preferences - AWS, Kubernetes, Ansible
Must Have:
● Knowledge of AWS Cloud Platform.
● Good experience with microservice architecture, Kubernetes, helm and container-based technologies
● Hands-on experience with Ansible.
● Should have experience in working and maintaining CI/CD Processes.
● Hands-on experience in version control tools like GIT.
● Experience with monitoring tools such as Cloudwatch/Sysdig etc.
● Sound experience in administering Linux servers and Shell Scripting.
● Should have a good understanding of IT security and have the knowledge to secure production environments (OS and server software).
at appscrip
Key Responsibilities
AI Model Development
- Design and implement advanced Generative AI models (e.g., GPT-based, LLaMA, etc.) to support applications across various domains, including text generation, summarization, and conversational agents.
- Utilize tools like LangChain and LlamaIndex to build robust AI-powered systems, ensuring seamless integration with data sources, APIs, and databases.
Backend Development with FastAPI
- Develop and maintain fast, efficient, and scalable FastAPI services to expose AI models and algorithms via RESTful APIs.
- Ensure optimal performance and low-latency for API endpoints, focusing on real-time data processing.
Pipeline and Integration
- Build and optimize data processing pipelines for AI models, including ingestion, transformation, and indexing of large datasets using tools like LangChain and LlamaIndex.
- Integrate AI models with external services, databases, and other backend systems to create end-to-end solutions.
Collaboration with Cross-Functional Teams
- Collaborate with data scientists, machine learning engineers, and product teams to define project requirements, technical feasibility, and timelines.
- Work with front-end developers to integrate AI-powered functionalities into web applications.
Model Optimization and Fine-Tuning
- Fine-tune and optimize pre-trained Generative AI models to improve accuracy, performance, and scalability for specific business use cases.
- Ensure efficient deployment of models in production environments, addressing issues related to memory, latency, and resource management.
Documentation and Code Quality
- Maintain high standards of code quality, write clear, maintainable code, and conduct thorough unit and integration tests.
- Document AI model architectures, APIs, and workflows for future reference and onboarding of team members.
Research and Innovation
- Stay updated with the latest advancements in Generative AI, LangChain, and LlamaIndex, and actively contribute to the adoption of new techniques and technologies.
- Propose and explore innovative ways to leverage cutting-edge AI technologies to solve complex problems.
Required Skills and Experience
Expertise in Generative AI
Strong experience working with Generative AI models, including but not limited to GPT-3/4, LLaMA, or other large language models (LLMs).
LangChain & LlamaIndex
Hands-on experience with LangChain for building language model-driven applications, and LlamaIndex for efficient data indexing and querying.
Python Programming
Proficiency in Python for building AI applications, working with frameworks such as TensorFlow, PyTorch, Hugging Face, and others.
API Development with FastAPI
Strong experience developing RESTful APIs using FastAPI, with a focus on high-performance, scalable web services.
NLP & Machine Learning
Solid foundation in Natural Language Processing (NLP) and machine learning techniques, including data preprocessing, feature engineering, model evaluation, and fine-tuning.
Database & Storage Systems
Familiarity with relational and NoSQL databases, data storage, and management strategies for large-scale AI datasets.
Version Control & CI/CD
Experience with Git, GitHub, and implementing CI/CD pipelines for seamless deployment.
Preferred Skills
Containerization & Cloud Deployment
Familiarity with Docker, Kubernetes, and cloud platforms (e.g., AWS, GCP, Azure) for deploying scalable AI applications.
Data Engineering
Experience in working with data pipelines and frameworks such as Apache Spark, Airflow, or Dask.
Knowledge of Front-End Technologies
Familiarity with front-end frameworks (React, Vue.js, etc.) for integrating AI APIs with user-facing applications.
Responsibilities include:
- Develop and maintain data validation logic in our proprietary Control Framework tool
- Actively participate in business requirement elaboration and functional design sessions to develop an understanding of our Operational teams’ analytical needs, key data flows and sources
- Assist Operational teams in the buildout of Checklists and event monitoring workflows within our Enterprise Control Framework platform
- Build effective working relationships with Operational users, Reporting and IT development teams and business partners across the organization
- Conduct interviews, generate user stories, develop scenarios and workflow analyses
- Contribute to the definition of reporting solutions that empower Operational teams to make immediate decisions as to the best course of action
- Perform some business user acceptance testing
- Provide production support and troubleshooting for existing operational dashboards
- Conduct regular demos and training of new features for the stakeholder community
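As a concrete illustration of the validation-logic work described above (the table, the rule, and the flagged rows are all hypothetical, not the firm's actual Control Framework):

```python
import sqlite3

# Build a tiny in-memory table standing in for operational data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER, notional REAL, currency TEXT)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [(1, 100.0, "USD"), (2, -50.0, "USD"), (3, 75.0, None)],
)

# Validation rule: notional must be positive and currency populated.
# Rows violating the rule become exceptions for an operations checklist.
exceptions = [
    row[0]
    for row in conn.execute(
        "SELECT id FROM trades WHERE notional <= 0 OR currency IS NULL"
    )
]
print(exceptions)  # → [2, 3]
```

A real Control Framework check would run the same pattern against a cloud warehouse (AWS/Azure/Snowflake) rather than SQLite.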
Qualifications
- Bachelor’s degree or equivalent in Business, Accounting, Finance, MIS, Information Technology or related field of study
- Minimum 5 years of SQL experience required
- Experience querying data on cloud platforms (AWS/Azure/Snowflake) required
- Exceptional problem solving and analytical skills, attention to detail and organization
- Able to independently troubleshoot and gather supporting evidence
- Prior experience developing within a BI reporting tool (e.g. Spotfire, Tableau, Looker, Information Builders) a plus
- Database Management and ETL development experience a plus
- Self-motivated, self-assured, and self-managed
- Able to multi-task to meet time-driven goals
- Asset management experience, including investment operations, a plus
at Delivery Solutions
About UPS:
Moving our world forward by delivering what matters! UPS is a company with a proud past and an even brighter future. Our values define us. Our culture differentiates us. Our strategy drives us. At UPS we are customer first, people led and innovation driven. UPS’s India based Technology Development Centers will bring UPS one step closer to creating a global technology workforce that will help accelerate our digital journey and help us engineer technology solutions that drastically improve our competitive advantage in the field of Logistics.
Job Summary:
- Applies the principles of software engineering to design, develop, maintain, test, and evaluate computer software that provide business capabilities, solutions, and/or product suites. Provides systems life cycle management (e.g., analyses, technical requirements, design, coding, testing, implementation of systems and applications software, etc.) to ensure delivery of technical solutions is on time and within budget.
- Researches and supports the integration of emerging technologies.
- Provides knowledge and support for applications’ development, integration, and maintenance.
- Develops program logic for new applications or analyzes and modifies logic in existing applications.
- Analyzes requirements, tests, and integrates application components.
- Ensures that system improvements are successfully implemented. May focus on web/internet applications specifically, using a variety of languages and platforms.
REQUIREMENTS
- Experience with Azure Databricks, SQL, and ETL (SSIS packages) – very critical.
- Azure Data Factory, Function Apps, DevOps – a must.
- Experience with Azure and other cloud technologies.
- Database – Oracle, SQL Server, and Cosmos DB experience needed.
- Azure services (Key Vault, App Configuration, Blob Storage, Redis Cache, Service Bus, Event Grid, ADLS, App Insights, etc.)
- Knowledge of Striim.
Preferred skills: Microservices experience preferred. Experience with Angular and .NET Core – not critical.
Additional Information: This role will be in-office 3 days a week in Chennai, India.
Experience:
○ 2-4 years of hands-on experience with Microsoft Power Automate (Flow).
○ Experience with Power Apps, Power BI, and Power Platform technologies.
○ Experience in integrating REST APIs, SOAP APIs, and custom connectors.
○ Proficiency in using tools like Microsoft SharePoint, Azure, and Dataverse.
○ Familiarity with Microsoft 365 apps like Teams, Outlook, and Excel.
● Technical Skills:
○ Knowledge of JSON, OData, HTML, JavaScript, and other web-based technologies.
○ Strong understanding of automation, data integration, and process optimization.
○ Experience with D365 (Dynamics 365) and Azure Logic Apps is a plus.
○ Proficient in troubleshooting, problem-solving, and debugging automation workflows.
● Soft Skills:
○ Excellent communication skills to liaise with stakeholders and technical teams.
○ Strong analytical and problem-solving abilities.
○ Self-motivated and capable of working independently as well as part of a team.
Educational Qualifications:
● Bachelor's Degree in Computer Science, Information Technology, Engineering, or a related field (or equivalent practical
experience).
Good to have Qualifications:
● Microsoft Certified: Power Platform certifications (e.g., Power Platform Functional Consultant, Power Automate RPA
Developer) would be advantageous.
● Experience with Agile or Scrum methodologies.
Roles and Responsibilities:
JKTech Ltd. is looking for a .NET Development Lead who will be responsible for architecting, creating, and deploying product updates, identifying production issues, and establishing integrations that suit the needs of our customers. The ideal applicant will have a strong background in software engineering and be familiar with .NET Core, ASP.NET Core, C#, WPF, Web API, WCF, and SQL Server (T-SQL), and will collaborate with developers and engineers to ensure that software development adheres to established processes and functions as intended. The Development Lead will also assist with delivery planning and will be involved in project management decisions.
Role and Responsibilities
- Design and architect complex, scalable web applications
- Maintain and enhance existing applications
- Proven experience in web application development with Microsoft .NET technologies, including .NET Core, ASP.NET, Azure, and DevOps.
- Proficient in .NET Core, ASP.NET Core, C#, WPF, Web API, WCF, SQL Server (T-SQL), PowerShell, HTML, and JavaScript; should have working knowledge of Azure Service Bus and RabbitMQ
- Experience in client-side development using JavaScript, HTML, and CSS.
- Proficient in CI/CD and release management using Azure Pipelines (Infrastructure as Code using ARM templates)
- Should have experience in the implementation of microservice/micro-database projects.
- Minimum 3-4 years of experience in SQL Server and stored procedures.
- Ability to adapt quickly to an existing, complex environment.
- Ability to communicate clearly with business users and stakeholders.
- Write and supervise the technical product documentation
- Technical design, mentoring and implementation of best practices and processes
- Develop the software architecture based on the business requirements and constraints
- Be responsible to deliver proof of concepts which validates technical choices
- Organize knowledge sharing and continuous learning
- Testing and examining code written by others and analysing results
- Identifying technical problems and developing software updates and fixes
- Working with software developers to ensure that development follows established processes and works as intended
Qualifications and Education Requirements
- Experience as a .NET Software Engineer in an Azure DevOps role
- Proficient in .NET Core, ASP.NET Core, C#, WPF, Web API, WCF, SQL Server (T-SQL), PowerShell, HTML, and JavaScript; should have working knowledge of Azure Service Bus and RabbitMQ
- Good working knowledge of Azure cloud services and moving on-prem applications to the Cloud
- Working knowledge of databases and SQL
- Problem-solving attitude
- Collaborative team spirit
Preferred Skills
- Bachelor of science degree (or equivalent) in computer science, engineering, or relevant field
- Experience in developing/engineering applications for a large company
- Azure Certification is a plus.
Primary Skills:
- Proficient in .NET Core, ASP.NET Core, C#, VB.NET, WPF, Web API, WCF, SQL Server (T-SQL), PowerShell, HTML, and JavaScript; should have working knowledge of Azure
Wissen Technology is hiring for Devops engineer
Required:
- 4 to 10 years of relevant experience in DevOps
- Must have hands-on experience with AWS, Kubernetes, and CI/CD pipelines
- Good to have exposure to GitHub or GitLab
- Open to work from Chennai
- Work mode will be hybrid
Company profile:
Company Name : Wissen Technology
Group of companies in India : Wissen Technology & Wissen Infotech
Work Location - Chennai
Website : www.wissen.com
Wissen Thought leadership : https://lnkd.in/gvH6VBaU
LinkedIn: https://lnkd.in/gnK-vXjF
Position Overview: We are seeking a talented and experienced Cloud Engineer specialized in AWS cloud services to join our dynamic team. The ideal candidate will have a strong background in AWS infrastructure and services, including EC2, Elastic Load Balancing (ELB), Auto Scaling, S3, VPC, RDS, CloudFormation, CloudFront, Route 53, AWS Certificate Manager (ACM), and Terraform for Infrastructure as Code (IaC). Experience with other AWS services is a plus.
Responsibilities:
• Design, deploy, and maintain AWS infrastructure solutions, ensuring scalability, reliability, and security.
• Configure and manage EC2 instances to meet application requirements.
• Implement and manage Elastic Load Balancers (ELB) to distribute incoming traffic across multiple instances.
• Set up and manage AWS Auto Scaling to dynamically adjust resources based on demand.
• Configure and maintain VPCs, including subnets, route tables, and security groups, to control network traffic.
• Deploy and manage AWS CloudFormation and Terraform templates to automate infrastructure provisioning using Infrastructure as Code (IaC) principles.
• Implement and monitor S3 storage solutions for secure and scalable data storage
• Set up and manage CloudFront distributions for content delivery with low latency and high transfer speeds.
• Configure Route 53 for domain management, DNS routing, and failover configurations.
• Manage AWS Certificate Manager (ACM) for provisioning, managing, and deploying SSL/TLS certificates.
• Collaborate with cross-functional teams to understand business requirements and provide effective cloud solutions.
• Stay updated with the latest AWS technologies and best practices to drive continuous improvement.
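The CloudFormation/Terraform responsibility above might look like the following minimal Terraform sketch. It is a hedged illustration only: the region, security-group rule, AMI ID, and resource names are placeholder assumptions, not this employer's actual infrastructure.

```hcl
# Hypothetical minimal sketch: one EC2 instance behind a security group,
# provisioned as Infrastructure as Code via Terraform.
provider "aws" {
  region = "ap-south-1" # placeholder region
}

resource "aws_security_group" "web" {
  name = "web-sg"
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # HTTPS only
  }
}

resource "aws_instance" "app" {
  ami                    = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type          = "t3.micro"
  vpc_security_group_ids = [aws_security_group.web.id]
}
```

In practice, modules like this would be reviewed and applied through a pipeline (`terraform plan` / `terraform apply`) rather than by hand.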
Qualifications:
• Bachelor's degree in computer science, Information Technology, or a related field.
• Minimum of 2 years of relevant experience in designing, deploying, and managing AWS cloud solutions.
• Strong proficiency in AWS services such as EC2, ELB, Auto Scaling, VPC, S3, RDS, and CloudFormation.
• Experience with other AWS services such as Lambda, ECS, EKS, and DynamoDB is a plus.
• Solid understanding of cloud computing principles, including IaaS, PaaS, and SaaS.
• Excellent problem-solving skills and the ability to troubleshoot complex issues in a cloud environment.
• Strong communication skills with the ability to collaborate effectively with cross-functional teams.
• Relevant AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.) are highly desirable.
Additional Information:
• We value creativity, innovation, and a proactive approach to problem-solving.
• We offer a collaborative and supportive work environment where your ideas and contributions are valued.
• Opportunities for professional growth and development.
Someshwara Software Pvt Ltd is an equal opportunity employer. We celebrate diversity and are dedicated to creating an inclusive environment for all employees.
Job Description:
We are seeking an experienced Azure Data Engineer with expertise in Azure Data Factory, Azure Databricks, and Azure Data Fabric to lead the migration of our existing data pipeline and processing infrastructure. The ideal candidate will have a strong background in Azure cloud data services, big data analytics, and data engineering, with specific experience in Azure Data Fabric. We are looking for someone who has at least 6 months of hands-on experience with Azure Data Fabric or has successfully completed at least one migration to Azure Data Fabric.
Key Responsibilities:
- Assess the current data architecture using Azure Data Factory and Databricks and develop a detailed migration plan to Azure Data Fabric.
- Design and implement end-to-end data pipelines within Azure Data Fabric, including data ingestion, transformation, storage, and analytics.
- Optimize data workflows to leverage Azure Data Fabric's unified platform for data integration, big data processing, and real-time analytics.
- Ensure seamless integration of data from SharePoint and other sources into Azure Data Fabric, maintaining data quality and integrity.
- Collaborate with business analysts and business stakeholders to align data strategies and optimize the data environment for machine learning and AI workloads.
- Implement security best practices, including data governance, access control, and monitoring within Azure Data Fabric.
- Conduct performance tuning and optimization for data storage and processing within Azure Data Fabric to ensure high availability and cost efficiency.
Key Requirements:
- Proven experience (5+ years) in Azure data engineering with a strong focus on Azure Data Factory and Azure Databricks.
- At least 6 months of hands-on experience with Azure Data Fabric or completion of one migration to Azure Data Fabric.
- Hands-on experience in designing, building, and managing data pipelines, data lakes, and data warehouses on Azure.
- Expertise in Spark, SQL, and data transformation techniques within Azure environments.
- Strong understanding of data governance, security, and compliance in cloud environments.
- Experience with migrating data architectures and optimizing workflows on cloud platforms.
- Ability to work collaboratively with cross-functional teams and communicate technical concepts effectively to non-technical stakeholders.
- Azure certifications (e.g., Azure Data Engineer Associate, Azure Solutions Architect Expert) are a plus.
Key requirements:
- The person should have at least 6 months of work experience in Data Fabric (strictly no less than 6 months)
- Solid technical skills: Databricks, Data Fabric, and Data Factory
- Polished, with good communication and interpersonal skills
- The person should have at least 6 years of experience in Databricks and Data Factory
Job Description
We are seeking a talented DevOps Engineer to join our dynamic team. The ideal candidate will have a passion for building and maintaining cloud infrastructure while ensuring the reliability and efficiency of our applications. You will be responsible for deploying and maintaining cloud environments, enhancing CI/CD pipelines, and ensuring optimal performance through proactive monitoring and troubleshooting.
Roles and Responsibilities:
- Cloud Infrastructure: Deploy and maintain cloud infrastructure on Microsoft Azure or AWS, ensuring scalability and reliability.
- CI/CD Pipeline Enhancement: Continuously improve CI/CD pipelines and build robust development and production environments.
- Application Deployment: Manage application deployments, ensuring high reliability and minimal downtime.
- Monitoring: Monitor infrastructure health and perform application log analysis to identify and resolve issues proactively.
- Incident Management: Troubleshoot and debug incidents, collaborating closely with development teams to implement effective solutions.
- Infrastructure as Code: Enhance Ansible roles and Terraform modules, maintaining best practices for Infrastructure as Code (IaC).
- Tool Development: Write tools and utilities to streamline and improve infrastructure operations.
- SDLC Practices: Establish and uphold industry-standard Software Development Life Cycle (SDLC) practices with a strong focus on quality.
- On-call Support: Be available 24/7 for on-call incident management for production environments.
Requirements:
- Cloud Experience: Hands-on experience deploying and provisioning virtual machines on Microsoft Azure or Amazon AWS.
- Linux Administration: Proficient with Linux systems and basic system administration tasks.
- Networking Knowledge: Working knowledge of network fundamentals (Ethernet, TCP/IP, WAF, DNS, etc.).
- Scripting Skills: Proficient in BASH and at least one high-level scripting language (Python, Ruby, Perl).
- Tools Proficiency: Familiarity with tools such as Git, Nagios, Snort, and OpenVPN.
- Containerization: Strong experience with Docker and Kubernetes is mandatory.
- Communication Skills: Excellent interpersonal communication skills, with the ability to engage with peers, customers, vendors, and partners across all levels of the organization.
About Company :
Nomiso is a product and services engineering company. We are a team of Software Engineers, Architects, Managers, and Cloud Experts with expertise in Technology and Delivery Management.
Our mission is to Empower and Enhance the lives of our customers, through efficient solutions for their complex business problems.
At Nomiso we encourage entrepreneurial spirit - to learn, grow and improve. A great workplace, thrives on ideas and opportunities. That is a part of our DNA. We’re in pursuit of colleagues who share similar passions, are nimble and thrive when challenged. We offer a positive, stimulating and fun environment – with opportunities to grow, a fast-paced approach to innovation, and a place where your views are valued and encouraged.
We invite you to push your boundaries and join us in fulfilling your career aspirations!
Position Overview:
We are looking for an Integration Architect who leads the scoping, design, and implementation of the Enterprise Integration Platform, with a focus on innovation and continuous improvement. The candidate will work in a multi-vendor environment to conduct complex integration across data, apps, assets, and business processes. The candidate will initially assist in integration architecture for the retail segment, with an enterprise lens, and thereafter scale up the learnings in the broader enterprise context beyond retail, especially for integrated energy operations, asset management, corporate, trading, and current (and emerging) businesses.
Roles and Responsibilities:
The candidate’s success will be defined by the successful delivery of an executable integration strategy through stakeholder engagement as a trusted advisor, and guidance for integration outcomes in a multi-vendor setting. Example deliverables are:
- Enterprise Integration Strategy – scoping and requirements, and multi-year deliverables.
- Architecture principles across the organisation to drive seamless and standardised integration, including recommendations for technology and vendor choices.
- Definition of “What good looks like” and specifically development of a reference architecture to execute on the Enterprise Integration Strategy.
- Rules of the Road – Integration implementation guidelines and standards (APIs, integration patterns, performance expectations, data schema/format, operational metrics) to vendors and providers in the integration ecosystem.
- Technology Implementation Plan – Translation of multi-year deliverables stated in the Enterprise Integration Strategy into a set of iterative and incremental technology development plans.
- Innovation & Continuous Improvement: Stay abreast of emerging integration technologies and trends, evaluating their potential impact on the enterprise. Foster a culture and mindset of continuous improvement by encouraging experimentation and the adoption of new tools, patterns and frameworks.
Must Have Skills:
To be successful, both business and technical skills are required:
- Overall 8+ years of core technical experience, with at least 4 years working as an Integration Engineer.
- Hands-on in architecture skills, with proven track record of deployment in production and knowledge of operating high-performance platforms.
- Understanding of Enterprise Integration Patterns, and practical applicability in a SaaS and micro services environment.
- Should have experience with Microsoft Azure Integration Platform, Cosmos DB, Dell Boomi, MS APIM, MS ASB, and Axway MFT.
- Hands-on Salesforce integration experience. Knowledge of billing, CRM, sales, digital channels, marketing tools, pricing engines, and front-end application development is required.
- Understanding of Business Process Management is required. Working knowledge of Appian is a plus.
- Past experience of IT strategy development and end-to-end understanding of the IT/digital technology stack.
- Excellent stakeholder management and interpersonal skills, coupled with executive presentation skills.
- Proficiency in one or more programming languages like Java/Golang/Node
Qualification:
- Bachelor’s degree in Computer Science Engineering, or a related technical degree.
Required Skill Set:
- Data Model & Mapping
- MS SQL Database
- Analytics SQL Query
- Genesys Cloud Reporting & Analytics API
- Snowflake (good to have)
- Cloud Exposure – AWS or Azure
Technical Experience –
· 5-8 years of experience, preferably at a technology or financial firm
· Strong understanding of data analysis & reporting tools
· Experience with data mining & machine learning techniques
· Excellent communication & presentation skills
· Must have at least 2-3 years of experience in Data Modeling/Analysis/Mapping
· Must have hands-on experience in database tools & technologies
· Must have exposure to Genesys Cloud, WFM, GIM, and the Genesys Analytics API
· Good to have experience or exposure to Salesforce, AWS or Azure, and Genesys Cloud
· Ability to work independently & as part of a team
· Strong attention to detail and accuracy
Work Scope –
- Data model similar to the GIM database, based on Genesys Cloud data.
- API-to-column data mapping.
- Data model for business analytics.
- Database artifacts.
- Scripting – Python.
- Autosys, TWS job setup.
At the forefront of innovation in the digital video industry
Responsibilities:
- Work with development teams and product managers to ideate software solutions
- Design client-side and server-side architecture
- Create a well-informed cloud strategy and manage the adoption process
- Evaluate cloud applications, hardware, and software
- Develop and manage well-functioning databases and applications
- Write effective APIs
- Participate in the entire application lifecycle, focusing on coding and debugging
- Write clean code to develop, maintain and manage functional web applications
- Get feedback from, and build solutions for, users and customers
- Participate in requirements, design, and code reviews
- Engage with customers to understand and solve their issues
- Collaborate with remote team on implementing new requirements and solving customer problems
- Focus on quality of deliverables with high accountability and commitment to program objectives
Required Skills:
- 7-10 years of software development experience
- Experience using Amazon Web Services (AWS), Microsoft Azure, Google Cloud, or other major cloud computing services.
- Strong skills in Containers, Kubernetes, Helm
- Proficiency in C#, .NET, PHP/Java technologies with an acumen for code analysis, debugging, and problem solving
- Strong skills in database design (PostgreSQL or MySQL)
- Experience with caching and message queues
- Experience in REST API framework design
- Strong focus on high-quality and maintainable code
- Understanding of multithreading, memory management, object-oriented programming
Preferred skills:
- Experience in working with Linux OS
- Experience in Core Java programming
- Experience in working with JSP/Servlets, Struts, Spring / Spring Boot, Hibernate
- Experience in working with web technologies such as HTML and CSS
- Knowledge of development and source-versioning tools, particularly JIRA, Git, Stash, and Jenkins
- Domain knowledge of video and audio codecs
Role & Responsibilities
- DevOps Engineer will be working with implementation and management of DevOps tools and technologies.
- Create and support advanced pipelines using Gitlab.
- Create and support advanced container and serverless environments.
- Deploy cloud infrastructure using Terraform and CloudFormation templates.
- Implement deployments to OpenShift Container Platform, Amazon ECS and EKS
- Troubleshoot containerized builds and deployments
- Implement processes and automations for migrating between OpenShift, AKS and EKS
- Implement CI/CD automations.
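The GitLab pipeline and CI/CD automation responsibilities above can be sketched as a minimal `.gitlab-ci.yml`. This is a hedged, hypothetical fragment: the stage names, image tags, and `deployment/app` target are placeholders, while `CI_REGISTRY_*` and `CI_COMMIT_SHORT_SHA` are GitLab's standard predefined variables.

```yaml
# Hypothetical minimal pipeline: build a container image, push it to the
# project registry, then roll it out to a Kubernetes deployment.
stages:
  - build
  - deploy

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl set image deployment/app app="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

A real pipeline for OpenShift/ECS/EKS would add per-environment stages, cluster credentials, and rollback jobs on top of this skeleton.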
Required Skillsets
- 3-5 years of cloud-based architecture software engineering experience.
- Deep understanding of Kubernetes and its architecture.
- Mastery of cloud security engineering tools, techniques, and procedures.
- Experience with AWS services such as Amazon S3, EKS, ECS, DynamoDB, AWS Lambda, API Gateway, etc.
- Experience with designing and supporting infrastructure via Infrastructure as Code in AWS, using CDK, CloudFormation templates, Terraform, or another toolset.
- Experienced with tools like Jenkins, GitHub, Puppet, or similar toolsets.
- Experienced with monitoring tools like CloudWatch, New Relic, Grafana, Splunk, etc.
- Excellence in verbal and written communication, and in working collaboratively with a variety of colleagues and clients in a remote development environment.
- Proven track record in cloud computing systems and enterprise architecture and security
TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the Manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in forward-looking Maintenance, increasing the OEE and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned Universities, and the award of a renowned AI prize (e.g., EU Horizon 2020) which makes Tvarit one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Data Engineer from the manufacturing Industry with over two years of experience to join our team. As a data engineer, you will be responsible for designing, building, and maintaining the infrastructure required for the collection, storage, processing, and analysis of large and complex data sets. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required
- Experience in the manufacturing industry (metal industry is a plus)
- 2+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Spark, Flink, and Hadoop, and of NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.
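The ETL-pipeline and Python skills listed above can be illustrated with a tiny, self-contained sketch. The sensor CSV, machine IDs, and readings table are invented for the example, standing in for real manufacturing data:

```python
import csv
import io
import sqlite3

# Extract: parse raw sensor readings (here an inline CSV string stands in
# for a file or message-queue feed from the shop floor).
raw = "machine_id,temp_c\nM1,71.5\nM2,\nM3,68.0\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: drop rows with missing readings and convert types.
clean = [(r["machine_id"], float(r["temp_c"])) for r in rows if r["temp_c"]]

# Load: write the cleaned rows into a database (SQLite as a stand-in
# for a warehouse) and run a simple aggregate for analytics.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (machine_id TEXT, temp_c REAL)")
db.executemany("INSERT INTO readings VALUES (?, ?)", clean)
avg = db.execute("SELECT AVG(temp_c) FROM readings").fetchone()[0]
print(avg)  # → 69.75
```

In a production pipeline the same extract/transform/load steps would be orchestrated and scheduled (e.g. with Spark or an Azure Data Factory job) rather than run inline.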
· Support and fix issues in the IT infrastructure, end-user devices and applications.
· Manage tickets and provide helpdesk support.
· Perform desktop troubleshooting for end users.
· Aid with remote computer deployments.
· Install and maintain network infrastructure.
· Provide support for Azure/AWS/Google Cloud resources.
· Handle support escalations.
· Develop and document SOPs to streamline support processes.
TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the Manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in forward-looking Maintenance, increasing the OEE and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned Universities, and the award of a renowned AI prize (e.g., EU Horizon 2020) which makes TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated senior Data Engineer from the manufacturing Industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department’s data infrastructure, including developing a data model, integrating large amounts of data from different systems, building & enhancing a data lake-house & subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required:
- Experience in the manufacturing industry (metal industry is a plus)
- 4+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Spark, Flink, and Hadoop, as well as NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical skills, with the ability to extract actionable insights from raw data to help improve the business.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
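Since ETL pipelines and Python are the core hard requirements above, a minimal, self-contained extract-transform-load sketch may help illustrate the expectation; the CSV data, table name, and column names are all invented for illustration:

```python
import csv
import io
import sqlite3

# Invented sample data standing in for a CSV export from a plant system.
RAW_CSV = """machine_id,temperature_c,reading_time
M1,71.5,2024-01-01T00:00:00
M2,not_a_number,2024-01-01T00:00:00
M1,69.0,2024-01-01T01:00:00
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse CSV text into one dict per row."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: coerce types and drop rows that fail basic cleaning."""
    clean = []
    for row in rows:
        try:
            row["temperature_c"] = float(row["temperature_c"])
        except ValueError:
            continue  # skip unparseable readings
        clean.append(row)
    return clean

def load(rows: list[dict], conn: sqlite3.Connection) -> None:
    """Load: write the cleaned rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS readings "
        "(machine_id TEXT, temperature_c REAL, reading_time TEXT)"
    )
    conn.executemany(
        "INSERT INTO readings VALUES (:machine_id, :temperature_c, :reading_time)",
        rows,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]  # 2: dirty row dropped
```

A production pipeline would swap the in-memory pieces for real sources and sinks and add logging and retries, but the extract/transform/load separation stays the same.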
Nice To Have:
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).
Benefits And Perks:
- A culture that fosters innovation, creativity, continuous learning, and resilience
- Progressive leave policy promoting work-life balance
- Mentorship opportunities with highly qualified internal resources and industry-driven programs
- Multicultural peer groups and supportive workplace policies
- Annual workcation program allowing you to work from various scenic locations
- Experience the unique environment of a dynamic start-up
Why should you join TVARIT?
Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.
If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!
- Architectural Leadership:
- Design and architect robust, scalable, and high-performance Hadoop solutions.
- Define and implement data architecture strategies, standards, and processes.
- Collaborate with senior leadership to align data strategies with business goals.
- Technical Expertise:
- Develop and maintain complex data processing systems using Hadoop and its ecosystem (HDFS, YARN, MapReduce, Hive, HBase, Pig, etc.).
- Ensure optimal performance and scalability of Hadoop clusters.
- Oversee the integration of Hadoop solutions with existing data systems and third-party applications.
- Strategic Planning:
- Develop long-term plans for data architecture, considering emerging technologies and future trends.
- Evaluate and recommend new technologies and tools to enhance the Hadoop ecosystem.
- Lead the adoption of big data best practices and methodologies.
- Team Leadership and Collaboration:
- Mentor and guide data engineers and developers, fostering a culture of continuous improvement.
- Work closely with data scientists, analysts, and other stakeholders to understand requirements and deliver high-quality solutions.
- Ensure effective communication and collaboration across all teams involved in data projects.
- Project Management:
- Lead large-scale data projects from inception to completion, ensuring timely delivery and high quality.
- Manage project resources, budgets, and timelines effectively.
- Monitor project progress and address any issues or risks promptly.
- Data Governance and Security:
- Implement robust data governance policies and procedures to ensure data quality and compliance.
- Ensure data security and privacy by implementing appropriate measures and controls.
- Conduct regular audits and reviews of data systems to ensure compliance with industry standards and regulations.
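The Hadoop ecosystem duties above (HDFS, YARN, MapReduce, Hive, HBase, Pig) all build on the MapReduce model; a toy, single-process sketch of the map, shuffle, and reduce phases in plain Python shows the idea (the documents are invented for illustration):

```python
from collections import defaultdict

def map_phase(doc: str):
    """Map: emit a (word, 1) pair for every word in the document."""
    return [(word.lower(), 1) for word in doc.split()]

def shuffle(pairs):
    """Shuffle: group all values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the grouped values per key."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["Hadoop scales", "Hadoop stores data", "data pipelines"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))  # e.g. counts["hadoop"] == 2
```

In a real cluster the map and reduce functions run distributed across nodes and the shuffle moves data over the network, but the programming model is exactly this.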
Job Purpose and Impact
The DevOps Engineer is a key position to strengthen the security automation capabilities which have been identified as a critical area for growth and specialization within Global IT’s scope. As part of the Cyber Intelligence Operation’s DevOps Team, you will be helping shape our automation efforts by building, maintaining and supporting our security infrastructure.
Key Accountabilities
- Collaborate with internal and external partners to understand and evaluate business requirements.
- Implement modern engineering practices to ensure product quality.
- Provide designs, prototypes and implementations incorporating software engineering best practices, tools and monitoring according to industry standards.
- Write well-designed, testable and efficient code using full-stack engineering capability.
- Integrate software components into a fully functional software system.
- Independently solve moderately complex issues with minimal supervision, while escalating more complex issues to appropriate staff.
- Proficiency in at least one configuration management or orchestration tool, such as Ansible.
- Experience with cloud monitoring and logging services.
Qualifications
Minimum Qualifications
- Bachelor's degree in a related field or equivalent experience
- Knowledge of public cloud services & application programming interfaces
- Working experience with continuous integration and delivery practices
Preferred Qualifications
- 3-5 years of relevant experience in IT, IS, or software development
- Experience in:
- Code repositories such as Git
- Scripting languages (Python & PowerShell)
- Using Windows, Linux, Unix, and mobile platforms within cloud services such as AWS
- Cloud infrastructure as a service (IaaS) / platform as a service (PaaS), microservices, Docker containers, Kubernetes, Terraform, Jenkins
- Databases such as Postgres, SQL, Elastic
CoinFantasy is looking for a tech enthusiast working primarily on blockchain technology to be part of the core blockchain team at CoinFantasy. You would be a part of the Roadmap team that is working on the architecture, design, development, and deployment of our decentralised platform.
Your primary responsibilities would be analysing requirements, designing blockchain technology around a certain business model, and writing smart contracts.
Job Responsibilities
- Administer our blockchain, database, and DevOps infrastructure.
- Cross team collaboration to coordinate safe, efficient releases.
- Build complex pipelines for databases, messaging, storage, and compute in AWS.
- Build deployment pipelines with GitHub Actions.
- Build tools to reduce occurrences of errors and improve our protocols.
- Develop software to integrate with internal back-end systems.
- Perform root cause analysis for production errors.
- Investigate and resolve technical issues.
- Design procedures for system troubleshooting and maintenance.
Requirements
- 8+ years of Experience working with DevOps, Infrastructure, Site Reliability or Cloud Engineering
- Understanding of the entire tech stack of blockchain dApps
- Strong experience working with any configuration management tools
- Languages: Any modern programming language
- Experience working with some of the major public clouds. e.g. AWS, Azure
- Competent with the “basics”: E.g. Computer Networking
- Self-motivated individual with enthusiasm for learning and building things
- Collaborative, communicative, and confident in their abilities to work well with all team members at all seniority and skill levels
- Hands-on experience with Rust/Substrate and Contribution to open-source blockchain projects is an added advantage
About Us
CoinFantasy is a Play to Invest platform that brings the world of investment to users through engaging games. With multiple categories of games, it aims to make investing fun, intuitive, and enjoyable for users.
It features a sandbox environment in which users are exposed to the end-to-end investment journey without risking financial losses.
Website: https://www.coinfantasy.io/
Benefits
- Competitive Salary
- An opportunity to be part of the Core team in a fast-growing company
- A fulfilling, challenging and flexible work experience
- Practically unlimited professional and career growth opportunities
Job Description - Manager Sales
Min 15 years experience,
Should have experience selling the cloud IT SaaS product portfolio that Savex deals with,
Team management experience, leading the cloud business and its teams
Sales manager - Cloud Solutions
Reporting to Sr Management
Good personality
Distribution background
Keen on Channel partners
Good database of OEMs and channel partners.
Age group - 35 to 45yrs
Male Candidate
Good communication
B2B Channel Sales
Location - Bangalore
If interested reply with cv and below details
Total exp -
Current ctc -
Exp ctc -
Np -
Current location -
Qualification -
Total exp Channel Sales -
What are the Cloud IT products, you have done sales for?
What is the Annual revenue generated through Sales?
- Bachelor of Computer Science or Equivalent Education
- At least 5 years of experience in a relevant technical position.
- Azure and/or AWS experience
- Strong in CI/CD concepts and technologies like GitOps (Argo CD)
- Hands-on experience with DevOps Tools (Jenkins, GitHub, SonarQube, Checkmarx)
- Experience with Helm Charts for package management
- Strong in Kubernetes, OpenShift, and Container Network Interface (CNI)
- Experience with programming and scripting languages (Spring Boot, NodeJS, Python)
- Strong container image management experience using Docker and distroless concepts
- Familiarity with Shared Libraries for code reuse and modularity
- Excellent communication skills (verbal, written, and presentation)
Note: Looking for immediate joiners only.
Staff DevOps Engineer with Azure
EGNYTE YOUR CAREER. SPARK YOUR PASSION.
Egnyte is a place where we spark opportunities for amazing people. We believe that every role has meaning, and every Egnyter should be respected. With 22,000+ customers worldwide and growing, you can make an impact by protecting their valuable data. When joining Egnyte, you’re not just landing a new career, you become part of a team of Egnyters that are doers, thinkers, and collaborators who embrace and live by our values:
Invested Relationships
Fiscal Prudence
Candid Conversations
ABOUT EGNYTE
Egnyte is the secure multi-cloud platform for content security and governance that enables organizations to better protect and collaborate on their most valuable content. Established in 2008, Egnyte has democratized cloud content security for more than 22,000 organizations, helping customers improve data security, maintain compliance, prevent and detect ransomware threats, and boost employee productivity on any app, any cloud, anywhere. For more information, visit www.egnyte.com.
Our Production Engineering team enables Egnyte to provide customers access to their data 24/7 by providing best in class infrastructure.
ABOUT THE ROLE
We store billions of files and multiple petabytes of data. We observe more than 11K API requests per second on average. To make that possible and to provide the best possible experience, we rely on great engineers. For us, people who own their work, from start to finish, are integral. Our engineers are part of the process from design to code, to test, to deployment, and back again for further iterations. You can, and will, touch every level of the infrastructure depending on the day and the project you are working on. The ideal candidate should be able to take a complex problem and execute it end to end, and to mentor and set higher standards for the rest of the team and for new hires.
WHAT YOU’LL DO:
• Design, build and maintain self-hosted and cloud environments to serve our own applications and services.
• Collaborate with software developers to build stable, scalable and high-performance solutions.
• Taking part in big projects like migrating solutions from self-hosted environments to the cloud, from virtual machines to Kubernetes, from monolith to microservices.
• Proactively make our organization and technology better!
• Advise others on how DevOps can make a positive impact on their work.
• Share knowledge, mentor more junior team members while also still learning and gaining new skills.
• Maintain consistently high standards of communication, productivity, and teamwork across all teams.
YOUR QUALIFICATIONS:
• 5+ years of proven experience in a DevOps Engineer, System Administrator or Developer role, working on infrastructure or build processes.
• Expert knowledge of Microsoft Azure.
• Programming prowess (Python, Golang).
• Knowledge and experience about deployment and maintenance of Java and Python apps using application and web servers (Tomcat, Nginx, etc.).
• Ability to solve complex problems with simple, elegant and clean code.
• Practical knowledge of CI/CD solutions, GitLab CI or similar.
• Practical knowledge of Docker as a tool for testing and building an environment.
• Knowledge of Kubernetes and related technologies.
• Experience with metric-based monitoring solutions.
• Solid English skills to effectively communicate with other team members.
• Good understanding of the Linux Operating System on the administration level.
• Drive to grow as a DevOps Engineer (we value open-mindedness and a can-do attitude).
• Strong sense of ownership and ability to drive big projects.
BONUS SKILLS:
• Work experience as a Microsoft Azure architect.
• Experience in Cloud migrations projects.
• Leadership skills and experience.
COMMITMENT TO DIVERSITY, EQUITY, AND INCLUSION:
At Egnyte, we celebrate our differences and thrive on our diversity for our employees, our products, our customers, our investors, and our communities. Egnyters are encouraged to bring their whole selves to work and to appreciate the many differences that collectively make Egnyte a higher-performing company and a great place to be.
· IMMEDIATE JOINER
Professional experience: 5+ years in Confluent Kafka administration
· Demonstrated design and development experience.
· Must have proven knowledge and practical application of Confluent Kafka (Producers/Consumers/Kafka Connectors/Kafka Streams/ksqlDB/Schema Registry)
· Experience in performance optimization of consumers and producers.
· Good experience debugging issues related to offsets, consumer lag, and partitions.
· Experience with Administrative tasks on Confluent Kafka.
· Kafka admin experience including, but not limited to, setting up new Kafka clusters, creating topics, granting permissions, resetting offsets, purging data, setting up connectors and replicator tasks, troubleshooting issues, monitoring Kafka cluster health and performance, and backup and recovery.
· Experience in implementing security measures for Kafka clusters, including access controls and encryption, to protect sensitive data.
· Experience with Kafka cluster install/upgrade techniques.
· Good experience writing unit tests using JUnit and Mockito
· Experience working on client-facing projects.
· Exposure to any cloud environment like Azure is an added advantage.
· Experience in developing or working on REST Microservices
· Experience in Java and Spring Boot is a plus
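Consumer lag, mentioned above as a frequent debugging target, is simply the log-end offset minus the last committed offset per partition; a plain-Python model with made-up offset numbers illustrates the calculation (a real setup would read these values from Kafka's admin APIs or the kafka-consumer-groups tool):

```python
# Invented per-partition offsets; in practice these come from the cluster.
log_end_offsets = {0: 1500, 1: 980, 2: 2040}    # latest offset per partition
committed_offsets = {0: 1495, 1: 980, 2: 1200}  # consumer group's committed offsets

def partition_lags(log_end, committed):
    """Lag per partition = log-end offset minus last committed offset."""
    return {p: log_end[p] - committed.get(p, 0) for p in log_end}

lags = partition_lags(log_end_offsets, committed_offsets)
total_lag = sum(lags.values())             # 845 messages behind overall
worst_partition = max(lags, key=lags.get)  # partition 2 is furthest behind
```

A steadily growing lag on one partition, as on partition 2 here, usually points at a slow or stuck consumer rather than an undersized group.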
Job Description: Data Engineer
Experience: Over 4 years
Responsibilities:
- Design, develop, and maintain scalable data pipelines for efficient data extraction, transformation, and loading (ETL) processes.
- Architect and implement data storage solutions, including data warehouses, data lakes, and data marts, aligned with business needs.
- Implement robust data quality checks and data cleansing techniques to ensure data accuracy and consistency.
- Optimize data pipelines for performance, scalability, and cost-effectiveness.
- Collaborate with data analysts and data scientists to understand data requirements and translate them into technical solutions.
- Develop and maintain data security measures to ensure data privacy and regulatory compliance.
- Automate data processing tasks using scripting languages (Python, Bash) and big data frameworks (Spark, Hadoop).
- Monitor data pipelines and infrastructure for performance and troubleshoot any issues.
- Stay up to date with the latest trends and technologies in data engineering, including cloud platforms (AWS, Azure, GCP).
- Document data pipelines, processes, and data models for maintainability and knowledge sharing.
- Contribute to the overall data governance strategy and best practices.
Qualifications:
- Strong understanding of data architectures, data modelling principles, and ETL processes.
- Proficiency in SQL (e.g., MySQL, PostgreSQL) and experience with big data querying languages (e.g., Hive, Spark SQL).
- Experience with scripting languages (Python, Bash) for data manipulation and automation.
- Experience with distributed data processing frameworks (Spark, Hadoop) (preferred).
- Familiarity with cloud platforms (AWS, Azure, GCP) for data storage and processing (a plus).
- Experience with data quality tools and techniques.
- Excellent problem-solving, analytical, and critical thinking skills.
- Strong communication, collaboration, and teamwork abilities.
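Data quality checks like those listed in the responsibilities often start with simple completeness and uniqueness rules; a minimal sketch, with invented records and column names:

```python
def run_quality_checks(rows, required, unique_key):
    """Flag rows with missing required fields or a duplicated key."""
    issues = []
    seen = set()
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) in (None, ""):
                issues.append((i, f"missing {col}"))
        key = row.get(unique_key)
        if key in seen:
            issues.append((i, f"duplicate {unique_key}={key}"))
        seen.add(key)
    return issues

# Invented batch of records for illustration.
batch = [
    {"order_id": "A1", "amount": 10.0},
    {"order_id": "A2", "amount": None},
    {"order_id": "A1", "amount": 7.5},
]
issues = run_quality_checks(batch, required=["order_id", "amount"], unique_key="order_id")
# issues: row 1 is missing an amount, row 2 reuses order_id A1
```

In a real pipeline these checks would run on each batch before loading, with failures routed to a quarantine table or an alert rather than a Python list.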
Key Responsibilities:
- Develop and Maintain CI/CD Pipelines: Design, implement, and manage CI/CD pipelines using GitOps practices.
- Kubernetes Management: Deploy, manage, and troubleshoot Kubernetes clusters to ensure high availability and scalability of applications.
- Cloud Infrastructure: Design, deploy, and manage cloud infrastructure on AWS, utilizing services such as EC2, S3, RDS, Lambda, and others.
- Infrastructure as Code: Implement and manage infrastructure using IaC tools like Terraform, CloudFormation, or similar.
- Monitoring and Logging: Set up and manage monitoring, logging, and alerting systems to ensure the health and performance of the infrastructure.
- Automation: Identify and automate repetitive tasks to improve efficiency and reliability.
- Security: Implement security best practices and ensure compliance with industry standards.
- Collaboration: Work closely with development, QA, and operations teams to ensure seamless integration and delivery of products.
Required Skills and Qualifications:
- Experience: 2-5 years of experience in a DevOps role.
- AWS: In-depth knowledge of AWS services and solutions.
- CI/CD Tools: Experience with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar.
- GitOps Expertise: Proficient in GitOps methodologies and tools.
- Kubernetes: Strong hands-on experience with Kubernetes and container orchestration.
- Scripting and Automation: Proficient in scripting languages such as Bash, Python, or similar.
- Infrastructure as Code (IaC): Hands-on experience with IaC tools like Terraform, CloudFormation, or similar.
- Monitoring Tools: Familiarity with monitoring and logging tools like Prometheus, Grafana, ELK stack, or similar.
- Version Control: Strong understanding of version control systems, primarily Git.
- Problem-Solving: Excellent problem-solving and debugging skills.
- Collaboration: Ability to work in a fast-paced, collaborative environment.
- Education: Bachelor’s or master’s degree in computer science or a related field.
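Monitoring and alerting with tools like Prometheus and Grafana, listed above, ultimately evaluates rules over recent metric samples; a toy, tool-agnostic sketch of one common rule, firing only on sustained breaches, with invented CPU samples:

```python
def should_alert(samples, threshold, min_consecutive):
    """Fire only when the threshold is breached for N consecutive samples,
    which avoids paging on a single transient spike."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak >= min_consecutive:
            return True
    return False

cpu_percent = [62, 91, 88, 72, 93, 94, 95]  # invented samples
alert = should_alert(cpu_percent, threshold=90, min_consecutive=3)  # True: 93, 94, 95
```

This is the same idea behind a Prometheus alert rule with a `for:` duration: the condition must hold over a window, not at a single scrape.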
Responsibilities:
- Design, implement, and maintain robust CI/CD pipelines using Azure DevOps for continuous integration and continuous delivery (CI/CD) of software applications.
- Provision and manage infrastructure resources on Microsoft Azure, including virtual machines, containers, storage, and networking components.
- Implement and manage Kubernetes clusters for containerized application deployments and orchestration.
- Configure and utilize Azure Container Registry (ACR) for secure container image storage and management.
- Automate infrastructure provisioning and configuration management using tools like Azure Resource Manager (ARM) templates.
- Monitor application performance and identify potential bottlenecks using Azure monitoring tools.
- Collaborate with developers and operations teams to identify and implement continuous improvement opportunities for the DevOps process.
- Troubleshoot and resolve DevOps-related issues, ensuring smooth and efficient software delivery.
- Stay up-to-date with the latest advancements in cloud technologies, DevOps tools, and best practices.
- Maintain a strong focus on security throughout the software delivery lifecycle.
- Participate in code reviews to identify potential infrastructure and deployment issues.
- Effectively communicate with technical and non-technical audiences on DevOps processes and initiatives.
Qualifications:
- Proven experience in designing and implementing CI/CD pipelines using Azure DevOps.
- In-depth knowledge of Microsoft Azure cloud platform services (IaaS, PaaS, SaaS).
- Expertise in deploying and managing containerized applications using Kubernetes.
- Experience with Infrastructure as Code (IaC) tools like ARM templates.
- Familiarity with Azure monitoring tools and troubleshooting techniques.
- A strong understanding of DevOps principles and methodologies (Agile, Lean).
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
- Strong written and verbal communication skills.
- A minimum of one relevant Microsoft certification (e.g., Azure Administrator Associate, DevOps Engineer Expert) is highly preferred.
GCP Cloud Engineer:
- Proficiency in infrastructure as code (Terraform).
- Scripting and automation skills (e.g., Python, Shell). Knowing Python is a must.
- Collaborate with teams across the company (e.g., network, security, operations) to build complete cloud offerings.
- Design Disaster Recovery and backup strategies to meet application objectives.
- Working knowledge of Google Cloud
- Working knowledge of various tools, open-source technologies, and cloud services
- Experience working on Linux based infrastructure.
- Excellent problem-solving and troubleshooting skills
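Backup strategies like those mentioned above usually combine short-term daily retention with longer-term weekly retention; a toy sketch of such a policy using only the standard library (the retention counts are invented for illustration):

```python
from datetime import date, timedelta

def backups_to_keep(today, daily=7, weekly=4):
    """Toy retention policy: keep the last `daily` daily backups plus
    `weekly` weekly (Monday) backups from before the current week."""
    keep = {today - timedelta(days=i) for i in range(daily)}
    this_monday = today - timedelta(days=today.weekday())
    for week in range(1, weekly + 1):
        keep.add(this_monday - timedelta(weeks=week))
    return keep

keep = backups_to_keep(date(2024, 3, 15))
# Keeps Mar 9-15 daily, plus the Mondays Mar 4, Feb 26, Feb 19, Feb 12.
```

A real disaster-recovery design would layer this kind of schedule with cross-region copies and tested restores, since a backup that has never been restored is only a hope.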
Responsibilities:
Develop and maintain a high-quality, scalable, and efficient Java codebase for our ad-serving platform.
Collaborate with cross-functional teams including product managers, designers, and other developers to understand requirements and translate them into technical solutions.
Design and implement new features and functionalities in the ad-serving system, focusing on performance optimization and reliability.
Troubleshoot and debug complex issues in the ad server environment, providing timely resolutions to ensure uninterrupted service.
Conduct code reviews, provide constructive feedback, and enforce coding best practices to maintain code quality and consistency across the platform.
Stay updated with emerging technologies and industry trends in ad serving and digital advertising, and integrate relevant innovations into our platform.
Work closely with DevOps and infrastructure teams to deploy and maintain the ad-serving platform in a cloud-based environment.
Collaborate with stakeholders to gather requirements, define technical specifications, and estimate development efforts for new projects and features.
Mentor junior developers, sharing knowledge and best practices to foster a culture of continuous learning and improvement within the development team.
Participate in on-call rotations and provide support for production issues as needed, ensuring maximum uptime and reliability of the ad-serving platform.
About the role:
We are seeking a highly skilled Azure DevOps Engineer with a strong background in backend development to join our rapidly growing team. The ideal candidate will have a minimum of 4 years of experience and extensive experience in building and maintaining CI/CD pipelines, automating deployment processes, and optimizing infrastructure on Azure. Additionally, expertise in backend technologies and development frameworks is required to collaborate effectively with the development team in delivering scalable and efficient solutions.
Responsibilities
- Collaborate with development and operations teams to implement continuous integration and deployment processes.
- Automate infrastructure provisioning, configuration management, and application deployment using tools such as Ansible and Jenkins.
- Design, implement, and maintain Azure DevOps pipelines for continuous integration and continuous delivery (CI/CD).
- Develop and maintain build and deployment pipelines, ensuring that they are scalable, secure, and reliable.
- Monitor and maintain the health of the production infrastructure, including load balancers, databases, and application servers.
- Automate the software development and delivery lifecycle, including code building, testing, deployment, and release.
- Familiarity with Azure CLI, Azure REST APIs, Azure Resource Manager templates, Azure billing/cost management, and the Azure Management Console
- Must have experience with at least one programming language (Java, .NET, Python)
- Ensure high availability of the production environment by implementing disaster recovery and business continuity plans.
- Build and maintain monitoring, alerting, and trending operational tools (CloudWatch, New Relic, Splunk, ELK, Grafana, Nagios).
- Stay up to date with new technologies and trends in DevOps and make recommendations for improvements to existing processes and infrastructure.
- Contribute to backend development projects, ensuring robust and scalable solutions.
- Work closely with the development team to understand application requirements and provide technical expertise in backend architecture.
- Design and implement database schemas.
- Identify and implement opportunities for performance optimization and scalability of backend systems.
- Participate in code reviews, architectural discussions, and sprint planning sessions.
- Stay updated with the latest Azure technologies, tools, and best practices to continuously improve our development and deployment processes.
- Mentor junior team members and provide guidance and training on best practices in DevOps.
Required Qualifications
- BS/MS in Computer Science, Engineering, or a related field
- 4+ years of experience as an Azure DevOps Engineer (or in a similar role), with experience in backend development.
- Strong understanding of CI/CD principles and practices.
- Expertise in Azure DevOps services, including Azure Pipelines, Azure Repos, and Azure Boards.
- Experience with infrastructure automation tools like Terraform or Ansible.
- Proficient in scripting languages like PowerShell or Python.
- Experience with Linux and Windows server administration.
- Strong understanding of backend development principles and technologies.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a team.
- Problem-solving and analytical skills.
- Experience with industry frameworks and methodologies: ITIL/Agile/Scrum/DevOps
- Excellent problem-solving, critical thinking, and communication skills.
- Experience working in a product-based company.
What we offer:
- Competitive salary and benefits package
- Opportunity for growth and advancement within the company
- Collaborative, dynamic, and fun work environment
- Possibility to work with cutting-edge technologies and innovative projects
Job Title: Backend Developer
Job Description: We are seeking a skilled Backend Developer to join our dynamic team. The ideal candidate will be responsible for designing, implementing, and maintaining the server-side logic, databases, and APIs of our applications. Key responsibilities include collaborating with cross-functional teams to develop scalable and efficient backend systems, troubleshooting and resolving issues, and staying updated on industry trends. Proficiency in programming languages such as Python, Java, or Node.js, along with experience in database management and API development, are essential for success in this role. Strong problem-solving skills, attention to detail, and a passion for creating robust and high-performance backend solutions are highly valued.
Requirements:
Proficiency in one or more backend programming languages (e.g., Python, Java, Node.js)
Experience with database management systems (e.g., MySQL, PostgreSQL, MongoDB)
Knowledge of RESTful API design and development
Familiarity with cloud services (e.g., AWS, Azure) is a plus
Strong problem-solving and troubleshooting skills
Collaborative mindset with excellent communication skills
Ability to work in a fast-paced and dynamic environment
About Kiru:
Kiru is a forward-thinking payments startup on a mission to revolutionise the digital payments landscape in Africa and beyond. Our innovative solutions will reshape how people transact, making payments safer, faster, and more accessible. Join us on our journey to redefine the future of payments.
Position Overview:
We are searching for a highly skilled and motivated DevOps Engineer to join our dynamic team in Pune, India. As a DevOps Engineer at Kiru, you will play a critical role in ensuring our payment infrastructure's reliability, scalability, and security.
Key Responsibilities:
- Utilize your expertise in technology infrastructure configuration to manage and automate infrastructure effectively.
- Collaborate with cross-functional teams, including Software Developers and technology management, to design and implement robust and efficient DevOps solutions.
- Configure and maintain a secure backend environment focusing on network isolation and VPN access.
- Implement and manage monitoring solutions like ZipKin, Jaeger, New Relic, or DataDog and visualisation and alerting solutions like Prometheus and Grafana.
- Work closely with developers to instrument code for visualisation and alerts, ensuring system performance and stability.
- Contribute to the continuous improvement of development and deployment pipelines.
- Collaborate on the selection and implementation of appropriate DevOps tools and technologies.
- Troubleshoot and resolve infrastructure and deployment issues promptly to minimize downtime.
- Stay up-to-date with emerging DevOps trends and best practices.
- Create and maintain comprehensive documentation related to DevOps processes and configurations.
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- Proven experience as a DevOps Engineer or in a similar role.
- Experience configuring infrastructure on Microsoft Azure
- Experience with Kubernetes as a container orchestration technology
- Experience with Terraform and Azure ARM or Bicep templates for infrastructure provisioning and management.
- Experience configuring and maintaining secure backend environments, including network isolation and VPN access.
- Proficiency in setting up and managing monitoring and visualization tools such as ZipKin, Jaeger, New Relic, DataDog, Prometheus, and Grafana.
- Ability to collaborate effectively with developers to instrument code for visualization and alerts.
- Strong problem-solving and troubleshooting skills.
- Excellent communication and teamwork skills.
- A proactive and self-motivated approach to work.
Desired Skills:
- Experience with Azure Kubernetes Services and managing identities across Azure services.
- Previous experience in a financial or payment systems environment.
About Kiru:
At Kiru, we believe that success is achieved through collaboration. We recognise that every team member has a vital role to play, and it's the partnerships we build within our organisation that drive our customers' success and our growth as a business.
We are more than just a team; we are a close-knit partnership. By bringing together diverse talents and fostering powerful collaborations, we innovate, share knowledge, and continually learn from one another. We take pride in our daily achievements but never stop challenging ourselves and supporting each other. Together, we reach new heights and envision a brighter future.
Regardless of your career journey, we provide the guidance and resources you need to thrive. You will have everything required to excel through training programs, mentorship, and ongoing support. At Kiru, your success is our success, and that success matters because we are the essential partners for the world's most critical businesses. These companies manufacture, transport, and supply the world's essential goods.
Equal Opportunities and Accommodations Statement:
Kiru is committed to fostering a workplace and global community where inclusion is celebrated and where you can bring your authentic self, because that's who we're interested in. If you are interested in this role but don't meet every qualification in the job description, don't hesitate to apply. We are an equal opportunity employer.
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture by focusing on a long-term DevOps roadmap: we identify the technical and cultural issues in implementing DevOps practices and work with the respective teams to fix them and raise overall productivity. We also run training sessions for developers on the importance of DevOps. Our services include DevOps, DevSecOps, FinOps, cost optimization, CI/CD, observability, cloud security, containerization, cloud migration, site reliability, performance optimization, SIEM and SecOps, serverless automation, Well-Architected Reviews, MLOps, and Governance, Risk & Compliance. We assess technology architecture, security, governance, compliance, and the DevOps maturity model for technology companies, helping them optimize cloud costs, streamline their technology architecture, and set up processes that improve the availability and reliability of their websites and applications. We set up tools for monitoring, logging, and observability, and we focus on bringing the DevOps culture to the organization to improve its efficiency and delivery.
Job Description
Our Mission
Our mission is to help customers achieve their business objectives by providing innovative, best-in-class consulting, IT solutions and services and to make it a joy for all stakeholders to work with us. We function as a full stakeholder in business, offering a consulting-led approach with an integrated portfolio of technology-led solutions that encompass the entire Enterprise value chain.
Our Customer-centric Engagement Model defines how we engage with you, offering specialized services and solutions that meet the distinct needs of your business.
Our Culture
Culture forms the core of our foundation, and our effort towards creating an engaging workplace has shaped Infra360 Solutions Pvt Ltd.
Our Tech-Stack:
- Azure DevOps, Azure Kubernetes Service, Docker, Active Directory (Microsoft Entra)
- Azure IAM and managed identities, Virtual Network, VM Scale Sets, App Service, Cosmos DB
- Azure MySQL; scripting (PowerShell, Python, Bash)
- Azure Security, security documentation, security compliance
- AKS, Blob Storage, Azure Functions, Virtual Machines, Azure SQL
- AWS - IAM, EC2, EKS, Lambda, ECS, Route 53, CloudFormation, CloudFront, S3
- GCP - GKE, Compute Engine, App Engine, SCC
- Kubernetes, Linux, Docker & microservices architecture
- Terraform & Terragrunt
- Jenkins & Argo CD
- Ansible, Vault, Vagrant, SaltStack
- CloudFront, Apache, Nginx, Varnish, Akamai
- MySQL, Aurora, PostgreSQL, AWS Redshift, MongoDB
- Elasticsearch, Redis, Aerospike, Memcached, Solr
- ELK, Fluentd, Elastic APM & Prometheus/Grafana stack
- Java (Spring/Hibernate/JPA/REST), Node.js, Ruby, Rails, Erlang, Python
What does this role hold for you?
- Infrastructure as Code (IaC)
- CI/CD and configuration management
- Managing Azure Active Directory (Entra)
- Keeping infrastructure costs to a minimum
- Performing root-cause analysis (RCA) of production issues and providing resolutions
- Setting up failover, DR, backups, logging, monitoring, and alerting
- Containerizing applications on the Kubernetes platform
- Capacity planning for each environment's infrastructure
- Ensuring zero outages of critical services
- Database administration of SQL and NoSQL databases
- Setting up the right set of security measures
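To make the backup/DR responsibility concrete: retention is usually policy-driven. The following is a minimal sketch of a grandfather-father-son rotation; the function, policy numbers, and dates are illustrative assumptions, not anything specified by the role:

```python
from datetime import date

def keep_backup(day: date, today: date,
                daily: int = 7, weekly: int = 4, monthly: int = 12) -> bool:
    """Grandfather-father-son retention sketch: keep the last `daily`
    days, Sunday backups for `weekly` weeks, and first-of-month
    backups for `monthly` months (31-day upper bound per month)."""
    age = (today - day).days
    if age < 0:
        return False            # backup "from the future" - ignore
    if age < daily:
        return True             # son: recent daily backups
    if day.weekday() == 6 and age < weekly * 7:
        return True             # father: weekly (Sunday) backups
    if day.day == 1 and age < monthly * 31:
        return True             # grandfather: monthly backups
    return False

today = date(2024, 6, 30)
print(keep_backup(date(2024, 6, 28), today))  # True  (within 7 days)
print(keep_backup(date(2024, 6, 1), today))   # True  (first of month)
print(keep_backup(date(2024, 6, 12), today))  # False (mid-month weekday)
```

A pruning script would simply delete every stored backup for which `keep_backup` returns False.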
Requirements
Apply if you have…
- A graduate or postgraduate degree in Computer Science or a related field
- 2-4 years of strong DevOps experience on Azure in a Linux environment
- Strong interest in working with our tech stack
- Excellent communication skills
- Ability to work with minimal supervision as a self-starter
- Hands-on experience with at least one scripting language, such as Bash, Python, or Go
- Experience with version control systems like Git
- Understanding of Azure cloud computing services and cloud computing delivery models (IaaS, PaaS, and SaaS)
- Strong scripting or programming skills for automating tasks (PowerShell/Bash)
- Knowledge of and experience with CI/CD tools: Azure DevOps, Jenkins, GitLab, etc.
- Knowledge of and experience with at least one IaC tool (ARM Templates or Terraform)
- Strong experience managing production systems day in and day out
- Experience finding and fixing issues across the layers of a production architecture
- Experience with automation tools like Ansible/SaltStack and Jenkins
- Experience with the Docker/Kubernetes platform and managing OpenStack (desirable)
- Experience with HashiCorp tools (Vault, Vagrant, Terraform, Consul) and VirtualBox (desirable)
- Experience with monitoring tools like Prometheus/Grafana/Elastic APM
- Experience with logging tools like ELK/Loki
- Experience using Microsoft Azure cloud services
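Several of the requirements above (scripting, CI/CD, and production troubleshooting) meet in the post-deploy health gate of a pipeline. As a hedged sketch in plain Python (the function, attempt counts, and return labels are assumptions; a real pipeline stage would probe an HTTP health endpoint of the deployed service):

```python
import time

def health_gate(check, attempts=3, delay=0.0):
    """Run a post-deploy health check with retries; return 'promote'
    if the check ever passes, else 'rollback'. `check` is any callable
    returning True/False (in practice it might GET /healthz)."""
    for _ in range(attempts):
        if check():
            return "promote"
        time.sleep(delay)  # back-off between probes
    return "rollback"

# Simulated checks (a real pipeline would probe the live service):
print(health_gate(lambda: True))          # promote
flaky = iter([False, False, True])
print(health_gate(lambda: next(flaky)))   # promote (passes on 3rd try)
print(health_gate(lambda: False))         # rollback
```

In a CI/CD tool such as Azure DevOps or Jenkins, the "rollback" branch would trigger redeployment of the previous artifact.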
If you are passionate about infrastructure and cloud technologies and want to contribute to innovative projects, we encourage you to apply. Infra360 offers a dynamic work environment and opportunities for professional growth.
Interview Process
Application Screening => Test/Assessment => 2 Rounds of Technical Interviews => CEO Round => Final Discussion
This opening is with an MNC
ROLE AND RESPONSIBILITIES
Should be able to work as an individual contributor and maintain good relationships with stakeholders. Should be proactive in learning new skills per business requirements. Familiar with extracting relevant data and cleansing and transforming it into insights that drive business value, through the use of data analytics, data visualization, and data modeling techniques.
QUALIFICATIONS AND EDUCATION REQUIREMENTS
Technical Bachelor’s Degree.
Non-Technical Degree holders should have 1+ years of relevant experience.
JOB Requirements and Responsibilities:
#SeniorSystemadministrator
- #ActiveDirectory Domain, #GroupPolicies, #Domaincontroller migration and upgrades.
- File and Print sharing, #NTFS permissions. #FileServer #migrations.
- #MicrosoftExchange or #Office365 messaging, #Outlook Configurations.
- Knowledge of Data #Backups, Backup Strategies, Experience on #backuptools will be an additional advantage.
- Basic knowledge of #Routers, #Firewalls, NAT, and #VPN configuration; #SonicWall preferable.
- Knowledge and working experience on #TicketingSystems & #RemoteAdministration tools.
- Good #DesktopTroubleshooting experience.
- #AntiVirus installations and #Troubleshooting.
- Knowledge of #DHCP , #DNS Management.
- Ticketing tool and #RMM tool #Labtech, #Kaseya, #Autotask (Experience preferred)
Fintrac Global services
Required Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent experience.
- 5+ years of experience in a DevOps role, preferably for a SaaS or software company.
- Expertise in cloud computing platforms (e.g., AWS, Azure, GCP).
- Proficiency in scripting languages (e.g., Python, Bash, Ruby).
- Extensive experience with CI/CD tools (e.g., Jenkins, GitLab CI, Travis CI).
- Extensive experience with NGINX and similar web servers.
- Strong knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
- Familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation).
- Ability to work on-call as needed and respond to emergencies in a timely manner.
- Experience with high-transaction e-commerce platforms.
Preferred Qualifications:
- Certifications in cloud computing or DevOps are a plus (e.g., AWS Certified DevOps Engineer, Azure DevOps Engineer Expert).
- Experience in a high-availability, 24x7x365 environment.
- Strong collaboration, communication, and interpersonal skills.
- Ability to work independently and as part of a team.
Title/Role: Python Django Consultant
Experience: 8+ Years
Work Location: Indore / Pune / Chennai / Vadodara
Notice period: Immediate to 15 Days Max
Key Skills: Python, Django, Crispy Forms, Authentication, Bootstrap, jQuery, Server Side Rendered, SQL, Azure, React, Django DevOps
Job Description:
- Should have experience creating forms with Django; knowledge of Crispy Forms is a plus.
- Must have leadership experience.
- Should have a good understanding of function-based and class-based views.
- Should have a good understanding of authentication (JWT and token authentication).
- Django – at least one senior developer with deep Django experience; the other one or two can be mid-to-senior Python or Django developers.
- Frontend – must have React/Angular and CSS experience.
- Database – ideally SQL, but the most senior member should have solid DB experience.
- Cloud – Azure preferred, but cloud-agnostic.
- Consulting / client project background is ideal.
Django Stack:
- Django
- Server Side Rendered HTML
- Bootstrap
- jQuery
- Azure SQL
- Azure Active Directory
- Server-side rendering with jQuery is older tech, but it is what we are OK with for internal tools. This is a good combination of a late-adopter agile stack integrated within an enterprise. Potentially we can push toward React for some discrete projects or pages that need more dynamism.
Django Devops:
- Should have expertise in deploying and managing Django on Azure.
- Django deployment to Azure via Docker.
- Django connection to Azure SQL.
- Django auth integration with Active Directory.
- Terraform scripts to make this setup seamless.
- Easy, proven deployment/setup to AWS and GCP.
- Load balancing, more advanced services, task queues, etc.
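A minimal settings fragment for the Django-to-Azure-SQL connection described above might look as follows. This is a sketch under stated assumptions: the third-party mssql-django backend and the environment-variable names are illustrative choices, not project specifics:

```python
# settings.py fragment (illustrative only; the mssql-django backend and
# the AZURE_SQL_* environment-variable names are assumptions).
import os

DATABASES = {
    "default": {
        "ENGINE": "mssql",  # backend provided by the mssql-django package
        "NAME": os.getenv("AZURE_SQL_DB", ""),
        "HOST": os.getenv("AZURE_SQL_HOST", ""),  # e.g. <server>.database.windows.net
        "USER": os.getenv("AZURE_SQL_USER", ""),
        "PASSWORD": os.getenv("AZURE_SQL_PASSWORD", ""),
        "PORT": "1433",  # default SQL Server port
        "OPTIONS": {"driver": "ODBC Driver 18 for SQL Server"},
    }
}
```

Reading credentials from the environment keeps the same Docker image deployable across environments, with Terraform (or the pipeline) injecting the values.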
Mandatory skills:
- .NET
- AWS/Azure
Location: Chennai
Development background
We are looking for a "Sr. Software Engineer (DevOps)" for a reputed client in Bangalore (permanent role).
Experience: 4+ Yrs
Responsibilities:
• As part of a team, you will design, develop, and maintain scalable multi-cloud DevOps blueprints.
• Understand the overall virtualization platform architecture in cloud environments and design best-in-class solutions that fit the SaaS offering and legacy application modernization.
• Continuously improve the CI/CD pipelines, tools, processes, procedures, and systems relating to developer productivity.
• Collaborate continuously with the product development teams to implement CI/CD pipelines.
• Contribute subject-matter expertise on developer productivity, DevOps, and infrastructure automation best practices.
Mandatory Skills:
• 1+ years of commercial server-side software development experience & 3+ years of commercial DevOps experience.
• Strong scripting skills (Java or Python) are a must.
• Experience with automation tools such as Ansible, Chef, Puppet etc.
• Hands-on experience with CI/CD tools such as GitLab, Jenkins, Nexus, Artifactory, Maven, Gradle
• Hands-on working experience in developing or deploying microservices is a must.
• Hands-on working experience with at least one popular cloud infrastructure, such as AWS / Azure / GCP / Red Hat OpenStack, is a must.
• Knowledge about microservices hosted in leading cloud environments
• Experience with containerizing applications (Docker preferred) is a must
• Hands-on working experience of automating deployment, scaling, and management of containerized applications (Kubernetes) is a must.
• Strong problem-solving, analytical skills and good understanding of the best practices for building, testing, deploying and monitoring software
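On the Kubernetes point above: rolling updates are governed by maxSurge and maxUnavailable, and it helps to be able to reason about the pod-count bounds they imply. The helper below is a small sketch of Kubernetes' documented rounding rules (surge rounds up, unavailable rounds down), not a real API:

```python
import math

def rolling_update_bounds(replicas, max_surge="25%", max_unavailable="25%"):
    """Return (max_pods, min_available_pods) during a rolling update.
    Percentage values follow Kubernetes' rounding: maxSurge rounds up,
    maxUnavailable rounds down. Absolute integers pass through."""
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            frac = replicas * int(value[:-1]) / 100
            return math.ceil(frac) if round_up else math.floor(frac)
        return int(value)
    surge = resolve(max_surge, round_up=True)
    unavailable = resolve(max_unavailable, round_up=False)
    return replicas + surge, replicas - unavailable

print(rolling_update_bounds(4))  # (5, 3): at most 5 pods, at least 3 serving
```

With the defaults, a 4-replica deployment briefly runs up to 5 pods while never dropping below 3 ready pods, which is the kind of arithmetic that matters when capacity is tight.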
Desirable Skills:
• Experience working with Secret management services such as HashiCorp Vault is desirable.
• Experience working with Identity and access management services such as Okta, Cognito is desirable.
• Experience with monitoring systems such as Prometheus, Grafana is desirable.
Educational Qualifications and Experience:
• B.E/B.Tech/MCA/M.Tech (Computer science/Information science/Information Technology is a Plus)
• 4 to 6 years of hands-on experience in server-side application development & DevOps