50+ Python Jobs in Delhi, NCR and Gurgaon | Python Job openings in Delhi, NCR and Gurgaon
Wishlist’s mission is to amplify company performance by igniting the power of people. We understand that companies don’t build a business, they build people, and the people build the business. Our rewards and recognition platform takes human psychology, digitizes it, and then connects people to business performance.
We accomplish our mission through our values:
- Build the House
- Be Memorable
- Win Together
- Seek Solutions
We are looking for a talented and experienced machine learning engineer to build efficient, data-driven artificial intelligence systems that advance our predictive automation capabilities. The candidate should be highly skilled in statistics and programming, with the ability to confidently assess, analyze, and organize large amounts of data as well as a deep understanding of data science and software engineering principles.
Responsibilities -
- Design, develop, and optimize recommendation systems, leveraging content-based, collaborative, and hybrid filtering techniques.
- Conduct end-to-end development of recommendation algorithms, from prototyping and experimentation to deployment and monitoring.
- Build and maintain data pipelines to support real-time and batch recommendation use cases.
- Implement KNN search, cosine similarity, semantic search, and other relevant techniques to improve recommendation accuracy (see the similarity sketch after this list).
- Utilize A/B testing and other experimental methods to validate the performance of recommendation models.
- Collaborate with cross-functional teams, including product managers, engineers, and designers, to define recommendation requirements and ensure effective integration.
- Monitor, troubleshoot, and improve recommendation models to ensure optimal system performance and scalability.
- Document models, experiments, and analysis results clearly and comprehensively.
- Stay up-to-date with the latest ML/AI and Gen AI trends, techniques, and tools, bringing innovative ideas to enhance our platform and improve the customer experience.
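The similarity-search bullet above is the core retrieval step in many recommenders. Below is a minimal sketch of item-to-item retrieval with cosine similarity using scikit-learn; the item vectors are random placeholders standing in for learned embeddings.

```python
# Minimal sketch: nearest-neighbour retrieval over item embeddings with
# cosine similarity. The embeddings here are random placeholders.
import numpy as np
from sklearn.neighbors import NearestNeighbors

item_vectors = np.random.rand(1000, 64)          # hypothetical item embeddings
knn = NearestNeighbors(n_neighbors=5, metric="cosine")
knn.fit(item_vectors)

query = item_vectors[42].reshape(1, -1)          # items similar to item 42
distances, indices = knn.kneighbors(query)
print(indices[0])                                # ids of the 5 nearest items
print(1 - distances[0])                          # cosine similarity = 1 - distance
```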
Required Skills -
- 3+ years of experience in building recommendation systems, with a strong understanding of content-based, collaborative filtering, and hybrid approaches.
- Bachelor’s or Master’s degree in Computer Science, Data Science, Machine Learning, or a related field.
- Strong knowledge of machine learning techniques, statistics, and algorithms related to recommendations.
- Proficiency in Python, R, or similar programming languages.
- Hands-on experience with ML libraries and frameworks (e.g., TensorFlow, PyTorch, Scikit-Learn).
- Experience with KNN search, cosine similarity, semantic search, and other similarity measures.
- Familiarity with NLP techniques and libraries (e.g., spaCy, Hugging Face) to improve content-based recommendations.
- Hands-on experience deploying recommendation systems on cloud platforms like AWS, GCP, or Azure.
- Experience with MLflow or similar tools for model tracking and lifecycle management.
- Excellent problem-solving abilities and experience in data-driven experimentation and A/B testing.
- Strong communication skills and ability to work effectively in a collaborative team environment.
Benefits -
- Competitive salary as per the market standard.
- Directly work with the Executive Leadership.
- Join a great workplace & culture.
- Company-paid medical insurance for you and your family.
Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable model tracking, model experimentation, and model automation
- Develop scalable ML pipelines
- Develop MLOps components in the machine learning development life cycle using a model repository (either of): MLflow or Kubeflow Model Registry (see the MLflow sketch after this list)
- Develop MLOps components in the machine learning development life cycle using machine learning services (either of): Kubeflow, DataRobot, Hopsworks, Dataiku, or any relevant ML E2E PaaS/SaaS
- Work across all phases of the model development life cycle to build MLOps components
- Build the knowledge base required to deliver increasingly complex MLOps projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
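For the model-repository bullet above, here is a minimal sketch of experiment tracking and model logging with MLflow, assuming a default local tracking store; the experiment name, parameter, and toy model are placeholders.

```python
# Minimal MLflow tracking sketch: log a parameter, a metric, and a model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
mlflow.set_experiment("demo-experiment")          # placeholder experiment name

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")      # stores the model artifact
```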
Required Qualifications
- 3-5 years of experience building production-quality software.
- B.E/B.Tech/M.Tech in Computer Science or related technical degree, or equivalent
- Strong experience in System Integration, Application Development, or Data Warehouse projects across technologies used in the enterprise space
- Knowledge of MLOps, machine learning, and Docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- CI/CD experience (e.g., Jenkins, GitHub Actions)
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable model tracking, model experimentation, and model automation
- Develop scalable ML pipelines
- Develop MLOps components in the machine learning development life cycle using a model repository (either of): MLflow or Kubeflow Model Registry
- Develop MLOps components in the machine learning development life cycle using machine learning services (either of): Kubeflow, DataRobot, Hopsworks, Dataiku, or any relevant ML E2E PaaS/SaaS
- Work across all phases of the model development life cycle to build MLOps components
- Build the knowledge base required to deliver increasingly complex MLOps projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 5.5-9 years of experience building production-quality software
- B.E/B.Tech/M.Tech in Computer Science or related technical degree, or equivalent
- Strong experience in System Integration, Application Development, or Data Warehouse projects across technologies used in the enterprise space
- Expertise in MLOps, machine learning, and Docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- Experience developing CI/CD components for production-ready ML pipelines.
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Team handling, problem-solving, project management, and communication skills, plus creative thinking
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Responsibilities
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills
- Generative AI: Proficiency with SaaS LLMs and the surrounding tooling, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities.
- Familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face. Techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization (a minimal generation sketch follows this list).
- Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
- Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
- Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
- Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis.
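As referenced in the skills list above, here is a minimal sketch of text generation with an open-source model via Hugging Face transformers; "gpt2" is a small public placeholder checkpoint, not a production recommendation.

```python
# Minimal sketch: local text generation with a small open-source checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")   # placeholder model
result = generator(
    "Generative AI can improve enterprise software by",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```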
About Tazapay
Tazapay is a Singapore-based B2B payments startup, backed by Sequoia Capital, Saison Capital, and RTP Global, enabling small and medium-sized businesses (SMBs) to conduct secure cross-border commerce. Founded by experts in payments, e-commerce, and cross-border trade, Tazapay is focused on driving digital transformation in the B2B space, addressing post-pandemic shifts and new opportunities with a culture of openness, innovation, and growth.
What Awaits You?
Joining us now means being part of an exceptional team on an exciting journey just as we’re preparing for significant growth. This is a unique opportunity to help build something from the ground up, with the satisfaction of seeing your work impact thousands of users. You’ll experience growth across all areas – Sales, Software Development, Marketing, HR, Accounting, and more – and share in a culture of openness, innovation, and memorable experiences.
Are you ready for the ride?
Explore what you could accomplish with us!
Backend Engineer Role
Responsibilities (not exhaustive)
- Design, write, and deliver highly scalable, reliable, and fault-tolerant systems with minimal guidance.
- Participate in code and design reviews to maintain high development standards.
- Collaborate with product management to define and execute the feature roadmap.
- Translate business requirements into scalable and extensible design.
- Proactively manage stakeholder communication regarding deliverables, risks, changes, and dependencies.
- Coordinate with cross-functional teams (Mobile, DevOps, Data, UX, QA, etc.) on planning and execution.
- Continuously improve code quality, product performance, and customer satisfaction.
- Willingness to learn new languages and methodologies.
- Display a strong sense of ownership.
- Engage in service capacity and demand planning, software performance analysis, tuning, and optimization.
The Ideal Candidate
Education
- Degree in Computer Science or equivalent.
- 5+ years of experience in commercial software development within large distributed systems.
Experience
- Hands-on experience in designing, developing, testing, and deploying applications on one or more of the following: Golang, Ruby, Python, .Net Core, or Java for large-scale applications.
- In-depth knowledge of Linux as a production environment.
- Strong understanding of data structures, algorithms, distributed systems, and asynchronous architectures.
- Expert in at least one language: Golang, Python, Ruby, Java, C, C++.
- Proficient in OOP, including design patterns.
- Ability to design and implement low-latency RESTful services (see the sketch after this list).
- Proven experience in building backend services for high-volume traffic.
- Strong understanding of system performance and scaling.
- Excellent communication, analytical skills, and design abilities.
- Experience with data modeling in both relational and NoSQL databases.
- Able to continuously refactor applications for high-quality design.
- Skilled in planning, prioritizing, estimating, and executing releases predictably.
- Able to scope, review, and refine user stories for technical completeness and dependency mitigation.
- Eager to learn new technologies and tackle complex challenges.
- “Can-do” attitude.
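For the RESTful-services bullet above, here is a minimal sketch of a small, low-latency service in FastAPI, one of several stacks this role lists; the in-memory store and endpoints are hypothetical.

```python
# Minimal FastAPI sketch: async endpoints over an in-memory store.
# A production service would back this with a database and connection pooling.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
orders: dict[int, dict] = {}        # placeholder store

class Order(BaseModel):
    item: str
    quantity: int

@app.post("/orders/{order_id}")
async def create_order(order_id: int, order: Order):
    orders[order_id] = order.model_dump()   # pydantic v2; use .dict() on v1
    return {"order_id": order_id, **orders[order_id]}

@app.get("/orders/{order_id}")
async def read_order(order_id: int):
    if order_id not in orders:
        raise HTTPException(status_code=404, detail="order not found")
    return orders[order_id]

# Run with: uvicorn main:app --workers 4
```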
Nice to Have
- Familiarity with the Golang ecosystem.
- Experience running web services at scale; understanding of systems internals and networking.
- Knowledge of HTTP/HTTPS communication protocols.
Abilities and Traits
- Strong attention to product details and ability to meet deadlines.
- High focus and precision for extended periods of repetitive tasks.
- Proactive problem-solver with strong anticipation skills.
- Team-oriented and able to assist others in resolving issues.
- Quality-oriented and structured in approach.
- Exceptional planning, organization, and prioritization skills.
- Demonstrated logical and analytical thinking abilities.
If you’re excited by this role, we’d love to see how you can make an impact on our journey.
An Azure DevOps engineer should have a deep understanding of container principles and hands-on experience with Docker.
They should also be able to set up and manage clusters using Azure Kubernetes Service (AKS), and understand API management, Azure Key Vaults, ACR, and networking concepts such as virtual networks, subnets, NSGs, and route tables. Awareness of at least one API gateway product such as Apigee, Kong, or Azure APIM is a must, along with strong experience in IaC technologies such as Terraform, ARM/Bicep templates, GitHub pipelines, and Sonar.
- Designing DevOps strategies: Recommending strategies for migrating and consolidating DevOps tools, designing an Agile work management approach, and creating a secure development process
- Implementing DevOps development processes: Designing version control strategies, integrating source control, and managing build infrastructure
- Managing application configuration and secrets: Ensuring system and infrastructure availability, stability, scalability, and performance
- Automating processes: Overseeing code releases and deployments with an emphasis on continuous integration and delivery
- Collaborating with teams: Working with architects and developers to ensure smooth code integration, and collaborating with development and operations teams to define pipelines
- Documentation: Producing detailed development architecture designs, setting up the DevOps tools, and working with the CI/CD specialist to integrate the automated CI and CD pipelines with those tools
- Ensuring security and compliance/DevSecOps: Managing code quality and security policies
- Troubleshooting issues: Investigating issues and responding to customer queries
- Core Skills: Container principles and Docker; AKS cluster setup and management; API management (Apigee, Kong, or Azure APIM); Azure Key Vaults; ACR; networking concepts such as virtual networks, subnets, NSGs, and route tables; and IaC technologies such as Terraform, ARM/Bicep templates, GitHub pipelines, and Sonar.
- Additional Skills: Self-starter with the ability to execute tasks on time; excellent communication skills; ability to come up with multiple solutions for problems; interacts with client-side experts to resolve issues by providing correct pointers; excellent debugging skills; ability to break down tasks into smaller steps.
Job Description
We are looking for a talented Java Developer to work in overseas locations. You will be responsible for developing high-quality software solutions, working on both server-side components and integrations, and ensuring optimal performance and scalability.
Preferred Qualifications
- Experience with microservices architecture.
- Knowledge of cloud platforms (AWS, Azure).
- Familiarity with Agile/Scrum methodologies.
- Understanding of front-end technologies (HTML, CSS, JavaScript) is a plus.
Requirement Details
Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience).
Proven experience as a Java Developer or similar role.
Strong knowledge of Java programming language and its frameworks (Spring, Hibernate).
Experience with relational databases (e.g., MySQL, PostgreSQL) and ORM tools.
Familiarity with RESTful APIs and web services.
Understanding of version control systems (e.g., Git).
Solid understanding of object-oriented programming (OOP) principles.
Strong problem-solving skills and attention to detail.
Job Overview:
We are seeking a motivated and enthusiastic Junior AI/ML Engineer to join our dynamic team. The ideal candidate will have a foundational knowledge in machine learning, deep learning, and related technologies, with hands-on experience in developing ML models from scratch. You will work closely with senior engineers and data scientists to design, implement, and optimize AI solutions that drive innovation and improve our products and services.
Key Responsibilities:
- Develop and implement machine learning and deep learning models from scratch for various applications.
- Collaborate with cross-functional teams to understand requirements and provide AI-driven solutions.
- Utilize deep learning frameworks such as TensorFlow, PyTorch, Keras, and JAX for model development and experimentation.
- Employ data manipulation and analysis tools such as pandas, scikit-learn, and statsmodels to preprocess and analyze data.
- Apply visualization tools such as matplotlib (and spaCy's displaCy for NLP) to present data insights and model performance (a minimal sketch follows this list).
- Demonstrate a general understanding of data structures, algorithms, multi-threaded programming, and distributed computing concepts.
- Leverage knowledge of statistical and algorithmic models along with fundamental mathematical concepts, including linear algebra and probability.
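Here is a minimal sketch of the train-evaluate-visualize loop referenced above, using scikit-learn and matplotlib on a bundled toy dataset rather than project data.

```python
# Minimal sketch: train a classifier, report accuracy, and plot what it learned.
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)       # bundled toy dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))

plt.bar(range(X.shape[1]), clf.feature_importances_)   # visualize importances
plt.xlabel("feature index")
plt.ylabel("importance")
plt.show()
```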
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related field.
- Solid foundation in machine learning, deep learning, computer vision, and natural language processing (NLP).
- Proven experience in developing ML/deep learning models from scratch.
- Proficiency in Python and relevant libraries.
- Hands-on experience with deep learning frameworks such as TensorFlow, PyTorch, Keras, or JAX.
- Experience with data manipulation and analysis libraries like pandas, scikit-learn, and visualization tools like matplotlib.
- Strong understanding of data structures, algorithms, and multi-threaded programming.
- Knowledge of statistical models and fundamental mathematical concepts, including linear algebra and probability.
Skills and Competencies:
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities.
- Ability to work independently and as part of a team in a fast-paced environment.
- Eagerness to learn and stay updated with the latest advancements in AI/ML technologies.
Preferred Qualifications:
- Previous internship or project experience in AI/ML.
- Familiarity with cloud-based AI/ML services and tools.
Nirwana.AI is an equal opportunity employer and welcomes applicants from all backgrounds to apply.
About LambdaTest
We are the only cloud-based testing platform aimed at bringing the whole testing ecosystem to the cloud. Today we have a powerful cloud network of 2000+ real browsers and operating systems that helps testers in cross-browser and cross-platform compatibility testing.
Our Mission
On a mission to provide the most powerful, cutting-edge, comprehensive, and secure cloud test platform to empower software testers & developers globally to perform testing intelligently at scale.
Our Vision
We envision an integrated platform that professionals can rely on to perform and manage all types of tests without being limited by infrastructure dependencies, so people can focus on what matters most: their tests.
What You'll Do
- Ability to create tools, microsites, DevOps tooling, and technical solutions for testing.
- Experience in Object-Oriented Analysis and Design (OOAD) and development of software using UML methodology; good knowledge of J2EE design patterns and Core Java design patterns.
- Analyze test logs, create test reports, and coordinate with stakeholders
- Experience in web application and device test automation using Selenium, Robotium, Appium, or any equivalent tool(s) (see the Selenium sketch after this list).
- Strong experience with Agile development incorporating Continuous Integration and Continuous Delivery, utilizing technologies such as Git, Maven, Jenkins, Chef, Sonar, and PowerMock.
- Design and build scalable automated test frameworks and test suites working across technologies.
- Debug any issues faced.
- Golang, Docker, and Kubernetes experience is good to have
- Perform manual testing, the scope of which will encompass all functionalities of services, as a prequel to automation
- Experience working closely with development and business teams to communicate impacts and to understand business requirements
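For the Selenium bullet above, here is a minimal Python sketch of a browser-based functional check; the URL and locator are placeholders for an application under test.

```python
# Minimal Selenium 4 sketch: open a page and assert on its heading.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                 # assumes a local Chrome install
try:
    driver.get("https://example.com")       # placeholder URL
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert "Example" in heading.text        # a simple functional check
finally:
    driver.quit()
```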
What you should have
- A Bachelor's or Master's degree with 2 – 6 years of experience as a Developer or SDET.
- Comfortable communicating cross-functionally and across management levels in formal and informal settings.
- Ability to effectively articulate technical challenges and solutions.
- Shows creativity and initiative to improve product coverage and effectiveness.
- Ability to work in teams.
- Deal well with ambiguous/undefined problems; ability to think abstractly.
- Go-getter attitude.
Experience Level: Minimum 5 years
About Pace
Started in 1995 by first-generation entrepreneurs from IIMA & FMS Delhi, PACE has evolved from a fledgling NSE Broker to a premier boutique financial conglomerate over the last 25 years. Headquartered in New Delhi, we maintain offices at more than 300 locations in more than 75 cities across India, and our customer base is spread over 34 countries. We have also been consistently nominated as one of the best Investment Advisors in India by ICRA & CNBC. At PACE we are continuously innovating and building highly scalable backend systems and strategies that give a seamless experience to our customers. We are aggressively pursuing Fintech innovation now and working on the ambitious and potentially disruptive Fintech product ‘Pocketful’—a one-of-a-kind stock-broking platform.
About Pocketful (Fintech Division of Pace)
Founded by IIM-Ahmedabad, Yale, and Columbia alumni, Pocketful is a new-age Fintech broking platform, aimed at making financial markets accessible for all. We're constantly innovating and working on a disruptive platform. The team is highly skilled, young, and extremely hungry and we are looking for folks who fit this persona. We are backed by one of India's leading stock brokers Pace Stock Broking Services.
Overview:
We are seeking an experienced Engineering Manager or Tech Lead to join our dynamic team in the fintech industry, focusing on stockbroking solutions. The ideal candidate will have a strong technical background and leadership experience, with proficiency in our tech stacks: React.js, Flutter, Golang, and MongoDB.
Responsibilities:
- Lead and manage a team of engineers, providing guidance and mentorship.
- Oversee the design, development, and deployment of high-quality software solutions.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Ensure best practices in coding, architecture, and security are followed.
Requirements:
- Proven experience as an Engineering Manager or Tech Lead.
- Strong technical expertise in React.js, Flutter, Golang, and MongoDB.
- Excellent leadership and communication skills.
- Experience in the fintech industry, particularly in stockbroking, is a plus.
- Ability to work in a fast-paced, agile environment.
Qualifications:
- Minimum of 5+ years of experience in a technical management role.
- Bachelor's degree in Technology
- Strong project management skills, including the ability to prioritize and manage multiple tasks simultaneously.
- Excellent leadership and communication skills.
- Problem-solving and decision-making abilities.
- Results-oriented with a focus on delivering high-quality solutions.
Other details
Expected CTC: Depending on Experience & Skills
In-Person based out of Okhla, New Delhi
Culture
We’re still early-stage and we believe that the culture is an ever-evolving process. Help build the kind of culture you want in the organization. Best ideas come from collaboration and we firmly believe in that. We have a flat hierarchy, flexible with timings and we believe in continuous learning and adapting to changing needs. We want to scale fast but sustainably, keeping everyone’s growth in mind. We aim to make this job your last job.
UKG is looking to hire a Lead Software Engineer to join our extremely talented Data Science team. As a Lead Software Engineer at UKG, you’ll be embedded on the Data Science team where you can work on the next generation AI Platform. You’ll get to work directly with other Engineers, Software Testers, Business Analysts, Product Managers, and Directors, all of whom make up the team. In this highly collaborative environment, you will have the opportunity to grow as a software engineer, and even help mentor others.
This position requires excellent object-oriented programming skills and knowledge of design patterns. You will be involved in the deployment of our AI Platform/Services solution on the cloud. The job requires you to be able to design, develop, troubleshoot, and debug complex software applications at the enterprise level. We are looking for a software engineer who is passionate about programming and truly enjoys what they do. The ideal candidate for the Python Engineer position is someone who has a can-do attitude and is an innovative thinker.
UKG works in an agile environment where there are daily stand-ups, code reviews, and constant communication within each self-managed cross-functional team. The ability to communicate effectively with Business Analysts and Software Testers, as well as work closely with other team members are key components for success in this position.
Primary Responsibilities:
- Collaborate with members of the team to solve challenging engineering tasks on time and with high quality.
- Engage in code reviews and training of team members.
- Support continuous deployment pipeline code.
- Situationally troubleshoot production issues alongside the support team.
- Continually research and recommend product improvements.
- Create and integrate features for our enterprise software solution using the latest Python technologies.
- Write web services, business objects, and other middle-tier frameworks using Python
- Actively communicate with team members to clarify requirements and overcome obstacles to meet the team goals.
- Leverage open-source and other technologies and languages outside of the Python platform.
- Develop cutting-edge solutions to maximize the performance, scalability, and distributed processing capabilities of the system.
- Provide troubleshooting and root cause analysis for production issues that are escalated to the engineering team.
- Work with development teams in an agile context as it relates to software development, including Kanban, automated unit testing, test fixtures, and pair programming.
Qualifications
- 5-8 years of experience as a Python developer on enterprise projects using Python, Flask, FastAPI, Django, PyTest, Celery, and other Python frameworks.
- Software development experience including: object-oriented programming, concurrency programming, modern design patterns, RESTful service implementation, micro-service architecture, test-driven development, and acceptance testing.
- Familiarity with tools used to automate the deployment of an enterprise software solution to the cloud: Terraform, GitHub Actions, Concourse, Ansible, etc.
- Proficiency with Git as a version control system
- Experience with Docker and Kubernetes
- Experience with relational SQL and NoSQL databases, including MongoDB and MSSQL.
- Experience with object-oriented languages: Python, Java, Scala, C#, etc.
- Experience with testing tools such as PyTest, WireMock, xUnit, mocking frameworks, etc. (a minimal PyTest sketch follows this list)
- Experience with GCP technologies such as BigQuery, GKE, GCS, DataFlow, Kubeflow, and/or VertexAI
- Excellent problem solving and communication skills.
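As referenced in the testing-tools bullet above, here is a minimal PyTest sketch with parametrized cases and exception testing; discount() is a hypothetical function used only for illustration.

```python
# Minimal PyTest sketch: parametrized unit tests plus an exception check.
import pytest

def discount(price: float, percent: float) -> float:
    """Hypothetical unit under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

@pytest.mark.parametrize("price,percent,expected", [
    (100.0, 0, 100.0),
    (100.0, 25, 75.0),
    (80.0, 50, 40.0),
])
def test_discount(price, percent, expected):
    assert discount(price, percent) == expected

def test_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        discount(100.0, 150)
```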
Technical Skills:
- Ability to understand and translate business requirements into design.
- Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
- Experience in creating ETL jobs using Python/PySpark.
- Proficiency in creating AWS Lambda functions for event-based jobs.
- Knowledge of automating ETL processes using AWS Step Functions.
- Competence in building data warehouses and loading data into them.
Responsibilities:
- Understand business requirements and translate them into design.
- Assess AWS infrastructure needs for development work.
- Develop ETL jobs using Python/PySpark to meet requirements (see the sketch after this list).
- Implement AWS Lambda for event-based tasks.
- Automate ETL processes using AWS Step Functions.
- Build data warehouses and manage data loading.
- Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
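For the PySpark ETL bullet above, here is a minimal batch-job sketch that reads raw CSV from S3, cleans it, and writes partitioned Parquet back out; bucket names and columns are hypothetical.

```python
# Minimal PySpark ETL sketch: CSV in, cleaned and partitioned Parquet out.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-raw/orders/")
cleaned = (raw
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount") > 0)
           .withColumn("order_date", F.to_date("order_ts")))

(cleaned.write.mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated/orders/"))
spark.stop()
```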
There are various job roles within software development, each with its own focus and responsibilities. Some common job roles include:
1. **Software Engineer/Developer**: This role involves designing, developing, and testing software applications or systems.
2. **Front-end Developer**: Front-end developers focus on creating the user interface and experience of websites or applications using languages like HTML, CSS, and JavaScript.
3. **Back-end Developer**: Back-end developers work on the server-side of applications, managing databases, servers, and application logic using languages like Python, Java, or Node.js.
4. **Full-stack Developer**: Full-stack developers have expertise in both front-end and back-end development, allowing them to work on all aspects of an application.
5. **Mobile App Developer**: Mobile app developers specialize in creating applications for mobile devices, often using platforms like iOS (Swift) or Android (Java/Kotlin).
6. **DevOps Engineer**: DevOps engineers focus on streamlining the development process by automating tasks, managing infrastructure, and ensuring smooth deployment and operation of software.
7. **Quality Assurance (QA) Engineer**: QA engineers are responsible for testing software to ensure it meets quality standards and is free of bugs or errors.
8. **UI/UX Designer**: UI/UX designers work on designing the user interface and experience of software applications, focusing on usability and aesthetics.
Publicis Sapient Overview:
As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical design and implement components for the data engineering solution. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.
Job Summary:
As Senior Associate L2 in Data Engineering, you will translate client requirements into technical design and implement components for the data engineering solution. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, and wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms.
Role & Responsibilities:
Your role is focused on Design, Development and delivery of solutions involving:
• Data Integration, Processing & Governance
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Implement scalable architectural models for data processing and storage
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
1. Overall 5+ years of IT experience, with 3+ years in data-related technologies
2. Minimum 2.5 years of experience in Big Data technologies, and working exposure to related data services on at least one cloud platform (AWS / Azure / GCP)
3. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build an end-to-end data pipeline (see the streaming sketch after this list)
4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferable
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
6. Well-versed, working knowledge of data-platform-related services on at least one cloud platform, IAM, and data security
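For the Kafka/Spark Streaming competency above, here is a minimal real-time pipeline sketch using Spark Structured Streaming; the broker address and topic are hypothetical, and the job assumes the spark-sql-kafka connector is on the classpath.

```python
# Minimal Structured Streaming sketch: count Kafka events per minute.
# Requires the spark-sql-kafka connector package at submit time.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # placeholder
          .option("subscribe", "events")                      # placeholder topic
          .load()
          .selectExpr("CAST(value AS STRING) AS raw"))

# Processing-time window over arriving records.
counts = events.groupBy(F.window(F.current_timestamp(), "1 minute")).count()

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```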
Preferred Experience and Knowledge (Good to Have):
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD: infra provisioning on cloud, auto build & deployment pipelines, code quality
6. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
About us: AuditorsDesk is designed to make an audit firm's audit work paperless. Without the need to download any software, you can take your audit firm online. Your firm can easily maintain documents online and deliver high-quality audits under ICAI guidelines. Your teams and clients can collaborate and be more efficient in managing audit work papers. More than just a tool for auditors, it tracks engagement progress from planning to conclusion and enables the entire team to operate at its best efficiency.
Job Overview: We are seeking a highly skilled Sr. Manager who can lead our Developer Team, with a minimum of 12-15 years of professional experience. The ideal candidate should also demonstrate expertise working as an L2-level SDM. This is an exciting opportunity to join our dynamic IT team.
Note: Kindly share your updated resume with below required details:
Total Experience:
Ready to relocate (Yes/No):
Current annual CTC:
Expected annual CTC:
Current Location:
Notice Period:
Reason for leaving:
Responsibilities:
§ Leads a team in developing high-quality software for major technology firms.
§ Ensures team success, growth, and productivity, providing training and mentorship.
§ Participates in meetings, contributing to development strategies and product roadmap.
§ Reacts quickly to emerging technologies, producing significant feature enhancements.
§ Sets clear expectations for direct reports, meets regularly for one-on-one discussions.
§ Develops plans for team members' long-term growth with some assistance from division lead.
§ Serves as a role model for good coding practices and functions effectively as a Level 2 Software Development Engineer (L2 SDE).
§ Sets high expectations for the team, effectively communicates expectations, and provides feedback.
§ Identifies tasks and responsibilities for direct reports to foster their growth.
§ Drives large projects to completion and is viewed as a technical leader in a large domain.
§ Coordinates with others to deliver products requiring multiple components.
Requirements:
§ Provide constructive feedback to team members and demonstrate conflict resolution skills within and across teams.
§ Communicate effectively with users, technical teams, and management to gather requirements, identify tasks, estimate timelines, and meet production deadlines.
§ Apply professional software engineering best practices throughout the software development life cycle.
§ Offer technical guidance on architecture, code review, and best practices.
§ Possess 12-15 years of software development experience and minimum 10-12 years of team leadership experience.
§ Exhibit knowledge in computer science fundamentals, including object-oriented design, data structures, and algorithms.
§ Fluent in Java or a similar object-oriented language.
§ Implement and consume large-scale web services with strong problem-solving skills.
§ Desired experience and skills include excellent written and verbal communication, contribution to software design documentation, and ability to present complex technical designs concisely.
§ Work with relational and NoSQL databases and build highly scalable distributed software systems.
§ Backend skills: Spring Boot, Node.js, Python, Django, PHP, Go, Docker, and cloud computing environments.
§ Required frontend skills: Flutter, Angular, JavaScript, React.js, Vue.js.
§ Cloud platform skills: AWS (CloudFront, Elastic Beanstalk, ACM, S3, EC2, Lambda), GCP (VM Instance, Cloud Run, App Engine), Firebase (Cloud Messaging, Hosting, Dynamic Links).
§ DNS management on various providers like GoDaddy, BigRock, and Google Domain.
§ Deployment of web services on different cloud platforms, including expertise in WordPress deployment and management.
§ Proficient in Apache Solr, Apache Tomcat, Apache httpd, and Nginx.
§ Experience taking a project from scoping requirements through actual launch.
Job Type: On site
Job Location: Will be disclosed.
Employment Type: Permanent
Compensation: Best fit as per industry standard
How to Apply: Interested candidates who meet the above requirements and are ready to contribute to a dynamic team are encouraged to submit their resume at rahulATauditorsdeskDOTin
Please indicate "Application for SDM" in the subject line.
*Note: Immediate joiners will be given preference. Only shortlisted candidates will be contacted for interviews.
**********
Job Description: Backend Developer (PHP, Django, Docker)
About us: AuditorsDesk is designed to make an audit firm's audit work paperless. Without the need to download any software, you can take your audit firm online. Your firm can easily maintain documents online and deliver high-quality audits under ICAI guidelines. Your teams and clients can collaborate and be more efficient in managing audit work papers. More than just a tool for auditors, it tracks engagement progress from planning to conclusion and enables the entire team to operate at its best efficiency.
Job Overview: We are seeking a highly skilled Backend Developer with a minimum of 3 years of professional experience with PHP frameworks. The ideal candidate should also demonstrate expertise in working with Django and Docker. This is an exciting opportunity to join our dynamic IT team at our South Delhi location.
Responsibilities:
§ Design, develop, and maintain efficient and reliable PHP backend systems.
§ Collaborate with cross-functional teams to define, design, and ship new features.
§ Integrate user-facing elements developed by front-end developers with server-side logic.
§ Troubleshoot, debug, and optimize application performance.
§ Stay up-to-date with industry best practices and emerging technologies.
Requirements:
§ Minimum of 3 years of professional experience as a PHP-Backend Developer.
§ Proficient in a PHP framework, with a strong understanding of object-oriented programming.
§ Hands-on experience with Django and Docker.
§ Solid understanding of database design and optimization (e.g., MySQL, PostgreSQL).
§ Familiarity with front-end technologies (HTML, CSS, JavaScript) and their integration with backend services.
§ Ability to work in a fast-paced, collaborative team environment.
§ Strong problem-solving and communication skills.
Job Type: On site
Job Location: South Delhi (Nearby Nehru Enclave Metro Stn.)
Employment Type: Permanent
Compensation: Best fit as per industry standard
How to Apply: Interested candidates who meet the above requirements and are ready to contribute to a dynamic team are encouraged to submit their resume at hrATauditorsdeskDOTin
Please indicate "Backend (PHP) Developer Application" in the subject line.
*Note: Immediate joiners will be given preference. Only shortlisted candidates will be contacted for interviews.
Publicis Sapient Overview:
As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical design and implement components for the data engineering solution. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.
Job Summary:
As Senior Associate L1 in Data Engineering, you will do technical design and implement components for the data engineering solution. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, and wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferable.
Role & Responsibilities:
Job Title: Senior Associate L1 – Data Engineering
Your role is focused on Design, Development and delivery of solutions involving:
• Data Ingestion, Integration and Transformation
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
1. Overall 3.5+ years of IT experience, with 1.5+ years in data-related technologies
2. Minimum 1.5 years of experience in Big Data technologies
3. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build an end-to-end data pipeline. Working knowledge of real-time data pipelines is an added advantage.
4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferable
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
Preferred Experience and Knowledge (Good to Have):
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD: infra provisioning on cloud, auto build & deployment pipelines, code quality
6. Working knowledge of data-platform-related services on at least one cloud platform, IAM, and data security
7. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Who we are looking for
· A Natural Language Processing (NLP) expert with strong computer science fundamentals and experience in working with deep learning frameworks. You will be working at the cutting edge of NLP and Machine Learning.
Roles and Responsibilities
· Work as part of a distributed team to research, build and deploy Machine Learning models for NLP.
· Mentor and coach other team members
· Evaluate the performance of NLP models and ideate on how they can be improved
· Support internal and external NLP-facing APIs
· Keep up to date on current research around NLP, Machine Learning and Deep Learning
Mandatory Requirements
· A graduate in any discipline with at least 2 years of demonstrated experience as a Data Scientist.
Behavioural Skills
· Strong analytical and problem-solving capabilities.
· Proven ability to multi-task and deliver results within tight time frames
· Must have strong verbal and written communication skills
· Strong listening skills and eagerness to learn
· Strong attention to detail and the ability to work efficiently in a team as well as individually
Technical Skills
Hands-on experience with
· NLP
· Deep Learning
· Machine Learning
· Python
· BERT (a minimal sketch follows this list)
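For the BERT item above, here is a minimal sketch of putting a pretrained checkpoint to work via Hugging Face transformers; "bert-base-uncased" is a standard public checkpoint.

```python
# Minimal BERT sketch: masked-token prediction with a pretrained checkpoint.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill_mask("The customer wants to [MASK] a new account."):
    print(candidate["token_str"], round(candidate["score"], 3))
```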
Preferred Requirements
· Experience in Computer Vision is preferred
Role: Data Scientist
Industry Type: Banking
Department: Data Science & Analytics
Employment Type: Full Time, Permanent
Role Category: Data Science & Machine Learning
Hi
Profile : Network Support Engineer
Experience - 3+ years
Timing - Thurs-Mon: 10:30 PM to 7:30 AM
Location: Noida, Sec-62 (Work from Office)
Expertise & Hands on experience with the following:
- Creating new infrastructure from scratch
- Excellent working experience with LAN/WAN management and network devices
- Linux server installation and driver installation/configuration
- Network troubleshooting in both Windows and Linux environments
- Troubleshooting of Wi-Fi networks
- Solid understanding of networking concepts and protocols, including TCP/IP, DNS, DHCP, BGP, OSPF, VLANs, and VPNs.
- Perform network diagnostics using tools such as Wireshark, tcpdump, and SNMP.
- Investigate security incidents and implement measures to mitigate risks.
- Participate in on-call rotation to provide support for network emergencies.
- Working experience managing complex network infrastructure.
- Experience automating network device management using scripting.
- Strong analytical and problem-solving skills with the ability to quickly diagnose and resolve network issues.
- Cisco CCNA or equivalent certification is a plus.
Scripting for automation: Python, Bash (see the sketch below)
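As referenced above, here is a minimal Python automation sketch that runs a diagnostic command across devices over SSH using paramiko; hostnames, credentials, and the command are placeholders.

```python
# Minimal network-automation sketch: run one command on each device via SSH.
import paramiko

DEVICES = ["10.0.0.1", "10.0.0.2"]   # hypothetical device IPs

for host in DEVICES:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(host, username="netops", password="change-me", timeout=5)
        _, stdout, _ = client.exec_command("uptime")    # placeholder command
        print(host, stdout.read().decode().strip())
    except Exception as exc:
        print(host, "unreachable:", exc)
    finally:
        client.close()
```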
Non-Functional Requirements:
- Performs regular audits of server security and of websites exposed to the outside world.
- Documents any processes that employees need to follow in order to work successfully within our computing systems.
- Experience supporting technical teams (e.g., developers and/or IT teams)
- Strong attention to detail and ability to navigate complex issues in high-pressure situations
- The ability to effectively collaborate with various internal teams
- Excellent critical thinking and problem-solving skills.
- Ensures all problems are resolved in a timely manner, following SLAs.
- Ability to prioritize a wide range of workloads with critical deadlines.
- Experience in a 24x7 environment.
- Good communication skills
🚀 Exciting Opportunity: Data Engineer Position in Gurugram 🌐
Hello
We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!
Position: Data Engineer
Location: Gurugram (Gurgaon)
Experience: 5+ years
Key Skills:
- Python
- Spark, Pyspark
- Data Governance
- Cloud (AWS/Azure/GCP)
Main Responsibilities:
- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.
- Implement ETL processes for telemetry-based and stationary test data.
- Support in defining data governance, including data lifecycle management.
- Develop large-scale data processing engines and real-time search and analytics based on time series data.
- Ensure technical, methodological, and quality aspects.
- Support CI/CD processes.
- Foster know-how development and transfer, and the continuous improvement of leading technologies within Data Engineering.
- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.
Qualification Requirements:
- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.
- Proficiency in Python and the PyData stack (pandas/NumPy).
- Experience in high-level programming languages (C#/C++/Java).
- Familiarity with scalable processing environments like Dask (or Spark); see the Dask sketch below.
- Proficient in Linux and scripting languages (Bash Scripts).
- Experience in containerization and orchestration of containerized services (Kubernetes).
- Education in database technologies (SQL/OLAP and NoSQL).
- Interest in Big Data storage technologies (Elastic, ClickHouse).
- Familiarity with Cloud technologies (Azure, AWS, GCP).
- Fluent English communication skills (speaking and writing).
- Ability to work constructively with a global team.
- Willingness to travel for business trips during development projects.
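For the Dask item above, here is a minimal sketch of a pandas-style aggregation scaled out with Dask; the file glob and column names are hypothetical telemetry fields.

```python
# Minimal Dask sketch: lazy multi-file read, then a parallel aggregation.
import dask.dataframe as dd

df = dd.read_csv("telemetry/*.csv")               # lazily scans many files
summary = (df.groupby("vehicle_id")["speed"]      # hypothetical columns
             .mean()
             .compute())                          # triggers parallel execution
print(summary.head())
```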
Preferable:
- Working knowledge of vehicle architectures, communication, and components.
- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).
- Experience in time-series processing.
How to Apply:
Interested candidates, please share your updated CV/resume with me.
Thank you for considering this exciting opportunity.
A Delhi NCR based Applied AI & Consumer Tech company tackling one of the largest unsolved consumer internet problems of our time. We are a motley crew of smart, passionate and nice people who believe you can build a high performing company with a culture of respect aka a sports team with a heart aka a caring meritocracy.
Our illustrious angels include unicorn founders, serial entrepreneurs with exits, tech & consumer industry stalwarts and investment professionals/bankers.
We are hiring for our founding team (in Delhi NCR only, no remote) that will take the product from prototype to a landing! Opportunity for disproportionate non-linear impact, learning and wealth creation in a classic 0-1 with a Silicon Valley caliber founding team.
Key Responsibilities:
1. Data Strategy and Vision:
· Develop and drive the company's data analytics strategy, aligning it with overall business goals.
· Define the vision for data analytics, outlining clear objectives and key results (OKRs) to measure success.
2. Data Analysis and Interpretation:
· Oversee the analysis of complex datasets to extract valuable insights, trends, and patterns.
· Utilize statistical methods and data visualization techniques to present findings in a clear and compelling manner to both technical and non-technical stakeholders.
3. Data Infrastructure and Tools:
· Evaluate, select, and implement advanced analytics tools and platforms to enhance data processing and analysis capabilities.
· Collaborate with IT teams to ensure a robust and scalable data infrastructure, including data storage, retrieval, and security protocols.
4. Collaboration and Stakeholder Management:
· Collaborate cross-functionally with teams such as marketing, sales, and product development to identify opportunities for data-driven optimizations.
· Act as a liaison between technical and non-technical teams, ensuring effective communication of data insights and recommendations.
5. Performance Measurement:
· Establish key performance indicators (KPIs) and metrics to measure the impact of data analytics initiatives on business outcomes.
· Continuously assess and improve the accuracy and relevance of analytical models and methodologies.
Qualifications:
- Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or related field.
- Proven experience (5+ years) in data analytics, with a focus on leading analytics teams and driving strategic initiatives.
- Proficiency in data analysis tools such as Python, R, SQL, and advanced knowledge of data visualization tools.
- Strong understanding of statistical methods, machine learning algorithms, and predictive modelling techniques.
- Excellent communication skills, both written and verbal, to effectively convey complex findings to diverse audiences.
AWS Glue Developer
Work Experience: 6 to 8 Years
Work Location: Noida, Bangalore, Chennai & Hyderabad
Must-Have Skills: AWS Glue, DMS, SQL, Python, PySpark, data integrations, and DataOps
Job Reference ID: BT/F21/IND
Job Description:
Design, build and configure applications to meet business process and application requirements.
Responsibilities:
7+ years of work experience with ETL, data modelling, and data architecture. Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark. Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.), designing and developing data engineering solutions, and orchestrating them using Airflow.
Technical Experience:
Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, and batch and streaming data pipelines. Experience building data pipelines and applications to stream and process large datasets at low latency.
➢ Enhancements, new development, defect resolution and production support of Big data ETL development using AWS native services.
➢ Create data pipeline architecture by designing and implementing data ingestion solutions.
➢ Integrate data sets using AWS services such as Glue and Lambda functions, orchestrated with Airflow (a minimal Glue job skeleton follows this list).
➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.
➢ Author ETL processes using Python and PySpark.
➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.
➢ ETL process monitoring using CloudWatch Events.
➢ You will be working in collaboration with other teams; good communication is a must.
➢ Must have experience using AWS service APIs, the AWS CLI, and SDKs
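As referenced in the list above, here is a minimal AWS Glue (PySpark) job skeleton; the catalog database, table name, and S3 path are hypothetical.

```python
# Minimal AWS Glue job skeleton: catalog in, Spark transform, Parquet out.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders")       # hypothetical catalog entries
df = dyf.toDF().filter("amount > 0")              # simple Spark transform

df.write.mode("overwrite").parquet("s3://example-curated/orders/")
job.commit()
```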
Professional Attributes:
➢ Experience operating very large data warehouses or data lakes. Expert-level skills in writing and optimizing SQL. Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.
➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, DynamoDB, Athena, and Glue in an AWS environment.
➢ Expertise in S3, RDS, Redshift, Kinesis, and EC2 clusters is highly desired.
Qualification:
➢ Degree in Computer Science, Computer Engineering or equivalent.
Salary: Commensurate with experience and demonstrated competence
About Statcon Electronics
Statcon Electronics India Limited is a company specializing in the field of power electronics since its inception, with roots going back to 1986. It has served a prestigious clientele both nationally and internationally, including ABB, Alstom, BHEL, the Indian Air Force, the Indian Army, Indian Railways, GAIL, and Indian Oil, to name a few. SEIL has a diverse portfolio spanning 4 sectors: Railways, Power, Defence and Solar energy.
About the position
We are looking for multiple embedded software engineers with experience in developing firmware for controlling power electronics systems. Openings are available for senior and principal engineer positions across multiple product streams, and target markets include the Indian subcontinent, Africa and North America.
Required Technical skills
- Demonstrated experience in developing embedded firmware for real-time power electronics systems using 16/32 bit ARM microcontrollers (ST Microelectronics preferred).
- Must have shipped at least 1 major relevant product to the market which meets standard industry specifications.
- Experience with common communication protocols such as SPI, I2C, USB, UART, Bluetooth, Ethernet, RS232, and RS485
- Experience with peripherals and systems such as ADC, DAC, timer, GPIO, PWM, DMA, NVIC, serial and parallel interfaces, memory, bootloaders and watchdog timers
- Proficiency in advanced algorithm development using Embedded C
- Experience with embedded software tools such as editors, assemblers, compilers, debuggers, simulators, emulators and Flash/OTP programmers.
- Thorough understanding of power electronics topologies and control strategies
- A good grasp of digital signal processing techniques
- Ability to read and interpret component datasheets, PCB schematics and layout design
- Familiarity with using measurement devices such as oscilloscopes, multimeters, function generators and logic analyzers to bring up and debug hardware
- Demonstrated knowledge of firmware development best practices (code reviews, unit tests, Software Configuration Management, version control using git etc.)
- Excellent documentation skills and a good grasp of the English language
- Comfort in using modern collaborative tools such as Slack/Microsoft Teams, JIRA/Trello/Microsoft Planner, Confluence/Microsoft OneNote and the like.
Bonus Technical skills
- Experience using at least one scripting language, preferably Python
- Experience with designing for North American/European markets
- Experience with designing for IEC and BIS standards
- Familiarity with EMI/EMC process
- Ability to simulate power electronics systems using Simulink/MATLAB or equivalent open-source tools
- Familiarity with agile software development processes
- Experience with Linux administration using command line
Soft skills
- Ability to mentor junior engineers and generate testing procedures for people of all skill levels
- Excellent inter-personal skills
- Strong attention to detail with the ability to work on tight deadlines
- Team player with the ability to work independently under minimal supervision
- Excellent problem-solving skills
- Ability and desire to learn new technologies quickly
Qualifications
- Junior Engineer: A Master's (preferred) in Electrical/Electronics/Computer Engineering with 1 year of industrial experience, or a Bachelor's with 3 years of relevant industrial experience.
- Senior Engineer: A Master's (preferred) in Electrical/Electronics/Computer Engineering with 4 years of industrial experience, or a Bachelor's with 6 years of relevant industrial experience.
- Principal Engineer: A Master's (preferred) in Electrical /Electronics/Computer Engineering with 8 years of industrial experience, or a Bachelor's with 10 years of relevant industrial experience.
Responsibilities
- Develop, implement and test cutting-edge algorithms for medium and high-power AC-DC, DC-AC and AC-AC conversion of 1 phase and 3 phase AC systems up to 440 V AC, and DC-DC conversion of systems up to 1000 V DC.
- Develop sensing and IoT solutions for power electronics embedded systems
- Convert product requirements into technical specifications for Indian and international markets
- Cover all stages (pertaining to firmware) of V model for product development
- Generate technical documentation describing product functioning and test procedures for hand-off to SQA
- Debug product failures in the field using data-driven analysis techniques
- Mentor junior engineers
- Host research talks on emerging technologies within the company
Statcon Electronics is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, age, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Statcon Electronics is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. For more information, visit www.sindia.co.in.
- Proficiency in Python, Django and other allied frameworks;
- Expert in designing UI/UX interfaces;
- Expert in testing, troubleshooting, debugging and problem solving;
- Basic knowledge of SEO;
- Good communication;
- Team building and good acumen;
- Ability to perform;
- Continuous learning
YatriPay is a pioneering force in the travel industry, harnessing blockchain technology to create seamless travel experiences.
We're looking for a skilled Python Django Developer with expertise in PostgreSQL to join our dynamic team. You'll be responsible for developing and maintaining robust, user-friendly applications while collaborating with cross-functional teams to deliver transformative solutions. If you're enthusiastic about using technology to enhance travel and you have the expertise in Python, Django, and PostgreSQL, we invite you to apply. Join us in revolutionizing the travel sector with innovation, efficiency, and security. Your code could be the key to unlocking a new era in travel.
Key Qualifications
1) BS/MS in Computer Science or relevant field and 6+ years of experience in building backend services with Python 3.7.x using Django >= 2.2.1, Django REST framework >= 3.13.1, Flask >= 1.1.2, and relational database design and maintenance.
2) At least 4 years of solid experience working with the front-end MVC framework React and caching strategies (memcached, Redis).
3) Expertise in building microservices-based and cloud-based architectures, including development and deployment in AWS.
4) At least 3 years of experience with web application technologies: REST, MongoDB, MySQL, NoSQL, AWS Lambda, API Gateway, and web servers (Apache, Nginx).
5) Follows coding best practices to produce tested, scalable, reliable and maintainable code.
6) Hands-on experience in developing, deploying, and releasing large-scale applications.
Responsibilities
1) Understand complex requirements, scope and architect major features, and perform API technical design for junior developers to enhance the product at all layers.
2) Think in terms of future feature possibilities, backward compatibility, and application performance.
3) Build cloud-based Python Django software products and implement UI components in the React.js framework (a minimal Django REST framework sketch follows the stack list below).
4) Write quality code (with comments, unit tests and documentation); design, implement and manage data pipelines at enterprise scale, including data migration and production data maintenance.
5) Collaborate with different teams to conceptualize, design, and build highly scalable and reliable software solutions with REST APIs, following SDLC best practices and DevOps principles.
6) Work closely with product, project and business teams to translate user feedback and the company vision into a technical roadmap.
7) Possess a strong product-centric mindset. You should be interested in the way software products are built and comfortable being proactive with your ideas and opinions.
Must Have: BE/BTech/ME/MTech in Computer Science or Information Technology
Our Stack
Back-end: Python, Django REST, MySQL, Celery, RabbitMQ, SQLAlchemy, RDS
Front-end: JavaScript, React, HTML, CSS, Bootstrap
Ops: Docker, AWS, RDS, Terraform, GitHub Actions, AWS Route 53, AWS CloudWatch
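As a rough illustration of the back-end stack above, here is a minimal Django REST framework sketch. The Booking resource and its fields are hypothetical, and a configured Django project with rest_framework installed is assumed:

```python
from rest_framework import serializers
from rest_framework.response import Response
from rest_framework.views import APIView


class BookingSerializer(serializers.Serializer):
    # Illustrative fields; a real service would back this with a model
    traveller = serializers.CharField(max_length=100)
    destination = serializers.CharField(max_length=100)


class BookingView(APIView):
    """Validates a booking payload and echoes it back; persistence omitted."""

    def post(self, request):
        serializer = BookingSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        # In a real service the validated data would be saved to the database here
        return Response(serializer.validated_data, status=201)
```

Wired into a `urls.py` via `path("bookings/", BookingView.as_view())`, this gives the validate-then-respond request cycle typical of DRF services.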
Job Description:
As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:
Designing and developing data pipelines: You will design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala. This includes data ingestion, data transformation, and data loading processes (a minimal sketch follows these responsibilities).
Data modeling and database design: You will design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.
Data integration and orchestration: You will leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various data sources and targets. This includes scheduling and monitoring data pipelines.
Data quality and governance: You will implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.
Performance optimization: You will optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.
Monitoring and troubleshooting: You will monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation. You will work closely with cross-functional teams to resolve data-related problems.
Documentation and collaboration: You will document data pipelines, data flows, and data transformation processes. You will collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide data engineering support.
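A minimal PySpark sketch of the ingest-transform-load pattern described in these responsibilities; the mount paths, column names, and Delta output format are illustrative assumptions, not details from the posting:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales-pipeline").getOrCreate()

# Ingest: read raw CSV files landed in the data lake (path is a placeholder)
raw = spark.read.option("header", True).csv("/mnt/datalake/raw/sales/")

# Transform: type casting, deduplication, and a simple quality filter
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
)

# Load: write as Delta (common on Databricks), partitioned by ingest date
clean.write.format("delta").mode("append").partitionBy("order_date").save(
    "/mnt/datalake/curated/sales/"
)
```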
Skills and Qualifications:
Strong experience with Azure Databricks, Python, SQL, ADF, PySpark, and Scala.
Proficiency in designing and developing data pipelines and ETL processes.
Solid understanding of data modeling concepts and database design principles.
Familiarity with data integration and orchestration using Azure Data Factory.
Knowledge of data quality management and data governance practices.
Experience with performance tuning and optimization of data pipelines.
Strong problem-solving and troubleshooting skills related to data engineering.
Excellent collaboration and communication skills to work effectively in cross-functional teams.
Understanding of cloud computing principles and experience with Azure services.
We are looking for a talented C++ Developer with experience in the design, development, and debugging of multi-threaded, large-scale applications; a good understanding of data structures; familiarity with Linux packaging, functional testing, and deployment automation; and very good problem-solving skills.
Key responsibilities :
- Understand fundamental design principles and best practices for developing backend servers and web applications
- Gather requirements, scope functionality, estimate and translate those requirements into solutions
- Implement and integrate software features as per requirements
- Deliver across the entire app life cycle
- Work in a product creation project and/or technology project with implementation or integration responsibilities
- Improve an existing code base, if required, and ability to read source code to understand data flow and origin
- Design effective data storage for the task at hand and know how to optimize query performance along the way
- Follow an agile methodology of development and delivery
- Strictly adhere to coding standards and internal practices; must be able to conduct code reviews
- Mentor and possibly lead junior developers
- Contribute towards innovation
- Performance optimization of apps
- Explain technologies and solutions to technical and non-technical stakeholders
- Diagnose bugs and other issues in products
- Continuously discover, evaluate, and implement new technologies to maximize development efficiency
Must have / Good to have:
- 5-7 years' experience with C++ development; 3+ years of relevant experience with modern C++ (11/14/17) would be a plus.
- Design and implementation of high-availability, high-performance applications in a Linux environment
- Advanced knowledge of C/C++, Object Oriented Design, STL
- Good with multithreading and data structures
- Develop back-end components to improve responsiveness and overall performance
- Familiarity with database design, integration with applications and Python packaging.
- Familiarity with front-end technologies (like JavaScript and HTML5), REST API, security considerations
- Familiarity with functional testing and deployment automation frameworks
- Experience in developing 3-4 production-ready applications using C++ as the programming language
- Experience in writing unit test cases including positive and negative test cases
- Experience with CI/CD pipeline code deployment (Git, SVN, Jenkins or TeamCity)
- Experience with Agile and DevOps methodology
- Very good problem-solving skills
- Experience with Web technologies is a plus.
Job Title: Data Engineer
Job Summary: As a Data Engineer, you will be responsible for designing, building, and maintaining the infrastructure and tools necessary for data collection, storage, processing, and analysis. You will work closely with data scientists and analysts to ensure that data is available, accessible, and in a format that can be easily consumed for business insights.
Responsibilities:
- Design, build, and maintain data pipelines to collect, store, and process data from various sources.
- Create and manage data warehousing and data lake solutions.
- Develop and maintain data processing and data integration tools.
- Collaborate with data scientists and analysts to design and implement data models and algorithms for data analysis.
- Optimize and scale existing data infrastructure to ensure it meets the needs of the business.
- Ensure data quality and integrity across all data sources.
- Develop and implement best practices for data governance, security, and privacy.
- Monitor data pipeline performance and errors, and troubleshoot issues as needed.
- Stay up-to-date with emerging data technologies and best practices.
Requirements:
Bachelor's degree in Computer Science, Information Systems, or a related field.
Experience with ETL tools like Matillion, SSIS, or Informatica
Experience with SQL and relational databases such as SQL Server, MySQL, PostgreSQL, or Oracle.
Experience in writing complex SQL queries
Strong programming skills in languages such as Python, Java, or Scala.
Experience with data modeling, data warehousing, and data integration.
Strong problem-solving skills and ability to work independently.
Excellent communication and collaboration skills.
Familiarity with big data technologies such as Hadoop, Spark, or Kafka.
Familiarity with data warehouse/Data lake technologies like Snowflake or Databricks
Familiarity with cloud computing platforms such as AWS, Azure, or GCP.
Familiarity with Reporting tools
Teamwork/ growth contribution
- Helping the team conduct interviews and identify the right candidates
- Adhering to timelines
- Timely status communication and upfront communication of any risks
- Teach, train, and share knowledge with peers.
- Good Communication skills
- Proven abilities to take initiative and be innovative
- Analytical mind with a problem-solving aptitude
Good to have :
Master's degree in Computer Science, Information Systems, or a related field.
Experience with NoSQL databases such as MongoDB or Cassandra.
Familiarity with data visualization and business intelligence tools such as Tableau or Power BI.
Knowledge of machine learning and statistical modeling techniques.
If you are passionate about data and want to work with a dynamic team of data scientists and analysts, we encourage you to apply for this position.
Senior Data Scientist
Your goal: To improve the education process and improve the student experience through data.
The organization: Data Science for Learning Services. Data Science and Machine Learning are core to Chegg. As a Student Hub, we want to ensure that students discover the full breadth of learning solutions we have to offer to get full value on their learning time with us. To create the most relevant and engaging interactions, we are solving a multitude of machine learning problems so that we can better model student behavior, link various types of content, optimize workflows, and provide a personalized experience.
The Role: Senior Data Scientist
As a Senior Data Scientist, you will focus on conducting research and development in NLP and ML. You will be responsible for writing production-quality code for data product solutions at Chegg. You will lead the identification and implementation of key projects for data processing and knowledge discovery.
Responsibilities:
• Translate product requirements into AIML/NLP solutions
• Think out of the box and design novel solutions for the problem at hand
• Write production-quality code
• Be able to design data and annotation collection strategies
• Identify key evaluation metrics and release requirements for data products
• Integrate new data and design workflows
• Innovate, share, and educate team members and community
Requirements:
• Working experience in machine learning, NLP, recommendation systems, experimentation, or related fields, with a specialization in NLP
• Working experience on large language models that cater to multiple tasks such as text generation, Q&A, summarization, translation, etc. is highly preferred
• Knowledge of MLOps and deployment pipelines is a must
• Expertise in supervised, unsupervised and reinforcement ML algorithms.
• Strong programming skills in Python
• Top data wrangling skills using SQL or NoSQL queries
• Experience using containers to deploy real-time prediction services
• Passion for using technology to help students
• Excellent communication skills
• Good team player and a self-starter
• Outstanding analytical and problem-solving skills
• Experience working with ML pipeline products such as AWS Sagemaker, Google ML, or Databricks a plus.
Why do we exist?
Students are working harder than ever before to stabilize their future. Our recent research study called State of the Student shows that nearly 3 out of 4 students are working to support themselves through college and 1 in 3 students feel pressure to spend more than they can afford. We founded our business on providing affordable textbook rental options to address these issues. Since then, we’ve expanded our offerings to supplement many facets of higher educational learning through Chegg Study, Chegg Math, Chegg Writing, Chegg Internships, Thinkful Online Learning, and more, to support students beyond their college experience. These offerings lower financial concerns for students by modernizing their learning experience. We exist so students everywhere have a smarter, faster, more affordable way to student.
Video Shorts
Life at Chegg: https://jobs.chegg.com/Video-Shorts-Chegg-Services
Certified Great Place to Work!: http://reviews.greatplacetowork.com/chegg
Chegg India: http://www.cheggindia.com/
Chegg Israel: http://insider.geektime.co.il/organizations/chegg
Thinkful (a Chegg Online Learning Service): https://www.thinkful.com/about/#careers
Chegg out our culture and benefits!
http://www.chegg.com/jobs/benefits
https://www.youtube.com/watch?v=YYHnkwiD7Oo
Chegg is an equal-opportunity employer
About Us - Celebal Technologies is a premier software services company in the field of Data Science, Big Data and Enterprise Cloud. Celebal Technologies helps you discover your competitive advantage by employing intelligent data solutions using cutting-edge technology that can bring massive value to your organization. The core offerings are around "Data to Intelligence", wherein we leverage data to extract intelligence and patterns, thereby facilitating smarter and quicker decision-making for clients. Celebal Technologies understands the core value of modern analytics for the enterprise, helping businesses improve their business intelligence and become more data-driven in architecting solutions.
Key Responsibilities
• As a part of the DevOps team, you will be responsible for configuration, optimization, documentation, and support of the CI/CD components.
• Creating and managing build and release pipelines with Azure DevOps and Jenkins.
• Assist in planning and reviewing application architecture and design to promote an efficient deployment process.
• Troubleshoot server performance issues & handle the continuous integration system.
• Automate infrastructure provisioning using ARM Templates and Terraform.
• Monitor and Support deployment, Cloud-based and On-premises Infrastructure.
• Diagnose and develop root cause solutions for failures and performance issues in the production environment.
• Deploy and manage Infrastructure for production applications
• Configure security best practices for application and infrastructure
Essential Requirements
• Good hands-on experience with cloud platforms like Azure, AWS & GCP. (Preferably Azure)
• Strong knowledge of CI/CD principles.
• Strong work experience with CI/CD implementation tools like Azure DevOps, TeamCity, Octopus Deploy, AWS CodeDeploy, and Jenkins.
• Experience of writing automation scripts with PowerShell, Bash, Python, etc.
• Experience with GitHub, JIRA, Confluence, and continuous integration (CI) systems.
• Understanding of secure DevOps practices
Good to Have -
• Knowledge of scripting languages such as PowerShell, Bash
• Experience with project management and workflow tools such as Agile, Jira, Scrum/Kanban, etc.
• Experience with Build technologies and cloud services. (Jenkins, TeamCity, Azure DevOps, Bamboo, AWS Code Deploy)
• Strong communication skills and ability to explain protocol and processes with team and management.
• Must be able to handle multiple tasks and adapt to a constantly changing environment.
• Must have a good understanding of SDLC.
• Knowledge of Linux, Windows server, Monitoring tools, and Shell scripting.
• Self-motivated, demonstrating the ability to achieve results in new technologies with minimal supervision.
• Organized and flexible, with the analytical ability to solve problems creatively.
ROLE: Full Stack Developer
About the company:
CogniTensor is an analytical software company that brings data to the heart of decision-making. CogniTensor leverages its product, DeepOptics - an integrated platform to implement 3A (Automation, Analytics and AI) at scale.
Cognitensor has customers ranging across Finance, Energy, Commodity, Retail & Manufacturing. More details can be found on our website: https://www.cognitensor.com/
Our strategic investors include Shell and CIIE.CO (IIM-A/Accenture).
Qualification & Experience:
- BE/B.Tech Degree in Computer Programming, Computer Science, or a related field.
- 2+ years of experience as a Software Developer.
- Hands-on experience in developing finance applications is a must
Roles & Responsibilities:
We are looking for a Full Stack Developer to produce scalable software solutions. You’ll be part of a cross-functional team that’s responsible for the full software development life cycle, from conception to deployment.
As a Full Stack Developer, you should be comfortable around both front-end and back-end coding languages, development frameworks and third-party libraries. You should also be a team player with a knack for visual design and utility, familiar with Agile methodologies, and have testing skills.
- Work with development teams and product managers to ideate software solutions
- Design client-side and server-side architecture
- Develop and manage well-functioning databases and applications
- Write effective APIs
- Write technical documentation
- Excellent communication and teamwork skills
Technical Skills:
Must Have
- React JS
- Git / Bitbucket,
- Express JS, Python, HTML, CSS, Node JS
- CI/CD like CircleCI
- Postgres or any DB knowledge
- Familiarity with AWS or Azure or both
Good to Have
- Docker
- Redux
- Android development
- React Native
- Electron
- GraphQL
- Jira
What’s in it for you:
● An opportunity to lead a business segment
● Extensive liaising with customers and partners
● A rewarding career progression
Preferred Location:
Delhi NCR
Job Responsibilities:
Support, maintain, and enhance existing and new product functionality for trading software in a real-time, multi-threaded, multi-tier server architecture environment, and create high- and low-level designs for concurrent, high-throughput, low-latency software architecture.
- Provide software development plans that meet future needs of clients and markets
- Evolve the new software platform and architecture by introducing new components and integrating them with existing ones
- Perform memory, CPU and resource management
- Analyze stack traces, memory profiles and production incident reports from traders and support teams
- Propose fixes, and enhancements to existing trading systems
- Adhere to release and sprint planning with the Quality Assurance Group and Project Management
- Work on a team building new solutions based on requirements and features
- Attend and participate in daily scrum meetings
Required Skills:
- JavaScript and Python
- Multi-threaded browser and server applications
- Amazon Web Services (AWS)
- REST
**THIS IS A 100% WORK FROM OFFICE ROLE**
We are looking for an experienced DevOps engineer who will help our team establish DevOps practice. You will work closely with the technical lead to identify and establish DevOps practices in the company.
You will help us build scalable, efficient cloud infrastructure. You’ll implement monitoring for automated system health checks. Lastly, you’ll build our CI pipeline, and train and guide the team in DevOps practices.
ROLE and RESPONSIBILITIES:
• Understanding customer requirements and project KPIs
• Implementing various development, testing, automation tools, and IT infrastructure
• Planning the team structure, activities, and involvement in project management activities
• Managing stakeholders and external interfaces
• Setting up tools and required infrastructure
• Defining and setting development, test, release, update, and support processes for DevOps operation
• Having the technical skill to review, verify, and validate the software code developed in the project
• Troubleshooting techniques and fixing code bugs
• Monitoring the processes during the entire lifecycle for adherence, and updating or creating new processes for improvement and to minimize wastage
• Encouraging and building automated processes wherever possible
• Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management
• Incident management and root cause analysis
• Coordination and communication within the team and with customers
• Selecting and deploying appropriate CI/CD tools
• Striving for continuous improvement and building a continuous integration, continuous development, and continuous deployment pipeline (CI/CD pipeline)
• Mentoring and guiding the team members
• Monitoring and measuring customer experience and KPIs
• Managing periodic reporting on the progress to the management and the customer
Essential Skills and Experience: Technical Skills
• Proven 3+ years of experience in DevOps
• A bachelor’s degree or higher qualification in computer science
• The ability to code and script in multiple languages and automation frameworks like Python, C#, Java, Perl, Ruby, SQL Server, NoSQL, and MySQL
• An understanding of the best security practices and of automating security testing and updates in CI/CD (continuous integration, continuous deployment) pipelines
• An ability to conveniently deploy monitoring and logging infrastructure using appropriate tools
• Proficiency in container frameworks
• Mastery in the use of infrastructure automation toolsets like Terraform, Ansible, and command-line interfaces for Microsoft Azure, Amazon AWS, and other cloud platforms
• Certification in Cloud Security
• An understanding of various operating systems
• A strong focus on automation and agile development
• Excellent communication and interpersonal skills
• An ability to work in a fast-paced environment and handle multiple projects simultaneously
OTHER INFORMATION
The DevOps Engineer will also be expected to demonstrate their commitment:
• to gedu values and regulations, including the equal opportunities policy;
• to gedu’s social, economic and environmental responsibilities, minimising environmental impact in the performance of the role and actively contributing to the delivery of gedu’s Environmental Policy;
• to their health and safety responsibilities, ensuring their contribution to a safe and secure working environment for staff, students, and other visitors to the campus.
Consulting & Implementation Services: Data Analytics & EPM
About the Company :
Our client enables enterprises in their digital transformation journey by offering Consulting & Implementation Services related to Data Analytics & Enterprise Performance Management (EPM).
Our client delivers best-suited solutions to customers across industries such as Retail & E-commerce, Consumer Goods, Pharmaceuticals & Life Sciences, Real Estate & Senior Housing, Hi-tech, Media & Telecom, as well as Manufacturing and Automotive clientele.
Our in-house research and innovation lab has conceived multiple plug-n-play apps, toolkits and plugins to streamline implementation and ensure faster time-to-market.
Job Title – AWS Developer
Notice period- Immediate to 60 days
Experience – 3-8 years
Location - Noida, Mumbai, Bangalore & Kolkata
Roles & Responsibilities
- Bachelor’s degree in Computer Science or a related analytical field or equivalent experience is preferred
- 3+ years’ experience in one or more architecture domains (e.g., business architecture, solutions architecture, application architecture)
- Must have 2 years of experience in design and implementation of cloud workloads in AWS.
- Minimum of 2 years of experience handling workloads in large-scale environments. Experience in managing large operational cloud environments spanning multiple tenants through techniques such as Multi-Account management, AWS Well Architected Best Practices.
- Minimum 3 years of microservice architectural experience.
- Minimum of 3 years of experience working exclusively designing and implementing cloud-native workloads.
- Experience with analysing and defining technical requirements & design specifications.
- Experience with database design with both relational and document-based database systems.
- Experience with integrating complex multi-tier applications.
- Experience with API design and development.
- Experience with cloud networking and network security, including virtual networks, network security groups, cloud-native firewalls, etc.
- Proven ability to write programs using an object-oriented or functional programming language and tools such as Spark, Python, AWS Glue, and AWS Lambda (a minimal Lambda sketch follows this list)
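For illustration only, a minimal AWS Lambda handler in Python; the API Gateway proxy event shape and the response format are assumptions about a typical setup, not requirements from this posting:

```python
import json


def lambda_handler(event, context):
    """Entry point Lambda invokes; `event` carries the request payload."""
    # Assume an API Gateway proxy event with a JSON body (hypothetical shape)
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    # Return an API Gateway-compatible response
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```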
Job Specification
- Strong and innovative approach to problem solving and finding solutions.
- Excellent communicator (written and verbal, formal and informal).
- Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
- Ability to multi-task under pressure and work independently with minimal supervision.
Regards
Team Merito
Classplus is India's largest B2B ed-tech start-up, enabling 1 Lac+ educators and content creators to create their digital identity with their own branded apps. Starting in 2018, we have grown more than 10x in the last year, into India's fastest-growing video learning platform.
Over the years, marquee investors like Tiger Global, Surge, GSV Ventures, Blume, Falcon, Capital, RTP Global, and Chimera Ventures have supported our vision. Thanks to our awesome and dedicated team, we achieved a major milestone in March this year when we secured a “Series-D” funding.
Now as we go global, we are super excited to have new folks on board who can take the rocketship higher🚀. Do you think you have what it takes to help us achieve this? Find Out Below!
What will you do?
• Define the overall process, which includes building a team for DevOps activities and ensuring that infrastructure changes are reviewed from an architecture and security perspective
• Create standardized tooling and templates for development teams to create CI/CD pipelines
• Ensure infrastructure is created and maintained using terraform
• Work with various stakeholders to design and implement infrastructure changes to support new feature sets in various product lines.
• Maintain transparency and clear visibility of costs associated with various product verticals, environments and work with stakeholders to plan for optimization and implementation
• Spearhead continuous experimenting and innovating initiatives to optimize the infrastructure in terms of uptime, availability, latency and costs
You should apply, if you
1. Are a seasoned Veteran: Have managed infrastructure at scale running web apps, microservices, and data pipelines using tools and languages like JavaScript (NodeJS), Go, Python, Java, Erlang, Elixir, C++ or Ruby (experience in any one of them is enough)
2. Are a Mr. Perfectionist: You have a strong bias for automation and taking the time to think about the right way to solve a problem versus quick fixes or band-aids.
3. Bring your A-Game: Have hands-on experience and ability to design/implement infrastructure with GCP services like Compute, Database, Storage, Load Balancers, API Gateway, Service Mesh, Firewalls, Message Brokers, Monitoring, Logging and experience in setting up backups, patching and DR planning
4. Are up with the times: Have expertise in one or more cloud platforms (Amazon WebServices or Google Cloud Platform or Microsoft Azure), and have experience in creating and managing infrastructure completely through Terraform kind of tool
5. Have it all on your fingertips: Have experience building CI/CD pipeline using Jenkins, Docker for applications majorly running on Kubernetes. Hands-on experience in managing and troubleshooting applications running on K8s
6. Have nailed the data storage game: Good knowledge of Relational and NoSQL databases (MySQL, Mongo, BigQuery, Cassandra…)
7. Bring that extra zing: Have the ability to program/script and strong fundamentals in Linux and Networking.
8. Know your toys: Have a good understanding of Microservices architecture, Big Data technologies and experience with highly available distributed systems, scaling data store technologies, and creating multi-tenant and self hosted environments, that’s a plus
Being Part of the Clan
At Classplus, you’re not an “employee” but a part of our “Clan”. So, you can forget about being bound by the clock as long as you’re crushing it workwise😎. Add to that some passionate people working with and around you, and what you get is the perfect work vibe you’ve been looking for!
It doesn’t matter how long your journey has been or your position in the hierarchy (we don’t do Sirs and Ma’ams); you’ll be heard, appreciated, and rewarded. One can say, we have a special place in our hearts for the Doers! ✊🏼❤️
Are you a go-getter with the chops to nail what you do? Then this is the place for you.
- Lead multiple client projects in the organization.
- Define & build technical architecture for projects.
- Introduce & ensure right coding standards within the team and ensure that it is maintained in all the projects.
- Ensure the quality delivery of projects.
- Ensure applications conform to security guidelines wherever required.
- Assist the pre-sales/sales team in converting raw requirements from potential clients into functional solutions.
- Train fellow team members to adopt the best practices available.
- Work on improving and managing processes within the team.
- Implement innovative ideas throughout the team to improve the overall efficiency and quality of the team.
- Ensure proper communication & collaboration within the team
Requirements
7+ Years of experience in developing large scale applications.
Solid domain knowledge and experience with various design patterns & data modelling (associations, OOP concepts, etc.)
Should have exposure to multiple backend technologies and databases - both relational and NoSQL
Should be aware of the latest conventions for APIs
Preferred hands-on experience with GraphQL as well as REST API
Must be well-aware of the latest technological advancements for relevant platforms.
Advanced database concepts (views, stored procedures, database optimization) are good to have.
Should have Research Oriented Approach
Solid at Logical thinking and Problem solving
Solid understanding of Coding Standards, Code Review processes and delivery of quality products
Experience with various Tools used in Development, Tests & Deployments.
Sound knowledge of DevOps and CI/CD Pipeline Tools
Solid experience with Git Workflow on Enterprise projects and larger teams
Should be good at documentation at the project and code level; should have good experience with Agile methodology and process
Should have a good understanding of server-side deployment, scalability, maintainability and handling server security problems.
Should have a good understanding of software UX
Proficient in communication and good at making software architectural judgments
Expected outcomes
- Growing the team and retaining talent, thus creating an inspiring environment for the team members.
- Creating more leadership within the team along with mentoring and guiding new joiners and experienced developers.
- Creating growth plans for the team and preparing training guides for other team members.
- Refining processes in the team on a regular basis to ensure quality delivery of projects- such as coding standards, project collaboration, code review processes etc.
- Improving overall efficiency and team productivity by introducing new methodologies and ideas in the team.
- Working on R&D and employing innovative technologies in the company.
- Streamlining processes to save time and optimize costs
- Ensuring healthy code reviews and shipping superior-quality code
Benefits
- Unlimited learning and growth opportunities
- A collaborative and cheerful work environment
- Exceptional reward and recognition policy
- Outstanding compensation
- Flexible work hours
- Opportunity to make an impact as your work will directly contribute to our business strategy.
At Nickelfox, you have a chance to craft a career path as unique as you are and become the best version of YOU. You will be part of a team with a ‘no limits’ mindset in an inclusive, people-focused culture. And we’re counting on your unique perspective to help Nickelfox grow even faster.
Are you passionate about tech? Dedicated to learning? Come, join us to build an extraordinary experience for yourself and a dignified working world for all.
What makes Nickelfox a great place for you?
In Nickelfox, you’ll join a team whose passion for technology and understanding of business has driven the company to serve clients across 25+ countries in just five years. We partner with our customers to fuel their growth story and enable them to make the right decisions with our customized technology services and insights. All in all, we are passionate to see our customers win the day. This is the reason why 80% of our business comes from repeat clients.
Our mission is to provide dignified employment and an environment that recognizes the uniqueness of every individual and values their expertise, and contribution. We have a culture that encourages everyone to bring their authentic selves to work. Our people enjoy a collaborative work environment with exceptional training and career development. If you like working with a curious, youthful, high-performing team, Nickelfox is the place for you.
Job Description:
- Creating proxy-based applications
- Migrating legacy applications to new technologies
- Maintaining applications/websites and troubleshooting
Skills & Experience:
- Well-versed with SSO, Shibboleth and SAML
- Should have decent knowledge of networking
- Good with .NET web applications, MVC, REST API, ReactJS, NodeJS, HTML, CSS
- Should have previously worked on applications based on SAML, SSO and CDNs.
- Should know Identity Providers, access gateways and integration with SSOs
• Writing and testing code, debugging programs and integrating applications with third-party web services. To be successful in this role, you should have experience using server-side logic and work well in a team.
• Ultimately, you'll build highly responsive web applications that align with our business needs.
• Write effective, scalable code; develop back-end components to improve responsiveness and overall performance; integrate user-facing elements into applications.
• Test and debug programs
• Improve functionality of existing systems; implement security and data protection solutions
• Assess and prioritize feature requests
• Coordinate with internal teams to understand user requirements and provide technical solutions
• Expertise in at least one popular Python framework (like Django, Flask, etc.); a minimal Flask sketch follows this list
• Team spirit
• Good problem-solving skills.
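A minimal sketch of the kind of Python web service described above, using Flask (one of the frameworks named); the routes and payloads are illustrative:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/health")
def health():
    # Simple liveness probe for load balancers and monitoring
    return jsonify(status="ok")


@app.route("/echo", methods=["POST"])
def echo():
    # Integrate user-facing elements: accept JSON and return a response
    payload = request.get_json(silent=True) or {}
    return jsonify(received=payload), 200


if __name__ == "__main__":
    app.run(debug=True)  # development server only; use gunicorn/uwsgi in production
```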
Requirements
• 3 to 5 years of experience as a Python Developer
• Hands-on experience with Flask, Django, Gin, Revel, or Sanic
• Knowledge of design/architectural patterns will be considered as a plus
• Experience working in an agile development environment with a strong focus on rapid
software development
• Experience in AWS or similar cloud technologies
• Excellent troubleshooting and debugging skills
• Proven ability to complete the assigned task according to the outlined scope and timeline
• Experience with messaging frameworks such as SQS/Kafka/RabbitMQ
• Experience with Elastic Search
• Willingness to learn new and different technologies
Location - South Delhi
Job Title - Data Scientist
Job Duties
- Data Scientist responsibilities include planning projects and building analytics models.
- You should have a strong problem-solving ability and a knack for statistical analysis.
- If you're also able to align our data products with our business goals, we'd like to meet you. Your ultimate goal will be to help improve our products and business decisions by making the most out of our data.
Responsibilities
Own end-to-end business problems and metrics, build and implement ML solutions using cutting-edge technology.
Create scalable solutions to business problems using statistical techniques, machine learning, and NLP (a minimal sketch follows this list).
Design, experiment with, and evaluate highly innovative models for predictive learning
Work closely with software engineering teams to drive real-time model experiments, implementations, and new feature creations
Establish scalable, efficient, and automated processes for large-scale data analysis, model development, deployment, experimentation, and evaluation.
Research and implement novel machine learning and statistical approaches.
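A minimal scikit-learn sketch of the "statistical techniques, machine learning, and NLP" workflow referenced in this list; the toy texts and labels are invented purely for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy data; in practice this comes from the business problem at hand
texts = ["great product", "terrible support", "love it", "awful experience"]
labels = [1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=42, stratify=labels
)

# TF-IDF features + logistic regression: a common first baseline
# before reaching for heavier NLP models
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```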
Requirements
2-5 years of experience in data science.
In-depth understanding of modern machine learning techniques and their mathematical underpinnings.
Demonstrated ability to build PoCs for complex, ambiguous problems and scale them up.
Strong programming skills (Python, Java)
High proficiency in at least one of the following broad areas: machine learning, statistical modelling/inference, information retrieval, data mining, NLP
Experience with SQL and NoSQL databases
Strong organizational and leadership skills
Excellent communication skills
- Mandatory - Hands-on experience in Python and PySpark.
- Build PySpark applications using Spark DataFrames in Python, using Jupyter Notebook and PyCharm (IDE).
- Experience optimizing Spark jobs that process huge volumes of data (see the sketch after this list).
- Hands-on experience with version control tools like Git.
- Experience with Amazon analytics services like Amazon EMR, Lambda functions, etc.
- Experience with Amazon compute services like AWS Lambda and Amazon EC2, storage services like S3, and a few other services like SNS.
- Experience/knowledge of bash/shell scripting will be a plus.
- Experience working with fixed-width, delimited, and multi-record file formats, etc.
- Hands-on experience with tools like Jenkins to build, test and deploy applications.
- Awareness of DevOps concepts and the ability to work in an automated release pipeline environment.
- Excellent debugging skills.
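A minimal sketch of the kind of Spark job optimizations referenced in this list (early filtering, broadcast joins, controlled output partitioning); the S3 paths and column names are placeholders:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("optimize-demo").getOrCreate()

# Reading columnar Parquet (rather than CSV) lets Spark prune unneeded columns
events = spark.read.parquet("s3://example-bucket/events/")

# Filter early so less data is shuffled in the join below
recent = events.filter(F.col("event_date") >= "2023-01-01")

# Broadcast the small dimension table to avoid a full shuffle join
users = spark.read.parquet("s3://example-bucket/users/")
joined = recent.join(F.broadcast(users), on="user_id", how="left")

# Repartition before writing to control output file counts/sizes
joined.repartition(64).write.mode("overwrite").parquet(
    "s3://example-bucket/curated/events_by_user/"
)
```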
Skills and requirements
- Experience analyzing complex and varied data in a commercial or academic setting.
- Desire to solve new and complex problems every day.
- Excellent ability to communicate scientific results to both technical and non-technical team members.
Desirable
- A degree in a numerically focused discipline such as Maths, Physics, Chemistry, Engineering or Biological Sciences.
- Hands-on experience with Python, PySpark, SQL
- Hands-on experience building end-to-end data pipelines.
- Hands-on experience with Azure Data Factory, Azure Databricks, Data Lake - added advantage
- Experience with Big Data tools: Hadoop, Hive, Sqoop, Spark, SparkSQL
- Experience with SQL or NoSQL databases for the purposes of data retrieval and management.
- Experience in data warehousing and business intelligence tools, techniques and technology, as well as experience in diving deep on data analysis or technical issues to come up with effective solutions.
- BS degree in math, statistics, computer science or equivalent technical field.
- Experience in data mining structured and unstructured data (SQL, ETL, data warehouse, Machine Learning etc.) in a business environment with large-scale, complex data sets.
- Proven ability to look at solutions in unconventional ways. Sees opportunities to innovate and can lead the way.
- Willing to learn and work on Data Science, ML, AI.
Consulting & implementation services in the areas of the Oil & Gas, Mining and Manufacturing industries
- Data Engineer
Required skill set: AWS GLUE, AWS LAMBDA, AWS SNS/SQS, AWS ATHENA, SPARK, SNOWFLAKE, PYTHON
Mandatory Requirements
- Experience in AWS Glue
- Experience in Apache Parquet
- Proficient in AWS S3 and data lake
- Knowledge of Snowflake
- Understanding of file-based ingestion best practices.
- Scripting languages - Python & PySpark
CORE RESPONSIBILITIES
- Create and manage cloud resources in AWS
- Data ingestion from different data sources that expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies
- Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of business logic and implement it using the language supported by the base data platform
- Develop automated data quality checks to make sure the right data enters the platform, and verify the results of the calculations (see the sketch after this list)
- Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible
- Identify and interpret trends and patterns from complex data sets
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.
- Key participant in regular Scrum ceremonies with the agile teams
- Proficient at developing queries, writing reports and presenting findings
- Mentor junior members and bring best industry practices
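A minimal sketch of an automated data quality check of the kind described above, in PySpark; the dataset, key column, and 1% tolerance threshold are illustrative assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://example-bucket/ingested/loans/")

# Rule 1: the primary key must be non-null and unique
total = df.count()
if df.filter(F.col("loan_id").isNull()).count() > 0:
    raise ValueError("Null loan_id values found")
if df.select("loan_id").distinct().count() != total:
    raise ValueError("Duplicate loan_id values found")

# Rule 2: amounts must be positive; fail the batch if too many violations
bad = df.filter(F.col("amount") <= 0).count()
if total > 0 and bad / total > 0.01:  # tolerate up to 1% bad rows (arbitrary threshold)
    raise ValueError(f"{bad}/{total} rows failed the amount check")

print("All quality checks passed")
```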
QUALIFICATIONS
- 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
- Strong background in math, statistics, computer science, data science or related discipline
- Advanced knowledge of one of these languages: Java, Scala, Python, C#
- Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
- Proficient with
- Data mining/programming tools (e.g. SAS, SQL, R, Python)
- Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
- Data visualization (e.g. Tableau, Looker, MicroStrategy)
- Comfortable learning about and deploying new technologies and tools.
- Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines.
- Good written and oral communication skills and ability to present results to non-technical audiences
- Knowledge of business intelligence and analytical tools, technologies and techniques.
Familiarity and experience in the following is a plus:
- AWS certification
- Spark Streaming
- Kafka Streaming / Kafka Connect
- ELK Stack
- Cassandra / MongoDB
- CI/CD: Jenkins, GitLab, Jira, Confluence other related tools
B2B - Factory app for retailers & buyers (well funded)
Job Title
Data Analyst
Job Brief
The successful candidate will turn data into information, information into insight and insight into business decisions.
Data Analyst Job Duties
Data analyst responsibilities include conducting full lifecycle analysis to include requirements, activities and design. Data analysts will develop analysis and reporting capabilities. They will also monitor performance and quality control plans to identify improvements.
Responsibilities
● Interpret data, analyze results using statistical techniques and provide ongoing reports.
● Develop and implement databases, data collection systems, data analytics and other strategies that optimize statistical efficiency and quality.
● Acquire data from primary or secondary data sources and maintain databases/data systems.
● Identify, analyze, and interpret trends or patterns in complex data sets (a minimal sketch follows this list).
● Filter and “clean” data by reviewing computer reports, printouts, and performance indicators to locate and correct code problems.
● Work with management to prioritize business and information needs.
● Locate and define new process improvement opportunities.
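As a purely illustrative sketch of this kind of cleaning and trend analysis in Python with pandas (the file name and column names are hypothetical, not from this posting):

    # Hypothetical sketch: load, clean, and summarize a dataset with pandas.
    import pandas as pd

    df = pd.read_csv("sales.csv", parse_dates=["order_date"])  # placeholder file

    # "Clean" the data: drop rows missing the key, normalize a text column.
    df = df.dropna(subset=["order_id"])
    df["region"] = df["region"].str.strip().str.title()

    # Identify a simple trend: month-over-month revenue growth.
    monthly = df.set_index("order_date")["revenue"].resample("MS").sum()
    print(monthly.pct_change().describe())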
Requirements
● Proven working experience as a Data Analyst or Business Data Analyst.
● Technical expertise regarding data models, database design development, data mining and segmentation techniques.
● Strong knowledge of and experience with reporting packages (e.g., Business Objects), databases (e.g., SQL), and programming (XML, JavaScript, or ETL frameworks).
● Knowledge of statistics and experience using statistical packages for analyzing datasets (e.g., Excel, SPSS, SAS).
● Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.
● Adept at queries, report writing, and presenting findings.
Job Location: South Delhi, New Delhi
About Us
We began in 2015 with an entrepreneurial vision to bring digital change to the manufacturing landscape of India. With a team of 1,500 (including around 1,000 cluster team members), we are working towards the digital transformation of business in the manufacturing industry across domains like Footwear, Apparel, Textile, and Accessories. We are backed by investors such as Info Edge (Naukri.com), Matrix Partners, Sequoia, WaterBridge Ventures, and select industry leaders.
Today, we have enabled 4000+ Manufacturers to digitize their distribution channel.
Roles & Responsibilities
Writing and testing code, debugging programs, and integrating applications with third-party web services. To be successful in this role, you should have experience with server-side logic and work well in a team. Ultimately, you'll build highly responsive web applications that align with our business needs.
- Write effective, scalable code
- Develop back-end components to improve responsiveness and overall performance
- Integrate user-facing elements into applications
- Test and debug programs
- Improve functionality of existing systems
- Implement security and data protection solutions
- Assess and prioritize feature requests
- Coordinate with internal teams to understand user requirements and provide technical solutions
- Expertise in at least one popular Python framework such as Django or Flask (a minimal sketch follows this list)
- Team spirit
- Good problem-solving skills
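By way of example, a minimal Flask sketch of the kind of back-end component described above; the route, query parameter, and payload are hypothetical, not part of this posting.

    # Hypothetical sketch: a small, testable Flask endpoint.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/api/items", methods=["GET"])
    def list_items():
        # A real service would query a database here.
        limit = request.args.get("limit", default=10, type=int)
        return jsonify([{"id": i, "name": f"item-{i}"} for i in range(limit)])

    if __name__ == "__main__":
        app.run(debug=True)  # debug mode is for local development only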
Requirements
1 to 5 years of experience as a Python Developer
Hands-on experience with Flask, Django, Gin, Revel, or Sanic
Knowledge of design/architectural patterns will be considered a plus
Experience working in an agile development environment with a strong focus on rapid software development
Experience in AWS or similar cloud technologies
Excellent troubleshooting and debugging skills
Proven ability to complete assigned tasks according to the outlined scope and timeline
Experience with messaging frameworks such as SQS, Kafka, or RabbitMQ (see the sketch below)
Experience with Elasticsearch
Willingness to learn new and different technologies
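For context, here is a minimal boto3 sketch of producing and consuming messages with Amazon SQS, one of the messaging frameworks listed above; the region, queue URL, and payload are hypothetical placeholders.

    # Hypothetical sketch: send and receive a message with Amazon SQS.
    import json
    import boto3

    sqs = boto3.client("sqs", region_name="ap-south-1")  # placeholder region
    queue_url = "https://sqs.ap-south-1.amazonaws.com/123456789012/example-queue"  # placeholder

    # Producer side.
    sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps({"event": "order_created"}))

    # Consumer side: long-poll, process, then delete.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        print(json.loads(msg["Body"]))
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])

Kafka or RabbitMQ clients follow the same produce/consume shape with different APIs.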
• 2-5 years of experience building React and/or mobile applications
• 5-8 years working with microservices, API servers, databases, cloud-native development, observability, alerting, and monitoring
• Deep exposure to cloud services, preferably Azure
• Preferably worked in the Finance/Retail domain or other similar domains with complex business requirements
• Hands-on skills combined with leadership qualities to guide teams
Location – Bangalore, Mumbai, Gurgaon
Functional / Technical Skills:
• Strong understanding of networking fundamentals
o OSI stack, DNS, TCP protocols
o Browser rendering and its various stages of execution
• Good understanding of RESTful APIs, GraphQL, and WebSockets
• Ability to debug and profile web/mobile applications with Chrome DevTools or native profilers
• Strong understanding of distributed systems, fault tolerance, and resiliency; exposure to setting up and managing chaos experiments is a plus
• Exposure to Domain-Driven Design (DDD), SOLID principles, and data modelling on various RDBMS and NoSQL databases
• Ability to define and document performance goals, SLAs, and volumetrics; create a framework for measuring and validating the goals; and work with teams to implement and meet them
• Create automation scripts to measure performance and make this part of the CI/CD process (a minimal sketch follows this list)
• Good understanding of CNCF projects with a specific focus on Observability, Monitoring, Tracing, Sidecars, and Kubernetes
• Tuning of cloud-native deployments with a focus on cost optimization
• Participate in architecture reviews to identify potential issues and bottlenecks and provide early guidance
• Deep knowledge of at least 2 different programming languages and runtimes: any two of Ruby, Python, Swift, Go, Rust, C#, Dart, Kotlin, Java, Haskell, OCaml
• Excellent verbal and written communication
• A mindset to constantly learn new things and challenge the status quo
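As an illustration of the performance-automation point above, a minimal CI-friendly Python sketch that measures p95 latency against an SLA; the endpoint URL, sample count, and threshold are hypothetical assumptions, not from this posting.

    # Hypothetical sketch: fail a CI step if p95 latency exceeds the SLA.
    import statistics
    import sys
    import time
    import urllib.request

    URL = "https://example.com/health"  # placeholder endpoint
    SLA_P95_MS = 300                    # placeholder SLA threshold

    samples = []
    for _ in range(50):
        start = time.perf_counter()
        urllib.request.urlopen(URL, timeout=5).read()
        samples.append((time.perf_counter() - start) * 1000)

    p95 = statistics.quantiles(samples, n=20)[-1]  # 95th percentile
    print(f"p95 latency: {p95:.1f} ms (SLA: {SLA_P95_MS} ms)")
    sys.exit(0 if p95 <= SLA_P95_MS else 1)  # nonzero exit fails the pipeline step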