
Location: Pune
Required Skills: Scala, Python, Data Engineering, AWS, Cassandra/AstraDB, Athena, EMR, Spark/Snowflake

About Wissen Technology
The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains.
With offices in the US, India, UK, Australia, Mexico, and Canada, we offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, and Quality Assurance & Test Automation.
Leveraging our multi-site operations in the USA and India and availability of world-class infrastructure, we offer a combination of on-site, off-site and offshore service models. Our technical competencies, proactive management approach, proven methodologies, committed support and the ability to quickly react to urgent needs make us a valued partner for any kind of Digital Enablement Services, Managed Services, or Business Services.
We believe that the technology and thought leadership that we command in the industry is the direct result of the kind of people we have been able to attract, to form this organization (you are one of them!).
Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like MIT, Wharton, IITs, IIMs, and BITS and with rich work experience in some of the biggest companies in the world.
Wissen Technology has been certified as a Great Place to Work®. Wissen is committed to providing its people the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.
Job Title: Python Developer (Full Time)
Location: Hyderabad (Onsite)
Interview Process: Virtual rounds, with a face-to-face final round
Experience Required: 4+ Years
Working Days: 5 Days
About the Role
We are seeking a highly skilled Lead Python Developer with a strong background in building scalable and secure applications. The ideal candidate will have hands-on expertise in Python frameworks, API integrations, and modern application architectures. This role requires a tech leader who can balance innovation, performance, and compliance while driving successful project delivery.
Key Responsibilities
- Application Development: Architect and develop robust, high-performance applications using Django, Flask, and FastAPI.
- API Integration: Design and implement seamless integration with third-party APIs (including travel-related APIs, payment gateways, and external service providers).
- Data Management: Develop and optimize ETL pipelines for structured and unstructured data using data lakes and distributed storage solutions.
- Microservices Architecture: Build modular, scalable applications using microservices principles for independent deployment and high availability.
- Performance Optimization: Enhance application performance through load balancing, caching, and query optimization to deliver superior user experiences.
- Security & Compliance: Apply secure coding practices, implement data encryption, and ensure compliance with industry security and privacy standards (e.g., PCI DSS, GDPR).
- Automation & Deployment: Utilize CI/CD pipelines, Docker/Kubernetes, and monitoring tools for automated testing, deployment, and production monitoring.
- Collaboration: Partner with front-end developers, product managers, and stakeholders to deliver user-centric, business-aligned solutions.
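As an illustration of the ETL point above, here is a minimal, in-memory extract-transform-load sketch; the record fields and cleaning rule are assumptions for demonstration, not details from the role:

```python
from dataclasses import dataclass

# Hypothetical record shape -- an assumption for illustration only.
@dataclass
class Booking:
    booking_id: str
    amount: str  # raw amount arrives as text from the source system

def extract(raw_rows):
    """Extract: pull raw dicts from a source (here, an in-memory list)."""
    return [Booking(r["booking_id"], r["amount"]) for r in raw_rows]

def transform(bookings):
    """Transform: normalize types and drop malformed rows."""
    clean = []
    for b in bookings:
        try:
            clean.append({"booking_id": b.booking_id, "amount": float(b.amount)})
        except ValueError:
            continue  # skip rows whose amount cannot be parsed
    return clean

def load(rows, sink):
    """Load: append validated rows to a destination (a list standing in for a table)."""
    sink.extend(rows)
    return len(rows)

raw = [{"booking_id": "B1", "amount": "120.50"},
       {"booking_id": "B2", "amount": "oops"}]
table = []
loaded = load(transform(extract(raw)), table)
```

In a real pipeline the source and sink would be data-lake storage rather than lists, but the three-stage shape is the same.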
Requirements
Education
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Technical Expertise
- 4+ years of hands-on experience with Python frameworks (Django, Flask, FastAPI).
- Proficiency in RESTful APIs, GraphQL, and asynchronous programming.
- Strong knowledge of SQL/NoSQL databases (PostgreSQL, MongoDB) and big data tools (Spark, Kafka).
- Familiarity with Kibana, Grafana, and Prometheus for monitoring and visualization.
- Experience with AWS, Azure, or Google Cloud, containerization (Docker, Kubernetes), and CI/CD tools (Jenkins, GitLab CI).
- Working knowledge of testing tools: PyTest, Selenium, SonarQube.
- Experience with API integrations, booking flows, and payment gateway integrations (travel domain knowledge is a plus, but not mandatory).
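The asynchronous-programming requirement above can be illustrated with a small asyncio sketch that aggregates simulated third-party fare lookups concurrently; the provider names, latency, and response shape are made up for the example:

```python
import asyncio

# Simulated third-party API call; names and delay are illustrative assumptions.
async def fetch_fare(provider: str) -> dict:
    await asyncio.sleep(0.01)  # stand-in for network latency
    return {"provider": provider, "fare": 100}

async def aggregate_fares(providers):
    # gather() runs the awaitables concurrently, so total wall time is
    # roughly one call's latency rather than the sum of all of them.
    results = await asyncio.gather(*(fetch_fare(p) for p in providers))
    return {r["provider"]: r["fare"] for r in results}

fares = asyncio.run(aggregate_fares(["airlineA", "airlineB"]))
```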
Soft Skills
- Strong problem-solving and analytical skills.
- Excellent communication, presentation, and teamwork abilities.
- Proactive, ownership-driven mindset with the ability to perform under pressure.
10+ years of experience in web application design and development
• 10+ years of experience with Ruby on Rails
• 5+ years of experience with Jenkins and a solid understanding of CI/CD pipelines
• Analyze, design, develop, and maintain scalable solutions.
• Participate in code reviews.
• Collaborate with the design team to create customer-friendly solutions.
• Work with the support team to fix any bugs that arise in code.
• Perform debugging and troubleshooting of the existing code base.
• Good knowledge of Docker containers.
• Experience with Git/GitHub/AWS DevOps.
• Work within a wide range of new and legacy code and technologies in a mature codebase.
• Experience with React
• Experience with AWS Lambda
Nice to have: experience building integrations with third-party services/tools such as:
• Sinatra
• WordPress, Drupal, Sitecore
• MongoDB
- Build campaign generation services that can send app notifications at a rate of 10 million per minute
- Build dashboards that show real-time key performance indicators to clients
- Develop complex user segmentation engines that create segments over terabytes of data within seconds
- Build highly available and horizontally scalable platform services for ever-growing data
- Use cloud-based services like AWS Lambda for high throughput and auto-scaling
- Work on complex analytics over terabytes of data, such as building cohorts, funnels, user path analysis, and Recency, Frequency & Monetary (RFM) analysis at speed
- You will build backend services and APIs to create scalable engineering systems.
- As an individual contributor, you will tackle some of our broadest technical challenges, which require deep technical knowledge, hands-on software development, and seamless collaboration with all functions.
- You will envision and develop features that are highly reliable and fault tolerant to deliver a superior customer experience.
- Collaborate with various cross-functional teams in the company to meet deliverables throughout the software development lifecycle.
- Identify areas of improvement through data insights and research.
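The RFM analysis mentioned above can be sketched in a few lines of plain Python; the customers, dates, and amounts are sample data, not anything from the posting:

```python
from datetime import date

# Toy purchase log: (customer, purchase date, amount) -- made-up sample data.
purchases = [
    ("c1", date(2024, 1, 10), 50.0),
    ("c1", date(2024, 3, 5), 20.0),
    ("c2", date(2023, 11, 1), 500.0),
]

def rfm(purchases, today):
    """Compute Recency (days since last purchase), Frequency (purchase count),
    and Monetary (total spend) per customer."""
    out = {}
    for cust, day, amount in purchases:
        last, freq, total = out.get(cust, (None, 0, 0.0))
        last = day if last is None or day > last else last
        out[cust] = (last, freq + 1, total + amount)
    return {c: ((today - last).days, f, m) for c, (last, f, m) in out.items()}

scores = rfm(purchases, today=date(2024, 4, 1))
```

At production scale the same aggregation would run on a distributed engine rather than a single dict, but the metric definitions are identical.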
- 2-5 years of experience in backend development; must have worked with Java and shell/Perl/Python scripting.
- Solid understanding of engineering best practices, continuous integration, and incremental delivery.
- Strong analytical, debugging, and troubleshooting skills; product line analysis.
- Familiarity with agile methodology (sprint planning, working in JIRA, retrospectives, etc.).
- Proficiency with tools like Docker, Maven, and Jenkins, and knowledge of Java frameworks such as Spring, Spring Boot, Hibernate, and JPA.
- Ability to design application modules using concepts such as object orientation, multi-threading, synchronization, caching, fault tolerance, sockets, various IPC mechanisms, and database interfaces.
- Hands-on experience with Redis, MySQL, streaming technologies like Kafka (producers/consumers), and NoSQL databases like MongoDB/Cassandra.
- Knowledge of version control (Git) and deployment processes (CI/CD).
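As a small illustration of the caching, multi-threading, and synchronization concepts listed above, here is a toy thread-safe cache with per-entry time-to-live; in practice this role would use Redis, so treat it as a sketch of the idea only:

```python
import threading
import time

class TTLCache:
    """Minimal thread-safe cache with per-entry time-to-live."""

    def __init__(self, ttl_seconds: float):
        self._ttl = ttl_seconds
        self._data = {}               # key -> (value, expiry timestamp)
        self._lock = threading.Lock() # serializes access across threads

    def set(self, key, value):
        with self._lock:
            self._data[key] = (value, time.monotonic() + self._ttl)

    def get(self, key, default=None):
        with self._lock:
            entry = self._data.get(key)
            if entry is None:
                return default
            value, expires = entry
            if time.monotonic() >= expires:
                del self._data[key]   # lazily evict expired entries
                return default
            return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("session:42", "alice")
hit = cache.get("session:42")
time.sleep(0.06)                      # let the entry expire
miss = cache.get("session:42")
```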
Roles and Responsibilities
Level of skills and experience:
5 years of hands-on experience using Python, Spark, and SQL.
Experienced in AWS Cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake and other data file formats.
Proficiency on an IaC tool such as Terraform, CDK or CloudFormation.
Strong written and verbal English communication skills; proficient in communicating with non-technical stakeholders.
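Orchestrators such as Airflow fundamentally execute a task DAG in dependency order. A minimal sketch of that idea using the standard-library graphlib module (the task names and dependencies are hypothetical, not from this role):

```python
from graphlib import TopologicalSorter

# A hypothetical pipeline's dependencies (task -> set of upstream tasks),
# mirroring how an Airflow DAG orders its operators before running them.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "train": {"transform"},
    "report": {"train", "transform"},
}

# static_order() yields tasks so that every task appears after its upstreams.
order = list(TopologicalSorter(dag).static_order())
```

A real orchestrator adds scheduling, retries, and distributed workers on top, but dependency-ordered execution is the core contract.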
The Cloudera Data Warehouse Hive team is looking for a passionate senior developer to join our growing engineering team. This group targets the biggest enterprises wanting to use Cloudera's services in private and public cloud environments. Our product is built on open-source technologies like Hive, Impala, Hadoop, Kudu, Spark, and many more, providing unlimited learning opportunities.
A Day in the Life
Over the past 10+ years, Cloudera has experienced tremendous growth making us the leading contributor to Big Data platforms and ecosystems and a leading provider for enterprise solutions based on Apache Hadoop. You will work with some of the best engineers in the industry who are tackling challenges that will continue to shape the Big Data revolution. We foster an engaging, supportive, and productive work environment where you can do your best work. The team culture values engineering excellence, technical depth, grassroots innovation, teamwork, and collaboration.
You will manage product development for our CDP components, develop engineering tools and scalable services to enable efficient development, testing, and release operations. You will be immersed in many exciting, cutting-edge technologies and projects, including collaboration with developers, testers, product, field engineers, and our external partners, both software and hardware vendors.
Opportunity:
Cloudera is a leader in the fast-growing big data platforms market. This is a rare chance to make a name for yourself in the industry and in the Open Source world. The candidate will be responsible for Apache Hive and CDW projects. We are looking for a candidate who would like to work on these projects both upstream and downstream. If you are curious about the project and its code quality, you can check both at the link below. You can even start development before you join; this is one of the beauties of the OSS world.
Apache Hive: https://hive.apache.org/
Responsibilities:
- Build robust and scalable data infrastructure software.
- Design and create services and system architecture for your projects.
- Improve code quality through unit tests, automation, and code reviews.
- Write Java code and/or build services in the Cloudera Data Warehouse.
- Work with a team of engineers who review each other's code and designs and hold each other to an extremely high bar for quality.
- Understand the basics of Kubernetes.
- Build out the production and test infrastructure.
- Develop automation frameworks to reproduce issues and prevent regressions.
- Work closely with other developers providing services to our system.
- Help analyze and understand how customers use the product, and improve it where necessary.
Qualifications:
- Deep familiarity with the Java programming language.
- Hands-on experience with distributed systems.
- Knowledge of database concepts and RDBMS internals.
- Knowledge of the Hadoop stack, containers, or Kubernetes is a strong plus.
- Experience working in a distributed team.
- 3+ years of experience in software development.
Responsibilities:
• As a Senior Backend Engineer, you will design, implement and build server-side components that run seamlessly on the Tickertape product which is loved and used by millions of investors every day.
• You will partner with other engineers to build high-performance REST & WebSocket APIs to power our frontend experiences.
• Influences best practices in the team.
• Perform data analysis and troubleshoot technical issues with platforms, performance, data discrepancies.
Requirements:
• 5 - 7 years of experience
• Good programming skills in languages such as Go or JavaScript/TypeScript (Node.js)
• A good understanding of RDBMS (PostgreSQL), NoSQL systems (MongoDB, Elasticsearch), time-series DBs (InfluxDB, TimescaleDB), queuing systems (Kafka, SQS), caching technologies (Redis), and cloud technologies (AWS) is a must
• Web development concepts: basics of REST APIs and server architecture
• Extremely good at problem-solving, interested in building things from scratch, and a self-learner
• Good team player and ability to collaborate with others.
• Interest (and/or experience) in the financial/stock market space - interest trumps experience
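The queuing-systems requirement can be illustrated with an in-process producer/consumer sketch; here queue.Queue stands in for a broker like Kafka or SQS, and the ticker payloads are sample values:

```python
import queue
import threading

# In-process stand-in for a message broker; payload shape is illustrative.
events = queue.Queue()
processed = []

def consumer():
    """Worker loop: take messages off the queue until a sentinel arrives."""
    while True:
        msg = events.get()
        if msg is None:       # sentinel: shut the worker down
            break
        processed.append(msg["ticker"])

worker = threading.Thread(target=consumer)
worker.start()

# Producer side: publish messages, then signal completion.
for ticker in ["INFY", "TCS"]:
    events.put({"ticker": ticker})
events.put(None)
worker.join()
```

A real broker adds persistence, partitioning, and consumer groups, but the decoupled producer/consumer flow is the same shape.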
- Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field; at least 3 years of relevant experience in lieu of the above if from a different stream of education.
- 3+ years of hands-on, demonstrable experience with:
▪ Stream and batch Big Data pipeline processing using Apache Spark and/or Apache Flink
▪ Distributed cloud-native computing, including serverless functions
▪ Relational, object store, document, graph, etc. database design and implementation
▪ Microservices architecture, API modeling, design, and programming
- 3+ years of hands-on development experience in Apache Spark using Scala and/or Java.
- Ability to write executable code for services using Spark RDD, Spark SQL, Structured Streaming, Spark MLlib, etc., with a deep technical understanding of the Spark processing framework.
- In-depth knowledge of standard programming languages such as Scala and/or Java.
- 3+ years of hands-on development experience with one or more libraries and frameworks such as Apache Kafka, Akka, Apache Storm, Apache NiFi, ZooKeeper, and the Hadoop ecosystem (HDFS, YARN, MapReduce, Oozie, Hive); extra points if you can demonstrate your knowledge with working examples.
- 3+ years of hands-on development experience with one or more relational and NoSQL datastores such as PostgreSQL, Cassandra, HBase, MongoDB, DynamoDB, Elasticsearch, and Neo4j.
- Practical knowledge of distributed systems involving partitioning, bucketing, the CAP theorem, replication, horizontal scaling, etc.
- Passion for distilling large volumes of data and analyzing performance, scalability, and capacity issues in Big Data platforms.
- Ability to clearly distinguish between system and Spark job performance, and to perform Spark performance tuning and resource optimization.
- Perform benchmarking/stress tests and document best practices for different applications.
- Proactively work with tenants on improving overall performance and ensure the system is resilient and scalable.
- Good understanding of virtualization and containerization; must demonstrate experience with technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, and Vagrant.
- Demonstrable working experience with API management, API gateways, service mesh, identity and access management, and data protection and encryption.
- Hands-on, demonstrable working experience with DevOps tools and platforms such as Jira, Git, Jenkins, code quality and security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, and Spinnaker.
- Well-versed in AWS, Azure, and/or Google Cloud; must demonstrate experience with at least FIVE (5) services in any of these categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, or Security, Identity, & Compliance (or equivalent demonstrable cloud platform experience).
- Good understanding of storage, networks, and storage networking basics, enabling you to work in a cloud environment.
- Good understanding of network, data, and application security basics, enabling you to work in a cloud as well as a business applications / API services environment.
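The partitioning and horizontal-scaling concepts above are commonly implemented with consistent hashing. A toy hash ring, with arbitrary node names and virtual-node count, sketches the idea:

```python
import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring: keys map to the first node clockwise
    from their hash, so adding/removing a node moves only nearby keys."""

    def __init__(self, nodes, vnodes=100):
        self._ring = []  # sorted list of (hash, node); vnodes smooth the load
        for node in nodes:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        h = self._hash(key)
        idx = bisect.bisect(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("user:1234")
```

The same key always resolves to the same node, which is what makes sharded datastores and caches routable without a central lookup table.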
Job Description for Python Backend Developer
2+ years of expertise in Python 3.7 and Django 2 (or Django 3).
Familiarity with some ORM (Object Relational Mapper) libraries.
Able to integrate multiple data sources and databases into one system.
Integration of user-facing elements developed by front-end developers with server-side logic in Django (RESTful APIs).
Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3
Knowledge of user authentication and authorization between multiple systems, servers, and environments
Understanding of the differences between multiple delivery platforms, such as mobile vs desktop, and optimizing output to match the specific platform
Able to create database schemas that represent and support business processes
Strong unit test and debugging skills.
Proficient understanding of code versioning tools such as Git.
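As a sketch of the "database schemas that represent business processes" point above, here is a minimal two-table schema using the standard-library sqlite3 module; the table and column names are illustrative assumptions, not part of the role:

```python
import sqlite3

# In-memory database; in the role this would be PostgreSQL via Django's ORM.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE
    );
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(id),
        total       REAL NOT NULL CHECK (total >= 0)
    );
""")
conn.execute("INSERT INTO customer (id, email) VALUES (1, 'a@example.com')")
conn.execute("INSERT INTO orders (customer_id, total) VALUES (1, 49.99)")
total, = conn.execute(
    "SELECT SUM(total) FROM orders WHERE customer_id = 1").fetchone()
```

The foreign key and CHECK constraint are the part that "represents the business process": an order cannot exist without a customer or carry a negative total.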
Desirable (optional) skills
Django Channels, Web Sockets, Asyncio.
Experience working with AWS or similar Cloud services.
Experience in containerization technologies such as Docker.
Understanding of the fundamental design principles behind scalable applications (e.g., caching with Redis)
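The caching design principle above can be shown with functools.lru_cache; the lookup function here is a hypothetical stand-in for an expensive database or API call, and in production the same idea is applied with a shared store such as Redis:

```python
from functools import lru_cache

calls = {"count": 0}  # track how many real lookups happen

@lru_cache(maxsize=256)
def product_details(product_id: int) -> dict:
    """Stand-in for an expensive lookup; results are memoized by argument."""
    calls["count"] += 1
    return {"id": product_id, "name": f"product-{product_id}"}

first = product_details(7)
second = product_details(7)  # served from the cache, no second lookup
```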
Role: Software Developer
Industry Type: IT-Software, Software Services
Employment Type: Full Time
Role Category: Programming & Design
Qualification: Any Graduate in Any Specialization
Key Skills: Python 3.7+, Django 2.0 onwards, REST APIs, ORM, front end for interfacing only (curl, Postman, Angular for testing), Docker (optional), database (PostgreSQL), GitHub
