
Location: Pune
Required Skills: Scala, Python, Data Engineering, AWS, Cassandra/AstraDB, Athena, EMR, Spark/Snowflake

About Wissen Technology
The Wissen Group was founded in 2000. Wissen Technology, a part of the Wissen Group, was established in 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains.
With offices in the US, India, the UK, Australia, Mexico, and Canada, we offer an array of services, including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, and Quality Assurance & Test Automation.
Leveraging our multi-site operations in the USA and India and our world-class infrastructure, we offer a combination of on-site, off-site, and offshore service models. Our technical competencies, proactive management approach, proven methodologies, committed support, and ability to react quickly to urgent needs make us a valued partner for any kind of Digital Enablement Services, Managed Services, or Business Services.
We believe that the technology and thought leadership we command in the industry are the direct result of the kind of people we have been able to attract to this organization (you are one of them!).
Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from premier institutions like MIT, Wharton, the IITs, IIMs, and BITS, and who bring rich work experience from some of the biggest companies in the world.
Wissen Technology has been certified as a Great Place to Work®. Wissen is committed to providing its people the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.
Mandatory (Experience 1) - Must have 4+ years of experience in backend software development.
Mandatory (Experience 2) - Must have 4+ years of experience in backend development using Python (highly preferred), Java, or Node.js.
Mandatory (Experience 3) - Must have experience with cloud platforms such as AWS (highly preferred), GCP, or Azure.
Mandatory (Experience 4) - Must have experience with databases such as MySQL / PostgreSQL / Oracle / SQL Server / DB2 / MongoDB / Ne
Requirements
- 3+ years of work experience with production-grade Python. Contributions to open-source repositories are preferred.
- Experience writing concurrent and distributed programs; experience with AWS Lambda, Kubernetes, Docker, and Spark is preferred (a minimal concurrency sketch follows this list).
- Experience with one relational and one non-relational database is preferred.
- Prior work in the ML domain is a big plus.
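By way of illustration only (not part of the posting), here is a minimal sketch of the kind of concurrent Python described above, using only the standard library; the URLs and the fetch helper are hypothetical placeholders:

# Minimal concurrency sketch (illustrative): fetch several URLs in parallel
# with a thread pool. The URLs below are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor, as_completed
from urllib.request import urlopen

URLS = ["https://example.com/a", "https://example.com/b"]

def fetch(url: str) -> int:
    # Return the number of bytes served at `url`.
    with urlopen(url, timeout=10) as resp:
        return len(resp.read())

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(fetch, u): u for u in URLS}
    for fut in as_completed(futures):
        url = futures[fut]
        try:
            print(url, fut.result(), "bytes")
        except Exception as exc:  # network errors surface here
            print(url, "failed:", exc)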
What You’ll Do
- Help realize the product vision: Production-ready machine learning models with monitoring within moments, not months.
- Help companies deploy their machine learning models at scale across a wide range of use-cases and sectors.
- Build integrations with other platforms to make it easy for our customers to use our product without changing their workflow.
- Write maintainable, scalable, performant Python code
- Build gRPC and REST API servers (a minimal REST sketch follows this list)
- Work with Thrift, Protocol Buffers, etc.
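As a rough sketch of the REST-server work above (the posting names no framework, so Flask is an assumption here, and the endpoints are hypothetical placeholders):

# Minimal REST API sketch (illustrative; Flask is an assumed choice, and
# the /health and /models endpoints are hypothetical placeholders).
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Liveness-probe style endpoint for orchestrators such as Kubernetes.
    return jsonify(status="ok")

@app.route("/models")
def list_models():
    # Placeholder payload; a real service would query a model registry.
    return jsonify(models=["fraud-v1", "churn-v2"])

if __name__ == "__main__":
    app.run(port=8080)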
The Sr. Analytics Engineer provides technical expertise in needs identification, data modeling, data movement, and transformation mapping (source to target), along with automation and testing strategies, translating business needs into technical solutions while adhering to established data guidelines and approaches from a business unit or project perspective.
Understands and leverages best-fit technologies (e.g., traditional star schema structures, cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges.
Provides data understanding and coordinates data-related activities with other data management groups such as master data management, data governance, and metadata management.
Actively participates with other consultants in problem-solving and approach development.
Responsibilities :
Take a consultative approach with business users, asking questions to understand the business need and deriving the data flow and the conceptual, logical, and physical data models from those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
Assist with and support setting the data architecture direction, ensuring data architecture deliverables are developed, ensuring compliance to standards and guidelines, implementing the data architecture, and supporting technical developers at a project or business unit level.
Coordinate and consult with the Data Architect, project manager, client business staff, client technical staff, and project developers on data architecture best practices and anything else that is data-related at the project or business unit level.
Work closely with Business Analysts and Solution Architects to design the data model satisfying the business needs and adhering to Enterprise Architecture.
Coordinate with Data Architects, Program Managers and participate in recurring meetings.
Help and mentor team members to understand the data model and subject areas.
Ensure that the team adheres to best practices and guidelines.
Requirements :
- Strong working knowledge (at least 3 years) of Spark, Java/Scala/PySpark, Kafka, Git, Unix/Linux, and ETL pipeline design.
- Experience with Spark optimization, tuning, and resource allocation (a tuning sketch follows this list).
- Excellent understanding of in-memory distributed computing frameworks like Spark, including parameter tuning and writing optimized workflow sequences.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL or analytical stores (e.g., Redshift, BigQuery, Cassandra).
- Familiarity with Docker, Kubernetes, Azure Data Lake/Blob Storage, AWS S3, Google Cloud Storage, etc.
- Deep understanding of the various stacks and components of the Big Data ecosystem.
- Hands-on experience with Python is a huge plus.
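As a rough illustration of the Spark tuning experience listed above, a minimal PySpark sketch; the paths, app name, and partition count are hypothetical examples, not values from the posting:

# Minimal PySpark tuning sketch (illustrative). Shows two common
# optimizations: right-sizing shuffle partitions and broadcasting a small
# dimension table so the large side is not shuffled across the cluster.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (
    SparkSession.builder
    .appName("etl-sketch")
    .config("spark.sql.shuffle.partitions", "200")  # tune to data volume
    .getOrCreate()
)

facts = spark.read.parquet("s3://bucket/facts/")  # large fact table (hypothetical path)
dims = spark.read.parquet("s3://bucket/dims/")    # small lookup table (hypothetical path)

joined = facts.join(broadcast(dims), "dim_id")

# Cache only if the result is reused by several downstream actions.
joined.cache()
joined.write.mode("overwrite").parquet("s3://bucket/out/")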
The Cloudera Data Warehouse Hive team is looking for a passionate senior developer to join our growing engineering team. This group targets the biggest enterprises wanting to use Cloudera's services in private and public cloud environments. Our product is built on open-source technologies like Hive, Impala, Hadoop, Kudu, Spark, and many more, providing unlimited learning opportunities.
A Day in the Life
Over the past 10+ years, Cloudera has experienced tremendous growth, making us the leading contributor to Big Data platforms and ecosystems and a leading provider of enterprise solutions based on Apache Hadoop. You will work with some of the best engineers in the industry, who are tackling challenges that will continue to shape the Big Data revolution. We foster an engaging, supportive, and productive work environment where you can do your best work. The team culture values engineering excellence, technical depth, grassroots innovation, teamwork, and collaboration.
You will manage product development for our CDP components, develop engineering tools and scalable services to enable efficient development, testing, and release operations. You will be immersed in many exciting, cutting-edge technologies and projects, including collaboration with developers, testers, product, field engineers, and our external partners, both software and hardware vendors.
Opportunity:
Cloudera is a leader in the fast-growing big data platforms market. This is a rare chance to make a name for yourself in the industry and in the Open Source world. The candidate will be responsible for Apache Hive and CDW projects. We are looking for a candidate who would like to work on these projects both upstream and downstream. If you are curious about the project and its code quality, you can check out the code at the link below, and you can even start development before you join. This is one of the beauties of the OSS world.
Apache Hive: https://hive.apache.org/
Responsibilities:
- Build robust and scalable data infrastructure software.
- Design and create services and system architecture for your projects.
- Improve code quality through unit tests, automation, and code reviews.
- Write Java code and/or build services in the Cloudera Data Warehouse.
- Work with a team of engineers who review each other's code and designs and hold each other to an extremely high bar for quality.
- Understand the basics of Kubernetes.
- Build out the production and test infrastructure.
- Develop automation frameworks to reproduce issues and prevent regressions (a small test-automation sketch follows this list).
- Work closely with other developers providing services to our system.
- Help analyze and understand how customers use the product, and improve it where necessary.
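By way of illustration only, here is a minimal sketch of the regression-automation idea above; pytest is an assumed tool choice, and normalize_table_name is a hypothetical helper standing in for real product code:

# Minimal pytest regression sketch (illustrative). `normalize_table_name`
# is a hypothetical stand-in for real product code; the parametrized cases
# pin down behavior so a fix for one bug cannot silently regress another.
import pytest

def normalize_table_name(name: str) -> str:
    # Hypothetical helper: strip surrounding backticks and lowercase.
    return name.strip("`").lower()

@pytest.mark.parametrize(
    "raw, expected",
    [
        ("Orders", "orders"),
        ("`Orders`", "orders"),  # regression case: quoted identifiers
        ("ORDERS", "orders"),
    ],
)
def test_normalize_table_name(raw, expected):
    assert normalize_table_name(raw) == expected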
Qualifications:
- Deep familiarity with the Java programming language.
- Hands-on experience with distributed systems.
- Knowledge of database concepts and RDBMS internals.
- Knowledge of the Hadoop stack, containers, or Kubernetes is a strong plus.
- Experience working in a distributed team.
- 3+ years of experience in software development.
Your Opportunity
- Own and drive business features into tech requirements
- Design & develop large scale real time server side systems
- Quickly create quality prototypes
- Staying updated on emerging technologies
- Ensuring that all deliverables adhere to our world class standards
- Promote coding best practices
- Mentor and develop junior developers in the team
Required Experience:
- 4+ years of relevant experience as described below
- Excellent grasp of Core Java, multithreading, and OO design patterns
- Experience with Scala, functional/reactive programming, and Akka/Play is a plus
- Excellent understanding of data structures and algorithms
- Solid grasp of large-scale, distributed, real-time systems
- Prior experience building scalable and resilient microservices
- Solid understanding of relational databases, NoSQL databases, and caching systems
- Good understanding of Big Data technologies such as Spark and Hadoop is a plus
- Experience with one of AWS, Azure, or GCP
Who you are :
- You have excellent communication and collaboration skills
- You love problem solving
- You stay up to date with the latest technologies and then apply them in real life
- You love paying attention to detail
- You thrive on tight deadlines and prioritising workloads
- You collaborate well across multiple functions
Education:
Bachelor’s degree in Engineering or equivalent experience within the field
Job Description for Python Backend Developer
2+ years of expertise in Python 3.7 and Django 2 (or Django 3).
Familiarity with ORM (Object Relational Mapper) libraries.
Able to integrate multiple data sources and databases into one system.
Integrate user-facing elements developed by front-end developers with server-side logic in Django (RESTful APIs).
Basic understanding of front-end technologies such as JavaScript, HTML5, and CSS3.
Knowledge of user authentication and authorization between multiple systems, servers, and environments.
Understanding of the differences between delivery platforms, such as mobile vs. desktop, and optimizing output for the specific platform.
Able to create database schemas that represent and support business processes.
Strong unit testing and debugging skills.
Proficient understanding of code versioning tools such as Git.
Desirable (optional):
Django Channels, WebSockets, asyncio.
Experience working with AWS or similar Cloud services.
Experience in containerization technologies such as Docker.
Understanding of the fundamental design principles behind a scalable application, such as caching with Redis (a minimal Django sketch follows below).
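As a rough illustration of the Django/ORM work described above, a minimal view sketch; the Order model, its fields, and the module layout are hypothetical placeholders:

# views.py — minimal Django JSON endpoint sketch (illustrative).
# `Order` and its fields are hypothetical, not taken from the posting.
from django.http import JsonResponse
from django.views.decorators.http import require_GET

from .models import Order  # hypothetical model in the same app

@require_GET
def recent_orders(request):
    # Return the 20 most recent orders as JSON.
    rows = Order.objects.order_by("-created_at")[:20]
    payload = [{"id": o.pk, "total": str(o.total)} for o in rows]
    return JsonResponse({"orders": payload})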
Role: Software Developer
Industry Type: IT-Software, Software Services
Employment Type: Full Time
Role Category: Programming & Design
Qualification: Any Graduate in Any Specialization
Key Skills – Python 3.7, Django 2.0 onwards, REST APIs, ORM, front end for interfacing only (curl, Postman, Angular for testing), Docker (optional), database (PostgreSQL), GitHub
- Minimum 7 years of relevant work experience in similar roles.
- Hands-on experience developing and delivering scalable multi-tenant SaaS applications on the AWS platform.
- In-depth knowledge of Spring, Spring Boot, Java, REST web services, microservices, the GRANDstack, and SQL and NoSQL databases.
- In-depth knowledge of and experience with developing and delivering scalable data lakes, data ingestion and processing pipelines, and data access microservices.
- In-depth knowledge of the AWS platform, tools, and services, specifically AWS networking and security, Route 53, API Gateway, ECS/Fargate, and RDS; Java/Spring development; modern database and data processing technologies; DevOps; microservices architecture; and container/Docker technology.
- Outstanding collaboration and communication skills; ability to collaborate effectively with a distributed team.
- Understand and practice agile development methodology.
- Prior experience working in a software product company.
- Prior experience with security product development.
Nice to Have:
- AWS Certified Developer certification is highly desired.
- Prior experience with Apache Spark and Scala.