Scala Jobs
Let’s develop together!
At Nyteco we’re proud to offer innovative solutions that meet the needs of the recycled materials industry and manufacturing supply chains. We aim to build, acquire or invest in industry start-ups.
As a developer at Nyteco, you will work with the product manager and the rest of the scrum team to design, plan sprints for, and develop production-ready features for the end user.
At Nyteco, developers are independent and work across the front end, back end, database, DevOps, scripts, etc. alike. The tech should be a tool to help you design and create the best product. You also shouldn’t be afraid to tackle technical debt, contribute to open source projects and be proud of your technical expertise.
What you will do in this role
- Work with the product manager and the UX designer to validate the features that need to be developed
- Estimate the work to be done and prioritize the most relevant features first
- Challenge the scope of work as much as possible in order to release an MVP as fast as possible
- Make sure that your features are production ready and monitor them in the right environments
- Quality-test your team’s features to make sure that user flows are bug free
- Help the team design the technical strategy for the whole sprint
- Create end-to-end tests to minimize the time to push to production
- Make sure that your features are performant in a production environment
- Help everyone in the team create high-quality code
- Constantly improve
Who we are looking for
Skills and Qualifications:
- (must have) TypeScript: you need to know what interfaces are, how to type generic functions, and how to work with typing libraries such as lodash.
- (must have) Functional programming (higher-order functions, immutability, side effects, pure functions, etc.); a brief illustrative sketch follows this list
- (must have) React OR React Native (hooks, side effects, optimization, callbacks, deep vs. shallow comparison, etc.)
- (must have) Any backend framework
- (must have) Any SQL language
- (nice to have) GraphQL, Jest, a UI framework, OOP, SQL query optimization, Apollo Server, RabbitMQ, worker/slave architecture, microservice architecture, Docker, Terraform
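To make the functional-programming bullet above concrete, here is a minimal, illustrative sketch in Scala (chosen because this page collects Scala roles). The Order type and discount rule are made up for the example and are not part of the posting.

```scala
// Illustrative only: immutable data, a pure function, and a higher-order function.
object FpBasics extends App {
  final case class Order(id: Int, amount: BigDecimal) // immutable data

  // Pure function: same input always gives the same output, no side effects.
  def applyDiscount(rate: BigDecimal)(order: Order): Order =
    order.copy(amount = order.amount * (BigDecimal(1) - rate))

  // Higher-order function: takes another function as an argument.
  def totalWith(adjust: Order => Order)(orders: List[Order]): BigDecimal =
    orders.map(adjust).map(_.amount).sum

  val orders = List(Order(1, BigDecimal(100)), Order(2, BigDecimal(250)))
  println(totalWith(applyDiscount(BigDecimal("0.1")))(orders)) // total after a 10% discount
}
```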
Requirements and Skills
- At least 3 years of experience developing a product, ideally B2B
- Experience in our technical stack
- Adept at functional programming
- Experience working closely with product management and dev teams to deliver solutions.
- Fluent in English, with great communication skills
- Prior experience in B2B SaaS
Grow, develop and thrive with us
What we offer
Work closely with a global team helping bring market intelligence to the recycling world. As a part of our team, we look to foster relationships and help you grow with us. You can also expect:
- As a global company, we treasure and encourage diversity, perspective, interest, and representation through inclusivity. The more we have, the better the solution.
- Connect and work with leading minds from the recycling industry and be part of a growing, energetic global team, across time zones, regions, offices and screens.
- Exposure to developments and tools within your field ensures evolution in your career and skill building.
- We adopt a Bring Your Own Device policy and encourage flexibility and freedom in how you work, alongside competitive compensation and yearly appraisals
- Health insurance coverage, paid vacation days and flexible work hours help you maintain a work-life balance
- Have the opportunity to network and collaborate in a diverse community.
As Conviva is expanding, we are building products providing deep insights into end-user experience for our customers.
Platform and TLB Team
The vision for the TLB team is to build data processing software that works on terabytes of streaming data in real-time. Engineer the next-gen Spark-like system for in-memory computation of large time-series datasets – both Spark-like backend infra and library-based programming model. Build a horizontally and vertically scalable system that analyses trillions of events per day within sub-second latencies. Utilize the latest and greatest big data technologies to build solutions for use cases across multiple verticals. Lead technology innovation and advancement that will have a big business impact for years to come. Be part of a worldwide team building software using the latest technologies and the best of software development tools and processes.
What You’ll Do
This is an individual contributor position. Expectations are along the following lines:
- Design, build and maintain the stream processing, and time-series analysis system which is at the heart of Conviva’s products
- Responsible for the architecture of the Conviva platform
- Build features, enhancements, new services, and bug fixing in Scala and Java on a Jenkins-based pipeline to be deployed as Docker containers on Kubernetes
- Own the entire lifecycle of your microservice including early specs, design, technology choice, development, unit-testing, integration-testing, documentation, deployment, troubleshooting, enhancements, etc.
- Lead a team to develop a feature or parts of a product
- Adhere to the Agile model of software development to plan, estimate, and ship per business priority
What you need to succeed
- 5+ years of work experience in software development of data processing products.
- Engineering degree in software or equivalent from a premier institute.
- Excellent knowledge of Computer Science fundamentals such as algorithms and data structures. Hands-on with functional programming and know-how of its concepts
- Excellent programming and debugging skills on the JVM. Proficient in writing code in Scala/Java/Rust/Haskell/Erlang that is reliable, maintainable, secure, and performant
- Experience with big data technologies like Spark, Flink, Kafka, Druid, HDFS, etc.
- Deep understanding of distributed systems concepts and scalability challenges including multi-threading, concurrency, sharding, partitioning, etc.
- Experience/knowledge of the Akka/Lagom framework and/or stream processing technologies like RxJava or Project Reactor will be a big plus, as will knowledge of design patterns like event streaming, CQRS and DDD to build large microservice architectures (a brief illustrative sketch follows this list)
- Excellent communication skills. Willingness to work under pressure. Hunger to learn and succeed. Comfortable with ambiguity. Comfortable with complexity
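As a rough, hedged illustration of the stream-processing and time-series work described above, here is a minimal Akka Streams sketch that micro-batches events and aggregates a metric per session. The PlaybackEvent type and the data are hypothetical and not Conviva's actual model.

```scala
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.stream.scaladsl.Source

// Illustrative only: windowed aggregation over a stream of events.
object SessionBuffering extends App {
  implicit val system: ActorSystem = ActorSystem("demo")

  final case class PlaybackEvent(sessionId: String, bufferingMs: Long) // hypothetical event type

  val events = Source(List(
    PlaybackEvent("a", 120), PlaybackEvent("b", 0), PlaybackEvent("a", 340)
  ))

  events
    .groupedWithin(1000, 1.second) // micro-batch by count or time, whichever comes first
    .map(batch => batch.groupBy(_.sessionId).view.mapValues(_.map(_.bufferingMs).sum).toMap)
    .runForeach(perSession => println(s"buffering per session: $perSession"))
    .onComplete(_ => system.terminate())(system.dispatcher)
}
```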
Underpinning the Conviva platform is a rich history of innovation. More than 60 patents represent award-winning technologies and standards, including first-of-its-kind innovations like time-state analytics and AI-automated data modeling, that surface actionable insights. By understanding real-world human experiences and having the ability to act within seconds of observation, our customers can solve business-critical issues and focus on growing their business ahead of the competition. Examples of the brands Conviva has helped fuel streaming growth for include: DAZN, Disney+, HBO, Hulu, NBCUniversal, Paramount+, Peacock, Sky, Sling TV, Univision and Warner Bros Discovery.
Privately held, Conviva is headquartered in Silicon Valley, California with offices and people around the globe. For more information, visit us at www.conviva.com. Join us to help extend our leadership position in big data streaming analytics to new audiences and markets!
We are looking for computer science/engineering final-year students or fresh graduates who have a solid understanding of computer science fundamentals (algorithms, data structures, object-oriented programming) and strong Java programming skills. You will get to work on machine learning algorithms as applied to online advertising or do data analytics. You will learn how to collaborate in small, agile teams, do rapid development and testing, and get to taste the invigorating feel of a start-up company.
Experience
None required
Required Skills
- Solid foundation in computer science, with strong competencies in data structures, algorithms, and software design
- Java / Python programming
- UI/UX: HTML5, CSS3, JavaScript
- MySQL, relational databases
- MVC frameworks, ReactJS
Optional Skills
- Familiarity with online advertising, web technologies
- Familiarity with Hadoop, Spark, Scala
Education
UG - B.Tech/B.E. - Computers; PG - M.Tech - Computers
- 5-10 Years of experience.
- 3-8 Years of relevant experience in Ab Initio (Required)
- Experience in Full-Life-Cycle Development of ETL Projects (Required)
- Solid experience in query performance tuning and ETL
- Profiles will be shortlisted based on relevant ETL development and production support experience
- Good debugging skills.
- Analyze, troubleshoot and provide support for production issues.
- Design ETL Framework for audit and data reconciliation to manage batch and real time interfaces
- Develop ETL jobs for automation, monitoring and responsible for job performance optimization using ETL development tools.
- Able to estimate and groom stories for development
Job description :
Looking for a passionate developer and team player who wants to learn, contribute and bring fun & energy to the team. We are a friendly startup where we provide opportunities to explore and learn a lot of things (new technologies, tools, etc.) in building quality products using best-in-class technology.
Responsibilities :
- Design and develop new features using full-stack development (Java/Spring/React/Angular/MySQL) for a cloud (AWS/others) and mobile product application in an SOA/microservices architecture.
- Design awesome features and continuously improve them by exploring alternatives/technologies to make design improvements.
- Performance testing with Gatling (Scala); a minimal simulation sketch follows this list.
- Work with the CI/CD pipeline and tools (Docker, Ansible) to improve the build and deployment process.
- Work with QA to ensure the quality and timing of new release deployments.
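For context on the Gatling item above, here is a minimal, illustrative Gatling (Scala) simulation; the base URL, endpoint and load profile are placeholders, not the team's actual tests.

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// Illustrative only: a tiny load test against a hypothetical /health endpoint.
class HealthCheckSimulation extends Simulation {
  val httpProtocol = http.baseUrl("https://example.com") // placeholder service URL

  val scn = scenario("Health check")
    .exec(http("GET /health").get("/health").check(status.is(200)))

  setUp(
    scn.inject(rampUsers(50).during(30.seconds)) // ramp 50 virtual users over 30 seconds
  ).protocols(httpProtocol)
}
```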
Skills/Experience :
- Good coding/problem-solving skills and interest in learning new things will be the key.
- Time/Training will be provided to learn new technologies/tools.
- 1 or more years of professional experience in building web/mobile applications using Java or similar technologies (C#, Ruby, Python, Elixir, NodeJS).
- Experience in Spring Framework or similar frameworks.
- Experience in any DB (SQL/NoSQL)
- Any experience in front-end development using React/Vue/Angular/similar frameworks.
- Any experience with Java/similar testing frameworks (JUnit, mocks, etc.).
Job Type: Full-time
Role: Data Engineer
Total Experience: 5 to 8 Years
Job Location: Gurgaon
Budget: 26-28 LPA
Must have - Technical & Soft Skills:
- Python: Data Structures, List, Libraries, Data engineering basics
- SQL: Joins, Groups, Aggregations, Windowing functions, analytic functions etc.
- Experience with AWS services: S3, EC2, Glue, Data Pipeline, Athena and Redshift
- Solid hands-on working experience in Big Data technologies
- Strong hands-on experience with programming languages like Python and Scala with Spark (a brief illustrative sketch follows this list)
- Good command of and working experience with Hadoop/MapReduce, HDFS, Hive, HBase, and NoSQL databases
- Hands-on working experience with any data engineering/analytics platform, AWS preferred
- Hands-on experience with data ingestion tools: Apache NiFi, Apache Airflow, Sqoop, and Oozie
- Hands-on working experience with data processing at scale with event-driven systems and message queues (Kafka/Flink/Spark Streaming)
- Hands-on working experience with AWS services like EMR, Kinesis, S3, CloudFormation, Glue, API Gateway, Lake Formation
- Operationalization of ML models on AWS (e.g. deployment, scheduling, model monitoring etc.)
- Feature Engineering/Data Processing to be used for Model development
- Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, write SQL queries, etc.)
- Hands-on working experience in analyzing source system data and data flows, working with structured and unstructured data
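As a small, hedged illustration of the Spark-with-Scala and windowing-function skills listed above, here is a sketch of a running total computed with a window function; the dataset and column names are made up.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Illustrative only: a per-customer running total using a Spark window function.
object WindowDemo extends App {
  val spark = SparkSession.builder.appName("window-demo").master("local[*]").getOrCreate()
  import spark.implicits._

  // Hypothetical orders dataset
  val orders = Seq(
    ("c1", "2024-01-01", 120.0),
    ("c1", "2024-01-05", 80.0),
    ("c2", "2024-01-02", 200.0)
  ).toDF("customer_id", "order_date", "amount")

  val w = Window.partitionBy($"customer_id").orderBy($"order_date")
  orders.withColumn("running_total", sum($"amount").over(w)).show()

  spark.stop()
}
```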
Job purpose:
- Design a multi-tier data pipeline to feed data into applications for building a full-featured analytics environment.
- Develop high-quality code to support the platform's technical architecture and design.
- Participate and contribute to an effective software development lifecycle using Scrum and Agile.
- Collaborate with global teams and work as one team.
What you get to do:
- You'll work on the design, implementation, and maintenance of data pipelines.
- Design and build database schemas to handle large-scale data migration & transformation.
- Capable of designing a high-performance, scalable, distributed product in the cloud (AWS, GCS).
- Review development frameworks and coding standards, conduct code reviews and walkthroughs, and conduct in-depth design reviews.
- Identify gaps in the existing infrastructure and advocate for the necessary changes to close them.
Who we are looking for:
- 2 to 4 years of industry experience working in Spark and Scala/Python.
- Working experience with big-data tech stacks like Spark, Kafka & Athena.
- Extensive experience in SQL query optimization/tuning and debugging SQL performance issues.
- Experience in ETL/ELT process to move data through the data processing pipeline.
- Be a fearless leader in championing smart design.
Top 3 primary skills and expertise level requirements (1 to 5; 5 being expert)
- Excellent programming experience in Scala or Python.
- Good experience in SQL queries and optimizations.
- 2 to 3 years of Spark experience.
- Nice to have experience in Airflow.
- Nice to have experience with AWS EMR, Lambda, and S3.
Employment Type - FULLTIME
Industry Type - Media / Entertainment / Internet
Seniority Level - Mid-Senior-Level
Work Experience (in years) - 2-4 years
Education - B.Tech/B.E.
Skills - Python, Scala, MS SQL Server, AWS
Lead Data Engineer
Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.
Job responsibilities
· You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems
· You will partner with teammates to create complex data processing pipelines in order to solve our clients' most ambitious challenges
· You will collaborate with Data Scientists in order to design scalable implementations of their models
· You will pair to write clean and iterative code based on TDD
· Leverage various continuous delivery practices to deploy, support and operate data pipelines
· Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
· Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
· Create data models and speak to the tradeoffs of different modeling approaches
· On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product
· Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process
· Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes
Job qualifications
Technical skills
· You are equally happy coding and leading a team to implement a solution
· You have a track record of innovation and expertise in Data Engineering
· You're passionate about craftsmanship and have applied your expertise across a range of industries and organizations
· You have a deep understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop
· You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (HBase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting (a brief illustrative sketch follows this list)
· Hands on experience in MapR, Cloudera, Hortonworks and/or cloud (AWS EMR, Azure HDInsights, Qubole etc.) based Hadoop distributions
· You are comfortable taking data-driven approaches and applying data security strategy to solve business problems
· You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments
· Working with data excites you: you have created Big data architecture, you can build and operate data pipelines, and maintain data storage, all within distributed systems
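To give a concrete (though hypothetical) flavour of the Kafka/Spark pipeline experience mentioned above, here is a minimal Spark Structured Streaming sketch that reads a Kafka topic and writes Parquet; the broker, topic and paths are placeholders.

```scala
import org.apache.spark.sql.SparkSession

// Illustrative only: ingest a Kafka topic and persist it as Parquet files.
object KafkaToParquet extends App {
  val spark = SparkSession.builder.appName("kafka-ingest").master("local[*]").getOrCreate()

  val events = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
    .option("subscribe", "events")                        // placeholder topic
    .load()
    .selectExpr("CAST(value AS STRING) AS payload", "timestamp")

  events.writeStream
    .format("parquet")
    .option("path", "/tmp/events/")                       // placeholder sink path
    .option("checkpointLocation", "/tmp/checkpoints/events/")
    .start()
    .awaitTermination()
}
```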
Professional skills
· Advocate your data engineering expertise to the broader tech community outside of Thoughtworks, speaking at conferences and acting as a mentor for more junior-level data engineers
· You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives
· An interest in coaching others, sharing your experience and knowledge with teammates
· You enjoy influencing others and always advocate for technical excellence while being open to change when needed
About Us
Sahaj Software is an artisanal software engineering firm built on the values of trust, respect, curiosity, and craftsmanship, delivering purpose-built solutions to drive data-led transformation for organizations. Our emphasis is on craft as we create purpose-built solutions, leveraging Data Engineering, Platform Engineering and Data Science with a razor-sharp focus to solve complex business and technology challenges and provide customers with a competitive edge.
Job Description - Full Stack Engineer
Job Title - Solutions Consultant
As a Full Stack Engineer, you’ll feel at home if you are hands-on, grounded, opinionated and passionate about building things using technology. Our tech stack ranges widely with language ecosystems like Typescript, Java, Scala, Golang, Elixir, Python, .Net, Nodejs or even Rust.
Responsibilities
- You will often work in small pizza teams of 2-5 people where a well-founded argument holds more weight than years of experience.
- You will work with customers across domains like retail, banking, publishing, social, education, adtech and more
- Produce high-quality code that allows us to put solutions into production
- Utilize DevOps tools and practices to build and deploy software
- The teams you work with will have experienced and smart people with no roles. The team will self-organize without oversight to own and deliver solutions end to end.
- Work in short sprints to deliver working software with clear deliverables and client led deadlines
- Willingness to be a polyglot developer and learn multiple technologies
Skills you’ll need
- A maker’s mindset. To be resourceful and have the ability to do things that have no instructions
- Demonstrated experience (at least 2+ years) as a Software Engineer
- Deep understanding of fundamentals and at least one programming language (functional or object-oriented)
- Understanding of web APIs, contracts and communication protocols
- A nuanced and rich understanding of code quality, maintainability and practices like Test Driven Development
- Experience with one or more source control and build toolchains
- Working knowledge of CI/CD will be an added advantage
- Understanding of Cloud platforms, infra-automation/DevOps, IaC/GitOps/Containers
What will you experience as a culture at Sahaj?
At Sahaj, people's collective stands for a shared purpose where everyone owns the dreams, ideas, ideologies, successes, and failures of the organization - a synergy that is rooted in the ethos of honesty, respect, trust, and equitability. At Sahaj you will experience
- Creativity
- Ownership
- Curiosity
- Craftsmanship
- A culture of trust, respect and transparency
- Opportunity to collaborate with some of the finest minds in the industry
- Work across multiple domains
What are the benefits of being at Sahaj?
- Unlimited leaves
- Life Insurance & Private Health insurance paid by Sahaj
- Stock options
- No hierarchy
- Open Salaries
Mandatory Requirements
- Experience in AWS Glue
- Experience in Apache Parquet
- Proficient in AWS S3 and data lake
- Knowledge of Snowflake
- Understanding of file-based ingestion best practices.
- Scripting languages - Python & PySpark
CORE RESPONSIBILITIES
- Create and manage cloud resources in AWS
- Data ingestion from different data sources which expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data based on various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies.
- Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of business logic and implement it using the language supported by the base data platform
- Develop automated data quality checks to make sure the right data enters the platform and to verify the results of the calculations (a brief illustrative sketch follows this list)
- Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible
- Identify and interpret trends and patterns from complex data sets
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.
- Key participant in regular Scrum ceremonies with the agile teams
- Proficient at developing queries, writing reports and presenting findings
- Mentor junior members and bring best industry practices
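As a hedged sketch of the automated data quality checks mentioned above, here is a tiny rule-based gate in Spark (Scala); the table, columns, paths and thresholds are hypothetical.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

// Illustrative only: fail fast if basic data quality rules are not met.
object QualityGate extends App {
  val spark = SparkSession.builder.appName("dq-gate").master("local[*]").getOrCreate()

  def noNulls(df: DataFrame, column: String): Boolean = df.filter(col(column).isNull).isEmpty
  def enoughRows(df: DataFrame, minRows: Long): Boolean = df.count() >= minRows

  val loans = spark.read.parquet("/tmp/raw/loans/") // placeholder input path

  val passed = noNulls(loans, "loan_id") && enoughRows(loans, 1000L)
  if (!passed) sys.error("Data quality checks failed; aborting load")

  loans.write.mode("overwrite").parquet("/tmp/curated/loans/") // placeholder output path
  spark.stop()
}
```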
QUALIFICATIONS
- 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
- Strong background in math, statistics, computer science, data science or related discipline
- Advanced knowledge of one of the following languages: Java, Scala, Python, C#
- Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
- Proficient with
- Data mining/programming tools (e.g. SAS, SQL, R, Python)
- Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
- Data visualization (e.g. Tableau, Looker, MicroStrategy)
- Comfortable learning about and deploying new technologies and tools.
- Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines.
- Good written and oral communication skills and ability to present results to non-technical audiences
- Knowledge of business intelligence and analytical tools, technologies and techniques.
Familiarity and experience in the following is a plus:
- AWS certification
- Spark Streaming
- Kafka Streaming / Kafka Connect
- ELK Stack
- Cassandra / MongoDB
- CI/CD: Jenkins, GitLab, Jira, Confluence and other related tools
About Kloud9:
Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.
Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. E-commerce in any industry is constrained by, and faces a huge challenge in terms of, the money spent on physical data infrastructure.
At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.
Our sole focus is to provide cloud expertise to the retail industry, giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers and developers have been designing, building and implementing solutions for retailers for an average of more than 20 years.
We are a cloud vendor that is both platform and technology independent. Our vendor independence not only provides us with a unique perspective on the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.
● Overall 8+ Years of Experience in Web Application development.
● 5+ years of development experience with Java 8, Spring Boot, microservices and middleware
● 3+ years of designing middleware using the Node.js platform.
● Good to have: 2+ years of experience using Node.js along with the AWS serverless platform.
● Good experience with JavaScript/TypeScript, event loops, Express.js, GraphQL, SQL DBs (MySQL), NoSQL DBs (MongoDB) and YAML templates.
● Good experience with Test-Driven Development (TDD) and automated unit testing.
● Good experience with exposing and consuming REST APIs on the Java 8/Spring Boot platform and Swagger API contracts.
● Good experience in building Node.js middleware performing transformations, routing, aggregation, orchestration and authentication (JWT/OAuth).
● Experience supporting and working with cross-functional teams in a dynamic environment.
● Experience working in Agile Scrum Methodology.
● Very good Problem-Solving Skills.
● A very good learner with a passion for technology.
● Excellent verbal and written communication skills in English
● Ability to communicate effectively with team members and business stakeholders
Secondary Skill Requirements:
● Experience working with any of Loopback, NestJS, Hapi.JS, Sails.JS, Passport.JS
Why Explore a Career at Kloud9:
With job opportunities in prime locations of the US, London, Poland and Bengaluru, we help build your career path in cutting-edge technologies of AI, Machine Learning and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with their creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.
We are looking for curious & inquisitive technology practitioners. Our customers see us as one of the most premium advisory and development services firms, hence most of the problems we work on are complex and often hard to solve. You can expect to work in small (2-5 person) teams, working very closely with the customers in iteratively developing and evolving the solution. We are continually on the search for passionate, bright and energetic professionals to join our team.
So, if you are someone who has strong fundamentals in technology and wants to stretch beyond the regular role-based boundaries, then Sahaj is the place for you. You will experience a world where there are no roles or grades and you will play different roles and wear multiple hats to deliver a software project.
Responsibilities
- Work on complex, custom-designed, scalable, multi-tiered software development projects
- Work closely with clients (commercial & social enterprises, start-ups), both Business and Technical staff members
- Be responsible for the quality of software and resolving any issues regarding the solution
- Think through hard problems, not limited to technology and work with a team to realise and implement solutions
- Learn something new every day
Requirements
- Development and delivery experience in any of the programming languages
- Passion for software engineering and craftsman-like coding prowess
- Great design and solutioning skills (OO & Functional)
- Experience including analysis, design, coding and implementation of large scale custom built object-oriented applications
- Understanding of code refactoring and optimisation issues
- Understanding of Virtualisation & DevOps
- Experience with Ansible, Chef, Docker preferable
- Ability to learn new technologies and adapt to different situations
- Ability to handle ambiguity on a day to day basis
- Skills: J2EE, Spring Boot, Hibernate (Java), Java and Scala
Desired Skills and Experience
- .NET, Golang, Java, Node.js, Python, Ruby, Scala, Hibernate, J2EE, Ruby on Rails, Spring
InViz is a Bangalore-based startup helping enterprises simplify the search and discovery experiences for both their end customers and their internal users. We use state-of-the-art technologies in Computer Vision, Natural Language Processing, Text Mining, and other ML techniques to extract information/concepts from data of different formats - text, images, videos - and make them easily discoverable through simple, human-friendly touchpoints.
TSDE - Data
Data Engineer:
- Should have a total of 3-6 years of experience in Data Engineering.
- Should have experience coding data pipelines on GCP.
- Prior experience with Hadoop systems is ideal, as the candidate may not have full GCP experience.
- Strong in programming languages like Scala, Python, Java.
- Good understanding of various data storage formats and their advantages.
- Should have exposure to GCP tools to develop end-to-end data pipelines for various scenarios (including ingesting data from traditional databases as well as integration of API-based data sources).
- Should have a business mindset to understand data and how it will be used for BI and Analytics purposes.
- Data Engineer Certification preferred
Experience in working with GCP tools like:
- Store: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore
- Ingest: Stackdriver, Pub/Sub, App Engine, Kubernetes Engine, Kafka, Dataprep, microservices
- Schedule: Cloud Composer
- Processing: Cloud Dataproc, Cloud Dataflow, Cloud Dataprep
- CI/CD: Bitbucket + Jenkins / GitLab
- Atlassian Suite
At Everest, we innovate at the intersection of design and engineering to produce outstanding products. The work we do is meaningful and challenging - which makes it interesting. Imagine each line of your code making the world a better place. We work five-day workweeks, and overtime is a rarity. If clean architecture, TDD, DDD, DevOps, microservices, micro-frontends and scalable systems resonate with you, please apply.
To see the quality of our code, you can check out some of our open source projects: https://github.com/everest-engineering
If you want to know more about our culture:
https://github.com/everest-engineering/manifesto
Some videos that can help:
https://www.youtube.com/watch?v=A7y9RpqXAdA
- Passion to own and create amazing products.
- Should be able to clearly understand the customer's problem.
- Should be a collaborative problem solver.
- Should be a team player.
- Should be open to learn from others and teach others.
- Should be a good problem solver.
- Should be able to take feedback and improve continuously.
- Should commit to inclusion, equity & diversity.
- Should maintain integrity at work.
Requirements:
- Can write reliable, scalable, testable and maintainable code.
- Familiarity with Agile methodologies and clean code.
- Design and/or contribute to client-side and server-side architecture.
- Well versed with fundamentals of REST (a minimal endpoint sketch follows this list).
- Build the front-end of applications through appealing visual design.
- Knowledge of one or more front-end languages and libraries (e.g. HTML/CSS, JavaScript, XML, jQuery, TypeScript) and JavaScript frameworks (e.g. Angular, React, Redux, Vue.js)
- Knowledge of one or more back-end languages (e.g. C#, Java, Python, Go, Node.js) and frameworks like Spring Boot, .NET Core
- Well versed with fundamentals of database design.
- Familiarity with databases - RDBMS like MySQL, Postgres & NoSQL like MongoDB, DynamoDB.
- Well versed with one or more cloud platforms like - AWS, Azure, GCP.
- Familiar with Infrastructure as Code - CloudFormation & Terraform & deployment tools like Docker, Kubernetes.
- Familiarity with CI/CD tools like Jenkins, CircleCI, Github Actions. Unit testing tools like Junit, Mockito, Chai, Mocha, Jest.
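As a minimal illustration of the REST fundamentals item above - shown in Scala with Akka HTTP to keep one language across this page, even though the posting itself lists several other stacks - here is a hypothetical single-endpoint service.

```scala
import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

// Illustrative only: a one-route HTTP service; the route and port are placeholders.
object HelloApi extends App {
  implicit val system: ActorSystem[Nothing] = ActorSystem(Behaviors.empty, "hello-api")

  val route =
    path("health") {
      get {
        complete("OK")
      }
    }

  Http().newServerAt("0.0.0.0", 8080).bind(route)
  println("Listening on http://0.0.0.0:8080/health")
}
```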
Good exposure to concepts and/or technology across the broader spectrum. Enterprise Risk Technology covers a variety of existing systems and green-field projects.
Full-stack Hadoop development experience with Scala development.
Full-stack Java development experience covering Core Java (including JDK 1.8) and a good understanding of design patterns.
Requirements:-
• Strong hands-on development in Java technologies.
• Strong hands-on development in Hadoop technologies like Spark, Scala and experience on Avro.
• Participation in product feature design and documentation
• Requirement break-up, ownership and implementation.
• Product BAU deliveries and Level 3 production defects fixes.
Qualifications & Experience
• Degree holder in numerate subject
• Hands on Experience on Hadoop, Spark, Scala, Impala, Avro and messaging like Kafka
• Experience across a core compiled language – Java
• Proficiency in Java-related frameworks like Spring, Hibernate, JPA
• Hands-on experience in JDK 1.8 and a strong skill set covering Collections and Multithreading, with experience working on distributed applications.
• Strong hands-on development track record with end-to-end development cycle involvement
• Good exposure to computational concepts
• Good communication and interpersonal skills
• Working knowledge of risk and derivatives pricing (optional)
• Proficiency in SQL (PL/SQL), data modelling.
• Understanding of Hadoop architecture and the Scala programming language is good to have.
Roles and Responsibilities
• Strong experience working with Big Data technologies like Spark (Scala/Java), Apache Solr, Hive, HBase, Elasticsearch, MongoDB, Airflow, Oozie, etc.
• Experience working with relational databases like MySQL, SQL Server, Oracle, etc.
• Good understanding of large system architecture and design
• Experience working in an AWS/Azure cloud environment is a plus
• Experience using version control tools such as a Bitbucket/Git code repository
• Experience using tools like Maven/Jenkins, JIRA
• Experience working in an Agile software delivery environment, with exposure to continuous integration and continuous delivery tools
• Passionate about technology and delivering solutions to solve complex business problems
• Great collaboration and interpersonal skills
• Ability to work with team members and lead by example in code, feature development, and knowledge sharing
Principal Engineer
8+ yrs | Bangalore Office
● Drive the technology and engineering best practices on different fronts like quality, performance, design and operations at the organisational level
● Responsible for the design, architecture, and delivery of a feature or component/product with the highest quality, with high-level direction from architects
● Estimate efforts, identify risks, devise and meet project schedules
● Collaborate effectively with cross-functional teams to deliver end-to-end products / platform features
● Demonstrate the ability to multitask and re-prioritize responsibilities based on changing requirements
● Run review meetings effectively and drive the closure of all open issues and RCAs on time
● Provide functional, design, and code reviews in related areas of expertise within the team and across teams
● Mentor/coach engineers to facilitate their development and provide technical leadership to them
● 8+ years of strong design/development experience in building massively large-scale distributed internet systems and products
● Strong object-oriented skills, knowledge of design patterns, and an uncanny ability to design intuitive modules and class-level interfaces
● Experience leading & mentoring project teams
● Good experience working with any of the modern programming languages like Golang, Python, Scala, Java, C++, etc.
● Strong engineering mindset to drive the design and development of automated monitoring, alerting and self-healing
● Superior organization, communication, interpersonal and leadership skills
What you will bring along:
● Must be a proven performer and team player who enjoys challenging assignments in a high-energy, fast-growing start-up workplace
● Must be a self-starter who can work well with minimal guidance and in a fluid environment
● Provides good attention to detail
● Must be excited by the challenges surrounding the development of massively scalable & distributed systems
● Has worked in fast-paced startups or product-based companies
Good to have
● Open source contributions
● Worked at product companies
● Knowledge of Golang
● Working knowledge of multiple programming languages
● One or more side projects up on GitHub
at Altimetrik
Big Data Engineer: 5+ yrs.
Immediate Joiner
- Expertise in building AWS Data Engineering pipelines with AWS Glue -> Athena -> QuickSight
- Experience in developing lambda functions with AWS Lambda
- Expertise with Spark/PySpark – Candidate should be hands on with PySpark code and should be able to do transformations with Spark
- Should be able to code in Python and Scala.
- Snowflake experience will be a plus
- Hadoop and Hive can be treated as good to have; an understanding of them is enough rather than a hard requirement.
at Altimetrik
Bigdata with cloud:
Experience : 5-10 years
Location : Hyderabad/Chennai
Notice period : 15-20 days Max
1. Expertise in building AWS Data Engineering pipelines with AWS Glue -> Athena -> QuickSight
2. Experience in developing lambda functions with AWS Lambda
3. Expertise with Spark/PySpark – Candidate should be hands on with PySpark code and should be able to do transformations with Spark
4. Should be able to code in Python and Scala.
5. Snowflake experience will be a plus
A top-of-the-line, premium software advisory & development services firm. Our customers include promising early-stage start-ups, Fortune 500 enterprises and investors. We draw inspiration from Leonardo Da Vinci's famous quote - Simplicity is the ultimate sophistication.
Domains we work in:
Multiple; publishing, retail, banking, networking, social sector, education and many more.
Tech we use
Java, Scala, Golang, Elixir, Python, RoR, .Net, JS frameworks
More details on tech:
You name it and we might be working on it. The important thing is not technology here but what kind of solutions we provide to our clients. We believe to solve some of the most complex problems, holistic thinking and solution design is of extreme importance. Technology is the most important tool to implement the solution thus designed.
Who should join us:
We are looking for curious & inquisitive technology practitioners. Our customers see us as one of the most premium advisory and development services firms, hence most of the problems we work on are complex and often hard to solve. You can expect to work in small (2-5 person) teams, working very closely with the customers in iteratively developing and evolving the solution. We are continually on the search for passionate, bright and energetic professionals to join our team.
So, if you are someone who has strong fundamentals in technology and wants to stretch beyond the regular role-based boundaries, then Sahaj is the place for you. You will experience a world where there are no roles or grades and you will play different roles and wear multiple hats to deliver a software project.
- Work on complex, custom-designed, scalable, multi-tiered software development projects
- Work closely with clients (commercial & social enterprises, start-ups), both Business and Technical staff members
- Be responsible for the quality of software and resolving any issues regarding the solution
- Think through hard problems, not limited to technology and work with a team to realise and implement solutions
- Learn something new every day
- Development and delivery experience in any of the programming languages
- Passion for software engineering and craftsman-like coding prowess
- Great design and solutioning skills (OO & Functional)
- Experience including analysis, design, coding and implementation of large-scale custom-built object-oriented applications
- Understanding of code refactoring and optimisation issues
- Understanding of Virtualisation & DevOps
- Experience with Ansible, Chef, and Docker preferable
- Ability to learn new technologies and adapt to different situations
- Ability to handle ambiguity on a day-to-day basis
Skills:- J2EE, Spring Boot, Hibernate (Java), Java and Scala
at Altimetrik
- Expertise in building AWS Data Engineering pipelines with AWS Glue -> Athena -> QuickSight
- Experience in developing lambda functions with AWS Lambda
- Expertise with Spark/PySpark – candidate should be hands-on with PySpark code and should be able to do transformations with Spark
- Should be able to code in Python and Scala
- Snowflake experience will be a plus
Job responsibilities
- You will partner with teammates to create complex data processing pipelines in order to solve our clients' most complex challenges
- You will collaborate with Data Scientists in order to design scalable implementations of their models
- You will pair to write clean and iterative code based on TDD
- Leverage various continuous delivery practices to deploy, support and operate data pipelines
- Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
- Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
- Create data models and speak to the tradeoffs of different modeling approaches
- Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process
- Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes
- You have a good understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop
- You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting
- Hands on experience in MapR, Cloudera, Hortonworks and/or cloud (AWS EMR, Azure HDInsights, Qubole etc.) based Hadoop distributions
- You are comfortable taking data-driven approaches and applying data security strategy to solve business problems
- Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems
- You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments
Professional skills
- You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives
- An interest in coaching, sharing your experience and knowledge with teammates
- You enjoy influencing others and always advocate for technical excellence while being open to change when needed
- Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more
at Altimetrik
Experience in developing lambda functions with AWS Lambda
Expertise with Spark/PySpark – Candidate should be hands on with PySpark code and should be able to do transformations with Spark
Should be able to code in Python and Scala.
Snowflake experience will be a plus
Sarvaha would like to welcome experienced backend developers with 6-10 years of experience designing a broad range of systems & back-end services. Sarvaha is a niche software development company that works with some of the best-funded startups and established companies across the globe. Please visit our website at http://www.sarvaha.com to know more about us.
Key Responsibilities
- Work as a lead contributor for creating technical solutions
- Contribute to scoping, estimating, and proposing technical solutions & development
- Investigate new technologies, provide analysis and recommendations on technical choices
- Responsible for providing hands-on expert-level assistance to developers for technical issues
- Mentor and guide technical team members
Skills Required
- BE/BTech/MTech (CS/IT or MCA), with an emphasis on Software Engineering, is highly preferable
- Strong developer experience using Scala (preferred) or else Java technologies & its frameworks (Java 8 or above)
- Understanding of Spring / Spring Boot / Hibernate
- Hands-on experience with Highly scalable Micro-services with Clustered / Multi-node setup concepts
- Knowledge of design patterns and micro-services design concepts.
- Experience with queuing mechanisms, producers & consumers (a brief illustrative sketch follows this list)
- API-based development experience (REST, Swagger, OpenAPI, GraphQL, etc.)
- Experience with DB Concepts, DB Design, SQL, query building and Liquibase
- Secure programming experience working with Vulnerability scanning tools such as Veracode
- Knowledge/experience on Reporting tools/libraries (Jasper reports)
- Other dev tools - Kafka, Jenkins, Git, Maven, Gradle, SBT, Redis
- Agile development experience including working with JIRA & Confluence
- An attitude of constant learning of new skills; strong communication, collaboration, and influencing skills to drive change
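For the producer/consumer item above, here is a minimal, illustrative Kafka producer in Scala; the broker address, topic and payload are placeholders, not part of Sarvaha's stack description.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Illustrative only: publish a single message to a hypothetical "orders" topic.
object OrderProducer extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092") // placeholder broker
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

  val producer = new KafkaProducer[String, String](props)
  producer.send(new ProducerRecord("orders", "order-42", """{"id":42,"status":"CREATED"}"""))
  producer.flush()
  producer.close()
}
```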
Position Benefits
- Top-notch remuneration and excellent growth opportunities
- An excellent, no-nonsense work environment with the very best people to work with
- Highly challenging software implementation problems
- Work from home or office. We offered complete work from home even before the pandemic.
at Altimetrik
- Expertise in building AWS Data Engineering pipelines with AWS Glue -> Athena -> QuickSight
- Experience in developing lambda functions with AWS Lambda
- Expertise with Spark/PySpark – Candidate should be hands on with PySpark code and should be able to do transformations with Spark
- Should be able to code in Python and Scala.
- Snowflake experience will be a plus
A top-of-the-line, premium software advisory & development services firm. Our customers include promising early-stage start-ups, Fortune 500 enterprises and investors. We draw inspiration from Leonardo Da Vinci's famous quote - Simplicity is the ultimate sophistication.
Domains we work in
Multiple; publishing, retail, banking, networking, social sector, education and many more.
Tech we use
Java, Scala, Golang, Elixir, Python, RoR, .Net, JS frameworks
More details on tech
You name it and we might be working on it. The important thing is not technology here but what kind of solutions we provide to our clients. We believe to solve some of the most complex problems, holistic thinking and solution design is of extreme importance. Technology is the most important tool to implement the solution thus designed.
Who should join us
We are looking for curious & inquisitive technology practitioners. Our customers see us as one of the most premium advisory and development services firms, hence most of the problems we work on are complex and often hard to solve. You can expect to work in small (2-5 person) teams, working very closely with the customers in iteratively developing and evolving the solution. We are continually on the search for passionate, bright and energetic professionals to join our team.
So, if you are someone who has strong fundamentals in technology and wants to stretch beyond the regular role-based boundaries, then Sahaj is the place for you. You will experience a world where there are no roles or grades and you will play different roles and wear multiple hats to deliver a software project.
- Work on complex, custom-designed, scalable, multi-tiered software development projects
- Work closely with clients (commercial & social enterprises, start-ups), both Business and Technical staff members
- Be responsible for the quality of software and resolving any issues regarding the solution
- Think through hard problems, not limited to technology and work with a team to realise and implement solutions
- Learn something new every day
- Development and delivery experience in any of the programming languages
- Passion for software engineering and craftsman-like coding prowess
- Great design and solutioning skills (OO & Functional)
- Experience including analysis, design, coding and implementation of large scale custom built object-oriented applications
- Understanding of code refactoring and optimisation issues
- Understanding of Virtualisation & DevOps
- Experience with Ansible, Chef, Docker preferable
- Ability to learn new technologies and adapt to different situations
- Ability to handle ambiguity on a day to day basis
Skills:- J2EE, Spring Boot, Hibernate (Java), Java and Scala
Skill - Spark and Scala along with Azure
Location - Pan India
Looking for someone with Big Data experience along with Azure
WHAT YOU WILL DO:
● Create and maintain optimal data pipeline architecture.
● Assemble large, complex data sets that meet functional / non-functional business requirements.
● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
● Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, Hadoop and AWS 'big data' technologies (EC2, EMR, S3, Athena).
● Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
● Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
● Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
● Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
● Work with data and analytics experts to strive for greater functionality in our data systems.
REQUIRED SKILLS & QUALIFICATIONS:
● 5+ years of experience in a Data Engineer role.
● Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
● Experience building and optimizing 'big data' data pipelines, architectures and data sets.
● Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
● Strong analytic skills related to working with unstructured datasets.
● Build processes supporting data transformation, data structures, metadata, dependency and workload management.
● A successful history of manipulating, processing and extracting value from large disconnected datasets.
● Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
● Strong project management and organizational skills.
● Experience supporting and working with cross-functional teams in a dynamic environment.
● Experience with big data tools: Hadoop, Spark, Pig, Vertica, etc.
● Experience with AWS cloud services: EC2, EMR, S3, Athena.
● Experience with Linux.
● Experience with object-oriented/object function scripting languages: Python, Java, Shell, Scala, etc.
PREFERRED SKILLS & QUALIFICATIONS:
● Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
Not just a delivery company
RARA NOW is revolutionising instant delivery for e-commerce in Indonesia through data-driven logistics.
RARA NOW is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimisation technology. RARA makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on 'one-to-one' deliveries, the company has developed proprietary, real-time batching tech to do 'many-to-many' deliveries within a few hours. RARA is already in partnership with some of the top e-commerce players in Indonesia like Blibli, Sayurbox, Kopi Kenangan and many more.
We are a distributed team with the company headquartered in Singapore, core operations in Indonesia and technology team based out of India
Future of eCommerce Logistics.
Data driven logistics company that is bringing in same day delivery revolution in Indonesia
Revolutionising delivery as an experience
Empowering D2C Sellers with logistics as the core technology
**About the Role**
Integration of user-facing elements developed by front-end developers with server side logic
Implementation of security and data protection
Integration of data storage solutions
Strong proficiency with JavaScript
Knowledge of Node.js and frameworks available for it
Understanding the nature of asynchronous programming and its quirks and workarounds
Good understanding of server-side templating languages and CSS preprocessor
Basic understanding of front-end technologies, such as HTML5 and CSS3
User authentication and authorization between multiple systems, servers, and environments
Understanding differences between multiple delivery platforms, such as mobile vs. desktop, and optimizing output to match the specific platform
Implementing automated testing platforms and unit tests
Strong technical development experience in effectively writing code, performing code reviews, and implementing best practices on configuration management and code refactoring
Experience in working with vendor applications
Experience in making optimized queries to MySQL database
Proven problem solving and analytical skills
A delivery-focused approach to work and the ability to work without direction
Experience in Agile development techniques, including Scrum
Experience implementing and/or using Git
Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals
Bachelor's degree in Computer Science or a related discipline preferred
Our company is seeking to hire a skilled software developer to help with the development of our AI/ML platform.
Your duties will primarily revolve around building the platform by writing code in Scala, as well as modifying the platform to fix errors, work on distributed computing, adapt it to new cloud services, improve its performance, or upgrade interfaces. To be successful in this role, you will need extensive knowledge of programming languages and the software development life cycle.
Responsibilities:
Analyze, design, develop, troubleshoot and debug the platform.
Write code, guide other team members on best practices, and perform testing and debugging of applications.
Specify, design and implement minor changes to the existing software architecture. Build highly complex enhancements and resolve complex bugs. Build and execute unit tests and unit test plans.
Duties and tasks are varied and complex, needing independent judgment. Fully competent in own area of expertise.
Experience:
The candidate should have 2+ years of experience in design and development with Java/Scala. Experience in algorithms, data structures, databases, distributed systems, and distributed system architectures is mandatory.
Required Skills:
1. In-depth knowledge of Hadoop and Spark architecture and their components, such as HDFS, YARN, executors, cores and memory parameters (see the sketch after this list).
2. Knowledge of Scala/Java.
3. Extensive experience in developing Spark jobs. Should possess good OOP knowledge and be aware of enterprise application design patterns.
4. Good knowledge of Unix/Linux.
5. Experience working on large-scale software projects.
6. Keeps an eye out for technological trends and open-source projects that can be used.
7. Knowledge of common programming languages and frameworks.
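As a purely illustrative companion to item 1, the sketch below shows where executor, core and memory parameters typically live when building a Spark job in Scala; the specific values and the toy aggregation are placeholders, not recommendations.

```scala
import org.apache.spark.sql.SparkSession

// Illustrative only: the executor/core/memory settings mentioned above,
// expressed as Spark configs. The values are placeholders.
object TunedJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("tuned-spark-job")
      .config("spark.executor.instances", "4")       // number of executors (YARN)
      .config("spark.executor.cores", "4")           // cores per executor
      .config("spark.executor.memory", "8g")         // heap per executor
      .config("spark.sql.shuffle.partitions", "200") // shuffle parallelism
      .getOrCreate()

    // A trivial aggregation so the example is runnable end to end.
    val counts = spark.range(0, 1000000)
      .selectExpr("id % 10 as bucket")
      .groupBy("bucket")
      .count()
    counts.show()

    spark.stop()
  }
}
```

In practice these settings are usually supplied through spark-submit or the cluster manager rather than hard-coded, but the parameter names are the same.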
- Manages the delivery of large, complex Data Science projects using appropriate frameworks and collaborating with stakeholders to manage scope and risk.
- Helps the AI/ML Solution Analyst build solutions as per customer needs on our platform, Newgen AI Cloud.
- Drives profitability and continued success by managing service quality and cost and leading delivery. Proactively supports sales through innovative solutions and delivery excellence.
Work location: Gurugram
Key Responsibilities:
1. Collaborate/contribute to all project phases, with the technical know-how to design, develop solutions and deploy them at the customer end.
2. End-to-end implementations, i.e. gathering requirements, analysing, designing, coding and deploying to production.
3. Client-facing role, talking to clients on a regular basis to get requirement clarifications.
4. Lead the team.
Core Tech Skills: Azure, Cloud Computing, Java/Scala, Python, Design Patterns and fair knowledge of Data Science. Fair knowledge of Data Lake/DWH.
Educational Qualification: Engineering graduate, preferably a Computer Science graduate.
Number Theory is looking for an experienced software/data engineer who will focus on owning and rearchitecting dynamic pricing engineering systems.
Job Responsibilities:
Evaluate and recommend the Big Data technology stack best suited for the NT AI at scale platform and other products
Lead the team in defining a proper Big Data architecture design.
Design and implement features on the NT AI at scale platform using Spark and other Hadoop stack components.
Drive significant technology initiatives end to end and across multiple layers of architecture
Provide strong technical leadership in adopting and contributing to open-source technologies related to Big Data across multiple engagements
Design/architect complex, highly available, distributed, failsafe compute systems dealing with considerable amounts of data at scale
Identify and incorporate non-functional requirements into the solution (performance, scalability, monitoring, etc.)
Requirements:
A successful candidate will have 8+ years of experience implementing a high-end software product.
Provides technical leadership in the Big Data space (Spark and the Hadoop stack such as MapReduce, HDFS, Hive, HBase, Flume, Sqoop, etc., and NoSQL stores like Cassandra, HBase, etc.) across engagements and contributes to open-source Big Data technologies.
Rich hands-on experience in Spark, having worked with Spark at a large scale.
Visualizes and evangelizes next-generation infrastructure in the Big Data space (batch, near-real-time and real-time technologies).
Passionate about continuous learning, experimenting, applying and contributing towards cutting-edge open-source technologies and software paradigms.
Expert-level proficiency in Java and Scala.
Strong understanding of and experience in distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN, MapReduce and HDFS) and associated technologies: one or more of Hive, Sqoop, Avro, Flume, Oozie, ZooKeeper, etc. Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib).
Operating knowledge of cloud computing platforms (AWS, Azure).
Good to have:
Operating knowledge of different enterprise Hadoop distributions (C)
Good Knowledge of Design Patterns
Experience working within a Linux computing environment, and use of command line tools
including knowledge of shell/Python scripting for automating common tasks.
Hiring - Python Developer Freelance Consultant (WFH-Remote)
Greetings from Deltacubes Technology!!
Skillset Required:
Python
Pyspark
AWS
Scala
Experience:
5+ years
Thanks
Bavithra
1. ROLE AND RESPONSIBILITIES
1.1. Implement next generation intelligent data platform solutions that help build high performance distributed systems.
1.2. Proactively diagnose problems and envisage long term life of the product focusing on reusable, extensible components.
1.3. Ensure agile delivery processes.
1.4. Work collaboratively with stake holders including product and engineering teams.
1.5. Build best-practices in the engineering team.
2. PRIMARY SKILL REQUIRED
2.1. 2-6 years of core software product development experience.
2.2. Experience working on data-intensive projects with a variety of technology stacks, including different programming languages (Java, Python, Scala).
2.3. Experience in building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, to support other teams in running pipelines/jobs/reports, etc.
2.4. Experience with the open-source stack.
2.5. Experience working with RDBMS and NoSQL databases.
2.6. Knowledge of enterprise data lakes, data analytics, reporting, in-memory data handling, etc.
2.7. A core computer science academic background.
2.8. Aspiration to continue pursuing a career in the technical stream.
3. Optional Skill Required:
3.1. Understanding of Big Data technologies and Machine learning/Deep learning
3.2. Understanding of diverse set of databases like MongoDB, Cassandra, Redshift, Postgres, etc.
3.3. Understanding of Cloud Platform: AWS, Azure, GCP, etc.
3.4. Experience in BFSI domain is a plus.
4. PREFERRED SKILLS
4.1. A startup mentality: comfort with ambiguity, a willingness to test, learn and improve rapidly.
- Data Engineer
Required skill set: AWS GLUE, AWS LAMBDA, AWS SNS/SQS, AWS ATHENA, SPARK, SNOWFLAKE, PYTHON
Mandatory Requirements
- Experience in AWS Glue
- Experience in Apache Parquet
- Proficient in AWS S3 and data lake
- Knowledge of Snowflake
- Understanding of file-based ingestion best practices.
- Scripting languages - Python & PySpark
CORE RESPONSIBILITIES
- Create and manage cloud resources in AWS
- Data ingestion from different data sources which expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies
- Data processing/transformation using various technologies such as Spark and cloud services. You will need to understand your part of the business logic and implement it using the language supported by the base data platform
- Develop automated data quality checks to make sure the right data enters the platform, and verify the results of the calculations (see the sketch after this list)
- Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible
- Identify and interpret trends and patterns from complex data sets
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.
- Key participant in regular Scrum ceremonies with the agile teams
- Proficient at developing queries, writing reports and presenting findings
- Mentor junior members and bring best industry practices
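To make the automated data-quality check mentioned above concrete, here is a small, hypothetical Spark (Scala) sketch; the dataset path, key column and thresholds are invented for illustration and are not part of the role description.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

// Sketch of an automated data-quality gate: fail the ingestion run when
// the null rate or duplicate rate on a key column exceeds a threshold.
// Table path, column name and thresholds are hypothetical.
object QualityCheck {
  def check(df: DataFrame): Unit = {
    val total = df.count()
    require(total > 0, "Input dataset is empty")

    val nullKeys = df.filter(col("customer_id").isNull).count()
    val dupKeys  = total - df.dropDuplicates("customer_id").count()

    val nullRate = nullKeys.toDouble / total
    val dupRate  = dupKeys.toDouble / total

    require(nullRate < 0.01, f"customer_id null rate too high: $nullRate%.4f")
    require(dupRate  < 0.01, f"customer_id duplicate rate too high: $dupRate%.4f")
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("quality-check").getOrCreate()
    val df = spark.read.parquet("s3a://example-bucket/curated/customers/")
    check(df)
    spark.stop()
  }
}
```

Checks like this are usually wired into the pipeline as a gate before downstream publishing, so bad loads fail fast rather than propagating.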
QUALIFICATIONS
- 5-7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
- Strong background in math, statistics, computer science, data science or related discipline
- Advanced knowledge of one of the following languages: Java, Scala, Python, C#
- Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
- Proficient with
- Data mining/programming tools (e.g. SAS, SQL, R, Python)
- Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
- Data visualization (e.g. Tableau, Looker, MicroStrategy)
- Comfortable learning about and deploying new technologies and tools.
- Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines.
- Good written and oral communication skills and ability to present results to non-technical audiences
- Knowledge of business intelligence and analytical tools, technologies and techniques.
Familiarity and experience in the following is a plus:
- AWS certification
- Spark Streaming
- Kafka Streaming / Kafka Connect
- ELK Stack
- Cassandra / MongoDB
- CI/CD: Jenkins, GitLab, Jira, Confluence other related tools
Job Description
- Solid technical skills with a proven and successful history working with data at scale and empowering organizations through data
- Big data processing frameworks: Spark, Scala, Hadoop, Hive, Kafka, EMR with Python
- Advanced experience and hands-on architecture and administration experience on big data platforms
LogiNext is looking for a technically savvy and passionate Software Engineer - Data Science to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights.
In this role, you should be highly analytical with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine-learning and research.
Your goal will be to help our company analyze trends to make better decisions. Without knowledge of how the software works, data scientists may struggle in this role. Apart from experience developing in R and Python, they must understand modern approaches to software development and their impact. DevOps, continuous integration and deployment, and experience in cloud computing are everyday skills needed to manage and process data.
Responsibilities:
- Identify valuable data sources and automate collection processes
- Undertake preprocessing of structured and unstructured data
- Analyze large amounts of information to discover trends and patterns
- Build predictive models and machine-learning algorithms
- Combine models through ensemble modeling
- Present information using data visualization techniques
- Propose solutions and strategies to business challenges
- Collaborate with engineering and product development teams
Requirements:
- Bachelor's degree or higher in Computer Science, Information Technology, Information Systems, Statistics, Mathematics, Commerce, Engineering, Business Management, Marketing or a related field from a top-tier school
- 2 to 3 years of experience in data mining, data modeling, and reporting
- Understanding of SaaS-based products and services
- Understanding of machine learning and operations research
- Experience with R, SQL and Python; familiarity with Scala, Java or C++ is an asset
- Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
- Analytical mind, business acumen and problem-solving aptitude
- Excellent communication and presentation skills
- Proficiency in Excel for data management and manipulation
- Experience in statistical modeling techniques and data wrangling
- Able to work independently and set goals keeping business objectives in mind
LogiNext is looking for a technically savvy and passionate Senior Software Engineer - Data Science to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights.
In this role, you should be highly analytical with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine-learning and research.
Your goal will be to help our company analyze trends to make better decisions. Without knowledge of how the software works, data scientists may struggle in this role. Apart from experience developing in R and Python, they must understand modern approaches to software development and their impact. DevOps, continuous integration and deployment, and experience in cloud computing are everyday skills needed to manage and process data.
Responsibilities :
- Adapting and enhancing machine learning techniques based on physical intuition about the domain
- Design sampling methodology; prepare data, including data cleaning, univariate analysis and missing value imputation; identify appropriate analytic and statistical methodology; develop predictive models and document the process and results
- Lead projects both as a principal investigator and project manager, responsible for meeting project requirements on schedule and on budget
- Coordinate and lead efforts to innovate by deriving insights from heterogeneous sets of data generated by our suite of Aerospace products
- Support and mentor data scientists
- Maintain and work with our data pipeline that transfers and processes several terabytes of data using Spark, Scala, Python, Apache Kafka, Pig/Hive & Impala (see the illustrative sketch below)
- Work directly with application teams/partners (internal clients such as Xbox, Skype, Office) to understand their offerings/domain and help them become successful with data so they can run controlled experiments (A/B testing)
- Understand the data generated by experiments, and produce actionable, trustworthy conclusions from them
- Apply data analysis, data mining and data processing to present data clearly and develop experiments (A/B testing)
- Work with the development team to build tools for data logging and repeatable data tasks to accelerate and automate data scientist duties
Requirements:
- Bachelor's or Master's degree in Computer Science, Math, Physics, Engineering, Statistics or another technical field; PhD preferred
- 4 to 7 years of experience in data mining, data modeling, and reporting
- 3+ years of experience working with large data sets or doing large-scale quantitative analysis
- Expert SQL scripting required
- Development experience in one of the following: Scala, Java, Python, Perl, PHP, C++ or C#
- Experience working with Hadoop, Pig/Hive, Spark, MapReduce
- Ability to drive projects
- Basic understanding of statistics: hypothesis testing, p-values, confidence intervals, regression, classification, and optimization are core lingo
- Analysis: should be able to perform exploratory data analysis and get actionable insights from the data, with impressive visualization
- Modeling: should be familiar with ML concepts and algorithms; understanding of the internals and pros/cons of models is required
- Strong algorithmic problem-solving skills
- Experience manipulating large data sets through statistical software (e.g. R, SAS) or other methods
- Superior verbal, visual and written communication skills to educate and work with cross-functional teams on controlled experiments
- Experimentation design or A/B testing experience is preferred
- Experience in team management
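As an illustration of the combination of Spark/Scala pipelines and predictive modelling these two roles describe, here is a minimal Spark MLlib sketch; the input path, feature columns and label are hypothetical and only stand in for whatever the real feature set would be.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

// Illustrative Spark MLlib pipeline: assemble features, fit a logistic
// regression, and evaluate AUC on a held-out split. Input path, feature
// columns and label column are hypothetical.
object ChurnModel {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("churn-model").getOrCreate()

    val data = spark.read.parquet("s3a://example-bucket/features/churn/")
    val Array(train, test) = data.randomSplit(Array(0.8, 0.2), seed = 42L)

    val assembler = new VectorAssembler()
      .setInputCols(Array("orders_30d", "avg_basket_value", "days_since_last_order"))
      .setOutputCol("features")

    val lr = new LogisticRegression()
      .setLabelCol("churned")
      .setFeaturesCol("features")

    val model = new Pipeline().setStages(Array(assembler, lr)).fit(train)

    val auc = new BinaryClassificationEvaluator()
      .setLabelCol("churned")
      .evaluate(model.transform(test))

    println(s"Test AUC = $auc")
    spark.stop()
  }
}
```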
•Design and develop distributed, scalable, high availability web services.
•Work independently, completing small to mid-sized projects while managing competing priorities in a demanding production environment.
•You will be writing reusable and maintainable quality code.
What You'll Bring
•BS in CS (or equivalent) and 4+ years of hands-on software design and
development experience in building high-availability, scalable backend
systems.
•hands-on coding experience is a must.
•Expertise in working on Java technology stacks in Linux environment -
Java, Spring/ Hibernate, MVC frameworks, TestNG, JUnit.
•Expertise in Database Schema Design, performance efficiency, and SQL
working on leading RDBMS such as MySQL, Oracle, MSSQL, etc.
•Expertise in OOAP, Restful Web Services, and building scalable systems
Preferred Qualifications:
•Experience using Platforms such as Drools, Solr, Memcached, AKKA, Scala,
Kafka etc. is a plus
•Participation in and contributions to open-source software development.
Hiring For Data Engineer - Bangalore (Novel Tech Park)
Salary: Max up to 15 LPA
Experience: 3-5 years
- We are looking for experienced (3-5 years) Data Engineers to join our team in Bangalore.
- Someone who can help clients build scalable, reliable, and secure data analytics solutions.
Technologies you will get to work with:
1. Azure Databricks
2. Azure Data Factory
3. Azure DevOps
4. Spark with Python & Scala, and Airflow scheduling.
What You will Do: -
* Build large-scale batch and real-time data pipelines with data processing frameworks like Spark (Scala) on the Azure platform (see the streaming sketch after this list).
* Collaborate with other software engineers, ML engineers and stakeholders, taking learning and leadership opportunities that will arise every single day.
* Use best practices in continuous integration and delivery.
* Sharing technical knowledge with other members of the Data Engineering Team.
* Work in multi-functional agile teams to continuously experiment, iterate and deliver on new product objectives.
* You will get to work with massive data sets and learn to apply the latest big data technologies on a leading-edge platform.
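For context on the real-time side of the pipelines mentioned above, here is a minimal Spark Structured Streaming sketch in Scala; the Kafka source, topic, mount paths and windowing are assumptions made for illustration, not requirements of the role, and the Kafka connector (spark-sql-kafka) is assumed to be on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Sketch of a real-time pipeline with Spark Structured Streaming:
// read events from Kafka, count them per 5-minute window, and append
// the counts to Parquet. All names and paths are hypothetical.
object EventsStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("events-stream").getOrCreate()

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(value AS STRING) AS json", "timestamp")

    val counts = events
      .withWatermark("timestamp", "10 minutes")          // bound state for late data
      .groupBy(window(col("timestamp"), "5 minutes"))
      .count()

    counts.writeStream
      .outputMode("append")
      .format("parquet")
      .option("path", "/mnt/datalake/streams/event_counts/")
      .option("checkpointLocation", "/mnt/datalake/checkpoints/event_counts/")
      .start()
      .awaitTermination()
  }
}
```

One appeal of Structured Streaming for mixed workloads like this is that the same transformation logic can usually be reused for batch by swapping readStream/writeStream for read/write.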
Job Functions: Information Technology
Employment Type: Full-time
Who can apply: Seniority Level - Mid / Entry level
•3+ years of experience in big data & data warehousing technologies
•Experience in processing and organizing large data sets
•Experience with big data tool sets such as Airflow and Oozie
•Experience working with BigQuery, Snowflake or MPP, Kafka, Azure, GCP and AWS
•Experience developing in programming languages such as SQL, Python, Java or Scala
•Experience in pulling data from a variety of database systems like SQL Server and MariaDB, and NoSQL databases such as Cassandra
•Experience working with retail, advertising or media data at large scale
•Experience working with data science engineering, advanced data insights development
•A strong quality proponent who strives to impress with his/her work
•Strong problem-solving skills and ability to navigate complicated database relationships
•Good written and verbal communication skills; demonstrated ability to work with product management and/or business users to understand their needs.
Job responsibilities
- You will partner with teammates to create complex data processing pipelines in order to solve our clients' most complex challenges
- You will pair to write clean and iterative code based on TDD (see the test sketch after this list)
- Leverage various continuous delivery practices to deploy, support and operate data pipelines
- Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
- Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
- Create data models and speak to the tradeoffs of different modeling approaches
- Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process
- Encouraging open communication and advocating for shared outcomes
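As a small illustration of the TDD-style workflow referenced above, here is a hypothetical ScalaTest unit test for a trivial data transformation running on a local SparkSession; the test framework choice, column names and the transformation itself are assumptions, not part of the role.

```scala
import org.apache.spark.sql.SparkSession
import org.scalatest.funsuite.AnyFunSuite

// Minimal TDD-style unit test for a data transformation, using ScalaTest
// (org.scalatest dependency assumed) and a local SparkSession.
class DeduplicateSpec extends AnyFunSuite {

  private val spark = SparkSession.builder()
    .master("local[2]")
    .appName("dedup-test")
    .getOrCreate()

  import spark.implicits._

  test("deduplicate keeps one row per order_id") {
    val input = Seq(("o1", 10.0), ("o1", 10.0), ("o2", 5.0))
      .toDF("order_id", "amount")

    val result = input.dropDuplicates("order_id")

    assert(result.count() == 2)
  }
}
```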
Technical skills
- You have a good understanding of data modelling and experience with data engineering tools and platforms such as Spark (Scala) and Hadoop
- You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting
- Hands on experience in MapR, Cloudera, Hortonworks and/or cloud (AWS EMR, Azure HDInsights, Qubole etc.) based Hadoop distributions
- You are comfortable taking data-driven approaches and applying data security strategy to solve business problems
- Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems
- You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments
Professional skills
- You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives
- An interest in coaching, sharing your experience and knowledge with teammates
- You enjoy influencing others and always advocate for technical excellence while being open to change when needed
- Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more
- 5+ years of experience in a Data Engineering role on cloud environment
- Must have good experience in Scala/PySpark (preferably in a Databricks environment)
- Extensive experience with Transact-SQL.
- Experience in Databricks/Spark.
- Strong experience in data warehouse projects
- Expertise in database development projects with ETL processes.
- Manage and maintain data engineering pipelines
- Develop batch processing, streaming and integration solutions
- Experienced in building and operationalizing large-scale enterprise data solutions and applications
- Using one or more of Azure data and analytics services in combination with custom solutions
- Azure Data Lake, Azure SQL DW (Synapse), and SQL Database products or equivalent products from other cloud services providers
- In-depth understanding of data management (e.g. permissions, security, and monitoring).
- Cloud repositories, e.g. Azure GitHub, Git
- Experience in an agile environment (Prefer Azure DevOps).
Good to have
- Manage source data access security
- Automate Azure Data Factory pipelines
- Continuous Integration/Continuous deployment (CICD) pipelines, Source Repositories
- Experience in implementing and maintaining CICD pipelines
- Understanding of Power BI and the Delta Lakehouse architecture (see the sketch below)
- Knowledge of software development best practices.
- Excellent analytical and organization skills.
- Effective working in a team as well as working independently.
- Strong written and verbal communication skills.
- Expertise in database development projects and ETL processes.
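Purely as an illustration of the Databricks/Delta Lakehouse work this role touches on, here is a hypothetical Scala sketch of an initial load plus incremental upsert into a Delta table; the paths, merge key and the assumption that Delta Lake is available on the cluster (as it is on Databricks) are all illustrative.

```scala
import org.apache.spark.sql.SparkSession

// Sketch of a lakehouse-style load: write a Delta table once, then
// upsert (merge) incremental batches into it. Paths and the merge key
// are hypothetical; assumes the Delta Lake library is on the classpath.
object DeltaUpsert {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("delta-upsert").getOrCreate()

    val updates = spark.read.parquet("/mnt/landing/customers/")

    // Initial load: only writes if the target path does not exist yet.
    updates.write.format("delta").mode("ignore").save("/mnt/lake/customers")

    // Incremental upsert on subsequent runs.
    import io.delta.tables.DeltaTable
    DeltaTable.forPath(spark, "/mnt/lake/customers")
      .as("target")
      .merge(updates.as("source"), "target.customer_id = source.customer_id")
      .whenMatched().updateAll()
      .whenNotMatched().insertAll()
      .execute()

    spark.stop()
  }
}
```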
Hi,
Enterprise Minds is looking for Data Architect for Pune Location.
Req Skills:
Python, PySpark, Hadoop, Java, Scala
We are hiring for a Tier 1 MNC for a software developer with good knowledge of Spark, Hadoop and Scala.
Should have experience in Big Data and Hadoop.
Currently providing WFH.
Immediate joiners or 30 days' notice period.
- 15+ years of Hands-on technical application architecture experience and Application build/ modernization experience
- 15+ years of experience as a technical specialist in Customer-facing roles.
- Ability to travel to client locations as needed (25-50%)
- Extensive experience architecting, designing and programming applications in an AWS Cloud environment
- Experience with designing and building applications using AWS services such as EC2, AWS Elastic Beanstalk, AWS OpsWorks
- Experience architecting highly available systems that utilize load balancing, horizontal scalability and high availability
- Hands-on programming skills in any of the following: Python, Java, Node.js, Ruby, .NET or Scala
- Agile software development expert
- Experience with continuous integration tools (e.g. Jenkins)
- Hands-on familiarity with CloudFormation
- Experience with configuration management platforms (e.g. Chef, Puppet, Salt, or Ansible)
- Strong scripting skills (e.g. Powershell, Python, Bash, Ruby, Perl, etc.)
- Strong practical application development experience on Linux and Windows-based systems
- Extracurricular software development passion (e.g. active open source contributor)