More than 2 years of Spark/Scala experience.
A combination of Java and Scala is fine; a Big Data developer with strong Core Java concepts is also acceptable for the Scala/Spark Developer role.
Strong proficiency in Scala on Spark (Hadoop); Scala plus Java is also preferred
Familiarity with the complete SDLC process and Agile methodology (Scrum)
Version control with Git

About Accion Labs
Accion Labs, Inc. is a global technology firm headquartered in Pittsburgh.
- Winner of Fastest Growing Company in Pittsburgh; ranked #1 IT services company two years in a row (2014, 2015) by the Pittsburgh Business Times
- Venture-funded, profitable, and fast-growing, allowing you an opportunity to grow with us
- 11 global offices, 1300+ employees, 80+ tech company clients
- 90% of our clients are direct clients, and engagements are project-based
- Offers a full range of product life-cycle services in emerging technology segments, including Web 2.0, open source, SaaS/cloud, mobility, IT operations management/ITSM, big data and traditional BI/DW, automation engineering (Rackspace team), and DevOps engineering
Why Accion Labs:
- Emerging technology projects, e.g. Web 2.0, SaaS, cloud, mobility, BI/DW, and big data
- Great learning environment
- Onsite opportunities, depending on project requirements
- We invest in training our resources in the latest frameworks, tools, processes, and best practices, and also cross-train our resources across a range of emerging technologies, enabling you to develop more marketable skills
- Employee-friendly environment with 100% focus on work-life balance, life-long learning, and open communication
- Our employees interact directly with clients
Similar jobs
🚀 Hiring: Data Engineer | GCP + Spark + Python + .NET | 6–10 Yrs | Gurugram (Hybrid)
We’re looking for a skilled Data Engineer with strong hands-on experience in GCP, Spark-Scala, Python, and .NET.
📍 Location: Suncity, Sector 54, Gurugram (Hybrid – 3 days onsite)
💼 Experience: 6–10 Years
⏱️ Notice Period: Immediate joiner
Required Skills:
- 5+ years of experience in distributed computing (Spark) and software development.
- 3+ years of experience in Spark-Scala
- 5+ years of experience in Data Engineering.
- 5+ years of experience in Python.
- Fluency in working with databases (preferably Postgres).
- Have a sound understanding of object-oriented programming and development principles.
- Experience working in an Agile Scrum or Kanban development environment.
- Experience working with version control software (preferably Git).
- Experience with CI/CD pipelines.
- Experience with automated testing, including integration/delta, load, and performance testing
- Hands-on experience with Spark and SQL
- Good to have: Java knowledge
Roles and Responsibilities
Skills Required:
- 4+ years of technical experience in a developer role
- Strong proficiency with Core Java
- Database experience preferably with DB2, Sybase, or Oracle
- Familiarity with the complete SDLC process and Agile methodology (Scrum)
- Strong oral and written communication skills
- Excellent interpersonal skills and professional approach
- Bachelor’s degree in Computer Science, MIS, or other technology/engineering discipline
Skills Desired:
- Strong proficiency with Scala on Spark
- Previous experience with front-office and back-office reports
- Strong understanding of order life-cycle management from an equities or listed-derivatives perspective
- Previous experience in trade surveillance or working with data from the order lifecycle
- Good-to-have knowledge of Hadoop technologies
- High quality software architecture and design methodologies and patterns
- Work experience as level-3 support for applications
- Layered Architecture, Component based Architecture
- XML-based technologies
- Unix OS, Scripting, Python or Perl
- Experience in development on other application types (Web applications, batch, or streaming)
Software Developer
Roles and Responsibilities
- Apply knowledge set to fetch data from multiple online sources, cleanse it and build APIs on top of it
- Develop a deep understanding of our vast data sources on the web and know exactly how, when, and which data to scrape, parse, and store
- We're looking for people who will naturally take ownership of data products and who can bring a project all the way from a fast prototype to production.
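The scrape-parse-store flow described above can be sketched with the standard library alone. This is a minimal illustration: the HTML snippet and the `price` field are made up for the example, and a real pipeline would fetch live pages and typically use a framework such as Requests plus Beautiful Soup.

```python
from html.parser import HTMLParser

class PriceParser(HTMLParser):
    """Collect the text of every <span class="price"> element."""

    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_price = False

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())

# Illustrative snippet standing in for a fetched page
html = '<div><span class="price">19.99</span><span class="price">5.00</span></div>'
parser = PriceParser()
parser.feed(html)
print(parser.prices)  # cleansed values, ready to store or serve via an API
```

The same shape scales up: the parser class becomes a per-site extractor, and the collected values feed a database or API layer.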
Desired Candidate Profile
- At least 1–2 years of experience
- Strong coding experience in Python (knowledge of JavaScript is a plus)
- Strong knowledge of scraping frameworks in Python (Requests, Beautiful Soup)
- Experience with SQL and NoSQL databases
- Knowledge of version control tools like Git
- Good understanding of, and hands-on experience with, scheduling and managing tasks with cron
Nice to have:
- Experience working with Elasticsearch
- Experience with multi-processing, multi-threading, and AWS/Azure is a plus
- Experience with web crawling is a plus
- Deploying server and related components to staging and live environments
- Experience with cloud environments like AWS, as well as tools like Docker, Lambda, etc.
- Experience in DevOps and related practices to improve the development lifecycle and deliver continuously with high quality is an advantage
- Compile and analyze data, processes, and code to troubleshoot problems and identify areas for improvement
- Candidates should have 5+ years of experience in Core Java
- You will need strong development skills to work on and improve our Scala-based services, and be able to work together with senior teammates to create appropriate architectural design and ensure all aspects are appropriate to meet the business need.
- Excellent Functional Design and Functional Programming skills (more than 2 years of business experience in Scala and Java projects, respectively)
- Core skills in key supporting technologies and/or frameworks such as Play (Akka) / Lagom
- Proven experience working in teams in the successful delivery of complex, performant and high quality products
- Excellent spoken and written communication skills
- Experience of SaaS (Software as a Service) environments
- Exposure to RESTful web APIs and a service oriented architecture
- Experience in Linux environments, Shell scripting etc
- Working with XML and JSON including parsing, asserting / matching and extracting
- Experience with Continuous Integration environments and build tools, including Terraform, Jenkins, Maven, Gradle and Ant
- Experience with messaging systems such as Apache Kafka, Amazon Kinesis, Amazon SQS, and RabbitMQ
- Experience working on Live platform SDKs such as Twilio, AWS Elemental
- Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field; at least 3 years of relevant experience in lieu of the above if from a different stream of education.
- Well-versed in, with 3+ years of hands-on, demonstrable experience with:
  ▪ Stream and batch big-data pipeline processing using Apache Spark and/or Apache Flink
  ▪ Distributed cloud-native computing, including serverless functions
  ▪ Relational, object-store, document, graph, etc. database design and implementation
  ▪ Microservices architecture; API modeling, design, and programming
- 3+ years of hands-on development experience in Apache Spark using Scala and/or Java.
- Ability to write executable code for services using Spark RDD, Spark SQL, Structured Streaming, Spark MLlib, etc., with a deep technical understanding of the Spark processing framework.
- In-depth knowledge of standard programming languages such as Scala and/or Java.
- 3+ years of hands-on development experience in one or more libraries and frameworks such as Apache Kafka, Akka, Apache Storm, Apache NiFi, ZooKeeper, and the Hadoop ecosystem (i.e., HDFS, YARN, MapReduce, Oozie, and Hive); extra points if you can demonstrate your knowledge with working examples.
- 3+ years of hands-on development experience in one or more relational and NoSQL datastores such as PostgreSQL, Cassandra, HBase, MongoDB, DynamoDB, Elasticsearch, Neo4j, etc.
- Practical knowledge of distributed systems, including partitioning, bucketing, the CAP theorem, replication, horizontal scaling, etc.
- Passion for distilling large volumes of data and for analyzing performance, scalability, and capacity issues in big-data platforms.
- Ability to clearly distinguish system performance from Spark job performance, and to perform Spark performance tuning and resource optimization.
- Perform benchmarking/stress tests and document best practices for different applications.
- Proactively work with tenants on improving overall performance, and ensure the system is resilient and scalable.
- Good understanding of virtualization and containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
- Well-versed, with demonstrable working experience, in API management, API gateways, service mesh, identity and access management, and data protection and encryption.
- Hands-on, demonstrable working experience with DevOps tools and platforms, viz. Jira, Git, Jenkins, code-quality and security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, etc.
- Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience in at least five (5) services offered under AWS, Azure, or Google Cloud in any of these categories: compute, storage, database, networking and content delivery, management and governance, analytics, or security, identity, and compliance (or equivalent demonstrable cloud-platform experience).
- Good understanding of storage, networks, and storage-networking basics, which will enable you to work in a cloud environment.
- Good understanding of network, data, and application security basics, which will enable you to work in a cloud as well as a business applications / API services environment.
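The distributed-systems fundamentals mentioned above (partitioning, replication, horizontal scaling) can be sketched in a few lines of plain Python. This is an illustrative toy, not any particular system's implementation; the broker names, hash choice, and assignment scheme are assumptions, loosely modeled on how Kafka maps keys to partitions and partitions to brokers.

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Map a record key to a partition: hash(key) mod N.
    The same key always lands on the same partition."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

def replicas_for(partition: int, brokers: list, replication_factor: int) -> list:
    """Assign a partition to `replication_factor` consecutive brokers,
    wrapping around the broker ring."""
    n = len(brokers)
    return [brokers[(partition + i) % n] for i in range(replication_factor)]

brokers = ["broker-0", "broker-1", "broker-2"]
p = partition_for("order-42", num_partitions=6)
print(p, replicas_for(p, brokers, replication_factor=2))
```

Horizontal scaling falls out of the same idea: adding partitions or brokers spreads keys and replicas over more machines, at the cost of the rebalancing work the CAP-theorem and replication bullets hint at.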
Software Development Engineer – SDE 2.
As a Software Development Engineer at Amazon, you have industry-leading technical abilities and demonstrate breadth and depth of knowledge. You build software to deliver business impact, making smart technology choices. You work in a team and drive things forward.
Top Skills
You write high-quality, maintainable, and robust code, often in Java, C++, or C#
You recognize and adopt best practices in software engineering: design, testing, version control, documentation, build, deployment, and operations.
You have experience building scalable software systems that are high-performance, highly-available, highly transactional, low latency and massively distributed.
Roles & Responsibilities
You solve problems at their root, stepping back to understand the broader context.
You develop pragmatic solutions and build flexible systems that balance engineering complexity and timely delivery, creating business impact.
You understand a broad range of data structures and algorithms and apply them to deliver high-performing applications.
You recognize and use design patterns to solve business problems.
You understand how operating systems work, perform and scale.
You continually align your work with Amazon’s business objectives and seek to deliver business value.
You collaborate to ensure that decisions are based on the merit of the proposal, not the proposer.
You proactively support knowledge-sharing and build good working relationships within the team and with others in Amazon.
You communicate clearly with your team and with other groups and listen effectively.
Skills & Experience
Bachelor's or master's degree in Computer Science or a relevant technical field.
Experience in software development and full product life-cycle.
Excellent programming skills in any object-oriented programming languages - preferably Java, C/C++/C#, Perl, Python, or Ruby.
Strong knowledge of data structures, algorithms, and designing for performance, scalability, and availability.
Proficiency in SQL and data modeling.
• Proficient in software development from inception to production releases using modern programming languages (preferably Java, NodeJS, and Scala)
• Hands-on experience with cloud infrastructure and solution architecture on AWS or Azure
• Prior experience working as a full-stack engineer building cloud-native SaaS products
• Expertise in programming and designing circuit breakers, the localized impact of failures, service mesh, event sourcing, distributed data transactions, and eventual consistency
• Proficient in designing and developing SaaS on a microservices architecture
• Proficient in building fault tolerance, high availability, and autoscaling for microservices
• Proficient in data modelling for distributed computing
• Deep hands-on experience with microservices in Spring Boot and with large-scale projects in the Spring Framework
• Fluency in cloud-native solution architecture; designing HA and fault-tolerant deployment topologies for API Gateway, Kafka, and Spark clusters in the cloud
• Fluency in AWS, Azure, serverless functions in AWS or Azure, and in Docker and Kubernetes
• Avid practitioner and coach of test-driven development
• Deep understanding of modeling real-world scheduling and process problems as algorithms running on memory- and compute-efficient data structures
• We value polyglot engineers, so experience programming in more than one language is a must, preferably one of Groovy, Scala, Python, or Kotlin
• Excellent communication skills and a collaborative temperament
• Ability to articulate technical matters to business stakeholders and translate business concerns into technical specifications
• Proficiency in working with cross-functional teams on refining initiatives into objective features
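One of the resilience patterns named above, the circuit breaker, can be sketched minimally in Python. The thresholds and error types here are illustrative assumptions; production services would normally reach for a library (e.g. Resilience4j in the Spring stack this listing mentions) rather than hand-rolling one.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after `max_failures` consecutive
    failures and rejects calls until `reset_timeout` seconds elapse,
    localizing the impact of a failing downstream dependency."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: call rejected")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

Wrapping a flaky downstream call in `breaker.call(...)` means that after repeated failures the service fails fast instead of piling up timed-out requests, which is exactly the localized-failure behavior the bullet list asks candidates to design for.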
Good To Have:
• Hands-on experience with Continuous Delivery and DevOps automation
• SRE and Observability implementation experience
• Refactoring Legacy products to microservices









