- 2+ years of Spark/Scala experience.
- A combination of Java and Scala is fine; we are also open to a Big Data developer with strong Core Java concepts (Scala/Spark Developer role).
- Strong proficiency in Scala on Spark (Hadoop); Scala plus Java is also preferred.
- Familiarity with the complete SDLC process and Agile methodology (Scrum).
- Version control with Git.
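For context, the Spark/Scala work this role calls for is largely functional map/reduce-style pipelines. A minimal sketch of that style, shown on plain Scala collections so it runs without a cluster (in real Spark the same combinators apply to RDDs/Datasets loaded via e.g. `sc.textFile`; all names here are illustrative):

```scala
object WordCountSketch {
  // Count words across lines, in the same flatMap/map/reduce style
  // used by Spark's RDD and Dataset APIs.
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split("\\s+"))      // like rdd.flatMap
      .filter(_.nonEmpty)
      .map(w => (w.toLowerCase, 1))  // like rdd.map
      .groupBy(_._1)                 // like rdd.reduceByKey
      .map { case (w, pairs) => (w, pairs.map(_._2).sum) }

  def main(args: Array[String]): Unit = {
    val counts = wordCount(Seq("Spark and Scala", "spark on Hadoop"))
    println(counts("spark")) // 2
  }
}
```

The same shape transfers almost verbatim to Spark, which is why strong Core Java plus this functional collection style is considered a workable starting point.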

About Accion Labs
Accion Labs, Inc. is a global technology firm headquartered in Pittsburgh, ranked the number-one IT company in the region.
Accion Labs, Inc.:
- Winner of Fastest Growing Company in Pittsburgh; ranked #1 IT services company two years in a row (2014, 2015) by the Pittsburgh Business Times.
- Venture-funded, profitable, and fast-growing, giving you an opportunity to grow with us.
- 11 global offices, 1300+ employees, 80+ tech company clients; 90% of our clients are direct clients with project-based engagements.
- Offering a full range of product life-cycle services in emerging technology segments including Web 2.0, open source, SaaS/cloud, mobility, IT operations management/ITSM, big data and traditional BI/DW, automation engineering (Rackspace team), and DevOps engineering.
Employee strength: 1300+ employees
Why Accion Labs:
- Emerging technology projects i.e. Web 2.0, SaaS, cloud, mobility, BI/DW and big data
- Great learning environment
- Onsite opportunities, depending entirely on project requirements
- We invest in training our resources in the latest frameworks, tools, processes, and best practices, and cross-train them across a range of emerging technologies, enabling you to develop more marketable skills
- Employee friendly environment with 100% focus on work-life balance, life-long learning and open communication
- Allow our employees to directly interact with clients
Key Responsibilities
- Provide technical leadership in the design, development, and delivery of scalable, high-performance software systems.
- Partner with product managers, architects, and cross-functional teams to define technical strategy and ensure alignment with business objectives.
- Lead by example in writing high-quality, testable, and maintainable code.
- Drive best practices in software engineering, including code reviews, system design, and performance optimization.
- Mentor and guide engineers across teams, fostering a culture of technical excellence and continuous learning.
- Evaluate and introduce new technologies, tools, and frameworks to improve productivity, scale and system robustness.
Required Skills & Qualifications
- Strong foundation in computer science fundamentals: data structures, algorithms, and functional programming techniques.
- Expertise in Scala, with strong preference for functional programming.
- Solid experience in software design, implementation, and debugging, including inter-process communication and multi-threading.
- Hands-on experience with distributed systems and event-driven architectures.
- Familiarity with databases (Postgres preferred).
- Proficiency with Apache Kafka for messaging and persistence.
- Working knowledge of Python for unit and integration testing.
- Basic to intermediate experience with Ansible for automation.
- Strong problem-solving, analytical, and communication skills.
Nice-to-Have / Bonus Skills
- Experience with modeling in YANG.
- Experience with Scala libraries such as Cats Effect (2/3), Monix, and Akka.
- Experience working in Agile/Scrum environments.
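Since several of these requirements emphasize functional programming (and libraries like Cats Effect build on the same ideas), here is a small sketch of that style: immutable data, total functions, and errors modeled as values with `Either` rather than exceptions. It uses only the standard library; with Cats Effect the same shape would typically appear as `IO[Either[E, A]]` or `EitherT`. The names below are illustrative.

```scala
object FpStyleSketch {
  final case class User(name: String, age: Int)

  // Validation as a value: no exceptions, failures carried in Left.
  def parseAge(raw: String): Either[String, Int] =
    raw.toIntOption
      .toRight(s"not a number: $raw")
      .filterOrElse(_ >= 0, s"negative age: $raw")

  // Sequencing fallible steps with a for-comprehension.
  def mkUser(name: String, rawAge: String): Either[String, User] =
    for {
      age <- parseAge(rawAge)
    } yield User(name, age)

  def main(args: Array[String]): Unit = {
    println(mkUser("Ada", "36"))  // Right(User(Ada,36))
    println(mkUser("Bob", "ten")) // Left(not a number: ten)
  }
}
```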
What We Offer
- Opportunity to work on cutting-edge technologies in a collaborative environment.
- A role with strong ownership, technical influence, and visibility across teams.
- Competitive compensation and benefits.
● Design and deliver scalable web services, APIs, and backend data modules. Understand requirements, develop reusable code using design patterns and component architecture, and write unit test cases.
● Collaborate with product management and engineering teams to elicit and understand their requirements and challenges, and develop potential solutions.
● Stay current with the latest tools, technology ideas, and methodologies; share knowledge by clearly articulating results and ideas to key decision-makers.
Requirements
● 3-6 years of strong experience developing highly scalable backend and middle-tier systems.
● BS/MS in Computer Science or equivalent from premier institutes.
● Strong in problem solving, data structures, and algorithm design.
● Strong experience in system architecture, web services development, and highly scalable distributed applications.
● Good knowledge of large data systems such as Hadoop, MapReduce, and NoSQL stores (e.g., Cassandra).
● Fluency in Java, Spring, Hibernate, J2EE, and REST services.
● Ability to deliver code quickly from given scenarios in a fast-paced start-up environment.
● Attention to detail. Strong communication and collaboration skills.
• Proficient in software development from inception to production releases using modern programming languages (preferably Java, NodeJS, and Scala)
• Hands-on experience with cloud infrastructure and solution architecture on AWS or Azure
• Prior experience working as a full-stack engineer building cloud-native SaaS products
• Expertise in programming and designing circuit breakers, localizing the impact of failures, service mesh, event sourcing, distributed data transactions, and eventual consistency
• Proficient in designing and developing SaaS on a microservices architecture
• Proficient in building fault tolerance, high availability, and autoscaling for microservices
• Proficient in data modelling for distributed computing
• Deep hands-on experience with microservices in Spring Boot and with large-scale projects in the Spring Framework
• Fluency in cloud-native solution architecture: designing HA and fault-tolerant deployment topologies for API Gateway, Kafka, and Spark clusters in the cloud
• Fluency in AWS, Azure, serverless functions (AWS or Azure), Docker, and Kubernetes
• Avid practitioner and coach of test-driven development
• Deep understanding of modeling real-world scheduling and process problems into algorithms running on memory- and compute-efficient data structures
• We value polyglot engineers, so experience programming in more than one language is a must, preferably one of Groovy, Scala, Python, or Kotlin
• Excellent communication skills and a collaborative temperament
• Ability to articulate technical matters to business stakeholders and to translate business concerns into technical specifications
• Proficiency in working with cross-functional teams to refine initiatives into objective features
Good To Have:
• Hands-on experience with Continuous Delivery and DevOps automation
• SRE and Observability implementation experience
• Experience refactoring legacy products to microservices
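The circuit-breaker pattern named in the requirements above can be sketched in a few lines of plain Scala: after a set number of consecutive failures the breaker opens and fails fast instead of calling the downstream service. Production implementations (e.g. resilience4j or Akka's `CircuitBreaker`) add half-open probing and reset timeouts; this minimal, illustrative version tracks only the closed/open transition.

```scala
// Minimal circuit breaker: opens after `maxFailures` consecutive failures.
final class CircuitBreaker(maxFailures: Int) {
  private var failures = 0

  def isOpen: Boolean = failures >= maxFailures

  // Run the operation unless the breaker is open; failures are
  // returned as values rather than rethrown.
  def call[A](op: () => A): Either[String, A] =
    if (isOpen) Left("circuit open: failing fast")
    else
      try { val a = op(); failures = 0; Right(a) } // success resets the count
      catch { case e: Exception => failures += 1; Left(e.getMessage) }
}

object CircuitBreakerDemo {
  def main(args: Array[String]): Unit = {
    val cb = new CircuitBreaker(maxFailures = 2)
    cb.call(() => throw new RuntimeException("boom"))
    cb.call(() => throw new RuntimeException("boom"))
    println(cb.isOpen)           // true: downstream is no longer called
    println(cb.call(() => "ok")) // Left(circuit open: failing fast)
  }
}
```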

Basic Qualifications
Candidates for this role must have:
- Bachelor’s degree in computer science or a related field
- At least three years of experience writing production code of increasing complexity
- Strong understanding of computer science fundamentals, including algorithms, complexity analysis, data structures, problem solving, and object-oriented analysis and design
- Proficiency in at least one of the following: Java, C, C++, C#, Ruby, Scala, Python
- Experience refactoring code and evolving architectures
- Experience crafting unit, integration, functional and regression tests
Preferred Qualifications
- Master’s degree in computer science or a related field
- Experience developing in a Linux environment
- Experience applying service-oriented architecture techniques to production use cases
- Experience in developing secure, multi-platform mobile applications
Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the Data Lake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team offers many opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low latency access to distributed storage, auto scaling, and self healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
- Lead the team to solve complex and unknown problems
- Solve technical problems and customer issues with technical expertise
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Mentor other team members for high quality and design
- Collaborate with Product Management to deliver on customer requirements and innovation
- Collaborate with Support and field teams to ensure that customers are successful with Dremio
Requirements
- B.S./M.S/Equivalent in Computer Science or a related technical field or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years experience in developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed file systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems, and to mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team
About Vymo
Vymo is a San Francisco-based next-generation sales productivity SaaS company with offices in 7 locations. Vymo is funded by top-tier VC firms such as Emergence Capital and Sequoia Capital. Vymo is a category creator: an intelligent Personal Sales Assistant that captures sales activities automatically, learns from top performers, and predicts 'next best actions' contextually. Vymo has 100,000 users across 60+ large enterprises such as AXA, Allianz, and Generali. Vymo has seen 3x annual growth over the last few years and aspires to do even better this year by building up the team globally.
What is the Personal Sales Assistant
A game-changer! We thrive in the CRM space, where every company is struggling to deliver meaningful engagement to their sales teams and IT systems. Vymo was engineered with a mobile-first philosophy: through AI/ML, the platform detects, predicts, and learns how to make sales representatives more productive through nudges and suggestions on a mobile device. Explore Vymo: https://getvymo.com/
What you will do at Vymo
From young open source enthusiasts to experienced Googlers, this team develops products like Lead Management System, Intelligent Allocations & Route mapping, Intelligent Interventions, that help improve the effectiveness of the sales teams manifold. These products power the "Personal Assistant" app that automates the sales force activities, leveraging our cutting edge location based technology and intelligent routing algorithms.
A Day in your Life
- Design, develop and maintain robust data platforms on top of Kafka, Spark, ES etc.
- Provide leadership to a group of engineers in an innovative and fast-paced environment.
- Manage and drive complex technical projects from the planning stage through execution.
What you would have done
- B.E. (or equivalent) in Computer Science
- 6-9 years of experience building enterprise class products/platforms.
- Knowledge of Big data systems and/or Data pipeline building experience is preferred.
- 2-3 years of relevant work experience as technical lead or technical management experience.
- Excellent coding skills in one of Core Java or NodeJS
- Demonstrated problem solving skills in previous roles.
- Good communication skills.
Requirements:
- Academic degree (BE/MCA) with 3-10 years of experience in back-end development.
- Strong knowledge of OOP concepts: analysis, design, development, and unit testing.
- Scala technologies: Akka, REST web services, SOAP, Jackson JSON API, JUnit, Mockito, Maven.
- Hands-on experience with Play framework
- Familiarity with Microservice Architecture
- Experience working with Apache Tomcat server or TomEE
- Experience working with SQL databases (MySQL, PostgreSQL, Cassandra), writing custom queries, procedures, and designing schemas.
- Good to have: front-end experience (JavaScript, AngularJS/ReactJS)
