- More than 2 years of Spark/Scala experience.
- A combination of Java and Scala is fine; we are also open to a Big Data developer with strong Core Java concepts for this Scala/Spark Developer role.
- Strong proficiency in Scala on Spark (Hadoop); Scala plus Java is also preferred.
- Familiarity with the complete SDLC process and Agile methodology (Scrum).
- Version control with Git.

About Accion Labs
Accion Labs, Inc. is a global technology firm headquartered in Pittsburgh, ranked the number one IT company in the region.
- Winner of the fastest-growing company award in Pittsburgh; ranked the #1 IT services company two years in a row (2014, 2015) by the Pittsburgh Business Times.
- Venture-funded, profitable, and fast-growing, giving you an opportunity to grow with us.
- 11 global offices, 1300+ employees, and 80+ tech company clients; 90% of our clients are direct clients on project-based engagements.
- Offers a full range of product life-cycle services in emerging technology segments, including Web 2.0, open source, SaaS/cloud, mobility, IT operations management/ITSM, big data and traditional BI/DW, automation engineering (Rackspace team), and DevOps engineering.
Employee strength: 1300+
Why Accion Labs:
- Emerging technology projects, e.g., Web 2.0, SaaS, cloud, mobility, BI/DW, and big data
- Great learning environment
- Onsite opportunities, depending on project requirements
- We invest in training our resources in the latest frameworks, tools, processes, and best practices, and cross-train them across a range of emerging technologies, enabling you to develop more marketable skills
- Employee-friendly environment with 100% focus on work-life balance, life-long learning, and open communication
- Our employees interact directly with clients
Position: Software Engineer (Java Backend Engineer)
Experience: 4+ years
Location: Bangalore, India (Hybrid)
Mandatory Skills: Java 8+ (advanced features), Spring Boot, Apache Spark (Spark Streaming), SQL & Cosmos DB, Git, Maven, CI/CD (Jenkins, GitHub), Azure Cloud, Agile Scrum.
About the Role:
We are seeking a highly skilled Backend Engineer with expertise in Java, Spark, and microservices architecture to join our dynamic team. The ideal candidate will have a strong background in object-oriented programming, experience with Spark Streaming, and a deep understanding of distributed systems and cloud technologies.
Key Responsibilities :
- Design, develop, and maintain highly scalable microservices and optimized RESTful APIs using Spring Boot and Java 8+.
- Implement and optimize Spark Streaming applications for real-time data processing.
- Utilize advanced Java 8 features (a brief sketch follows this list), including:
- Functional interfaces & lambda expressions
- Streams and parallel streams
- CompletableFuture & concurrency API improvements
- Enhanced Collections APIs
- Work with relational (SQL) and NoSQL (Cosmos DB) databases, ensuring efficient data modeling and retrieval.
- Develop and manage CI/CD pipelines using Jenkins, GitHub, and related automation tools.
- Collaborate with cross-functional teams, including Product, Business, and Automation, to deliver end-to-end product features.
- Ensure adherence to Agile Scrum practices and participate in code reviews to maintain high-quality standards.
- Deploy and manage applications in Azure Cloud environments.
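To make the Java 8 items above concrete, here is a minimal, self-contained sketch (all class and variable names are hypothetical) touching functional interfaces, lambdas, streams, parallel streams, and CompletableFuture:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class Java8FeaturesSketch {

    // A functional interface, implemented below with a lambda expression.
    @FunctionalInterface
    interface Classifier {
        String label(double amount);
    }

    public static void main(String[] args) {
        List<Double> amounts = Arrays.asList(120.0, 40.0, 310.0, 75.0);
        Classifier classifier = amount -> amount >= 100 ? "large" : "small";

        // Streams: group order amounts by their label.
        Map<String, List<Double>> byLabel = amounts.stream()
                .collect(Collectors.groupingBy(classifier::label));

        // Parallel streams: aggregate across cores.
        double total = amounts.parallelStream().mapToDouble(Double::doubleValue).sum();

        // CompletableFuture: compose asynchronous steps without blocking.
        CompletableFuture<String> report = CompletableFuture
                .supplyAsync(byLabel::toString)
                .thenApply(s -> s + " | total=" + total);

        System.out.println(report.join());
    }
}
```

The parallel stream is shown only for illustration; for an input this small, a sequential stream would be faster in practice.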
Minimum Qualifications:
- BS/MS in Computer Science or a related field.
- 4+ years of experience developing backend applications with Spring Boot and Java 8+.
- 3+ years of hands-on experience with Git for version control.
- Strong understanding of software design patterns and distributed computing principles.
- Experience with Maven for building and deploying artifacts.
- Proven ability to work in Agile Scrum environments with a collaborative team mindset.
- Prior experience with Azure Cloud Technologies.
We are seeking a skilled Java Developer with hands-on experience in Java and Spark to build scalable data processing solutions. You'll contribute to high-performance data pipelines and analytics platforms in a dynamic Agile environment.
Key Responsibilities
- Design and develop Java applications integrated with Apache Spark for ETL processes, data transformations, and analytics.
- Build and optimize Spark jobs (Spark SQL, DataFrames, Streaming) for large-scale data processing; a brief sketch follows this list.
- Collaborate with data engineers and analysts to implement robust data workflows.
- Write clean, maintainable Java code following best practices (Spring Boot, Microservices preferred).
- Perform code reviews, unit testing, and contribute to CI/CD pipelines.
- Troubleshoot and optimize Spark performance for production workloads.
- Document technical solutions and mentor junior developers.
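As a rough illustration of the Spark work described above, here is a minimal Java sketch of a batch ETL job using Spark SQL and DataFrames (the app name, file paths, and column names are hypothetical placeholders):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.avg;
import static org.apache.spark.sql.functions.col;

public class OrdersEtlSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("orders-etl-sketch")
                .getOrCreate();

        // Read raw events; path and schema are placeholders.
        Dataset<Row> orders = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("/data/raw/orders.csv");

        // Transform: drop bad rows, then aggregate with Spark SQL functions.
        Dataset<Row> avgByCustomer = orders
                .filter(col("amount").gt(0))
                .groupBy(col("customer_id"))
                .agg(avg(col("amount")).alias("avg_amount"));

        // Write the result for downstream analytics.
        avgByCustomer.write().mode("overwrite").parquet("/data/curated/avg_by_customer");

        spark.stop();
    }
}
```

A streaming variant would read from a structured streaming source instead of `spark.read()`; most production effort goes into the performance work the list above mentions (partitioning, caching, shuffle tuning).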
Required Skills & Qualifications
- 4-7 years of hands-on Java development experience.
- Strong expertise in Apache Spark (Spark Core, Spark SQL, PySpark basics).
- Proficiency in Java 8/11+ with multithreading and the collections framework (see the sketch after this list).
- Experience with data processing (ETL, data pipelines, big data).
- Familiarity with build tools (Maven/Gradle) and version control (Git).
- Strong problem-solving skills; availability to work from Bangalore.
- Excellent communication skills for cross-team collaboration.
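As a small illustration of the multithreading-plus-collections proficiency above, here is a self-contained sketch (names are hypothetical) that counts words across a fixed thread pool using a thread-safe map:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentWordCountSketch {
    public static void main(String[] args) throws InterruptedException {
        List<String> lines = Arrays.asList("spark java", "java etl", "spark sql");

        // Thread-safe collection shared across worker threads.
        Map<String, Integer> counts = new ConcurrentHashMap<>();

        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (String line : lines) {
            pool.submit(() -> {
                for (String word : line.split("\\s+")) {
                    // merge() performs an atomic read-modify-write per key.
                    counts.merge(word, 1, Integer::sum);
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(counts); // e.g. {spark=2, java=2, etl=1, sql=1}
    }
}
```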
Good to Have
- Experience with Snowflake for cloud data warehousing.
- Knowledge of DBT (Data Build Tool) for analytics engineering.
- Python scripting for data manipulation and automation.
- Exposure to AWS/GCP/Azure cloud platforms.
- Familiarity with Kafka, Airflow, or containerization (Docker/Kubernetes).
- Core responsibilities include analyzing business requirements and designs for accuracy and completeness, and developing and maintaining the relevant product.
- BlueYonder is seeking a Senior/Principal Architect in the Data Services department (under the Luminate Platform) to act as one of the key technology leaders building and managing BlueYonder's technology assets in the Data Platform and Services.
- This individual will act as a trusted technical advisor and strategic thought leader to the Data Services department. The successful candidate will have the opportunity to lead, participate, guide, and mentor others on the team on architecture and design in a hands-on manner, and will be responsible for the technical direction of the Data Platform. This position reports to the Global Head, Data Services and will be based in Bangalore, India.
- Core responsibilities include architecting and designing (along with counterparts and distinguished architects) a ground-up, cloud-native (we use Azure) SaaS product in order management and micro-fulfillment.
- The team currently comprises 60+ global associates across the US, India (COE), and the UK, and is expected to grow rapidly. The incumbent will need leadership qualities to mentor junior and mid-level software associates on our team, and will lead the Data Platform architecture (streaming and bulk) with Snowflake, Elasticsearch, and other tools.
Our current technical environment:
- Software: Java, Spring Boot, Gradle, Git, Hibernate, REST API, OAuth, Snowflake
- Application Architecture: scalable, resilient, event-driven, secure multi-tenant microservices architecture (a minimal sketch follows this list)
- Cloud Architecture: MS Azure (ARM templates, AKS, HDInsight, Application Gateway, Virtual Networks, Event Hub, Azure AD)
- Frameworks/Others: Kubernetes, Kafka, Elasticsearch, Spark, NoSQL, RDBMS, Spring Boot, Gradle, Git, Ignite
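For flavor, here is a minimal sketch of a service in this Java/Spring Boot stack (the endpoint, class name, and payload are invented for illustration; it assumes the spring-boot-starter-web dependency and omits the OAuth security, Hibernate persistence, and multi-tenancy a real service would need):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class OrderServiceSketch {

    public static void main(String[] args) {
        SpringApplication.run(OrderServiceSketch.class, args);
    }

    // Minimal REST endpoint returning a hard-coded status payload.
    @GetMapping("/orders/{id}/status")
    public String orderStatus(@PathVariable String id) {
        return "{\"orderId\":\"" + id + "\",\"status\":\"FULFILLING\"}";
    }
}
```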
- Augmenting, improving, redesigning, and/or re-implementing Dolat's low-latency/high-throughput production trading environment, which collects data from and disseminates orders to exchanges around the world
- Optimizing this platform by using network and systems programming, as well as other advanced techniques
- Developing systems that provide easy access to historical market data and trading simulations
- Building risk-management and performance-tracking tools
- Shaping the future of Dolat through regular interviewing and infrequent campus recruiting trips
- Implementing domain-optimized data structures (a minimal sketch follows this list)
- Learning and internalizing the theories behind the current trading system
- Participating in the design, architecture, and implementation of automated trading systems
- Taking ownership of systems from design through implementation
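As a toy example of a domain-optimized data structure in this setting, here is a self-contained sketch (the names and the averaging "signal" are invented for illustration) of a fixed-capacity ring buffer over a primitive array, which avoids per-tick object allocation and the GC pauses that hurt low-latency paths:

```java
public class TickRingBuffer {
    private final double[] prices;
    private int head; // index of the next write
    private int size; // number of valid entries, up to capacity

    public TickRingBuffer(int capacity) {
        this.prices = new double[capacity];
    }

    /** Records a price, overwriting the oldest entry when full. */
    public void add(double price) {
        prices[head] = price;
        head = (head + 1) % prices.length;
        if (size < prices.length) size++;
    }

    /** Average over the retained window; a stand-in for a real signal. */
    public double average() {
        if (size == 0) return Double.NaN;
        double sum = 0;
        for (int i = 0; i < size; i++) sum += prices[i];
        return sum / size;
    }

    public static void main(String[] args) {
        TickRingBuffer window = new TickRingBuffer(3);
        for (double p : new double[] {100.0, 100.5, 101.0, 99.5}) window.add(p);
        System.out.println(window.average()); // averages the last 3 ticks
    }
}
```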
As a Software Development Engineer at Amazon, you have industry-leading technical abilities and demonstrate breadth and depth of knowledge. You build software to deliver business impact, making smart technology choices. You work in a team and drive things forward.
Top Skills
- You write high-quality, maintainable, and robust code, often in Java, C++, C, Python, RoR, or C#.
- You recognize and adopt best practices in software engineering: design, testing, version control, documentation, build, deployment, and operations.
- You have experience building scalable software systems that are high-performance, highly available, highly transactional, low-latency, and massively distributed.
Roles & Responsibilities
- You solve problems at their root, stepping back to understand the broader context.
- You develop pragmatic solutions and build flexible systems that balance engineering complexity and timely delivery, creating business impact.
- You understand a broad range of data structures and algorithms and apply them to deliver high-performing applications.
- You recognize and use design patterns to solve business problems.
- You understand how operating systems work, perform and scale.
- You continually align your work with Amazon’s business objectives and seek to deliver business value.
- You collaborate to ensure that decisions are based on the merit of the proposal, not the proposer.
- You proactively support knowledge-sharing and build good working relationships within the team and with others in Amazon.
- You communicate clearly with your team and with other groups and listen effectively.
Skills & Experience
- Bachelor's or Master's degree in Computer Science or a relevant technical field.
- Experience in software development and the full product life cycle.
- Excellent programming skills in an object-oriented programming language, preferably Java, C/C++/C#, Perl, Python, or Ruby.
- Strong knowledge of data structures, algorithms, and designing for performance, scalability, and availability.
- Proficiency in SQL and data modeling.
DeepSource is working on building tools that help developers ship good code. There are over 40 million developers in the world, and all of them write and review code in some form. There's a massive opportunity to impact how software is built, right from where the code is written, using automation and intelligence that not only improve developer productivity but also increase software robustness.
The Language Engineering team works on the source code analyzers, covering both programming languages and configuration-as-code systems. As a member of the Language Engineering team, you will work on building the best, most comprehensive Scala analyzer in the world. You will add new rules and Autofixes for finding more issues in code and automatically fixing them, and you will engage with the community to understand the problems with static analysis tools in the Scala ecosystem.
As a member of the Language Engineering team, you will:
- Identify bad code practices in Scala and write new analyzers to detect them (a toy sketch follows this list).
- Improve the coverage of automatically fixable issues.
- Ensure fewer false positives are reported by the analyzer.
- Work on the internal tools that support analysis runtimes.
- Contribute to open-source static analysis tools.
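As a toy illustration of what an analyzer rule and its Autofix suggestion might look like, here is a self-contained sketch (the class names and message format are invented, and real analyzers work on ASTs and data-flow graphs rather than raw text). It flags Scala's `Option.get`, which throws on empty values and is widely considered a bad practice:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class OptionGetRuleSketch {

    // Matches ".get" as a whole token, so ".getOrElse" is not flagged.
    private static final Pattern OPTION_GET = Pattern.compile("\\.get\\b");

    /** One reported finding, carrying the location and a suggested fix. */
    static final class Issue {
        final int line;
        final String message;
        Issue(int line, String message) { this.line = line; this.message = message; }
        @Override public String toString() { return "line " + line + ": " + message; }
    }

    /** Scans source lines and reports each discouraged `.get` call. */
    static List<Issue> analyze(List<String> sourceLines) {
        List<Issue> issues = new ArrayList<>();
        for (int i = 0; i < sourceLines.size(); i++) {
            if (OPTION_GET.matcher(sourceLines.get(i)).find()) {
                issues.add(new Issue(i + 1,
                        "avoid Option.get; suggested autofix: .getOrElse(<default>)"));
            }
        }
        return issues;
    }

    public static void main(String[] args) {
        List<String> scalaSource = Arrays.asList(
                "val port = config.get",              // flagged
                "val host = config.getOrElse(\"x\")"  // not flagged
        );
        analyze(scalaSource).forEach(System.out::println);
    }
}
```

A production rule would resolve the receiver's type on the AST so that only genuine `Option` receivers are flagged, which is exactly the false-positive work the list above describes.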
We’re looking for someone who has:
- Strong foundational knowledge in Computer Science.
- At least 3 years of professional software development experience in Scala and Java.
- An understanding of the nuances of source code execution (ASTs, data-flow graphs, etc.).
- Familiarity with Scala best practices followed in the industry.
- Native experience with Linux/Unix environments.
- A focus on delivering high-quality code through strong testing practices.
We offer competitive compensation with meaningful stock options, a generous vacation policy, and a workstation of your choice, to name a few of the perks.