
More than 2 years of Spark / Scala experience.
A combination of Java and Scala is fine; we are also open to a Big Data developer with strong core Java concepts (role: Scala / Spark Developer).
Strong proficiency in Scala on Spark (Hadoop); Scala plus Java is also preferred (a minimal Spark sketch follows this list).
Complete SDLC process and Agile methodology (Scrum).
Version control / Git.
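
As a hedged illustration of the Spark-on-Scala work described in this list, here is a minimal word-count sketch; the application name, master setting and file path are placeholders for illustration, not requirements from the posting:

```scala
import org.apache.spark.sql.SparkSession

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    // Local session for illustration; on a real cluster the master would come from spark-submit.
    val spark = SparkSession.builder()
      .appName("word-count-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Count word frequencies in a plain-text file (path is a placeholder).
    val counts = spark.read.textFile("data/input.txt")
      .flatMap(_.split("\\s+"))
      .filter(_.nonEmpty)
      .groupBy("value") // textFile yields a single column named "value"
      .count()
      .orderBy($"count".desc)

    counts.show(10)
    spark.stop()
  }
}
```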

About Accion Labs
Accion Labs, Inc. is a Pittsburgh-headquartered global technology firm, ranked the number one IT company based out of Pittsburgh.
Accion Labs Inc. was named the fastest-growing company in Pittsburgh and ranked the #1 IT services company two years in a row (2014, 2015) by the Pittsburgh Business Times. Accion Labs is venture-funded, profitable and fast-growing, giving you an opportunity to grow with us: 11 global offices, 1300+ employees, and 80+ tech company clients, with 90% of the clients we work with being direct clients on project-based engagements. We offer a full range of product life-cycle services in emerging technology segments including Web 2.0, open source, SaaS/cloud, mobility, IT operations management/ITSM, big data and traditional BI/DW, automation engineering (Rackspace team), and DevOps engineering.
Employee strength: 1300+ employees
Why Accion Labs:
- Emerging technology projects i.e. Web 2.0, SaaS, cloud, mobility, BI/DW and big data
- Great learning environment
- Onsite opportunities, depending entirely on project requirements
- We invest in training our people in the latest frameworks, tools, processes and best practices, and in cross-training them across a range of emerging technologies, enabling you to develop more marketable skills
- Employee-friendly environment with 100% focus on work-life balance, life-long learning and open communication
- We enable our employees to interact directly with clients

Requirements
- 3+ years of work experience with production-grade Python; contributions to open-source repos are preferred
- Experience writing concurrent and distributed programs; AWS Lambda, Kubernetes, Docker and Spark are preferred
- Experience with one relational and one non-relational database is preferred
- Prior work in the ML domain will be a big boost
What You’ll Do
- Help realize the product vision: Production-ready machine learning models with monitoring within moments, not months.
- Help companies deploy their machine learning models at scale across a wide range of use-cases and sectors.
- Build integrations with other platforms to make it easy for our customers to use our product without changing their workflow.
- Write maintainable, scalable, performant Python code
- Build gRPC and REST API servers
- Work with Thrift, Protobufs, etc.


About this role
We are seeking an experienced MongoDB Developer/DBA who will be responsible for maintaining MongoDB databases while optimizing the performance, security, and availability of MongoDB clusters. As a key member of our team, you'll play a crucial role in ensuring our data infrastructure runs smoothly.
You'll have the following responsibilities
Maintain and configure MongoDB instances - Build, design, deploy, maintain, and lead the MongoDB Atlas infrastructure. Keep clear documentation of the database setup and architecture.
Own governance, defining and enforcing policies in MongoDB Atlas. Provide consultancy on the design and infrastructure (MongoDB Atlas) for each use case.
Put a service and governance wrapper in place to restrict over-provisioning of server size, the number of clusters per project, and scaling through MongoDB Atlas.
Gather and document detailed business requirements applicable to the data layer. Design, configure and manage MongoDB on Atlas.
Design, develop, test, document, and deploy high-quality technical solutions on the
MongoDB Atlas platform based on industry best practices to solve business needs.
Resolve technical issues raised by the team and/or the customer, and manage escalations as required.
Migrate data from on-premises MongoDB and RDBMS to MongoDB Atlas.
Communicate and collaborate with other technical resources and customers, providing timely updates on the status of deliverables, shedding light on technical issues, and obtaining buy-in on creative solutions.
Write procedures for backup and disaster recovery.
You'll have the following skills & experience
Excellent analytical, diagnostic skills, and problem-solving skills
Solid understanding of database concepts, with expertise in designing and developing NoSQL databases such as MongoDB
MongoDB query operations, and import and export operations on the database
Experience with ETL methodology for data migration, extraction, transformation, data profiling and loading
Experience migrating databases via ETL as well as via manual processes, covering design, development and implementation
General networking skills, especially in the context of a public cloud (e.g. AWS: VPCs, subnets, routing tables, NAT/internet gateways, DNS, security groups)
Experience using Terraform as an IaC tool for setting up infrastructure on AWS Cloud
Performing database backups and recovery
Competence in at least one of the following languages (in no particular order): Java, C++,
C#, Python, Node.js (JavaScript), Ruby, Perl, Scala, Go
Excellent communication skills: able to compromise while drawing out the risks and constraints associated with solutions, and able to work independently as well as collaborate with other teams
Proficiency in configuring schema and MongoDB data modeling.
Strong understanding of SQL and NoSQL databases.
Comfortable with MongoDB syntax.
Experience with database security management.
Performance optimization - Ensure databases achieve maximum performance and availability, and design effective indexing strategies (a minimal indexing sketch follows this list).
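
As a hedged illustration of the index-design work mentioned above, here is a minimal sketch using the official MongoDB Scala driver; the connection string, database, collection and field names are placeholders rather than details from this posting:

```scala
import org.mongodb.scala.MongoClient
import org.mongodb.scala.model.{IndexOptions, Indexes}

import scala.concurrent.Await
import scala.concurrent.duration._

object CreateIndexSketch {
  def main(args: Array[String]): Unit = {
    // Placeholder URI; for Atlas this would be the cluster's mongodb+srv connection string.
    val client = MongoClient("mongodb://localhost:27017")
    val orders = client.getDatabase("shop").getCollection("orders")

    // Compound index supporting queries that filter by customerId and sort by createdAt.
    val createIndex = orders.createIndex(
      Indexes.compoundIndex(Indexes.ascending("customerId"), Indexes.descending("createdAt")),
      IndexOptions().name("customerId_createdAt_idx")
    )

    println(s"Created index: ${Await.result(createIndex.toFuture(), 30.seconds)}")
    client.close()
  }
}
```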



Required Skills/Qualifications:
Any backend technology is acceptable.
∙ B.Tech/MCA in Computer Science or equivalent with 1-3 years' experience in server-side web application development.
∙ Extensive development experience using the LAMP or MEAN stack, RESTful web services or Node.js, HTML and CSS.
∙ Good understanding of data structures and relational databases like MySQL or NoSQL databases like MongoDB.
∙ Experience working with AWS services such as EC2, RDS, and ELBs, and knowledge of VPCs.
∙ Experience with server-side and client-side MVC frameworks, Kafka, Ansible, Jenkins.
∙ Exposure to Continuous Integration (CI) and Continuous Deployment (CD), automated testing and agile development methods.
∙ Understanding of version management tools like GitHub.
About Rara Delivery
Not just a delivery company…
RaRa Delivery is revolutionising instant delivery for e-commerce in Indonesia through data-driven logistics.
RaRa Delivery is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimisation technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on ‘one-to-one’ deliveries, the company has developed proprietary, real-time batching tech to do ‘many-to-many’ deliveries within a few hours. RaRa is already in partnership with some of the top eCommerce players in Indonesia, like Blibli, Sayurbox, Kopi Kenangan and many more.
We are a distributed team with the company headquartered in Singapore 🇸🇬 , core operations in Indonesia 🇮🇩 and technology team based out of India 🇮🇳
Future of eCommerce Logistics.
- A data-driven logistics company bringing a same-day delivery revolution to Indonesia 🇮🇩
- Revolutionising delivery as an experience
- Empowering D2C Sellers with logistics as the core technology
About the Role
- 5-7 years of experience with the following technologies: Core Java/J2EE, Spring Boot, API creation, Hibernate, JDBC, SQL/PLSQL, messaging architecture, REST/web services, Linux
- Expertise in application, data and infrastructure architecture disciplines
- Advanced knowledge of architecture, design and business processes
- 4+ years of Java, J2EE development experience
- Strong technical development experience in effectively writing code, performing code reviews, and implementing best practices on configuration management and code refactoring
- Experience in working with vendor applications
- Experience writing optimized queries against MySQL databases
- Proven problem solving and analytical skills
- A delivery-focused approach to work and the ability to work without direction
- Experience in Agile development techniques, including Scrum
- Experience implementing and/or using Git
- Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals
- Bachelor's degree in Computer Science or a related discipline preferred

- Identify bad code practices in Scala and write new analyzers to detect them (a minimal sketch follows this section).
- Improve the coverage of automatically fixable issues.
- Ensure fewer false-positives are reported by the analyzer.
- Work on the internal tools that support analysis runtimes.
- Contribute to open-source static analysis tools.
We’re looking for someone who has:
- Strong foundational knowledge in Computer Science.
- At least 3 years of professional software development experience in Scala and Java.
- Understanding of how source code is represented and executed (ASTs, data flow graphs, etc.).
- Familiarity with Scala best practices followed in the industry.
- Native experience with Linux/Unix environment.
- A focus on delivering high-quality code through strong testing practices.
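
To make the analyzer work above concrete, here is a minimal, hedged sketch of an AST-based check written with the scalameta library; the specific rule shown (flagging `null` literals) is an illustrative assumption, not one of the company's actual analyzers:

```scala
import scala.meta._

object NullLiteralCheck {
  def main(args: Array[String]): Unit = {
    // A snippet of Scala source to analyze; a real analyzer would read files from disk.
    val code =
      """object Example {
        |  var cache: String = null
        |  def lookup(key: String): String = if (key.isEmpty) null else key
        |}""".stripMargin

    // Parse the source into a scalameta AST.
    val tree: Source = code.parse[Source].get

    // Walk the tree and report every `null` literal, a practice usually replaced with Option.
    val findings = tree.collect {
      case lit @ Lit.Null() =>
        s"null literal at line ${lit.pos.startLine + 1}, column ${lit.pos.startColumn + 1}"
    }

    findings.foreach(println)
  }
}
```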

We are on a quest to find a Senior Software Developer - Scala with several years of experience who will help us extend and maintain our platform.
You will join a cross-functional team with different nationalities, backgrounds and experience levels. The agile team is co-located, including the Product Managers.
You will collaborate with all team members in order to deliver the best solutions that enhance our platform.
About Springer Nature India Pvt. Ltd:
Springer Nature opens the doors to discovery for researchers, educators, clinicians and other professionals. Every day, around the globe, our imprints, books, journals, platforms and technology solutions reach millions of people. For over 175 years our brands and imprints have been a trusted source of knowledge to these communities and today, more than ever, we see it as our responsibility to ensure that fundamental knowledge can be found, verified, understood and used by our communities – enabling them to improve outcomes, make progress, and benefit the generations that follow.
Visit group.springernature.com and follow @SpringerNature
If you are still wondering why you should work with us, here are 5 reasons:
- Springer Nature is one of the world's largest publishing companies. Nobel laureates publish their research at Springer.
- We are truly a digital organization and Springer Nature Pune is at the helm of this digitization.
- We not only believe in but actively promote a good work-life balance for our employees.
- We are investing in building our products using machine learning and NLP.
- We work with the latest technologies, like AWS and Scala.
About the team:
Backend - Adis-PV is a scientific analysis platform being built to extract content from scientific articles and provide meaningful insights.
Insights are then published in a structured way, so that they can be made accessible to end users via feed delivery or through the platform.
Backend - Adis-PV will be a production system for all databases under one IT landscape and under one umbrella.
Job Type: Permanent
Job Location: Magarpatta City, Pune - India (Work from home until further notice)
Years of Experience 6 to 10 years
What we are looking for
Educational Qualification:
B.Sc., BCA, BCS, B.E., B.Tech, M.Tech, MCA and M.Sc.
Skill Matrix:
Primary Language Skills: Java 8, Scala
Framework: Play Framework (see the sketch after this list)
Messaging: RabbitMQ
Practices: TDD / ATDD, Pair Programming
Database: SQL and NoSQL
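
As a hedged illustration of the Play Framework work listed in the skill matrix, here is a minimal Scala controller sketch; the route, controller name and JSON payload are assumptions for illustration only:

```scala
package controllers

import javax.inject.{Inject, Singleton}

import play.api.libs.json.Json
import play.api.mvc.{AbstractController, Action, AnyContent, ControllerComponents}

// A minimal health-check endpoint; in conf/routes it would be wired as:
//   GET  /health  controllers.HealthController.health
@Singleton
class HealthController @Inject() (cc: ControllerComponents) extends AbstractController(cc) {

  def health: Action[AnyContent] = Action {
    Ok(Json.obj("status" -> "up", "service" -> "adis-pv-backend"))
  }
}
```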
Challenges
- You will help us continuously improve our platform
- Together we will create best-in-class services that support the needs of our customers
- Take part in team ceremonies like grooming, planning and retrospectives
- Develop new features
- Improve code quality by doing pair programming or code reviews
- Continuously improve and monitor our product
Key Responsibilities
- Own and consistently deliver high quality end-to-end product features keeping in view technical & business expectations
- Add meaty features to the product which will deliver substantial business value
- Pioneer clean coding and continuous code refactoring
- Understand and appreciate the existing design and architecture of the product
- Understand the pros and cons of the various technology options available
- Take technical ownership of some of the sub-systems of the product
- Make changes in the product designs to achieve the required business value / mileage
- Identify and address technical debt
- Understand the technical vision and road-map of the product and its expectations
- Understand the purview of the key pieces of deliverables and own a few of these pieces
Day at work
- Pioneer proof of concepts of new technologies keeping in view product road-map and business priorities
- Self-study and share your learning within the team and across teams
- Provide required help to other team members
- Take the lead in various team events and work towards the objectives of these events
- Make meaningful suggestions to make ceremonies more effective
About You
- You have several years of experience with Software Development
- You have worked successfully with product teams in the past and ideally have some experience with mentoring junior developers
- You like working in a collaborative environment where there is collective ownership of the code
- You work with Continuous Integration and always strive for Continuous Delivery
- You like to share and enable others to increase your whole team's performance

Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the Data Lake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides many opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low latency access to distributed storage, auto scaling, and self healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
- Lead the team to solve complex and unknown problems
- Solve technical problems and customer issues with technical expertise
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Mentor other team members for high quality and design
- Collaborate with Product Management to deliver on customer requirements and innovation
- Collaborate with Support and field teams to ensure that customers are successful with Dremio
Requirements
- B.S./M.S/Equivalent in Computer Science or a related technical field or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years experience in developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed file systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems, and to mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team

Requirements:
- Academic degree (BE / MCA) with 3-10 years of experience in back-end development.
- Strong knowledge of OOP concepts, analysis, design, development and unit testing
- Scala technologies: Akka, REST web services, SOAP, the Jackson JSON API, JUnit, Mockito, Maven (a minimal Akka HTTP sketch follows this posting)
- Hands-on experience with Play framework
- Familiarity with Microservice Architecture
- Experience working with Apache Tomcat server or TomEE
- Experience working with SQL and NoSQL databases (MySQL, PostgreSQL, Cassandra), writing custom queries and procedures, and designing schemas.
- Good to have: front-end experience (JavaScript, AngularJS / ReactJS)
- You will be responsible for the design, development and testing of products
- Contribute to all phases of the development lifecycle
- Write well-designed, testable, efficient code
- Ensure designs are in compliance with specifications
- Prepare and produce releases of software components
- Support continuous improvement by investigating alternatives and technologies and presenting these for architectural review
- Some of the technologies you will be working on: Core Java, Solr, Hadoop, Spark, Elasticsearch, clustering, text mining, NLP, Mahout, Lucene, etc.
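
As a hedged illustration of the Scala/Akka REST work this posting describes, here is a minimal Akka HTTP server sketch; the endpoint, port and response are assumptions for illustration, not part of the product:

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

import scala.io.StdIn

object PingService {
  def main(args: Array[String]): Unit = {
    implicit val system: ActorSystem = ActorSystem("ping-service")
    import system.dispatcher // ExecutionContext for the server's futures

    // A single GET /ping route that answers with plain text.
    val route =
      path("ping") {
        get {
          complete("pong")
        }
      }

    val binding = Http().newServerAt("localhost", 8080).bind(route)
    println("Server online at http://localhost:8080/ping - press ENTER to stop")
    StdIn.readLine()

    binding.flatMap(_.unbind()).onComplete(_ => system.terminate())
  }
}
```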


