Data Engineering role at ThoughtWorks

ThoughtWorks India is looking for talented data engineers passionate about building large-scale data processing systems to help manage the ever-growing information needs of our clients. Our developers have been contributing code to major organizations and open source projects for over 25 years now. They’ve also been writing books, speaking at conferences, and helping push software development forward -- changing companies and even industries along the way. As Consultants, we work with our clients to ensure we’re delivering the best possible solution. Our Lead Dev plays an important role in leading these projects to success.

You will be responsible for:
- Creating complex data processing pipelines as part of diverse, high-energy teams
- Designing scalable implementations of the models developed by our Data Scientists
- Hands-on programming based on TDD, usually in a pair programming environment
- Deploying data pipelines in production based on Continuous Delivery practices

Ideally, you should have:
- 2-6 years of overall industry experience
- A minimum of 2 years of experience building and deploying large-scale data processing pipelines in a production environment
- Strong domain modelling and coding experience in Java, Scala, or Python
- Experience building data pipelines and data-centric applications using distributed storage platforms like HDFS, S3, and NoSQL databases (HBase, Cassandra, etc.), and distributed processing platforms like Hadoop, Spark, Hive, Oozie, Airflow, Kafka, etc., in a production setting
- Hands-on experience with at least one of MapR, Cloudera, Hortonworks, and/or cloud platforms (AWS EMR, Azure HDInsight, Qubole, etc.)
- Knowledge of software best practices like Test-Driven Development (TDD), Continuous Integration (CI), and Agile development
- Strong communication skills, with the ability to work in a consulting environment (essential)

And here are some of the perks of being part of a unique organization like ThoughtWorks:

A real commitment to “changing the face of IT” -- our way of thinking about diversity and inclusion. Over the past ten years, we’ve implemented many initiatives to make ThoughtWorks a place that reflects the world around us, and to make this a welcoming home to technologists of all stripes. We’re not perfect, but we’re actively working towards true gender balance for our business and our industry, and you’ll see that diversity reflected on our project teams and in our offices.

Continuous learning. You’ll be constantly exposed to new languages, frameworks, and ideas from your peers and as you work on different projects -- challenging you to stay at the top of your game.

Support to grow as a technologist outside of your role at ThoughtWorks. This is why ThoughtWorkers have written over 100 books and can be found speaking at (and, ahem, keynoting) tech conferences all over the world. We love to learn and share knowledge, and you’ll find a community of passionate technologists eager to back your endeavors, whatever they may be. You’ll also receive financial support to attend conferences every year.

An organizational commitment to social responsibility. ThoughtWorkers challenge each other to be just a little more thoughtful about the world around us, and we believe in using our profits for good. All around the world, you’ll find ThoughtWorks supporting great causes and organizations in both official and unofficial capacities.

If you relish the idea of being part of ThoughtWorks’ Data Practice that extends beyond the work we do for our customers, you may find ThoughtWorks is the right place for you.
If you share our passion for technology and want to help change the world with software, we want to hear from you!
Responsibilities:
• Design and build a hybrid data solution using a combination of on-premise and in-cloud AWS services.
• Diagnose and troubleshoot complex distributed-systems problems and develop solutions with significant impact at our massive scale.
• Build tools to ingest and process terabyte-scale data on a daily basis.
• Communicate with a wide set of teams, including product owners, business analysts, data and enterprise architects, platform engineers, and cloud vendors.
• Compile detailed design documents, technical specification documents, and test plans as required.
• Independently manage development tasks as per technical specification documents, production deployment, post-production support, etc.
• Manage and support unit testing, SIT, and UAT.
• Coordinate with business and support teams on testing and migration activities.

Requirements:
• Hands-on experience with distributed systems and Hadoop and its ecosystem.
• 3-5 years of extensive experience with cloud services such as compute, storage, and messaging (e.g., SNS, SQS). AWS experience is preferred.
• Strong software development skills in at least one of Java, C/C++, Scala, or Python (preferred).
• Strong knowledge of AWS data services: S3, RDS, Redshift, DynamoDB, AWS Glue, EMR, Lambda, etc.
• Good knowledge of the Big Data ecosystem: PySpark, Hive, HDFS, HBase. Strong SQL knowledge.
• Able to understand data models; create and understand ETL/data mapping sheets.
• Good knowledge of developing batch, stream, and event-based data pipelines.
• Good knowledge of traditional data warehouse implementations and BI and ETL tools.
• Experience with Agile/Scrum/DevOps software development methodologies.
• Experience building and supporting large-scale systems in a production environment.
Good to have:
• Designing dimensional models for data warehouses and data marts
• Knowledge of version control systems, build automation, and CI/CD tools and frameworks
• Experience with operating system internals, file systems, disk/storage technologies, and storage protocols
• Good understanding of MDM solutions and experience working with MDM tools like Informatica MDM, etc., across multiple cloud clusters
This requirement is to service a leading big data technology company that measures what matters to make cross-platform audiences and advertising more valuable. We are seeking a Senior Data Analyst to mine our large-scale data sets (2 trillion+ records/month) and provide insights to help inform our clients’ strategy for launching and analyzing marketing campaigns around the world. This analyst will be responsible for the data analysis, design, and development of cross-platform audience measurement products. Their work will have a direct impact on driving business strategies for prominent industry leaders. Self-motivation and strong communication skills are both must-haves. You will need to be comfortable working in a fast-paced environment with shifting priorities, vague requirements, and rapid iterations. You will need to be comfortable taking risks.

Problems we are solving:
- What, where, and when do people watch video content?
- Which sites and platforms are changing consumer video consumption behavior?
- How and where is traditional TV viewing shifting towards internet-connected devices?
- What is the total audience for publishers with content fragmented across TV, computer, and mobile?
- How do we build flexible processes to handle the various data sources and reporting needs?

About our team: We’re a small but powerful team of data junkies, analyzing over 20 TB of data each day to deliver product and research solutions to our clients. We use Scala, Spark, SQL, Hadoop, Python, and many other tools. We work with Product Management on what problems to tackle and collaborate with the Data Science and Core Processing Engineering teams to create solutions.
Duties and responsibilities:
- Design architecture and prototype solutions for data processing and methodology
- Drive analytical projects that span multiple teams and functions
- Build and test new features and concepts and integrate them into the production process
- Participate in ongoing research and evaluation of new technologies
- Exercise your experience in the development lifecycle through analysis, design, development, testing, and deployment of this system
- Collaborate with teams in Software Engineering, Operations, and Product Management to deliver timely, quality data. You will be the knowledge expert, delivering quality data to our clients

Qualifications:
- Bachelor's degree in Data Science, Engineering, Computer Science, or Information Systems
- 5 years of relevant work experience in a related field
- Experience in data analysis and problem solving with big data
- Experience with Scala, AWS, SQL, Hive, Spark, R, or Python, and deep knowledge of relational databases and methods for efficiently retrieving data
- Ability to think creatively and solve complex problems
- Ability to autonomously manage simultaneous projects in a fast-paced business environment
- Excellent verbal, written, and computer communication skills, with strong analytical and troubleshooting skills
- Ability to consistently meet data expectations; holds team and self accountable
- Ability to manage change, course-correct, and respond decisively
- Ability to engage with senior leaders across all functional departments
- Ability to take on new responsibilities; adapts to change and executes
We are looking for a Java developer for one of our major investment banking clients who can take ownership of the whole end-to-end delivery, performing analysis, design, coding, testing, and maintenance of large-scale and distributed applications. Please find the JD below for your reference.

Job Profile: Java Developer
Location: Mumbai

Description: A core Java developer is required for a Tier 1 investment bank, supporting the Delta One Structured Products IT group. This is a global front-office team that supports the global OTC Equity Swap Portfolio, Single Name, and Index derivative businesses. We are designing a complete restructure of the Equity Swaps trading platform, and this particular role is within the core cash flow and valuations area. The role will require the candidate to work closely with the cash flow engines team to solve problems that combine both finance and technology. This is an exciting hands-on role for a self-starter who has a thirst for new challenges as well as new technologies. The candidate should possess good analytical skills, strong software engineering skills, and a logical approach to problem-solving; be able to work in a fast-paced environment, liaising with demanding stakeholders to understand complex requirements; and be able to prioritize work under pressure with minimal supervision. The candidate should be a problem solver, able to bring positivity and enthusiasm to thinking through and offering potential solutions for architectural considerations.

Position Profile: We are looking for someone to help own problems and demonstrate leadership and responsibility for the delivery of new features. As part of the development cycle, you would be expected to write quality unit tests, supply documentation where relevant for new feature build-outs, and be involved in the test cycle (UAT, integration, regression) for the delivery of new features and the fixing of bugs.
Although the role is predominantly Java, we require someone who is flexible with the development environment, as some days you might be writing Java, and other days you might be fixing stored procedures or Perl scripts. You would be expected to get involved in the Level 3 production support rota, which is shared between our developers on a monthly cycle, and to occasionally help with weekend deployment activities to deploy and verify any code changes you have been involved in.

Team Profile: The team and role are ideal for someone looking for a strong career development path with many opportunities to grow, learn, and develop. The role requires someone who is flexible and able to respond to a dynamic business environment. The candidate must be adaptable to work across multiple technologies and disciplines, with a focus on delivering quality solutions for the business in a timely fashion. This role suits people experienced in complex data domains.

Required Skills:
* Experience with agile and Scrum methodologies
* Core Java
* Unix shell scripting
* SQL and relational databases such as DB2
* Integration technologies: MQ, XML, SOAP, JSON, Protocol Buffers, Spring
* Enterprise architecture patterns and GoF design patterns
* Build and agile tooling: Ant, Gradle/Maven, Sonar, Jenkins/Hudson, Git/Perforce
* Sound understanding of object-oriented analysis, design, and programming
* Strong communication and stakeholder management skills
* Scala/Spark or big data experience is an added advantage
* Strong database experience
* Excellent communication and problem-solving skills

Desired Skills:
* Experience in banking and regulatory reporting (SFTR, MAS/ASIC, etc.)
* Knowledge of OTC, listed, and cash products
* Domain-driven design and microservices
Looking for Big Data developers in Mumbai.
ABOUT US: Arque Capital is a FinTech startup working with AI in Finance, in domains like Asset Management (Hedge Funds, ETFs, and Structured Products), Robo Advisory, Bespoke Research, Alternate Brokerage, and other applications of Technology & Quantitative methods in Big Finance.

PROFILE DESCRIPTION:
1. Get the "Tech" in order for the Hedge Fund - help answer fundamental questions about which technology blocks to use and why to choose one platform or technology over another, and help the team visualize the product with the available resources and assets
2. Build, manage, and validate a Tech Roadmap for our Products
3. Architecture Practices - at startups, the dynamics change very fast, so making sure that best practices are defined and followed by the team is very important. The CTO may have to take out the garbage and clean up the code from time to time; reviewing code quality is an important activity for the CTO.
4. Build a progressive learning culture and establish a predictable model of envisioning, designing, and developing products
5. Product innovation through research and continuous improvement
6. Build out the technological infrastructure for the Hedge Fund
7. Hire and build out the Technology team
8. Set up and manage the entire IT infrastructure - hardware as well as cloud
9. Ensure company-wide security and IP protection

REQUIREMENTS:
- Computer Science Engineer from Tier-I colleges only (IIT, IIIT, NIT, BITS, DHU, Anna University, MU)
- 5-10 years of relevant technology experience (no infra or database persons)
- Expertise in Python and C++ (3+ years minimum)
- 2+ years of experience building and managing Big Data projects
- Experience with technical design & architecture (1+ years minimum)
- Experience with high-performance computing - OPTIONAL
- Experience as a Tech Lead, IT Manager, Director, VP, or CTO
- 1+ year of experience managing cloud computing infrastructure (Amazon AWS preferred) - OPTIONAL
- Ability to work in an unstructured environment
- Looking to work in a small, startup-type environment based out of Mumbai

COMPENSATION: Co-Founder status and Equity partnership
US-based multinational company. Hands-on Hadoop.
Ixsight Technologies is an innovative IT company with strong Intellectual Property. Ixsight is focused on creating Customer Data Value through its solutions for Identity Management, Locational Analytics, Address Science, and Customer Engagement. Ixsight is also adapting its solutions to Big Data and Cloud, and we are in the process of creating new solutions across platforms. Ixsight has served over 80 clients in India for various end-user applications across the traditional BFSI and telecom sectors. More recently, we have been catering to new-generation verticals such as hospitality and e-commerce. Ixsight has been featured in Gartner’s India Technology Hype Cycle and has been recognised by both clients and peers for pioneering and excellent solutions. If you wish to play a direct part in creating new products, building IP, and being part of product creation, Ixsight is the place.