Job Description: Big Data Engineer
Status: Immediate Joiner
Total Experience: Minimum 5 years
CTC: 6 LPA to 18 LPA
Location: Bangalore

The Big Data Engineer is part of an agile development team building and working on enterprise software applications, and will be involved in all areas of system management, from environment setup to production deployment.

About Happymonk: Happymonk is an AI and emerging-technologies think tank helping clients reimagine how they serve their connected customers and operate their enterprises. We are looking for an experienced AI specialist to join the revolution, using deep learning, natural language processing (NLP), computer vision, chatbots, and robotics to help us improve various business outcomes and drive innovation. You will join a multidisciplinary team helping to shape our AI strategy and showcase the potential of AI through early-stage solutions. This is an excellent opportunity to use emerging trends and technologies to make a real-world difference. As you build real-time data pipelines, create infrastructure, and interpret and clean our data, we will rely on you to ask questions, connect the dots, and uncover opportunities that lie hidden within, all with the ultimate goal of realizing the data's full potential. You will join a team of research and development specialists and engineers, but will "slice and dice" data using your own methods, creating new visions for the future. More information can be found at www.happymonk.co

Duties and Responsibilities: The duties and responsibilities of the Big Data Engineer include:
- Working with an agile team to set up and configure cloud-based environments
- Designing and developing secure cloud system architectures in accordance with established standards
- Actively participating in design meetings with the development team
- Designing and implementing a highly scalable CI/CD pipeline
- Analyzing and resolving technical and application problems
- Adhering to high-quality development principles while delivering solutions on time and on budget
- Dimensional data modeling, ETL development, data warehousing, and data lakes
- Designing, implementing, and testing on Azure and private hosting
- Leading and mentoring as a senior member of the engineering team
- Developing and following best practices for design, implementation, and testing
- Producing professional, documented designs
- Prototyping new ideas or technologies to prove their efficacy and usefulness in production
- Following agile/Scrum practices
- Rebuilding the entire operating platform on cloud technologies after many years on traditional, self-hosted infrastructure components
- Developing data streams and REST APIs to integrate with partners and customers in real time
- Building a generic service structure on Azure that is deployed and scaled to run a variety of platform components dynamically
- Building a next-generation tools platform for creating, managing, and deploying multi-channel outreach campaigns in the AWS cloud, designed as a single-page web UI and rules engine leveraging .NET, jQuery, WCF, and JSON
- Constructing a state-of-the-art data lake using EMR, Spark, NiFi, Kafka, and Java
- Building voice-enabled applications via SIP
- Exploring AI bots to help answer common healthcare-related questions
- Working on electronic sensor data through protocols like MQTT and OVIN-based protocols
- Designing and developing large-scale, high-volume, high-performance data pipelines over large sets of data from different sources
- Providing technical leadership in the Big Data space (Vertica, Hadoop, SQL Server, Oozie, Hive, Avro, Spark, Scala, Java, Python, Redshift)
- Selecting and integrating the Big Data tools and frameworks required to provide the capabilities needed by business functions, while keeping in mind hardware, software, and financial constraints
- Building data pipelines that are scalable, repeatable, and secure, and can serve multiple purposes
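For candidates gauging fit: the real-time pipeline work described above (Kafka/Spark streams, sensor data over MQTT) usually reduces to consuming an event stream and aggregating it over time windows. A purely illustrative, stack-agnostic sketch of that pattern (the event shape and window size here are invented, not part of the role):

```python
from collections import defaultdict

def window_averages(events, window_secs=60):
    """Group (timestamp, sensor_id, value) events into fixed time windows
    and compute the average reading per sensor per window."""
    sums = defaultdict(lambda: [0.0, 0])  # (window, sensor) -> [sum, count]
    for ts, sensor, value in events:
        key = (ts // window_secs, sensor)
        sums[key][0] += value
        sums[key][1] += 1
    return {k: s / n for k, (s, n) in sums.items()}

# Simulated sensor readings: (epoch_seconds, sensor_id, value)
events = [
    (0, "s1", 10.0), (30, "s1", 20.0),  # both land in window 0
    (65, "s1", 30.0),                   # window 1
    (10, "s2", 5.0),
]
print(window_averages(events))
# {(0, 's1'): 15.0, (1, 's1'): 30.0, (0, 's2'): 5.0}
```

In a production stack the same windowed aggregation would be expressed with Spark Structured Streaming or Kafka Streams operators rather than hand-rolled dictionaries.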
Desired Skills & Experience - Minimum Requirements
- Master's degree in Computer Science or a related discipline
- Strong understanding of Linux systems
- Strong in C, C++, and Java; Scala and Golang a plus
- Knowledge of scripting languages a plus
- Knowledge of cloud architecture concepts (IaaS, PaaS, SaaS)
- Basic understanding of containers vs. VMs
- Ability to build extensive monitoring of important services
- Strong desire to expand knowledge of modern cloud architectures
- Knowledge of system security concepts (pen testing, vulnerability analysis)
- Familiarity with version control concepts (SVN)
- Knowledge of testing principles

Preferred Technical Cloud Skills (knowledge a plus)
- Experience with IaaS/PaaS/SaaS development environments (e.g. AWS, Heroku)
- Securing web applications (reverse proxies, firewalls, e.g. Cloudflare)
- Experience with container orchestration systems, Kubernetes preferred
Skills Requirements
- Knowledge of Hadoop ecosystem installation, initial configuration, and performance tuning
- Expertise with Apache Ambari, Spark, Unix shell scripting, Kubernetes, and Docker; knowledge of Python desirable
- Experience with HDP Manager/clients and various dashboards
- Understanding of Hadoop security (Kerberos, Ranger, and Knox), encryption, and data masking
- Experience with automation/configuration management using Chef, Ansible, or an equivalent
- Strong experience with any Linux distribution
- Basic understanding of network technologies, CPU, memory, and storage
- Database administration a plus

Qualifications and Education Requirements
- 2 to 4 years of experience with, and detailed knowledge of, core Hadoop components, solutions, and dashboards running on Big Data technologies such as Hadoop/Spark
- Bachelor's degree or equivalent in Computer Science, Information Technology, or related fields
Key Responsibilities:
- Lead and guide a data science program to help understand how the wealth of security data can drive insights into weaknesses, threats, and opportunities.
- Use analytical rigor and statistical methods, programming, data modeling, simulation, and advanced mathematics to analyze large amounts of data: recognizing patterns, identifying opportunities, posing business questions, and making valuable discoveries.
- Identify and develop appropriate machine learning, data mining, and text mining techniques to enable better business outcomes.
- Understand and analyze data sources, including sampling biases, accuracy, and coverage.
- Break problems apart scientifically, providing insight into your recommendations and findings to both technical and non-technical partners.
- Research new approaches to modeling and predicting behavior for large-scale projects.
- Generate and test hypotheses, designing experiments to answer targeted questions of advanced complexity.
- Document projects, including the business objective, data gathering and processing, leading approaches, the final algorithm, and a detailed set of results and analytical metrics. Validate score performance. Document and present the model process and performance.

What will you need? Required Skills:
- Prior experience performing the same role in a SaaS security product company.
- Minimum 8 years of relevant work experience in similar roles.
- Advanced degree in Machine Learning, Computer Science, Electrical Engineering, Physics, Statistics, Applied Math, or another quantitative field.
- Highly technical, with both tactical and strategic capabilities.
- Hands-on experience implementing machine learning and security intelligence solutions.
- Proven track record of modifying and applying advanced algorithms to address practical problems.
- Deep understanding of algorithms, machine learning, and data science.
- Confident interacting with business peers to understand and identify use cases, with a strong ability to articulate solutions and present them to business partners.
- Sound knowledge of security in cloud platforms such as AWS, GCP, and Azure.
- Strong coding skills in one of the following: Python, R, or PySpark.
- Experience with Hadoop and NoSQL or related technologies.
- Knowledge of NLP/text mining techniques and related open-source tools.
- Outstanding collaboration and communication skills; ability to collaborate effectively with a distributed team.
- Understand and practice agile development methodology.

Nice to Have: Understanding of DevOps, microservices architecture, and container/Docker technologies.
General Accountabilities/Job Responsibilities
• Participation in the requirements analysis, design, development, and testing of applications.
• The candidate is expected to write code himself/herself, including high-level code, code reviews, unit testing, and deployment.
• Practical application of design principles with a focus on user experience, usability, template designs, cross-browser issues, and client-server concepts.
• Contributes to the development of project estimates, scheduling, and deliverables.
• Works closely with the QA team to determine testing requirements to ensure full coverage and best product quality.
• There is also the opportunity to mentor and guide junior team members in excelling at their jobs.

Job Specifications
• BE/B.Tech. in Computer Science or MCA from a reputed university.
• 6+ years of experience in software development, with emphasis on Java/J2EE server-side programming.
• Hands-on experience in Core Java, multithreading, RMI, socket programming, JDBC, NIO, web services, and design patterns.
• Should have knowledge of distributed systems, distributed caching, messaging frameworks, ESBs, etc.
• Knowledge of the Linux operating system and a PostgreSQL/MySQL/MongoDB/Cassandra database is essential.
• Additionally, knowledge of HBase, Hadoop, and Hive is desirable.
• Familiarity with message queue systems such as AMQP and Kafka is desirable.
• Should have experience as a participant in agile methodologies.
• Should have excellent written and verbal communication and presentation skills.
• This is not a full-stack requirement; we are looking purely for backend resources.
Roles and responsibilities:
- Responsible for the development and maintenance of applications built with Enterprise Java and distributed technologies.
- Experience in Hadoop, Kafka, Spark, Elasticsearch, SQL, Kibana, and Python; experience with machine learning and analytics, etc.
- Collaborate with developers, product managers, business analysts, and business users in conceptualizing, estimating, and developing new software applications and enhancements.
- Collaborate with the QA team to define test cases and metrics and resolve questions about test results.
- Assist in the design and implementation process for new products; research and create POCs for possible solutions.
- Develop components based on business and/or application requirements.
- Create unit tests in accordance with team policies and procedures.
- Advise and mentor team members in specialized technical areas, and fulfill administrative duties as defined by the support process.
- Work with cross-functional teams during crises to address and resolve complex incidents and problems, in addition to the assessment, analysis, and resolution of cross-functional issues.
Recko Inc. is looking for data engineers to join our kick-ass engineering team. We are looking for smart, dynamic individuals to connect all the pieces of the data ecosystem.

About Recko: Recko was founded in 2017 to organise the world's transactional information and provide intelligent applications to finance and product teams to make sense of the vast amount of data available. With the proliferation of digital transactions over the past two decades, enterprises, banks, and financial institutions are finding it difficult to keep track of the money flowing across their systems. We are building products that enable them to handle and monitor massive volumes of transactional data without writing a single line of code, and to ensure the right amounts are flowing between the right beneficiaries, with the right deductions, at the right time. Over the last few months, we have grown to the point where we are processing more than 25 million transactions monthly for our customers. Recko is a Series A funded startup, backed by marquee investors like Vertex Ventures, Prime Venture Partners, and Locus Ventures. Traditionally, enterprise software has always been built around functionality. We are reimagining enterprise software to be built around the user. We believe software is an extension of one's capability, and it should be delightful and fun to use.

Working at Recko: We believe that great companies are built by amazing people. At Recko, we are a group of young engineers, product managers, analysts, and business folks on a mission to bring consumer-tech DNA to enterprise fintech applications. The current team at Recko is 35 members strong, with stellar experience across fintech, e-commerce, and digital domains at companies like Flipkart, PhonePe, Ola Money, Belong, Razorpay, Grofers, Jio, and Oracle. We are growing aggressively across verticals.
About the Role

What are we looking for:
- 2+ years of development experience in at least one of MySQL, Oracle, PostgreSQL, or MSSQL, and with Big Data frameworks/platforms/data stores like Apache Drill, Arrow, Hadoop, HDFS, Spark, MapR, etc.
- Strong experience setting up data warehouses, data modeling, data wrangling, and dataflow architecture on the cloud.
- 2+ years of experience with public cloud services such as AWS, Azure, or GCP and languages like Java/Python, etc.
- 2+ years of development experience in Amazon Redshift, Google BigQuery, or Azure data warehouse platforms preferred.
- Knowledge of statistical analysis tools like R, SAS, etc.
- Familiarity with any data visualization software.
- A growth mindset and a passion for building things from the ground up; most importantly, you should be fun to work with.

As a data engineer at Recko, you will:
- Create and maintain an optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.
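For context on the day-to-day work above: extraction, transformation, and loading follow a classic ETL shape. A minimal self-contained sketch of one such step (an in-memory SQLite database stands in for the warehouses named in the posting; table and column names are invented for illustration):

```python
import sqlite3

# Extract: load raw transaction records into a staging table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_txns (id INTEGER, amount_cents INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO raw_txns VALUES (?, ?, ?)",
    [(1, 1250, "settled"), (2, 300, "failed"), (3, 990, "settled")],
)

# Transform + load: keep settled transactions, convert cents to currency units
conn.execute("CREATE TABLE fact_settlements (id INTEGER, amount REAL)")
conn.execute(
    """INSERT INTO fact_settlements
       SELECT id, amount_cents / 100.0 FROM raw_txns WHERE status = 'settled'"""
)

total = conn.execute("SELECT SUM(amount) FROM fact_settlements").fetchone()[0]
print(total)  # 22.4
```

On Redshift or BigQuery the same filter-and-cast transform would typically run as a scheduled SQL job inside the warehouse rather than through a client connection.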
About the Company, Conviva: Conviva is the leader in streaming media intelligence, powered by its real-time platform. More than 250 industry leaders and brands – including CBS, CCTV, Cirque Du Soleil, DAZN, Disney+, HBO, Hulu, Sky, Sling TV, TED, Univision, and Warner Media – rely on Conviva to maximize their consumer engagement, deliver the quality experiences viewers expect, and drive revenue growth. With a global footprint of more than 500 million unique viewers watching 150 billion streams per year across 3 billion applications streaming on devices, Conviva offers streaming providers unmatched scale for continuous video measurement, intelligence, and benchmarking across every stream, every screen, every second. Conviva is privately held and headquartered in Silicon Valley, California, with offices around the world. For more information, please visit us at www.conviva.com.

What you get to do:
- Be a thought leader. As one of the most senior technical minds in the India centre, influence our technical evolution by pushing the boundaries of what is possible, testing forward-looking ideas and demonstrating their value.
- Be a technical leader. Demonstrate pragmatic skill in translating requirements into technical design.
- Be an influencer. Understand challenges and collaborate across executives and stakeholders in a geographically distributed environment to influence them.
- Be a technical mentor. Build respect within the team; mentor senior engineers technically and contribute to the growth of talent in the India centre.
- Be a customer advocate. Be empathetic to the customer and the domain, resolving ambiguity efficiently with the customer in mind.
- Be a transformation agent. Passionately champion engineering best practices and share them across teams.
- Be hands-on.
Participate regularly in code and design reviews, drive technical prototypes, and actively contribute to resolving difficult production issues.

What you bring to the role:
- Thrive in a start-up environment and have a platform mindset.
- Excellent communicator, with a demonstrated ability to succinctly communicate and describe complex technical designs and technology choices to both executives and developers.
- Expert in Scala coding; a JVM-based stack is a bonus.
- Expert in big data technologies like Druid, Spark, Hadoop, Flink (or Akka), and Kafka.
- Passionate about one or more engineering best practices that influence design, code quality, or developer efficiency.
- Familiar with building distributed applications using web services and RESTful APIs.
- Familiarity with building SaaS platforms, whether on in-house data centres or on public cloud providers.
About Blackhawk Network: Blackhawk Network is building a digital platform and products that bring people and brands together. We facilitate cross-channel payments via cash-in, cash-out, and mobile payments. By leveraging blockchain, smart contracts, serverless technology, and real-time payment systems, we are unlocking the next million users through innovation. Our employees are our biggest assets! Come find out how we engage with the biggest brands in the world. We look for people who collaborate, who are inspirational, and who have passion that can make a difference by working as a team while striving for global excellence. You can expect a strong investment in your professional growth and a dedication to crafting a successful, sustainable career for you. Our teams are composed of highly talented and passionate 'A' players, who are also invested in mentoring and bringing out the best in others. Our vibrant culture and high expectations will kindle your passion and bring out the best in you! As a leader in branded payments, we are building a strong, diverse team and expanding in Asia Pacific: we are hiring in Bengaluru, India! This is an amazing opportunity for problem solvers who want to be part of an innovative and creative engineering team that values your contribution to the company. If this role has your name written all over it, please apply now with a resume so that we can explore further and get connected. If you enjoy building world-class payment applications, are highly passionate about pushing the boundaries of scale and availability in the cloud, leveraging next-horizon technologies, rapidly delivering features to production, making data-driven decisions on product development, and collaborating and innovating with like-minded experts, then this would be your ideal job. Blackhawk is seeking passionate backend engineers at all levels to build our next generation of payment systems on public cloud infrastructure.
Our team enjoys working together to contribute to meaningful work seen by millions of merchants worldwide. As a Senior SDET, you will work closely with data engineers to automate developed features and manually test new data ETL jobs, data pipelines, and reports. You will own the complete architecture of the automation framework, and plan and design automation for data ingestion, transformation, and reporting/visualization. You will build high-quality automation frameworks to cover end-to-end testing of the data platforms, ensure test data setup, and pre-empt post-production issues through high-quality testing in the lower environments. You will get the opportunity to contribute at all levels of the test pyramid. You will also work with customer success and product teams to replicate post-production release issues.

Key Qualifications
- Bachelor's degree in Computer Science, Engineering, or related fields
- 5+ years of experience testing data ingestion, visualization, and info-delivery systems
- Real passion for data quality, reconciliation, and uncovering hard-to-find scenarios and bugs
- Proficiency in at least one programming language (preferably Python/Java)
- Expertise in end-to-end ETL (e.g. DataStage, Matillion) and BI platform (e.g. MicroStrategy, Power BI) testing and data validation
- Experience working with big data technologies such as Hadoop and MapReduce is desirable
- Excellent analytical, problem-solving, and communication skills
- Self-motivated, results-oriented, and deadline-driven
- Experience with databases, data visualization, and dashboarding tools would be desirable
- Experience working with Amazon Web Services (AWS) and Redshift is desirable
- Excellent knowledge of the software development lifecycle, testing methodologies, QA terminology, processes, and tools
- Experience with automation frameworks and tools such as TestNG, JUnit, and Selenium
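Much of the ETL testing and reconciliation work described above reduces to checks that row counts and aggregates match between source and target after a load. A hedged sketch of such a check (SQLite stands in for the actual warehouse; the table names and a `total` column are invented for illustration):

```python
import sqlite3

# Set up identical source and target tables to reconcile
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source_orders (id INTEGER, total REAL)")
conn.execute("CREATE TABLE target_orders (id INTEGER, total REAL)")
data = [(1, 19.99), (2, 5.00), (3, 42.50)]
conn.executemany("INSERT INTO source_orders VALUES (?, ?)", data)
conn.executemany("INSERT INTO target_orders VALUES (?, ?)", data)

def reconcile(conn, src, dst):
    """Compare row counts and summed totals between two tables."""
    q = "SELECT COUNT(*), COALESCE(SUM(total), 0) FROM {}"
    src_count, src_sum = conn.execute(q.format(src)).fetchone()
    dst_count, dst_sum = conn.execute(q.format(dst)).fetchone()
    return src_count == dst_count and abs(src_sum - dst_sum) < 1e-9

print(reconcile(conn, "source_orders", "target_orders"))  # True
```

In an automation framework this kind of check would typically be wrapped as a TestNG/JUnit or pytest assertion and run against each environment in the pipeline.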
REQUIREMENTS:
- Previous experience working in large-scale data engineering.
- 4+ years of experience working in data engineering and/or backend technologies; cloud experience (any) is mandatory.
- Previous experience architecting and designing backends for large-scale data processing.
- Familiarity and experience with different technologies related to data engineering: various database technologies, Hadoop, Spark, Storm, Hive, etc.
- Hands-on, with the ability to contribute a key portion of the data engineering backend.
- Self-inspired and motivated to drive for exceptional results.
- Familiarity and experience with the different stages of data engineering: data acquisition, data refining, large-scale data processing, and efficient data storage for business analysis.
- Familiarity and experience with different DB technologies and how to scale them.

RESPONSIBILITIES:
- End-to-end responsibility for data engineering architecture, design, development, and implementation.
- Build data engineering workflows for large-scale data processing.
- Discover opportunities in data acquisition.
- Bring industry best practices to the data engineering workflow.
- Develop data set processes for data modelling, mining, and production.
- Take on additional technical responsibilities to drive an initiative to completion.
- Recommend ways to improve data reliability, efficiency, and quality.
- Go out of your way to reduce complexity.
- Humble and outgoing: engineering cheerleaders.
Scala/Spark Developer
- Spark/Scala experience should be more than 2 years; a combination of Java and Scala is fine, as is a Big Data developer with strong Core Java concepts.
- Strong proficiency in Scala on Spark (Hadoop); Scala + Java is also preferred.
- Complete SDLC process and agile methodology (Scrum).
- Version control/Git.
A data engineer with AWS Cloud infrastructure experience to join our Big Data Operations team. This role will provide advanced operations support, contribute to automation and system improvements, and work directly with enterprise customers to provide excellent customer service.

The candidate:
1. Must have very good hands-on technical experience of 3+ years with Java or Python
2. Working experience and good understanding of AWS Cloud; advanced experience with IAM policy and role management
3. Infrastructure operations: 5+ years supporting systems infrastructure operations, upgrades, deployments using Terraform, and monitoring
4. Hadoop: experience with Hadoop (Hive, Spark, Sqoop) and/or AWS EMR
5. Knowledge of PostgreSQL/MySQL/DynamoDB backend operations
6. DevOps: experience with DevOps automation - orchestration/configuration management and CI/CD tools (Jenkins)
7. Version control: working experience with one or more version control platforms like GitHub or GitLab
8. Knowledge of AWS QuickSight reporting
9. Monitoring: hands-on experience with monitoring tools such as AWS CloudWatch, AWS CloudTrail, Datadog, and Elasticsearch
10. Networking: working knowledge of TCP/IP networking, SMTP, HTTP, load balancers (ELB), and high-availability architecture
11. Security: experience implementing role-based security, including AD integration, security policies, and auditing in a Linux/Hadoop/AWS environment; familiar with penetration testing and scan tools for remediation of security vulnerabilities
12. Demonstrated success in learning new technologies quickly

WHAT WILL BE THE ROLES AND RESPONSIBILITIES?
1. Create procedures/run books for operational and security aspects of the AWS platform
2. Improve AWS infrastructure by developing and enhancing automation methods
3. Provide advanced business and engineering support services to end users
4. Lead other admins and platform engineers through design and implementation decisions to achieve a balance between strategic design and tactical needs
5. Research and deploy new tools and frameworks to build a sustainable big data platform
6. Assist with creating training and onboarding programs for new end users
7. Lead Agile/Kanban workflows and team process work
8. Troubleshoot issues to resolve problems
9. Provide status updates to the Operations product owner and stakeholders
10. Track all details in the issue tracking system (JIRA)
11. Review issues and triage problems for new service/support requests
12. Use DevOps automation tools, including Jenkins build jobs
13. Fulfil any ad hoc data or report request queries from different functional groups
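The run-book style troubleshooting and triage described above often starts by summarizing errors out of service logs before opening a ticket. A small, purely illustrative helper (the log format here is invented, not a CloudWatch or Datadog format):

```python
from collections import Counter

def summarize_errors(log_lines):
    """Count ERROR lines per service from 'LEVEL service: message' lines."""
    counts = Counter()
    for line in log_lines:
        parts = line.split(maxsplit=2)
        if len(parts) >= 2 and parts[0] == "ERROR":
            counts[parts[1].rstrip(":")] += 1
    return counts

log = [
    "INFO ingest: batch ok",
    "ERROR ingest: connection reset",
    "ERROR report: timeout",
    "ERROR ingest: connection reset",
]
print(summarize_errors(log))  # Counter({'ingest': 2, 'report': 1})
```

In practice this kind of summary would be produced by a CloudWatch Logs Insights or Datadog query rather than a script, but the triage logic is the same.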
Key Result Areas
· Create and maintain an optimal data pipeline.
· Assemble large, complex data sets that meet functional/non-functional business requirements.
· Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
· Keep our data separated and secure.
· Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
· Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics.
· Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
· Work with data and analytics experts to strive for greater functionality in our data systems.

Knowledge, Skills and Experience - Core Skills: We are looking for a candidate with 5+ years of experience in a Data Engineer role, with experience using the following software/tools:
· Developing Big Data applications using Spark, Hive, Sqoop, Kafka, and MapReduce
· Stream-processing systems: Spark Streaming, Storm, etc.
· Object-oriented/functional scripting languages: Python, Scala, etc.
· Designing and building dimensional data models to improve the accessibility, efficiency, and quality of data
· Proficiency in writing advanced SQL and expertise in SQL performance tuning; experience with data science and machine learning tools and technologies is a plus
· Relational SQL and NoSQL databases, including Postgres and Cassandra
· Azure cloud services a plus
· Financial services knowledge a plus
· Advanced Spark programming skills
· Advanced Python skills
· Data engineering ETL and ELT skills
· Expertise with streaming data
· Experience with the Hadoop ecosystem
· Basic understanding of cloud platforms
· Technical design skills, including alternative approaches
· Hands-on expertise in writing UDFs
· Hands-on expertise in streaming data ingestion
· Ability to independently tune Spark scripts
· Advanced debugging skills and large-volume data handling
· Ability to independently break down and plan technical tasks
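The hands-on UDF expertise listed above refers to registering custom functions with a SQL engine (in Spark, via `spark.udf.register`). Since standing up a Spark session is beyond a short sketch, the same registration pattern is shown here with Python's stdlib sqlite3, which exposes an equivalent hook; the masking function and table are invented examples:

```python
import sqlite3

def mask_pan(pan: str) -> str:
    """Illustrative UDF: keep only the last 4 digits of a card number."""
    return "*" * (len(pan) - 4) + pan[-4:]

conn = sqlite3.connect(":memory:")
conn.create_function("mask_pan", 1, mask_pan)  # name, number of args, callable

conn.execute("CREATE TABLE payments (pan TEXT)")
conn.execute("INSERT INTO payments VALUES ('4111111111111111')")

# The Python function is now callable from SQL, just like a Spark UDF
masked = conn.execute("SELECT mask_pan(pan) FROM payments").fetchone()[0]
print(masked)  # ************1111
```

Note that in Spark, row-at-a-time Python UDFs carry serialization overhead; tuning often means replacing them with built-in column functions or pandas UDFs where possible.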
Hiring for a funded fintech startup based out of Bangalore!

Our Ideal Candidate: We are looking for a Senior DevOps Engineer to join the engineering team and help us automate the build, release, packaging, and infrastructure provisioning and support processes. The candidate is expected to own the full life cycle of provisioning, configuration management, monitoring, maintenance, and support for cloud as well as on-premise deployments.

Requirements
- 5+ years of DevOps experience managing the Big Data application stack, including HDFS, YARN, Spark, Hive, and HBase
- Deep understanding of all the configurations required for installing and maintaining the infrastructure in the long run
- Experience setting up high availability, configuring resource allocation, setting up capacity schedulers, and handling data recovery tasks
- Experience with middle-layer technologies, including web servers (httpd, nginx), application servers (JBoss, Tomcat), and database systems (PostgreSQL, MySQL)
- Experience setting up enterprise security solutions, including active directories, firewalls, SSL certificates, Kerberos KDC servers, etc.
- Experience maintaining and hardening the infrastructure by regularly applying required security packages and patches
- Experience supporting on-premise solutions as well as solutions on the AWS cloud
- Experience working with and supporting Spark-based applications on YARN
- Experience with one or more automation tools such as Ansible, Terraform, etc.
- Experience working with CI/CD tools like Jenkins and various test report and coverage plugins
- Experience defining and automating the build, versioning, and release processes for complex enterprise products
- Experience supporting clients remotely and on-site
- Experience working with and supporting Java- and Python-based tech stacks would be a plus

Desired Non-technical Requirements
- Very strong communication skills, both written and verbal
- Strong desire to work with start-ups
- Must be a team player

Job Perks
- Attractive variable compensation package
- Flexible working hours: everything is results-oriented
- Opportunity to work with an award-winning organization in the hottest space in tech: artificial intelligence and advanced machine learning
Data Engineering role at ThoughtWorks

ThoughtWorks India is looking for talented data engineers passionate about building large-scale data processing systems to help manage the ever-growing information needs of our clients. Our developers have been contributing code to major organizations and open source projects for over 25 years now. They've also been writing books, speaking at conferences, and helping push software development forward, changing companies and even industries along the way. As consultants, we work with our clients to ensure we're delivering the best possible solution. Our Lead Dev plays an important role in leading these projects to success.

You will be responsible for:
- Creating complex data processing pipelines as part of diverse, high-energy teams
- Designing scalable implementations of the models developed by our Data Scientists
- Hands-on programming based on TDD, usually in a pair programming environment
- Deploying data pipelines to production based on Continuous Delivery practices

Ideally, you should have:
- 2-6 years of overall industry experience
- A minimum of 2 years of experience building and deploying large-scale data processing pipelines in a production environment
- Strong domain modelling and coding experience in Java/Scala/Python
- Experience building data pipelines and data-centric applications using distributed storage platforms like HDFS, S3, and NoSQL databases (HBase, Cassandra, etc.) and distributed processing platforms like Hadoop, Spark, Hive, Oozie, Airflow, Kafka, etc. in a production setting
- Hands-on experience with at least one of MapR, Cloudera, Hortonworks, and/or cloud offerings (AWS EMR, Azure HDInsight, Qubole, etc.)
- Knowledge of software best practices like Test-Driven Development (TDD), Continuous Integration (CI), and agile development
- Strong communication skills, with the ability to work in a consulting environment, are essential

And here are some of the perks of being part of a unique organization like ThoughtWorks:
- A real commitment to "changing the face of IT", our way of thinking about diversity and inclusion. Over the past ten years, we've implemented a lot of initiatives to make ThoughtWorks a place that reflects the world around us, and to make this a welcoming home to technologists of all stripes. We're not perfect, but we're actively working towards true gender balance for our business and our industry, and you'll see that diversity reflected on our project teams and in our offices.
- Continuous learning. You'll be constantly exposed to new languages, frameworks, and ideas from your peers and as you work on different projects, challenging you to stay at the top of your game.
- Support to grow as a technologist outside of your role at ThoughtWorks. This is why ThoughtWorkers have written over 100 books and can be found speaking at (and, ahem, keynoting) tech conferences all over the world. We love to learn and share knowledge, and you'll find a community of passionate technologists eager to back your endeavors, whatever they may be. You'll also receive financial support to attend conferences every year.
- An organizational commitment to social responsibility. ThoughtWorkers challenge each other to be just a little more thoughtful about the world around us, and we believe in using our profits for good. All around the world, you'll find ThoughtWorks supporting great causes and organizations in both official and unofficial capacities.

If you relish the idea of being part of ThoughtWorks' Data Practice that extends beyond the work we do for our customers, you may find ThoughtWorks is the right place for you.
If you share our passion for technology and want to help change the world with software, we want to hear from you!