- Hands-on programming expertise in Java OR Python
- Strong production experience with Spark (Minimum of 1-2 years)
- Experience building data pipelines using Big Data technologies (Hadoop, Spark, Kafka, etc.) on large-scale unstructured data sets
- Working experience and good understanding of public cloud environments (AWS OR Azure OR Google Cloud)
- Experience with IAM policy and role management is a plus
Similar jobs
Qualifications :
- Minimum 2 years of .NET development experience (ASP.NET 3.5 or greater and C# 4 or greater).
- Good knowledge of MVC, Entity Framework, and Web API/WCF.
- ASP.NET Core knowledge is preferred.
- Experience creating APIs and consuming third-party APIs
- Working knowledge of Angular is preferred.
- Knowledge of Stored Procedures and experience with a relational database (MSSQL 2012 or higher).
- Solid understanding of object-oriented development principles
- Working knowledge of web, HTML, CSS, JavaScript, and the Bootstrap framework
- Strong understanding of object-oriented programming
- Ability to create reusable C# libraries
- Must be able to write clean, well-commented, readable C# code, and have the ability to self-learn.
- Working knowledge of GIT
Qualities required:
Over and above the technical skills, we prefer candidates to have:
- Good communication and time-management skills.
- A good team-player attitude and the ability to contribute on an individual basis.
- We provide the best learning and growth environment for candidates.
Skills:
.NET Core
.NET Framework
ASP.NET Core
ASP.NET MVC
ASP.NET Web API
C#
HTML
● Able to contribute to gathering functional requirements, developing technical specifications, and project and test planning
● Demonstrating technical expertise, and solving challenging programming and design
problems
● Roughly 80% hands-on coding
● Generate technical documentation and PowerPoint presentations to communicate
architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release
patches
● Work cross-functionally with various Bidgely teams, including product management, QA/QE, various product lines, and/or business units to drive forward results
Requirements
● BS/MS in computer science or equivalent work experience
● 2-4 years’ experience designing and developing applications in Data Engineering
● Hands-on experience with the Big Data ecosystem: Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, ZooKeeper
● Expertise with any of the following object-oriented languages: Java/J2EE, Scala, Python
● Strong leadership experience: Leading meetings, presenting if required
● Excellent communication skills: Demonstrated ability to explain complex technical
issues to both technical and non-technical audiences
● Expertise in the Software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Experience with cloud platforms, preferably AWS
● A good understanding of, and the ability to develop, software, prototypes, or proofs of concept (POCs) for various Data Engineering requirements (a brief illustrative sketch follows this list).
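Purely as an illustration of the Spark and TDD items above, here is a minimal PySpark transformation with a pytest-style unit test; the schema, column names, and energy-usage framing are assumptions made for the sketch, not details from the posting.

```python
# Minimal sketch: a small PySpark transformation plus a pytest-style unit test.
# Column names and the energy-usage framing are illustrative assumptions.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def daily_energy_totals(readings: DataFrame) -> DataFrame:
    """Aggregate per-device meter readings into daily kWh totals."""
    return (
        readings
        .withColumn("day", F.to_date("event_time"))
        .groupBy("device_id", "day")
        .agg(F.sum("kwh").alias("total_kwh"))
    )


def test_daily_energy_totals():
    spark = SparkSession.builder.master("local[1]").appName("tdd-sketch").getOrCreate()
    df = spark.createDataFrame(
        [("dev-1", "2024-01-01 10:00:00", 1.5),
         ("dev-1", "2024-01-01 18:00:00", 2.5)],
        ["device_id", "event_time", "kwh"],
    )
    result = daily_energy_totals(df).collect()
    assert result[0]["total_kwh"] == 4.0
    spark.stop()
```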
Requirements:
● Understanding our data sets and how to bring them together.
● Working with our engineering team to support custom solutions offered for product development.
● Filling the gap between development, engineering and data ops.
● Creating, maintaining and documenting scripts to support ongoing custom solutions.
● Excellent organizational skills, including attention to precise details
● Strong multitasking skills and ability to work in a fast-paced environment
● 5+ years of experience developing scripts with Python.
● Know your way around RESTful APIs (able to integrate with them; publishing is not required).
● You are familiar with pulling and pushing files from SFTP and AWS S3 (a rough sketch follows this list).
● Experience with any Cloud solutions including GCP / AWS / OCI / Azure.
● Familiarity with SQL programming to query and transform data from relational Databases.
● Familiarity with Linux (and a Linux work environment).
● Excellent written and verbal communication skills
● Extracting, transforming, and loading data into internal databases and Hadoop
● Optimizing our new and existing data pipelines for speed and reliability
● Deploying product build and product improvements
● Documenting and managing multiple repositories of code
● Experience with SQL and NoSQL databases (Cassandra, MySQL)
● Hands-on experience in data pipelining and ETL (any of these frameworks/tools: Hadoop, BigQuery, Redshift, Athena)
● Hands-on experience with Airflow
● Understanding of best practices and common coding patterns around storing, partitioning, warehousing, and indexing of data
● Experience reading data from Kafka topics (both live streams and offline)
● Experience with PySpark and DataFrames
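For the SFTP/S3 item above, a script along these lines is one way it could look in Python; the host, key path, bucket, and file names are placeholders invented for the sketch.

```python
# Rough sketch of an SFTP-to-S3 transfer script; all connection details are placeholders.
import boto3
import paramiko


def copy_sftp_file_to_s3(host: str, username: str, key_path: str,
                         remote_path: str, bucket: str, s3_key: str) -> None:
    """Download a file from an SFTP server and upload it to an S3 bucket."""
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(hostname=host, username=username, key_filename=key_path)
    sftp = ssh.open_sftp()
    try:
        with sftp.open(remote_path, "rb") as remote_file:
            s3 = boto3.client("s3")
            s3.upload_fileobj(remote_file, bucket, s3_key)
    finally:
        sftp.close()
        ssh.close()


if __name__ == "__main__":
    # Placeholder connection details for illustration only.
    copy_sftp_file_to_s3(
        host="sftp.example.com", username="etl", key_path="/keys/etl_rsa",
        remote_path="/outgoing/daily_export.csv",
        bucket="example-raw-data", s3_key="sftp/daily_export.csv",
    )
```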
Responsibilities:
You’ll be:
● Collaborating across an agile team to continuously design, iterate, and develop big data systems.
● Extracting, transforming, and loading data into internal databases.
● Optimizing our new and existing data pipelines for speed and reliability.
● Deploying new products and product improvements.
● Documenting and managing multiple repositories of code.
About Us:
6sense is a Predictive Intelligence Engine that is reimagining how B2B companies do
sales and marketing. It works with big data at scale, advanced machine learning and
predictive modelling to find buyers and predict what they will purchase, when and
how much.
6sense helps B2B marketing and sales organizations fully understand the complex ABM
buyer journey. By combining intent signals from every channel with the industry’s most
advanced AI predictive capabilities, it is finally possible to predict account demand and
optimize demand generation in an ABM world. Equipped with the power of AI and the
6sense Demand Platform™, marketing and sales professionals can uncover, prioritize,
and engage buyers to drive more revenue.
6sense is seeking a Staff Software Engineer, Data, to become part of a team
designing, developing, and deploying its customer-centric applications.
We’ve more than doubled our revenue in the past five years and completed our Series
E funding of $200M last year, giving us a stable foundation for growth.
Responsibilities:
1. Own critical datasets and data pipelines for product & business, and work
towards direct business goals of increased data coverage, data match rates, data
quality, and data freshness
2. Create more value from various datasets with creative solutions, unlock more value from existing data, and help build a data moat for the company
3. Design, develop, test, deploy, and maintain optimal data pipelines, and assemble large, complex data sets that meet functional and non-functional business requirements
4. Improve our current data pipelines, i.e. improve their performance and SLAs, remove redundancies, and figure out a way to test before vs. after rollout
5. Identify, design, and implement process improvements in data flow across multiple stages, collaborating with multiple cross-functional teams, e.g. automating manual processes, optimising data delivery, hand-off processes, etc.
6. Work with cross-functional stakeholders, including the Product, Data Analytics, and Customer Support teams, to enable their data access and related goals
7. Build for security, privacy, scalability, reliability and compliance
8. Mentor and coach other team members on scalable and extensible solutions
design, and best coding standards
9. Help build a team and cultivate innovation by driving cross-collaboration and
execution of projects across multiple teams
Requirements:
8-10+ years of overall work experience as a Data Engineer
Excellent analytical and problem-solving skills
Strong experience with Big Data technologies like Apache Spark. Experience with Hadoop, Hive, and Presto would be a plus
Strong experience in writing complex, optimized SQL queries across large data
sets. Experience with optimizing queries and underlying storage
Experience with Python/Scala
Experience with Apache Airflow or other orchestration tools (a minimal sketch follows this list)
Experience with writing Hive/Presto UDFs in Java
Experience working with the AWS cloud platform and services.
Experience with Key Value stores or NoSQL databases would be a plus.
Comfortable with Unix / Linux command line
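For the orchestration requirement above, a minimal Apache Airflow DAG might look like the sketch below; the DAG id, schedule, and task callables are illustrative assumptions rather than anything specific to 6sense.

```python
# Minimal sketch of an Airflow DAG chaining an extract step and a load step.
# DAG id, schedule, and callables are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_partition(**context):
    # Placeholder: pull the day's partition from the upstream source.
    print("extracting partition for", context["ds"])


def load_to_warehouse(**context):
    # Placeholder: run the Spark/Hive job that loads the partition.
    print("loading partition for", context["ds"])


with DAG(
    dag_id="daily_ingest_sketch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_partition", python_callable=extract_partition)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load
```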
Interpersonal Skills:
You can work independently as well as part of a team.
You take ownership of projects and drive them to conclusion.
You’re a good communicator and are capable of not just doing the work, but also
teaching others and explaining the “why” behind complicated technical
decisions.
You aren’t afraid to roll up your sleeves: This role will evolve over time, and we’ll
want you to evolve with it
You will:
- Create highly scalable AWS microservices utilizing cutting-edge cloud technologies.
- Design and develop Big Data pipelines handling huge volumes of geospatial data.
- Bring clarity to large complex technical challenges.
- Collaborate with Engineering leadership to help drive technical strategy.
- Project scoping, planning and estimation.
- Mentor and coach team members at different levels of experience.
- Participate in peer code reviews and technical meetings.
- Cultivate a culture of engineering excellence.
- Seek, implement and adhere to standards, frameworks and best practices in the industry.
- Participate in on-call rotation.
You have:
- Bachelor’s/Master’s degree in computer science, computer engineering or relevant field.
- 5+ years of experience in software design, architecture and development.
- 5+ years of experience using object-oriented languages (Java, Python).
- Strong experience with Big Data technologies like Hadoop, Spark, MapReduce, Kafka, etc.
- Strong experience in working with different AWS technologies.
- Excellent competencies in data structures & algorithms.
Nice to have:
- Proven track record of delivering large-scale projects, and an ability to break down large tasks into smaller deliverable chunks
- Experience in developing high-throughput, low-latency backend services
- Affinity for spatial data structures and algorithms.
- Familiarity with Postgres DB, Google Places or Mapbox APIs
What we offer
At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
- Unlimited Paid Time Off
- In Office Daily Catered Lunch
- Fully stocked snacks/beverages
- 401(k) employer match
- Health coverage including medical, dental, vision and option for HSA or FSA
- Generous parental leave
- Company-wide DEIB Committee
- Inclusion Academy Seminars
- Wellness/Gym Reimbursement
- Pet Expense Reimbursement
- Company-wide Volunteer Day
- Education reimbursement program
- Cell phone reimbursement
- Equity Analysis to ensure fair pay
Data Engineer
- Highly skilled and proficient in Azure Data Engineering tech stacks (ADF, Databricks).
- Well experienced in design and development of Big Data integration platforms (Kafka, Hadoop).
- Highly skilled and experienced in building medium-to-complex data integration pipelines for data at rest and streaming data using Spark (a rough sketch follows this list).
- Strong knowledge of R/Python.
- Advanced proficiency in solution design and implementation through Azure Data Lake, SQL, and NoSQL databases.
- Strong in data warehousing concepts.
- Expertise in SQL, SQL tuning, data management (data security), schema design, Python, and ETL processes.
- Highly motivated, a self-starter, and a quick learner.
- Must have good knowledge of data modelling and an understanding of data analytics.
- Exposure to statistical procedures, experiments, and machine learning techniques is an added advantage.
- Experience leading a small team of 6-7 Data Engineers.
- Excellent written and verbal communication skills.
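As a rough illustration of the "streaming data using Spark" item in the list above, a Structured Streaming job that reads a Kafka topic and lands it as Parquet could look like this sketch; the broker address, topic name, and paths are placeholders, and running it also requires the spark-sql-kafka connector package.

```python
# Rough sketch of a Spark Structured Streaming pipeline reading a Kafka topic
# and landing it as Parquet; broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events-topic")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast to string before parsing downstream.
decoded = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    "topic", "partition", "offset", "timestamp",
)

query = (
    decoded.writeStream
    .format("parquet")
    .option("path", "/data/landing/events")
    .option("checkpointLocation", "/data/checkpoints/events")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```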
We are hiring a Senior Data Architect for a reputed company.
Experience required: 10-19 years
Skills required: hands-on experience with Kafka, stored procedures, and Snowflake.
Job Description
- Design, development, and deployment of highly available and fault-tolerant enterprise business software at scale.
- Demonstrate tech expertise to go very deep or broad in solving classes of problems or creating broadly leverageable solutions.
- Execute large-scale projects: provide technical leadership in architecting and building product solutions.
- Collaborate across teams to deliver results, from hardworking team members within your group to smart technologists across lines of business.
- Be a role model on acting with good judgment and responsibility, helping teams to commit and move forward.
- Be a humble mentor and trusted advisor for both our talented team members and passionate leaders alike. Deal with differences in opinion in a mature and fair way.
- Raise the bar by improving standard methodologies, producing best-in-class efficient solutions, code, documentation, testing, and monitoring.
Qualifications
- 15+ years of relevant engineering experience.
- Proven record of building and productionizing highly reliable products at scale.
- Experience with Java and Python.
- Experience with Big Data technologies is a plus.
- Ability to assess new technologies and make pragmatic choices that help guide us towards a long-term vision.
- Can collaborate well with several other engineering orgs to articulate requirements and system design.
Additional Information
Professional Attributes:
• Team player!
• Great interpersonal skills, deep technical ability, and a portfolio of successful execution.
• Excellent written and verbal communication skills, including the ability to write detailed technical documents.
• Passionate about helping teams grow by inspiring and mentoring engineers.
• Responsible for developing and maintaining applications with PySpark
Must-Have Skills:
- Python
- Scala